From: user5857@newsgrouper.org.invalid   
      
   Stephen Fuld posted:   
      
   > On 8/29/2025 8:26 AM, MitchAlsup wrote:   
   > >   
   > > Stephen Fuld posted:   
   > >   
   > >> On 8/21/2025 1:49 PM, MitchAlsup wrote:   
   > >>>   
   > >>> Greetings everyone !   
   > >>   
   > >> snip   
   > >>   
   > >>> My 66000 ISA is in "pretty good shape" having almost no changes over   
   > >>> the last 6 months, with only specification clarifications. So it was time   
   > >>> to work on the non-ISA parts.   
   > >>   
   > >> You mention two non-ISA parts that you have been working on. I thought   
   > >> I would ask you for your thoughts on another non-ISA part. Timers and   
   > >> clocks. Doing a "clean slate" ISA frees you from being compatible with   
   > >> lots of old features that might have been the right thing to do back   
   > >> then, but aren't now.   
   > >>   
   > >> So, how many clocks/timers should a system have?   
   > >   
   > > Lots. Every major resource should have its own clock as part of its   
   > > performance counter set.   
   > >   
   > > Every interruptible resource should have its own timer which is programmed   
   > > to throw interrupts to one thing or another.   
   >   
   > So if I want to keep track of how much CPU time a task has accumulated,   
   > someone has to save the value of the CPU clock when the task gets   
   > interrupted or switched out. Is this done by the HW or by code in the   
   > task switcher?   
      
   Task switcher.   
      
   > Later, when the task gets control of the CPU again,   
   > there needs to be a mechanism to resume adding time to its saved value.   
   > How is this done?   
      
One reads the old performance counters:
 LDM Rd,Rd+7,[MMIO+R29]
and saves them with the Thread state:
 STM Rd,Rd+7,[Thread+128]
      
   Side Note: LDM and STM are ATOMIC wrt the 8 doublewords loaded/stored.   
      
   To put them back:   
    LDM Rd,Rd+7,[Thread+128]   
    STM Rd,Rd+7,[MMIO+R29]   
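A rough C analogue of that save/restore sequence, treating the counter
bank as an array of eight 64-bit values. The MMIO bank modeled as a
plain array, the struct layout, and the function names are all
illustrative assumptions; and unlike LDM/STM, ordinary C loads and
stores are not atomic across all 8 doublewords.

```c
#include <stdint.h>
#include <string.h>

#define NCTRS 8  /* LDM/STM move 8 doublewords */

/* Stand-in for the memory-mapped performance-counter bank
 * (real code would use a volatile pointer to the MMIO window). */
static uint64_t mmio_counters[NCTRS];

/* Per-thread save area, analogous to [Thread+128]. */
typedef struct {
    uint64_t perf[NCTRS];
} thread_state;

/* Switch out: read the counter bank, save it with the thread. */
static void save_counters(thread_state *t)
{
    memcpy(t->perf, mmio_counters, sizeof t->perf);
}

/* Switch in: put the thread's saved counters back. */
static void restore_counters(const thread_state *t)
{
    memcpy(mmio_counters, t->perf, sizeof mmio_counters);
}
```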
      
I should also note that all major resources {Interconnect, L3 Cache,
DRAM, HostBridge, I/O MMU, PCIe Segmenter, PCIe Root Complexes} have
performance counters; and when there is more than one of them, each
has its own performance counter set (8 counters, each with a 1-byte
selector). Each resource defines which encoding counts which event.
The Core currently counts 35 different events, the L2 28 events, and
the Miss Buffers 25 events.
   ------------------   
There might be other ways: for example, keep a clock constantly running
and sample it at task switch, instead of saving and restoring it at
task switch.
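A minimal sketch of that alternative, assuming a free-running 64-bit
clock (the struct and function names are illustrative, not part of the
architecture): record the clock on switch-in, and on switch-out add the
delta to the thread's accumulated CPU time. Unsigned 64-bit subtraction
gives the correct delta even if the clock wraps between samples.

```c
#include <stdint.h>

typedef struct {
    uint64_t cpu_time;   /* cycles accumulated while running */
    uint64_t last_start; /* free-running clock at last switch-in */
} task_clock;

/* Called when the task gains the CPU. */
static void task_switch_in(task_clock *t, uint64_t now)
{
    t->last_start = now;
}

/* Called when the task loses the CPU; unsigned subtraction
 * handles 64-bit wraparound correctly. */
static void task_switch_out(task_clock *t, uint64_t now)
{
    t->cpu_time += now - t->last_start;
}
```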
      
   > What precision?   
      
All counters and timers are 64 bits, and can run as fast as the core/PCIe clock.
      
   > > Clocks need to be as fast as the fastest event, right now that means   
   > > 16 GHz since PCIe 5.0 and 6.0 use 16 GHz clock bases. But, realistically,   
   > > if you can count 0..16 events per ns, it's fine.
   > >   
   > >> How   
   > >> fast does the software need to be able to access them?   
   > >   
   > > 1 instruction, then the latency of the actual access.
   > > 2 instructions back-to-back to perform an ATOMIC-like read-update.   
   > > LDD Rd,[timer]   
   > > STD Rs,[timer]   
   >   
   > Good.   
   >   
   > I presume you   
   > >> need some comparators (unless you use count down to zero).
   > >   
   > > You can count down to zero, count up to zero, or use a comparator.   
   > > Zeroes cost less HW than comparators. Comparators also require   
   > > an additional register and an additional instruction at swap time.   
   > >   
   > >> Should the   
   > >> comparisons be one time or recurring?   
   > >   
   > > I have no opinion at this time.   
   > >   
   > >> What about syncing with an   
   > >> external timer?   
   > >   
   > > A necessity--that is what the ATOMIC-like comment above is for.   
   >   
   > Got it.   
   >   
   >   
   >   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   