
   comp.arch      Apparently more than just beeps & boops      131,241 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 130,063 of 131,241   
   BGB to Thomas Koenig   
   Re: Tonights Tradeoff   
   29 Oct 25 14:02:32   
   
   From: cr88192@gmail.com   
      
   On 10/29/2025 1:15 PM, Thomas Koenig wrote:   
   > Robert Finch wrote:   
   >> Started working on yet another CPU – Qupls4. Fixed 40-bit instructions,   
   >> 64 GPRs. GPRs may be used in pairs for 128-bit ops. Registers are named   
   >> as if there were 32 GPRs, A0 (arg 0 register is r1) and A0H (arg 0 high   
   >> is r33). Same for other registers. GPRs may contain either integer or   
   >> floating-point values.   
   >   
   > I understand the temptation to go for more bits :-)  What is your   
   > instruction alignment?  Bytewise so 40 bits fit, or do you have some   
   > alignment that the first instruction of a cache line is always aligned?   
   >   
   > Having register pairs does not make the compiler writer's life easier,   
   > unfortunately.   
   >   
      
   Yeah, and from the compiler POV, one would likely prefer Even+Odd pairs.   
      
   >> Going with a bit result vector in any GPR for compares, then a branch on   
   >> bit-set/clear for conditional branches. Might also include branch true /   
   >> false.   
   >   
   > Having 64 registers and 64 bit registers makes life easier for that   
   > particular task :-)   
   >   
   > If you have that many bits available, do you still go for a load-store   
   > architecture, or do you have memory operations?  This could offset the   
   > larger size of your instructions.   
   >   
   >> Using operand routing for immediate constants and an operation size for   
   >> the instruction. Constants and operation size may be specified   
   >> independently. With 40-bit instruction words, constants may be 10,50,90   
   >> or 130 bits.   
   >   
   > Those sizes are not really a good fit for constants from programs,   
   > where quite a few constants tend to be 32 or 64 bits.  Would a   
   > 64-bit FP constant leave 26 bits empty?   
   >   
      
   Agreed.   
      
    From what I have seen, the vast bulk of constants tend to come in   
   several major clusters:   
      0 to 511: The bulk of all constants (peaks near 0, geometric fall-off)   
      -64 to -1: Much of what falls outside 0 to 511.   
      -32768 to 65535: Second major group   
      -2G to +4G: Third group (smaller than second)   
      64-bit: Another smaller spike.   
      
    Values between 512 and 16384: Sparsely populated, mostly the   
      continued geometric fall-off from the near-0 peak.   
    Likewise for values between 65536 and 1G.   
    Values between 4G and 4E tend to be mostly unused.   
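   The clustering above can be sketched as a classifier; a minimal sketch, 
   with bucket names and boundaries taken from the ranges listed above 
   (the enum names are illustrative, not from any real toolchain):

```c
#include <assert.h>
#include <stdint.h>

/* Buckets a 64-bit constant into the clusters described above. */
typedef enum {
    C_SMALL_POS,   /* 0 .. 511: the bulk of all constants        */
    C_SMALL_NEG,   /* -64 .. -1                                  */
    C_16BIT,       /* -32768 .. 65535: second major group        */
    C_33BIT,       /* roughly -2G .. +4G: third group            */
    C_64BIT        /* everything else: another smaller spike     */
} const_cluster;

static const_cluster classify(int64_t v)
{
    if (v >= 0 && v <= 511)                   return C_SMALL_POS;
    if (v >= -64 && v < 0)                    return C_SMALL_NEG;
    if (v >= -32768 && v <= 65535)            return C_16BIT;
    if (v >= -(1LL << 31) && v < (1LL << 32)) return C_33BIT;
    return C_64BIT;
}
```

   A compiler or ISA-design experiment would run something like this over 
   the immediates in a corpus of programs and histogram the results.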
      
   Like, in the sense of: if you have a 33-bit vs a 52 or 56-bit constant   
   field, the larger field would have very little advantage (in terms of   
   statistical hit rate) over the 33-bit one; it isn't until you reach a   
   full 64 bits that going bigger becomes worthwhile again.   
      
      
   Partly why I go with 33-bit immediate fields in the pipeline of my core,   
   and nothing much bigger or smaller:   
   Going slightly smaller misses out on a lot, so one may almost as well   
   drop back to 17 bits in that case;   
   Going slightly bigger would gain pretty much nothing.   
      
   Like, in the latter case, it sort of turns into a "go all the way to 64   
   bits or don't bother" thing.   
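   The "33 bits or bust" argument hinges on sign-extension: a 33-bit   
   sign-extended field covers both int32 and uint32 values in one   
   encoding. A minimal sketch of the range check (the function name is   
   made up for the example):

```c
#include <assert.h>
#include <stdint.h>

/* Does v fit in an n-bit sign-extended immediate field?
   With n = 33 this covers both int32 (-2^31 .. 2^31-1) and
   uint32 (0 .. 2^32-1), i.e. the "-2G to +4G" cluster above;
   with n = 17 it covers the "-32768 to 65535" style group. */
static int fits_simm(int64_t v, int n)
{
    int64_t lo = -((int64_t)1 << (n - 1));
    int64_t hi =  ((int64_t)1 << (n - 1)) - 1;
    return v >= lo && v <= hi;
}
```

   Note that any unsigned 32-bit value sign-extends losslessly from 33   
   bits, which a plain 32-bit signed immediate cannot do.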
      
      
   That said, I do use a 48-bit address space, so while in concept 48-bit   
   constants could be useful for pointers, this is statistically   
   insignificant in an ISA which doesn't encode absolute addresses in   
   instructions.   
      
   So, ironically, there are a lot of 48-bit values around, just pretty   
   much none of them being encoded via instructions.   
      
      
   Kind of a similar situation to function argument counts:   
      8 arguments: covers most of the functions;   
      12: covers the vast majority;   
      16: beyond this, often only a few stragglers remain.   
      
   So, 16 gets like 99.95% of the functions, but maybe there are a few   
   isolated ones taking 20+ arguments lurking somewhere in the code. One   
   would then need to go up to 32 arguments to have reasonable confidence   
   of "100%" coverage.   
      
   Or, impose an arbitrary limit, where the stragglers would need to be   
   modified to pass arguments using a struct or something.   
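   The struct workaround mentioned above can be sketched like so; a   
   hypothetical function that would exceed an arbitrary 16-argument limit   
   instead takes a single pointer (all names here are invented for the   
   example):

```c
#include <assert.h>

/* 20 logical "arguments" folded into one struct, so the call
   itself passes a single pointer and stays under any ABI or
   ISA argument-count limit. */
typedef struct {
    int a, b, c, d, e, f, g, h;
    int i, j, k, l, m, n, o, p;
    int q, r, s, t;
} wide_args;

static int sum_wide(const wide_args *w)
{
    return w->a + w->b + w->c + w->d + w->e + w->f + w->g + w->h
         + w->i + w->j + w->k + w->l + w->m + w->n + w->o + w->p
         + w->q + w->r + w->s + w->t;
}
```

   The cost is an extra indirection and the caller having to materialize   
   the struct in memory, which is why one only imposes this on the   
   stragglers rather than on every call.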
      
   ...   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca