
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.lang.c      Meh, in C you gotta define EVERYTHING      243,242 messages   


   Message 242,107 of 243,242   
   David Brown to bart   
   Re: _BitInt(N)   
   24 Nov 25 21:26:53   
   
   From: david.brown@hesbynett.no   
      
   On 24/11/2025 19:35, bart wrote:   
   > On 24/11/2025 14:41, David Brown wrote:   
   >> On 24/11/2025 13:31, bart wrote:   
   >   
   >> That's all up to the implementation.   
   >>   
   >> You are worrying about completely negligible things here.   
   >   
   > Is it that negligible? That's easy to say when you're not doing the   
   > implementing!   
      
   Of course I am not implementing it.  As always with features in C, no   
   one is particularly bothered about how much effort is needed by the   
   implementers.  The prime concern is always the compiler users, not the   
   compiler writers.   
      
   > However it may impact on the size and performance of code.   
      
   The impact of an extra mask operation when you are handling 6 (IIRC)
   chunks of 64-bit data will not have a significant effect on the size
   or performance of the code.
      
   >   
   >   
   >>> And allowing random sizes such as int817838_t. (See, it seems much   
   >>> sillier using this syntax!)   
   >>   
   >> I had taken your "ridiculous" comment to be part of your complaint
   >> that "multiplying even two one-million-bit types could overflow".  But
   >> those statements are independent, and only the first is silly - of
   >> course arithmetic on any finite-sized type can overflow unless
   >> specifically limited (such as by wrapping behaviour for unsigned
   >> types).  I agree that huge fixed-size integer types are not useful,
   >> though I am not sure where the ideal limit lies.
   >   
   > You don't think it strange that C doesn't even have a 128-bit type yet   
   > (it only barely has width-specific 64-bit ones).   
      
   How do you know that I think that, from what I wrote?  You are just
   making stuff up again.
      
   I think a 128-bit type can be useful.  Many C compilers support one, and   
   now the standard supports one too.  It's called "_BitInt(128)", and you   
   can expect it to perform exactly like __int128 or whatever   
   compiler-specific 128-bit types you might have in a given tool.   
      
   >   
   > There is just the poor gnu extension where 128-bit integers didn't have   
   > a literal form, and there was no way to print such values.   
   >   
      
   How many times have you felt the need to write a 128-bit literal?  And   
   how many times has that literal been in decimal (it's not difficult to   
   put together a 128-bit value from two 64-bit values)?  You really are   
   making a mountain out of a molehill here.   
      
   > But now there is this huge leap, not only to 128/256/512/1024 bits, but   
   > to conceivably millions, plus the ability to specify any weird type you   
   > like, like 182 bits (eg. somebody makes a typo for _BitInt(128), but   
   > they silently get a viable type that happens to be a little less   
   > efficient!).   
   >   
      
   And this huge leap also lets you have 128-bit, 256-bit, 512-bit, etc.,   
   types with no more than a simple typedef if you don't like the names.  I   
   can't see your problem here.   
      
   > So, 20 years of having 64-bit processors with little or no support for   
   > even double-word types, and now there is this explosion in capabilities.   
   >   
   > Or, are literals and print facilities for these new types still missing?   
   >   
   > Personally I think they should have got the basics right first, like a   
   > decent 128-bit type, proper literals, and ways to print.   
   >   
   > This looks like VLAs all over again (eg. is '_BitInt(1000000) A'   
   > allocated on the stack?). A poorly suited, hard-to-implement feature.   
   >   
      
   You are joking, right?  How is dealing with a _BitInt(1000000) any more
   difficult than dealing with a "struct { uint64_t chunks[15625]; }"?
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca