From: Keith.S.Thompson+u@gmail.com   
      
   bart writes:   
   > On 24/11/2025 14:41, David Brown wrote:   
   >> On 24/11/2025 13:31, bart wrote:   
   >> That's all up to the implementation.   
   >> You are worrying about completely negligible things here.   
   >   
   > Is it that negligible? That's easy to say when you're not doing the   
   > implementing! However, it may impact the size and performance of   
   > code.   
      
   You're right, it's easy to say when I'm not doing the implementing.   
   Which I'm not.   
      
   The maintainers of gcc and llvm/clang have done that for me, so I don't   
   have to worry about it.   
      
   Are you planning to implement bit-precise integer types yourself? I   
   don't think you've said so in this thread. If you are, you have at   
   least two existing implementations you can look at for ideas.   
      
   [...]   
      
   > You don't think it strange that C doesn't even have a 128-bit type yet   
   > (it only barely has width-specific 64-bit ones).   
      
   C doesn't *require* 128-bit types. It certainly allows them. A C90   
   implementation could in principle have had 128-bit long, and a C99 or   
   later implementation can have 128-bit long and/or an extended 128-bit   
   type.   
      
   As of C99 or C11, *requiring* support for 128-bit integers probably   
   wouldn't have been reasonable.   
      
   Please distinguish between the language and implementations.   
      
   > There is just the poor gnu extension where 128-bit integers didn't   
   > have a literal form, and there was no way to print such values.   
   >   
   > But now there is this huge leap, not only to 128/256/512/1024 bits,   
   > but to conceivably millions, plus the ability to specify any weird   
   > type you like, like 182 bits (eg. somebody makes a typo for   
   > _BitInt(128), but they silently get a viable type that happens to be a   
   > little less efficient!).   
      
   Yes. With the addition of bit-precise types, gcc's __int128 might be   
   obsolete (though there's bound to be existing code that depends on it).   
   I can imagine that gcc might make __int128 an alias for _BitInt(128).   
      
   > So, 20 years of having 64-bit processors with little or no support for   
   > even double-word types, and now there is this explosion in   
   > capabilities.   
      
   Those 20 years are in the past. Not much we can do about that now.   
      
   Seriously, is your problem with _BitInt types that they're too flexible?   
   What advantage do you expect from imposing additional restrictions on   
   a feature that has already been defined and implemented?   
      
   > Or, are literals and print facilities for these new types still missing?   
      
   C23 has literals for bit-precise integer types, using a "wb" or "WB"   
   suffix. That's something you could have found out by reading the N3220   
   C23 draft, or by reading one of my posts earlier in this thread. But I   
   don't mind answering questions.   
      
   There doesn't seem to be printf/scanf support for bit-precise integer   
   types, which is a little disappointing. But since they're all distinct   
   types, it could be difficult to define.   
      
   > Personally I think they should have got the basics right first, like a   
   > decent 128-bit type, proper literals, and ways to print.   
      
   No language changes would be necessary to support 128-bit integer types.   
   Implementations are free to support [u]int128_t and/or to make long long   
   128 bits.   
      
   It would have been nice if gcc's __int128 had been developed further,   
   but for whatever reason that didn't happen. (Maybe there wasn't enough   
   demand.)   
      
   > This looks like VLAs all over again (eg. is '_BitInt(1000000) A'   
   > allocated on the stack?). A poorly suited, hard-to-implement feature.   
      
   It doesn't look particularly like VLAs to me. The width is a   
   compile-time constant. Allocating large _BitInt objects is no   
   harder or easier than allocating large struct objects.   
      
   Here's an idea. Rather than asserting that _BitInt(1'000'000)   
   is silly and obviously useless, try *asking* how it's useful.   
   I personally don't know what I'd do with a million-bit integer,   
   but maybe somebody out there has a valid use for it. Meanwhile,   
   its existence doesn't bother me.   
      
   My guess is that once you've implemented integers wider than 128   
   or 256 bits, million-bit integers aren't much extra effort.   
      
   --   
   Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com   
   void Void(void) { Void(); } /* The recursive call of the void */   
      