From: Keith.S.Thompson+u@gmail.com   
      
   bart writes:   
   > On 25/11/2025 20:25, David Brown wrote:   
   [...]   
   >> Arbitrary sized integers are a very different kettle of fish from   
   >> large fixed-size integers, and are not something that would fit in   
   >> the C language - they need a library.   
   >   
   > Really? I wouldn't have thought there was any appreciable difference   
   > between the code for multiplying two 100,000-bit BitInts, and that for   
   > multiplying two arbitrary-precision ints that happen to be 100,000
   > bits.   
      
   It's not about the code that implements multiplication. In gcc, that's   
   done by calling a built-in function that can operate on arbitrary data   
   widths.   
      
   Think about memory management.   
      
   A _BitInt(128) object has a fixed size, like a struct. It can be   
   allocated locally ("on the stack"), passed to a function, returned   
   as a function result, used in expressions, etc. Likewise for   
   _BitInt(2048).   
      
   A hypothetical _BitInt(*) object would require an amount of storage   
   that varies with its current value. That storage would have to be   
   allocated using malloc() or equivalent, and deallocated using free()   
   or equivalent. C++ template classes with automatically invoked   
   constructors and destructors are great for that kind of thing.   
   C has no such mechanisms, and there's little support for adding   
   it just for this feature. (There are C container libraries.   
   I haven't used them, but they tend to require construction and   
   destruction to be explicit.)   
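
To make the contrast concrete, here is a hypothetical sketch (the
names are invented, not from any real library) of what explicit
construction and destruction look like in C:

```c
#include <stdint.h>
#include <stdlib.h>

/* An arbitrary-precision integer needs heap storage that grows with
   the value.  With no constructors or destructors in C, the caller
   must create and destroy it explicitly. */
typedef struct {
    size_t    nlimbs;  /* how many 64-bit limbs the value currently uses */
    uint64_t *limbs;   /* heap-allocated digit array */
} apint;

static int apint_init(apint *a, uint64_t v) {
    a->limbs = malloc(sizeof *a->limbs);
    if (a->limbs == NULL)
        return -1;
    a->nlimbs   = 1;
    a->limbs[0] = v;
    return 0;
}

static void apint_destroy(apint *a) {
    free(a->limbs);     /* every code path must remember this call */
    a->limbs  = NULL;
    a->nlimbs = 0;
}
```

In C++ a destructor would run apint_destroy automatically at end of
scope; in C, forgetting it on any path (including early returns) is
a leak, which is why this doesn't fit the language as a built-in type.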
      
   Perhaps a future standard will provide a more flexible flavor of   
   _BitInt. It might allow the n in _BitInt(n) to be non-constant, or   
   empty, or "*", to denote an arbitrary-precision integer. But it's   
   hard to see how that could be done without adding other fundamental   
   features to the language. And a lot of people's response would be   
   that if you want C++, you know where to find it.   
      
   Similarly, C99 added complex types as a built-in language feature.   
   C++ added complex types as a template class, because C++ has
   language features that support that kind of thing, including
   operator overloading and user-defined literals.
      
   If you can think of a way to add arbitrary-precision integers to C   
   without other radical changes to the language, let us know.   
      
   It could also be nice to be able to write code that deals with   
   multiple widths of _BitInt types, as we can do for arrays even   
   without VLAs. But C's treatment of arrays is messy, and I'm not   
   sure duplicating that mess for _BitInt types would be a great idea.   
   And I wouldn't want to lose the ability to pass _BitInt values   
   to functions.   
      
   [...]   
      
   > So, a better fit for a struct then? Here I'm curious as to what   
   > BitInt(128) brings to the table.   
      
   It brings a 128-bit integer type with constants and straightforward   
   assignment, comparison, and arithmetic operators.   
      
   [...]   
      
   > That BigInt() defaults to a signed integer (twos complement?), even   
   > for very large sizes suggests that /numeric/ applications are a   
   > primary use.   
      
   Yes, C23 requires two's-complement for signed integers. (It mandates   
   two's-complement representation, not wraparound behavior; signed   
   overflow is still UB.)
      
   [...]   
      
   > OK, so why are you not allowed to have _BitInt(1)? That is, a 1-bit   
   > signed integer. It might only have two values of 0 and -1; doesn't   
   > nobody want that particular combination?   
      
   I don't know. The language allows 1-bit signed bit-fields, so   
   _BitInt(1) would make some sense, but the language requires N to   
   be at least 1 for unsigned _BitInt and 2 for signed _BitInt.   
      
   It doesn't bother me too much, since I'm unlikely to have a   
   use for signed _BitInt(1). But it's an arbitrary restriction.   
   (And I thought you liked arbitrary restrictions.)   
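
For comparison, the bit-field version of that two-valued type already
works today (C23 compiler assumed, where two's complement is mandated):

```c
/* A 1-bit signed bit-field holds exactly the values 0 and -1 -- the
   combination a signed _BitInt(1) would provide if it were allowed. */
struct one_bit {
    signed int b : 1;
};

static inline int roundtrip(int v) {
    struct one_bit s;
    s.b = v;        /* only 0 and -1 survive the round trip exactly */
    return s.b;
}
```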
      
   [...]   
      
   > At least, I've been able to add to my collection of C types that   
   > represent an 8-bit byte:   
   >   
   > signed char   
   > unsigned char   
   > int8_t   
   > uint8_t   
   > _BitInt(8)   
   > unsigned _BitInt(8)   
   >   
   > The last two are apparently incompatible with the char versions.   
      
   You forgot plain char, int_least8_t, and uint_least8_t. And of   
   course the char types are CHAR_BIT bits, not necessarily 8 bits.   
      
   It's mildly interesting that unsigned _BitInt(8) gives you a way to   
   define an octet even on systems with CHAR_BIT > 8. But of course an   
   unsigned _BitInt(8) object will still have a size of CHAR_BIT bits.   
   (Again, saving space on ordinary hardware isn't part of the rationale   
   for _BitInt types.)   
      
   --   
   Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com   
   void Void(void) { Void(); } /* The recursive call of the void */   
      