From: bc@freeuk.com   
      
   On 25/11/2025 23:20, Keith Thompson wrote:   
   > bart writes:   
   >> On 25/11/2025 20:25, David Brown wrote:   
   > [...]   
   >>> Arbitrary sized integers are a very different kettle of fish from   
   >>> large fixed-size integers, and are not something that would fit in   
   >>> the C language - they need a library.   
   >>   
   >> Really? I wouldn't have thought there was any appreciable difference   
   >> between the code for multiplying two 100,000-bit BitInts, and that for   
   >> multiplying two arbitrary-precision ints that happen to be 100,000
   >> bits.   
   >   
   > It's not about the code that implements multiplication. In gcc, that's   
   > done by calling a built-in function that can operate on arbitrary data   
   > widths.   
   >   
   > Think about memory management.   
      
   Well, I was responding to a suggestion that BitInt support didn't need a   
   library.   
      
   But memory management is a good point. Actual, variable-sized bigints   
   would be awkward in C if you want to use them in ordinary expressions.   
      
   Although managing large fixed-size types, which may also involve
   intermediate, transient values, can bring problems of its own.
      
      
      
   > Perhaps a future standard will provide a more flexible flavor of   
   > _BitInt. It might allow the n in _BitInt(n) to be non-constant, or   
   > empty, or "*", to denote an arbitrary-precision integer. But it's   
   > hard to see how that could be done without adding other fundamental   
   > features to the language. And a lot of people's response would be   
   > that if you want C++, you know where to find it.   
      
   I think I would have responded better to _BitInt if it had been presented
   as a 'bit-set': effectively a fixed-size bit-array, but passed by value.
   This is something I'd considered myself at one time.
      
   Those would have logical operators and access to individual bits, but no
   arithmetic or shifts, and no notion of two's complement. (In my
   implementation, they could also have been initialised like Pascal bit-sets.)
      
   More significantly, an unbounded version could be passed by reference,
   with an accompanying length (I could also use slices that carry the
   length), as happens with arrays in C.
      
   > Similarly, C99 added complex types as a built-in language feature.   
   > C++ added complex types as a template class, because C++ has language   
   > features that support that kind of thing, including user-defined   
   > literals.   
   >   
   > If you can think of a way to add arbitrary-precision integers to C   
   > without other radical changes to the language, let us know.   
      
   I have considered adding my actual arbitrary-precision library to my
   systems language. It would have been superficial (such types would not be
   nestable within other data structures), but would have been simpler to
   use than function calls.
      
   Some degree of automatic memory management would have been needed   
   (initialise locals on function entry, free on exit, deal with   
   intermediates), but not on the C++ scale due to the restrictions.   
      
   But I rejected that as too high-level a feature, with my use-cases more
   suited to a scripting language.
      
      
   > It could also be nice to be able to write code that deals with   
   > multiple widths of _BitInt types, as we can do for arrays even   
   > without VLAs. But C's treatment of arrays is messy, and I'm not   
   > sure duplicating that mess for _BitInt types would be a great idea.   
   > And I wouldn't want to lose the ability to pass _BitInt values   
   > to functions.   
   >   
   > [...]   
   >   
   >> So, a better fit for a struct then? Here I'm curious as to what   
   >> BitInt(128) brings to the table.   
   >   
   > It brings a 128-bit integer type with constants and straightforward   
   > assignment, comparison, and arithmetic operators.   
      
   I was commenting on the IPv6 example, where structs give you that
   already, except arithmetic, which makes little sense there anyway.
      
      
   > [...]   
   >   
   >> That _BitInt() defaults to a signed integer (two's complement?), even
   >> for very large sizes suggests that /numeric/ applications are a
   >> primary use.
   >   
   > Yes, C23 requires two's-complement for signed integers. (It mandates   
   > two's-complement representation, not wraparound behavior; signed   
   > overflow is still UB).   
      
   Even though it will now likely be under software control? OK.   
      
   >> At least, I've been able to add to my collection of C types that   
   >> represent an 8-bit byte:   
   >>   
   >> signed char   
   >> unsigned char   
   >> int8_t   
   >> uint8_t   
   >> _BitInt(8)   
   >> unsigned _BitInt(8)   
   >>   
   >> The last two are apparently incompatible with the char versions.   
   >   
   > You forgot plain char,   
      
   I had char but took it out, as it's an outlier.
      
   > int_least8_t, and uint_least8_t.   
      
   And 'fast' versions? I still don't know what any of these mean! No other   
   languages seem to have bothered.   
      