
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.lang.c      Meh, in C you gotta define EVERYTHING      243,242 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 242,162 of 243,242   
   David Brown to bart   
   Re: _BitInt(N) (1/3)   
   26 Nov 25 15:49:58   
   
   From: david.brown@hesbynett.no   
      
   On 26/11/2025 13:05, bart wrote:   
   > On 26/11/2025 07:55, David Brown wrote:   
   >> On 25/11/2025 22:58, bart wrote:   
   >>> On 25/11/2025 20:25, David Brown wrote:   
   >>>> On 24/11/2025 23:27, bart wrote:   
   >>>   
   >>>>> One interesting use-case for literals was short strings; 128 bits
   >>>>> allowed character literals up to 16 characters: 'ABCDEFGHIJKLMNOP'.
   >>>>> (I think C is still stuck at one, or 4 if you're lucky.)
   >>>>>   
   >>>>   
   >>>> I have no idea or opinion on why /you/ might want 128-bit or larger   
   >>>> integer types.  I believe there is very little use for "normal"   
   >>>> numbers - things you might want to write as literals, calculate   
   >>>> with, and read or write - that won't fit perfectly well within 64   
   >>>> bit types, and would not be better served by arbitrary sized integers.   
   >>>   
   >>>   
   >>>>   Arbitrary sized integers are a very different kettle of fish from   
   >>>> large fixed-size integers, and are not something that would fit in   
   >>>> the C language - they need a library.   
   >>>   
   >>> Really? I wouldn't have thought there was any appreciable difference   
   >>> between the code for multiplying two 100,000-bit BitInts, and that   
   >>> for multiplying two arbitrary-precision ints that happen to be 100,000
   >>> bits.   
   >>>   
   >>   
   >> You are looking at things in completely the wrong way.   
   >>   
   >> Long before you start thinking of how to implement operations, think   
   >> about what the types are at a fundamental level.   
   >>   
   >> A fixed-size integer is a value type of fixed, compile-time size.  It   
   >> is passed around as a value.  Local instances can be put on a stack   
   >> with compile-time fixed offsets (and thus using [sp + N] access modes   
   >> in an implementation).  The type has a single simple and obvious   
   >> (albeit slightly implementation-dependent) bit representation.  A   
   >> _BitInt(32) will be identical at the low level to an int32_t.  Bigger   
   >> _BitInt types are just the same, only bigger.  There is no difference   
   >> in concept, or representation, whether the type is 32-bit or 32   
   >> million bits.   
   >>   
   >> An arbitrary sized integer is a dynamic type with variable size.  The   
   >> base object will hold information about pointers to data, sizes for   
   >> that stored data - including both how much is in use, and how much is   
   >> available.  There are endless ways to make such types - you can   
   >> support multiple allocation parts, or use a single contiguous   
   >> allocation.  You can store the data in binary, or some kind of packed   
   >> decimal, or other formats.  Passing them around might mean just   
   >> passing around the base object, but sometimes you need to make deep   
   >> copies.  Operations might lead to heap memory allocations or   
   >> deallocations.   
   >>   
   >> They are so /totally/ different that any similarities in the way you   
   >> do a particular arithmetic operation are completely incidental.   
   >   
   > But BitInts /will/ need runtime library support?   
      
   No, not if an implementation generates the code inline (as clang appears   
   to do).  An implementation /may/ use helper functions from a language   
   support library - gcc does that, depending on the sizes of the _BitInt   
   and the operations you are doing.  That is no different from all sorts
   of other things in the language, and those helpers are not some external
   runtime library.  Your code will not be calling "bigint.dll" or anything like that.
      
   >   
   > I've acknowledged in my last post that arbitrary precision would have   
   > memory management issues, /if/ you wanted to add them to the language in   
   > such a way that, if variables 'a b c d' had such a type, you can write:   
   >   
   >     a = b + c * d;   
   >   
      
   Arbitrary precision integers have memory management issues no matter how   
   you want to use them.  They need dynamic memory.  Either the language   
   has some kind of automatic memory management (reference counting, RAII,   
   garbage collection, etc.), or it must be done manually.  It does not   
   matter if you use operator notation or function-call notation - except   
   that you cannot use operator notation with manual memory management.   
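   To illustrate the contrast, here is a deliberately minimal sketch of
   the kind of "base object" an arbitrary precision integer needs.  All
   names are invented for illustration; real libraries such as GMP are
   organised quite differently, and allocation failure handling is
   omitted for brevity:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint64_t *limbs;   /* pointer to heap-allocated digit array      */
    size_t    used;    /* limbs currently holding the value          */
    size_t    alloc;   /* limbs allocated                            */
} bigint;

static void bigint_init(bigint *x, uint64_t v)
{
    x->alloc = 1;
    x->limbs = malloc(sizeof *x->limbs);   /* dynamic memory from the start */
    x->limbs[0] = v;
    x->used = v ? 1 : 0;
}

/* Even a simple add may have to grow the allocation - the arithmetic
 * operation itself can trigger memory management. */
static void bigint_add_small(bigint *x, uint64_t v)
{
    uint64_t carry = v;
    for (size_t i = 0; i < x->used && carry; i++) {
        uint64_t s = x->limbs[i] + carry;
        carry = (s < carry);               /* detect 64-bit wraparound */
        x->limbs[i] = s;
    }
    if (carry) {
        if (x->used == x->alloc) {
            x->alloc *= 2;
            x->limbs = realloc(x->limbs, x->alloc * sizeof *x->limbs);
        }
        x->limbs[x->used++] = carry;
    }
}

static void bigint_clear(bigint *x) { free(x->limbs); }
```

   Compare that with a _BitInt, which is just N bits at a fixed stack
   offset: none of this bookkeeping exists for the fixed-size type.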
      
   > This is not what I had in mind; such arithmetic would use explicit   
   > function calls with explicit management of intermediates (like GMP).   
   >   
   > So from this point of view, fixed-size BitInts are better, but also a
   > higher-level ability than I would have considered adding to the language.
      
   _BitInts are certainly better in that they are scalar types with value
   semantics and no need for any dynamic memory.  Of course arbitrary   
   precision integers have other advantages.  Although for some use-cases   
   either would work, each can be significantly more appropriate for   
   different situations.   
      
   To my mind, the need for dynamic memory would mean arbitrary precision   
   integers are not appropriate for C - either at the core language level,   
   or as part of the standard library.  I think it is reasonable to have
   different opinions on how appropriate fixed-size _BitInts are in the C
   core language, though as they are in C23, the point is now moot.
      
   >   
   > Even if BitInts were restricted to saner and smaller sizes, I'd consider   
   > actual arithmetic on 128 bits up to a few K bits and above a specialist,   
   > niche application.   
   >   
      
   Fair enough.   
      
   > But logic operations (== & | ^) on unsigned BitInts are more reasonable   
   > (because they implement some features of bit-sets).   
   >   
   > For arithmetic on considerably larger numbers, I still think arbitrary   
   > precision is the best bet.   
   >   
   >   
      
   Also fair enough.   
      
   I don't think anyone is likely to be multiplying million-bit _BitInts in   
   real code.  But I don't think it is appropriate for the language   
   standard to pick some arbitrary size and say "below that is fine, above   
   that is too big and programmers should use something else".  I don't   
   think it is appropriate for compiler implementers either.  (They may   
   pick limits based on how they implement things internally - that's not   
   an arbitrary limit.)  Different people have different needs, and no   
   particular limit fits all use-cases.   
      
   >>> Structs and arrays again spring to mind if you just want an anonymous   
   >>> data block. (I wonder why it has to be bit-precise for byte-addressed   
   >>> memory?)   
   >>>   
   >>   
   >> If I have a processor that has 256-bit vector registers, then moving   
   >> data by loading and storing 256-bit blocks is going to be more   
   >> efficient than doing a loop of 16 byte moves.  Today, I would use   
   >> uint64_t for the task, as the biggest type available.  Why does it   
   >> have to be bit-precise?  It must be bit-precise because I would want
   >> to move 256 bits - not 255 bits or 257 bits.
   >   
   > By bit-precise I mean being able to specify 255 and 257 bits! Memory is
   > usually expressed in bytes or words, not bits.
   >   
      
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca