
   comp.lang.c      Meh, in C you gotta define EVERYTHING      243,242 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 242,283 of 243,242   
   bart to David Brown   
   Re: _BitInt(N)   
   01 Dec 25 11:29:23   
   
   From: bc@freeuk.com   
      
   On 01/12/2025 10:33, David Brown wrote:   
   > On 30/11/2025 18:55, bart wrote:   
   >> On 30/11/2025 17:17, Janis Papanagnou wrote:   
   >>> On 2025-11-30 13:51:22, bart wrote:   
   >   
   >    
   >   
   >>   
   >> I've said many times that it's a poorly designed feature.   
   >   
   > You have said continuously that you think everything about C, along with   
   > everything you believe to be related to C (however tenuous the   
   > connection may be in reality) is poorly designed.   
      
   With C it is 75% about syntax. Some of it is about type systems, but   
   mostly it is similar to what I have. The new _BitInt (which as you know   
   I don't much like) makes the divergence greater.   
      
      
   > It seems that
   > absolutely everything that you did not design personally, is poorly   
   > designed in your eyes.  I'm not even sure you think your own languages   
   > are well designed, given the number of times you've found new features   
   > or limitations that you didn't know they had.   
      
   Yeah, I'm a bit of a perfectionist. So what?   
      
   > Perhaps you just have a different scale of what is "poor design" and   
   > "good design".  Or maybe you don't understand that things don't have to   
   > be perfect to be good enough in practice.   
      
   And my own designs are also full of compromises. One big one is that the   
   systems language knows its place; I keep it simple and don't try to
   make it one or two levels higher. Other modern 'systems' languages are
   much higher level, but also harder to use, more complicated, and less
   efficient to process.
      
   >  You certainly don't seem to   
   > understand that when there is more than one person involved - and for C,   
   > there are millions involved - compromises are inevitable, and elegance   
   > of design must bow to compatibility requirements.   
   >   
   >> Read the thread, as I'm not going to repeat things.   
      
   >   
   > Is that a promise?   
      
   Have a look at the first post I made in the thread. I've no idea how to
   do links, so a copy of it is pasted below.
      
   My other posts are mostly defending that view.   
      
      
   > The answer, it seems, is that many people do think they can use _BitInt   
   > to make their code better in some way.  It doesn't matter if one person   
   > thinks _BitInt(128) will be useful, while another thinks _BitInt(12) is   
   > something they'd use.   
      
   How much of this feature came about because of LLVM's support for   
   integer types up to 2**23 or 2**24 /bits/? I thought /that/ was crass.   
      
   > It doesn't matter if they will use them for FPGA   
   > programming, small-systems embedded programming, cryptography, neater   
   > bitfield structs, or whatever.  And most importantly, it does not matter   
   > in the slightest if someone does /not/ want to use a particular size of   
   > _BitInt.   
      
   That's like saying we should all be using C++ compilers for C programs,   
   and just ignore all the features we don't want.   
      
   So why /do/ C-only compilers still exist?   
      
   =============================================================================   
      
   bart:   
   On 23/11/2025 13:32, Waldek Hebisch wrote:   
    > Philipp Klaus Krause  wrote:   
    >> Am 22.10.25 um 14:45 schrieb Thiago Adams:   
    >>>   
    >>>   
    >>> Is anyone using or planning to use this new C23 feature?   
    >>> What could be the motivation?   
    >>>   
    >>>   
    >>   
    >> Saving memory by using the smallest multiple-of-8 N that will do.   
    >   
    > IIUC nothing in the standard says that it is smallest multiple-of-8.   
     > Using gcc-15.1 on AMD-64 I get 'sizeof(_BitInt(22))' equal to 4,
     > while the number could fit in 3 bytes.
      
   The rationale mentions a use-case where there is a custom processor that
   might actually have a 22-bit hardware type.
      
   Implementing such odd-size types on regular 8/16/32/64-bit hardware is
   full of problems if you want to do it without padding (in order to get
   the savings). Or even with padding (to get the desired overflow semantics).
      
   Such as working out how pointers to them will work.   
      
      
    >> Also   
    >> being able to use bit-fields wider than int.   
    >   
     > For me the main gain is reasonably standard syntax for integers bigger
     > than 64 bits.
      
   Standard syntax I guess would be something like int128_t and int256_t.   
   Such wider integers tend to be powers of two.   
      
   But there are two problems with _BitInt:   
      
   * Any odd sizes are allowed, such as _BitInt(391)   
      
   * There appears to be no upper limit on size, so _BitInt(2997901) is a   
   valid type   
      
   So what is the result type of multiplying values of those two types?   
      
   Integer sizes greater than 1K or 2K bits should use an arbitrary   
   precision type (which is how large _BitInts will likely be implemented   
   anyway), where the precision is a runtime attribute.   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca