From: Keith.S.Thompson+u@gmail.com   
      
   bart writes:   
   > On 30/11/2025 00:46, Keith Thompson wrote:   
   >> bart writes:   
   >>> On 29/11/2025 20:24, Waldek Hebisch wrote:   
   >>>> bart wrote:   
   >>>>> On 24/11/2025 20:26, David Brown wrote:   
   >>>>>> On 24/11/2025 19:35, bart wrote:   
   >>>>>   
   >>>>>>> But now there is this huge leap, not only to 128/256/512/1024 bits,   
   >>>>>>> but to conceivably millions, plus the ability to specify any weird   
   >>>>>>> type you like, like 182 bits (eg. somebody makes a typo for   
   >>>>>>> _BitInt(128), but they silently get a viable type that happens to be a   
   >>>>>>> little less efficient!).   
   >>>>>>>   
   >>>>>>   
   >>>>>> And this huge leap also lets you have 128-bit, 256-bit, 512-bit, etc.,   
   >>>>>   
   >>>>> And 821 bits. This is what I don't get. Why is THAT so important?   
   >>>>>   
   >>>>> Why couldn't 128/256/etc have been added first, and then those funny   
   >>>>> ones if the demand was still there?   
   >>>>>   
   >>>>> If the proposal had instead been simply to extend the 'u8 u16 u32 u64'   
   >>>>> set of types by a few more entries on the right, say 'u128 u256 u512',   
   >>>>> would anyone have been clamouring for types like 'u1187'? I doubt it.   
   >>>>>   
   >>>>> For sub-64-bit types on conventional hardware, I simply can't see the   
   >>>>> point, not if they are rounded up anyway. Either have full range-based   
   >>>>> types like Ada, or none at all.   
   >>>> First, _BitInt(821) (and _BitInt(1187)) are really unimportant. You   
   >>>> simply get them as a byproduct of general rules.   
   >>>   
   >>> That they are allowed is the problem. People use them and expect the   
   >>> compiler to waste its time generating bit-precise code.   
   >> You are literally the only person I've seen complain about it. And   
   >> you can avoid any such problem by not using unusual sizes in your   
   >> code.   
   >>   
   >> You want to impose your arbitrary restrictions on the rest of us.   
   >>   
   >> Do you even use _BitInt types?   
   >>   
   >> Oh no, I can type (n + 1187), and it will yield the sum of n and   
   >> 1187. Why would anyone want to add 1187 to an integer? The language   
   >> should be changed (made more complicated) to forbid operations that   
   >> don't make obvious sense!!   
   >   
   > You seem to be mixing up values and types. Or are you arguing for   
   > there to be nearly as many integer types as there are possible values.   
      
   You know that I understand the distinction between values and types.   
      
   I used (n + 1187) as an example of something that's not obviously   
   useful, but that is not the basis of a good argument that it should   
   be forbidden.   
      
   > Everyone in this group seems obsessed with not having any limitations   
   > at all in the language.   
      
   That's not true at all.   
      
   Let me be clear about what I've been saying. If C23 had introduced   
   _BitInt types with the restrictions you want, I likely wouldn't have   
   complained. I'm still not sure just what restrictions you want, but for   
   example, it could have required support for all widths up to the width   
   of uintmax_t (typically, but not necessarily, 64 bits), and multiples of   
   the width of uintmax_t up to an implementation-defined limit. Or maybe   
   multiples of CHAR_BIT. I would have been ok with that.   
      
   C23's definition is more flexible than that. That flexibility has   
   apparently not caused problems for implementers (in fact, clang   
   had a flexible version of the feature before it was added to C23).   
      
   And it's part of the ISO C standard, which makes it very difficult to   
   change.   
      
   I find it interesting that the C23 standard requires support for   
   *all* widths up to BITINT_MAXWIDTH, and that gcc and clang have   
   implemented that support. I also find it interesting that sdcc   
   sets BITINT_MAXWIDTH to 64, which is also perfectly valid (and   
   happens to be consistent with the restrictions you want to impose,   
   if I understand them correctly).   
      
   What I usually discuss here is what the C standard actually says   
   and how to use it.   
      
   You, on the other hand, see a new feature and are offended by it   
   because it's more flexible than you think it should be.   
      
   > For example, gcc allows identifiers up to 4 billion characters along,   
   > or something (I think I've tested it with three 1-billion-character   
   > variables.)   
   >   
   > There was a discussion here about it. Of course, even   
   > million-character names would be totally impractical to work with. I'd   
   > have trouble with 256 characters (my own cap).   
      
   I haven't looked into this particular case, but I presume that the   
   implementers of gcc chose to implement their lexical analysis in   
   a way that does not imply a fixed limit on identifier lengths.   
   Using a billion-character identifier would be silly, but the gcc   
   developers apparently felt no need to go out of their way to forbid   
   such identifiers. I have no problem with that, and I don't know   
   why you do.   
      
   Doctor, it hurts when I do this.   
      
   > The rationale for BitInts seems to be heading the same way. The work   
   > for billion-character variables has already 'been done'. That doesn't   
   > mean they are sensible or practical or efficient!   
      
   They are practical, in the sense that working implementations exist.   
      
   If you don't find them sensible, don't use them.   
      
   There are inefficiencies in at least one existing implementation, which   
   I expect to be corrected (there's an open bug report). Other than that,   
   what inefficiencies are you concerned about?   
      
   Do you really believe that the fact that you don't find something   
   useful or sensible means that it should be forbidden?   
      
   >>> You can have general _BitInt(N) syntax and have constraints on the   
   >>> values of N, not just an upper limit.   
   >>   
   >> No you can't, because the language does not allow the arbitrary   
   >> restrictions you want. If an implementer finds _BitInt(1187)   
   >> too difficult, they can set BITINT_MAXWIDTH to 64.   
   >>   
   >> One more time: Both gcc and llvm/clang have already implemented   
   >> bit-precise types, with very large values of BITINT_MAXWIDTH.   
   >> What actual problems has this fact caused for you, other than giving   
   >> you something to complain about?   
   >   
   > What problem would there be if BitInt sizes above the machine word   
   > sizes had to be multiples of the word sizes?   
      
   You didn't answer my question.   
      
   > In what way would it inconvenience /you/?   
      
   Possibly none. In what way would it inconvenience anyone else?   
   I don't know.   
      
   But I'm certain that imposing the restrictions you want would   
   inconvenience the maintainers of gcc and clang.   
      
   > I just don't like unnecessarily flexible, lax or over-ambitious   
   > features in a language. I think that is as much poor design as   
   > underspecifying.   
   >   
   > So I'm interested in what that one extra bit in a million buys you. Or   
   > that one bit fewer.   
      
   Flexibility and simplicity of the language definition.   
      
   You want to impose restrictions on the value of N in _BitInt(N)   
   or unsigned _BitInt(N). How much work would be required to define   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   