From: cr88192@gmail.com   
      
   On 10/10/2025 10:47 AM, David Brown wrote:   
   > On 10/10/2025 16:28, Michael S wrote:   
   >> On Fri, 10 Oct 2025 12:06:10 +0200   
   >> David Brown wrote:   
   >>   
   >>> On 10/10/2025 08:27, BGB wrote:   
   >>>> On 10/9/2025 10:59 PM, Keith Thompson wrote:   
   >>>>> bart writes:   
   >>>   
   >>>>>>> One merit is if code can be copy-pasted, but if one has to change   
   >>>>>>> all instances of:   
   >>>>>>> char *s0, *s1;   
   >>>>>>> To:   
   >>>>>>> char* s0, s1;   
   >>>>>>> Well, this is likely to get old, unless it still uses, or allows   
   >>>>>>> C style declaration syntax in this case.   
   >>>>>>   
   >>>>>> That one's been fixed (50 years late): you instead write:   
   >>>>>>   
   >>>>>> typeof(char*) s0, s1;   
   >>>>>>   
   >>>>>> But you will need an extension if it's not part of C23.   
   >>>>>   
   >>>>> Yes, that will work in C23, but it would never occur to me to   
   >>>>> write that. I'd just write `char *s0, *s1;` or, far more likely,   
   >>>>> define s0 and s1 on separate lines. Using typeof that way triggers   
   >>>>> my WTF filter.   
   >>>>   
   >>>> Agreed.   
   >>>>   
   >>>>   
   >>>>   
>>>> I think it can be contrasted with C#-style syntax (with "unsafe"),
>>>> where one would write:
   >>>> char* s0, s1;   
   >>>   
   >>> Does C# treat s1 as "char*" in this case? That sounds like an   
   >>> extraordinarily bad design decision - having a syntax that is very   
   >>> like the dominant C syntax yet subtly different.   
   >>>   
   >>   
>> Generally, I disagree with your rule. Not that it makes no sense at
>> all, but sometimes a violation makes more sense. For example, I strongly
>> prefer for otherwise C-like languages to parse the literal 011 as
>> decimal 11 rather than 9.
   >   
   > I did not intend to describe a general rule (and I agree with you in   
   > regard to octal).   
   >   
      
   Yeah, '0' by itself indicating octal is weird, so I might agree here.   
    123 //decimal   
    0123 //maybe reinterpret as decimal?   
    0o123 //octal   
    0x123 //hexadecimal   
    0b101 //binary   
      
In BGBCC, I had defined some additional handling for suffixes:
 iNN, where NN is an integer, specifies a number of bits.
 uNN or uiNN, specifies a number of bits, but unsigned.
 These suffixes could specify non-power-of-2 widths (understood as _BitInt).
      
      
Though, there was also the wonk that these literals could allow X and Z
in place of bits or hex digits, but this was more a side-effect of a
fizzled effort to add Verilog support to BGBCC (which was also sort of
where the bit-width notation came from).
      
Though, generally, X and Z have no real purpose in C code (and may not
exist in actual integer values), so they would be little more than a
curiosity (some of this was intended more as a way to test out
functionality being added for the sake of trying to support Verilog).
      
   But, as noted, in a few cases, the Verilog mechanisms can offer a   
   performance advantage over traditional C constructs. In other cases, not   
   so much....   
      
This was being worked on at one point, as I sometimes face frustration
with the almost non-existent debugging features in Verilator (you
basically have to do a more awkward form of printf debugging; it would
kinda be nice sometimes if one could set breakpoints and inspect
variables, ...).
      
But, what passes for control flow in Verilog doesn't really map over so
well (one basically needs to update state based on a "sensitivity graph"
mostly driven by clock signals and similar).
      
      
   >>   
   >> In this particular case it's more subtle.   
>> What makes it a non-issue in practice is the fact that pointers in C#
>> are a very rarely used, expert-level feature, especially since the
>> language got slices (Span<T>) 7 or 8 years ago.
>> A person who decides to use C# pointers has to understand at least
>> half a dozen more arcane things than this one.
   >> Also it's very unlikely in case somebody made such mistake that his   
   >> code will pass compilation. After all, we're talking about C# here, not   
   >> something like Python.   
   >>   
   >   
   > Sure.   
   >   
   > It would seem to me, however, that it would have been better for the C#   
   > designers to pick a different syntax here rather than something that   
   > looks like C, but has subtle differences that are going to cause newbies   
   > confusion when they try to google for explanations for their problems.   
   > For example, if raw pointers are rarely used, then they should perhaps   
   > be accessible using a more verbose syntax than a punctuation mark -   
> "ptr<char> s0, s1;" might work.
   >   
   > However, I have no experience with C#, and don't know the reasons for   
   > its syntax choices.   
   >   
      
Early on, it didn't have generics, and so couldn't have used that syntax.

Unlike C++, it doesn't have templates, so "ptr<T>" would not make so
much sense; 'ptr' would then be limited to being a class instance, and
class instances are always by-reference. Also, early on there was no
operator overloading either (as with generics, this was added later).
      
      
Also, the language discouraged pointers anyway, so you had to opt in by
using the 'unsafe' keyword before the compiler would allow them (and
then, only for 'trusted' executables).
      
Though, for a hybrid language, I would likely drop the concept of
trusted executables (or allow it, maybe with the added constraint that
object lifetimes be statically provable; maybe asking too much though).

The concept of trusted executables doesn't make as much sense with
native-code compilation.
      
      
   >   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   