From: bc@freeuk.com   
      
   On 07/02/2026 17:55, Kaz Kylheku wrote:   
   > On 2026-02-05, Bart wrote:   
   >> On 05/02/2026 22:55, Janis Papanagnou wrote:   
   >>> On 2026-02-05 18:42, Bart wrote:   
   >>>> On 05/02/2026 11:41, David Brown wrote:   
   >>>>>   
   >>>>> No, the /compiler/ has to work it out. Whether /you/ need to work it   
   >>>>> out or not, depends on what you are doing with the result.   
   >>>>   
   >>>> The compiler will not tell you the format codes to use!   
   >>>   
   >>> Well, it seems the compiler I have here does it quite verbosely...   
   >>>   
   >>>   
   >>> $ cc -o prtfmt prtfmt.c
   >>> prtfmt.c: In function ‘main’:
   >>> prtfmt.c:8:19: warning: format ‘%d’ expects argument of type ‘int’,
   >>> but argument 2 has type ‘double’ [-Wformat=]
   >>>     8 |   printf ("%d\n", f);
   >>>       |            ~^     ~
   >>>       |             |     |
   >>>       |             int   double
   >>>       |             %f
   >>> prtfmt.c:9:19: warning: format ‘%f’ expects argument of type ‘double’,
   >>> but argument 2 has type ‘int’ [-Wformat=]
   >>>     9 |   printf ("%f\n", i);
   >>>       |            ~^     ~
   >>>       |             |     |
   >>>       |             |     int
   >>>       |             double
   >>>       |             %d
   >>>   
   >>>   
   >>> ...giving information of every kind - here for two basic types, but
   >>> it also gives the same verbose diagnostics for the '_t' types I tried
   >>> (e.g. suggesting '%ld' for a 'time_t' argument).
   >>>   
   >>> Note: I'm still acknowledging the unfortunate type/formatter-coupling   
   >>> notwithstanding.   
   >>   
   >> /Some/ compilers with /some/ options will /sometimes/ tell you when   
   >> you've got it wrong.   
   >   
   > That's an excellent reason to keep the bulk of your code portable, and   
   > offer it to multiple compilers.   
   >   
   > I think the only way you are going to run into a crappy compiler in a   
   > real job situation in 2026 is if you're an embedded developer working   
   > with some very proprietary processor for which the only compiler comes   
   > from its vendor. Even so the bits of your code not specific to that   
   > chip can be compiled with something else. Which you want to do not just   
   > for diagnostics but to be able to run unit tests on that code on a   
   > regular developer machine.   
   >   
   >> Eventually, it will compile. Until someone else builds your program,   
   >> using a slightly different set of headers where certain types are   
   >> defined, and then it might either give compiler messages that they have   
   >> to fix, or it might show wrong results.   
   >>   
   >> If I compile this code with 'gcc -Wall -Wextra -Wpedantic':   
   >>   
   >> #include <stdio.h>   
   >>   
   >> int main() {   
   >> int a = -1;   
   >> printf("%u", a);   
   >> }   
   >>   
   >> it says nothing. The program displays 4294967295 instead of -1.   
   >   
   > For that you need this:   
   >   
   > $ gcc -Wall -pedantic -W -Wformat -Wformat-signedness printf.c   
      
   -Wformat-signedness, of course! Sorry I just don't believe in   
   micro-managing a compiler's job to that extent.   
      
   Here's my code: is it valid C or not? It's a yes or no answer.   
      
   If I wanted a more nuanced or a speculative opinion about my code, I'd   
   use a tool that wasn't called a compiler, and I would not use it for   
   every routine build.   
   > There is probably a good reason for that; passing a signed argument   
   > to an unsigned conversion specifier de facto works fine, and   
   > some code relies on it; i.e. the 4294967295 is what the programmer   
   > wanted.   
      
   Then they can add a cast.   
      
   >   
   > You often see that with %x, which also takes unsigned int;   
   > the programmer wants -16 to come out as "FFFFFFF0", and not -10.   
      
   That's what you get with %x anyway; I've never seen it produce a   
   negative hex number.   
      
   (My systems language can produce negative hex results, unless told to   
   treat as unsigned. Example:   
      
    i64 a := -1   
    u64 b := -1   
      
    println a:"h" # -1   
    println a:"hu" # FFFFFFFFFFFFFFFF   
    println b:"h" # FFFFFFFFFFFFFFFF)   
      
      
   > Someone with code like that might want to catch other problems with   
   > printf calls, and not be bothered with those.   
   >   
   >> If I compile this version (using %v) with a special extension:   
   >>   
   >> #include <stdio.h>   
   >>   
   >> int main() {   
   >> int a = -1;   
   >> printf("%v", a);   
   >> }   
   >>   
   >> it shows -1. Which is better?   
   >   
   > Both are undefined behavior. The latter is a documented extension   
   > that works where it works, which is good.   
   >   
   > Using %u with int /de facto/ works (and could also be a documented   
   > extension).   
   >   
   > /de facto/ is weaker than documented. But on the other hand, /de facto/   
   > works in more places than %v.   
   >   
   > If you hit a library that doesn't have %v, it doesn't work at all.   
      
   The %v referred to one of my old implementations.   
      
   That was handled within the compiler, so it worked with any library.   
      
   It was limited to format strings that are constants, but then so are   
   checks like -Wformat-signedness etc.   
      
      
   > I've never seen int passed to %d or %x not work in the manner you   
   > would expect if int and unsigned int arguments were passed in   
   > exactly the same way and subject to a reinterpretation of the bits.   
   >   
      
   I've been in the situation where I was unsure whether the result of an   
   expression was signed or unsigned. So if the top bit is set, I want to   
   see whether it is printed as a negative value (so signed), or as positive.   
      
   %u or %x won't help here. I had to resort to casting to float and then   
   using %f.   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   