From: tr.17687@z991.linuxsc.com   
      
   pozz writes:   
      
   > After many years programming in C language, I'm always unsure if it is   
   > safer to use signed int or unsigned int.   
   >   
   > Of course there are situations where signed or unsigned is clearly   
   > better. For example, if the values could assume negative values,   
   > signed int is the only solution. If you are manipulating single bits   
   > (&, |, ^, <<, >>), unsigned ints are your friends.   
   >   
   > What about other situations? For example, what do you use for the "i"   
   > loop variable?   
      
   I use unsigned types unless there is a compelling reason to use   
   signed types. I use unsigned types for counts, array index values,   
   sizes, lengths, extents, limits of the above, bits and masks. The   
   most common reason for using signed types is compatibility with   
   some system interface (most commonly, signed int).   
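   A minimal sketch of that convention, using a hypothetical helper   
   where the length, the index, and the count are all size_t:   
      
   ```c
   #include <stddef.h>

   /* Hypothetical example: count occurrences of a byte in a buffer.
      Everything that can never be negative -- the length, the loop
      index, the running count -- is an unsigned type (size_t). */
   size_t count_byte(const unsigned char *buf, size_t len, unsigned char target)
   {
       size_t count = 0;
       for (size_t i = 0; i < len; i++) {
           if (buf[i] == target)
               count++;
       }
       return count;
   }
   ```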
      
   There are cases where using an unsigned type rather than a signed   
   type requires more thought and care. To me that need is a net   
   positive rather than a negative.   
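   The classic case is a backward loop: with an unsigned index,   
   `i >= 0` is always true, so the naive form never terminates.   
   One common idiom (my illustration, not from the original post)   
   is to do the decrement inside the loop condition:   
      
   ```c
   #include <stddef.h>

   /* Sum an array back to front. Writing
      for (size_t i = len - 1; i >= 0; i--) would loop forever,
      because an unsigned i can never be negative. Decrementing in
      the condition visits len-1 .. 0 and handles len == 0 cleanly. */
   long sum_reversed(const int *a, size_t len)
   {
       long sum = 0;
       for (size_t i = len; i-- > 0; ) {
           sum += a[i];
       }
       return sum;
   }
   ```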
      
   > I recently activated the gcc -Wsign-conversion option on a codebase   
   > and received a lot of warnings. I started to fix them, usually by   
   > adding explicit casts. Is that the way, or is it better to avoid the   
   > warning from the beginning by choosing the right signed or unsigned   
   > type?   
      
   My experience with such warnings is they generate too many false   
   positives. I might turn on -Wsign-conversion every now and then   
   as a sanity check, but not all the time. The "cure" of changing   
   the code so the warnings go away is worse than the disease.   
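   A typical false positive (my sketch, not from the post): strlen()   
   returns size_t, but some interface wants an int, and the implicit   
   narrowing conversion warns even when the value obviously fits.   
   Silencing it means sprinkling casts like this everywhere:   
      
   ```c
   #include <string.h>

   /* With -Wsign-conversion, returning strlen(s) directly warns about
      the size_t -> int conversion. The explicit cast silences the
      warning; it documents an assumed-safe narrowing, nothing more. */
   int field_width(const char *s)
   {
       return (int)strlen(s);
   }
   ```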
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   