From: Keith.S.Thompson+u@gmail.com   
      
   pozz writes:   
   > After many years programming in C language, I'm always unsure if it is   
   > safer to use signed int or unsigned int.   
   >   
   > Of course there are situations where signed or unsigned is clearly   
   > better. For example, if the values could assume negative values,   
   > signed int is the only solution. If you are manipulating single bits   
   > (&, |, ^, <<, >>), unsigned ints are your friends.   
   >   
   > What about other situations? For example, what do you use for the "i"   
   > loop variable?   
      
   I usually use int (certainly for iterating over argc/argv), but   
   sometimes size_t. size_t is typically the most correct type for   
   representing sizes or counts of objects in memory, but int is a   
   bit easier to work with.   
      
   Both signed and unsigned types are (usually) used to model subranges   
   of the unbounded mathematical integers. If none of your operations   
   yields results outside the range of the type you're using, you're   
   safe -- but ensuring you don't stray outside that range can be easy   
   or difficult. If you're counting no more than a few thousand items,   
   int is fine. If you're counting bytes in a file or pennies in the   
   national debt, you have to think about just what range of values   
   you need to handle.   
      
   The thing about unsigned types is that they have a discontinuity at   
   0, which is much easier to run into than signed int's discontinuities

   at INT_MIN and INT_MAX. Subtraction in particular can easily yield   
   mathematically incorrect results for unsigned types (unless your   
   problem domain actually calls for modular arithmetic).
      
   If you start with a value of type size_t, say from sizeof or   
   strlen(), it's probably best to stick with size_t for any derived   
   values. My vague impression is that most things that should use   
   unsigned types should use size_t (there are of course plenty of   
   exceptions).   
      
   > I recently activated gcc -Wsign-conversion option on a codebase and   
   > received a lot of warnings. I started to fix them, usually by adding   
   > explicit casts. Is that the way, or is it better to avoid the warnings   
   > from the beginning by choosing the right signed or unsigned type?   
      
   Here's the description of -Wsign-conversion:   
      
   ‘-Wsign-conversion’   
    Warn for implicit conversions that may change the sign of an   
    integer value, like assigning a signed integer expression to an   
    unsigned integer variable. An explicit cast silences the warning.   
    In C, this option is enabled also by ‘-Wconversion’.   
      
   If you're converting between different types, it's often (but by no   
   means always) best to pick one type and use it consistently. I'm   
   suspicious of most casts; if I need a conversion, I find that C's   
   implicit conversions usually do the right thing.   
      
   --   
   Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com   
   void Void(void) { Void(); } /* The recursive call of the void */   
      