From: cr88192@gmail.com   
      
   On 10/20/2025 11:43 AM, Michael S wrote:   
   > On Mon, 20 Oct 2025 17:03:58 +0200   
   > pozz wrote:   
   >   
   >> After many years programming in C language, I'm always unsure if it   
   >> is safer to use signed int or unsigned int.   
   >>   
   >> Of course there are situations where signed or unsigned is clearly   
   >> better. For example, if the values could assume negative values,   
   >> signed int is the only solution. If you are manipulating single bits   
   >> (&, |, ^, <<, >>), unsigned ints are your friends.   
   >>   
   >> What about other situations? For example, what do you use for the "i"   
   >> loop variable?   
   >>   
   >   
   > I'd just point out that small negative numbers are FAR more common than   
   > numbers in range [2**31..2**32-1].   
   > Now, make your own conclusion.   
   >   
      
   Yeah, the distribution is lopsided, but what I have usually noted is 
   that, for values needing n bits, by the time n reaches 9 or 10, one 
   becomes more likely to encounter a negative value than a positive 
   value needing n+1 or more bits.
   
   Whereas, below this point, one is more likely to encounter a positive 
   value needing more than n bits than to encounter a negative value.
      
   So:   
    Positive values between 0 and 511: very common;
    Negative values:
      Less common than values under +512;
      More common than values over 1024.
      
   There is typically a large cluster of small positive numbers near 0,   
   with a very steep falloff as numbers get larger.   
    So, for example:   
    1 is most common;   
    2 is less common than 1;   
    3 is less common than 2;   
    ...   
    Like, where the probability of seeing N is seemingly 1/(N+1).   
      
   Outside of this main cluster, which largely falls to "very little" by   
   512, there are a few big spikes up near a few locations:   
    n = 2^15 and 2^16 (Best covered by a 17-bit sign-extended value)   
    n = 2^31 and 2^32 (Best covered by a 33-bit sign-extended value)   
    n = 2^63   
      
   If expressing values as fixed-width binary fields, there is often sort   
   of a "no man's land" for values between 34 and 61 bits where one is   
   unlikely to find a whole lot of anything.   
      
   By contrast, between 18 and 30 bits there are still a handful of 
   values spread across the range, usually in small counts (so this 
   space isn't really as empty as the gap starting at 34 bits).
      
      
   So, say, being able to represent a value larger than 33 bits without 
   going all the way to 64 is not all that useful.
      
   And, at this upper end, most of what one encounters tends to be things   
   like double-precision values and EIGHTCC style values.   
      
      
   And, statistically speaking, int32 is likely to hold the vast majority   
   of integer values one is likely to encounter.   
      
      
   A lot is likely to depend on what one is looking at (this is mostly for   
   a distribution of literal values in my compiler stats).   
      
      
   Ironically, because of this distribution, CPU instructions with only 
   5- or 6-bit fields for integer immediate values aren't totally 
   useless.
      
      
   >   
   >> I recently activated gcc -Wsign-conversion option on a codebase and   
   >> received a lot of warnings. I started to fix them, usually   
   >> expliciting casting. Is it the way or is it better to avoid the   
   >> warning from the beginning, choosing the right signed or unsigned   
   >> type?   
   >>   
   >>   
   >   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   