From: cr88192@gmail.com   
      
   On 12/24/2025 5:22 AM, BGB wrote:   
   > On 12/23/2025 11:54 AM, Waldek Hebisch wrote:   
   >> Michael S wrote:   
   >>> On Mon, 22 Dec 2025 18:41:10 +0100   
   >>> Janis Papanagnou wrote:   
   >>>   
   >>>> On 2025-12-22 18:13, James Kuyper wrote:   
   >>>>> On 2025-12-22 07:18, Janis Papanagnou wrote:   
   >>>>>> On 2025-12-22 12:44, James Kuyper wrote:   
   >>>>>>> On 2025-12-22 03:48, Michael Sanders wrote:   
   >>>>>>>> Is it incorrect to use 0 (zero) to seed srand()?   
   >>>>>>>>   
   >>>>>>>> int seed = (argc >= 2 && strlen(argv[1]) == 9)   
   >>>>>>>> ? atoi(argv[1])   
   >>>>>>>> : (int)(time(NULL) % 900000000 + 100000000);   
   >>>>>>>>   
   >>>>>>>> srand(seed);   
   >>>>>>>   
   >>>>>>> No, why would you think so?
   >>>>>>   
   >>>>>> There's number sequence generators that produce 0 sequences if   
   >>>>>> seeded with 0. ...   
   >>>>>   
   >>>>> The details of how the seed affects the random number sequence are   
   >>>>> unspecified by the standard. I personally would consider a   
   >>>>> pseudo-random number generator to be quite defective if there were   
   >>>>> any seed that produced a constant output.   
   >>>>   
   >>>> I wouldn't have mentioned that if there weren't a whole class of
   >>>> such functions that exhibit exactly that behavior by design. Have
   >>>> a look at PN- (Pseudo Noise-) generators and LFSRs (Linear Feedback
   >>>> Shift Registers). These are designed to produce random noise
   >>>> (bit patterns with good statistical distribution). With suitable
   >>>> generator polynomials they also produce sequences of maximum period;
   >>>> say, for N=31 a non-repeating sequence of length 2^N - 1. The one
   >>>> element missing from the sequence is 0 (which reproduces
   >>>> itself).
   >>>>   
   >>>> Technically you pick some bit values from fixed positions (depending
   >>>> on the generator polynomial) of the register and XOR the bits, shifting
   >>>> the result into the register. Here's an ad hoc example...
   >>>>   
   >>>> #include <stdint.h>
   >>>> #include <stdio.h>
   >>>>   
   >>>> int main ()
   >>>> {
   >>>>     uint32_t init = 0x00000038;
   >>>>     uint32_t reg = init;
   >>>>     uint32_t new_bit;
   >>>>     int count = 0;
   >>>>     do {
   >>>>         new_bit = ((reg >> 2) + (reg >> 4) + (reg >> 6) + (reg >> 30)) & 0x1;
   >>>>         reg <<= 1;
   >>>>         reg |= new_bit;
   >>>>         reg &= 0x7fffffff;
   >>>>         count++;
   >>>>     } while (reg != init);
   >>>>     printf ("period: %d\n", count);
   >>>> }
   >>>>   
   >>>>   
   >>>> Janis   
   >>>>   
   >>>>>> [...]   
   >>>>   
   >>>   
   >>> Note that the C Standard only requires that the same seed always
   >>> produce the same sequence. There is no requirement that different
   >>> seeds produce different sequences.
   >>> So, for the generator in your example, an implementation like the one
   >>> below would be fully legal. Personally, I wouldn't even consider it
   >>> particularly poor quality:
   >>>   
   >>> void srand(unsigned seed) { init = seed | 1; }
   >>>   
   >>> [O.T.]   
   >>> In practice, using an LFSR for rand() is not a particularly bright
   >>> idea for a different reason: an LFSR is a reasonably good PRNG for a
   >>> single bit, but not when you want to generate a group of 31
   >>> pseudo-random bits. In order to get 31 new bits, without predictable
   >>> repetitions from the previous value, you would have to do 31 steps.
   >>> That's slow! The process can be accelerated by generating several
   >>> bits at a time via lookup tables, but to get decent speed the table
   >>> has to be rather big, and using big tables in a standard library is
   >>> bad sportsmanship.
   >>>   
   >>> It seems that the overwhelming majority of C RTLs use Linear
   >>> Congruential Generators, probably because for the Standard library
   >>> compactness of both code and data is considered more important than
   >>> very high speed (not that on modern HW LCGs are slow) or the superior
   >>> random properties of Mersenne Twisters.
   >>   
   >> There is a paper "PCG: A Family of Simple Fast Space-Efficient   
   >> Statistically Good Algorithms for Random Number Generation"   
   >> by M. O’Neill where she gives a family of algorithms and runs   
   >> several statistical tests against known algorithms. Mersenne   
   >> Twister does not look good in tests. If you have enough (128) bits   
   >> LCGs do pass the tests. A bunch of generators with 64-bit state also
   >> pass the tests. So the only reason to prefer Mersenne Twister is
   >> that it is implemented in available libraries. Otherwise it is
   >> not so good: it has a large state and needs more execution time
   >> than alternatives.
   >>   
   >   
   > A lot can depend on what one wants as well...   
   >   
   > Fast/Simple:   
   > seed=seed*65521+17;   
   > val=(seed>>16)&32767;   
   >   
   > At first glance, this approach seems random enough, but these types of
   > RNGs have a kind of repeating pattern that can become obvious, say, if
   > using them to generate random noise images.
   >   
      
   This turns out to be more true of the one below than the one above,
   though both are still pretty weak.
      
      
   Usually, one very quick/dirty way to test a RNG is to generate an image   
   filled with random numbers:   
   If it looks like white noise, the RNG works OK;   
   If the image has repeating patterns or any other sort of obvious   
   structure, the RNG is broken.   
      
   Not a great test, but for many practical use cases it may be good enough.
      
   The above example shows evidence of "structural" features, implying the   
   RNG may not be particularly strong.   
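   As a concrete sketch of that eyeball test (using the multiply-add
   generator quoted above; the seed value and image size here are arbitrary
   choices), one can dump the generator's output bits into a PGM grayscale
   image and look for visible structure:

   ```c
   #include <stdio.h>

   /* Quick/dirty visual RNG test: fill a grayscale PGM image with RNG
      output and eyeball it for repeating patterns. Uses the simple
      multiply-add generator quoted above; one gray pixel per number. */
   int main(void)
   {
       unsigned seed = 12345;  /* arbitrary seed */
       int w = 256, h = 256;
       FILE *f = fopen("noise.pgm", "wb");
       if (!f)
           return 1;
       fprintf(f, "P5\n%d %d\n255\n", w, h);  /* binary PGM header */
       for (int i = 0; i < w * h; i++) {
           seed = seed * 65521 + 17;
           fputc((seed >> 16) & 255, f);  /* take 8 of the "good" bits */
       }
       fclose(f);
       return 0;
   }
   ```

   Opening noise.pgm in any image viewer makes the sort of structural
   features described above fairly obvious.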
      
      
   >   
   > Or, can also work OK (also fast/simple):   
   > seed=(seed<<1)^(~(seed>>7));   
   > val=(seed>>8)&32767;   
   >   
      
   Got around to testing; this particular one was kinda broken (it breaks
   down into repeating patterns).
      
      
   A tweak, say:
    seed=(seed<<1)^(~(seed>>7))^(seed>>21);
    val=(seed>>8)&32767;
   is much better...
      
   Also seems to work:   
    seed=(seed<<3)^(~(seed>>19))^(seed>>13);   
   Etc.   
      
      
      
   It seems the simple pattern with one XOR may be too little, and at least
   two XORs may be needed here. But this approach is still fragile and
   prone to break down into repeating patterns. It also seems to need a mix
   of prime and non-prime shift amounts for some reason.
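   For comparison, Marsaglia's xorshift32 also gets by with three shift/XOR
   steps and is known to have full period 2^32 - 1 over the nonzero 32-bit
   states (shift constants 13, 17, 5); a minimal sketch:

   ```c
   #include <stdint.h>

   /* Marsaglia's xorshift32: three shift-XOR steps, period 2^32 - 1
      for any nonzero seed. Zero is a fixed point, so never seed with 0
      (compare the LFSR discussion above). */
   static uint32_t xorshift32(uint32_t *state)
   {
       uint32_t x = *state;
       x ^= x << 13;
       x ^= x >> 17;
       x ^= x << 5;
       *state = x;
       return x;
   }
   ```

   In the thread's style, a 15-bit value could then be taken as
   val = (xorshift32(&seed) >> 8) & 32767;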
      
      
   A lot may depend on how quickly the random numbers need to be generated
   vs. the quality of the needed randomness (and whether or not the target
   machine has a fast integer multiply).
      
   Where, say, one downside of the more traditional "multiply 64-bit value
   by constant" RNGs is that performance suffers badly in the absence of a
   fast 64-bit multiply.
      
   One other tradeoff is whether the target machine is natively 32-bit or
   64-bit.
      
      
   In some cases, it may make sense to do a fast RNG with a simpler
   algorithm, but then, every N numbers, reseed the fast RNG with a number
   generated by a slower but stronger RNG.
      
   Where, say, a fast RNG may be needed for things like genetic algorithms.
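   A minimal sketch of that hybrid arrangement (names and the choice of N
   are hypothetical): a cheap shift/XOR generator on the fast path, reseeded
   from a slower 64-bit LCG every RESEED_INTERVAL draws:

   ```c
   #include <stdint.h>

   #define RESEED_INTERVAL 4096  /* arbitrary choice of N */

   static uint64_t slow_state = 0x9E3779B97F4A7C15ULL;  /* arbitrary nonzero */
   static uint32_t fast_state = 1;
   static int      draws_left = 0;

   /* Slower but stronger source: 64-bit LCG (Knuth's MMIX constants). */
   static uint32_t slow_rand(void)
   {
       slow_state = slow_state * 6364136223846793005ULL
                               + 1442695040888963407ULL;
       return (uint32_t)(slow_state >> 32);
   }

   /* Fast path: xorshift32, reseeded every RESEED_INTERVAL draws. */
   static uint32_t fast_rand(void)
   {
       if (draws_left == 0) {
           uint32_t s = slow_rand();
           fast_state = s ? s : 1;  /* xorshift must never hold state 0 */
           draws_left = RESEED_INTERVAL;
       }
       draws_left--;
       fast_state ^= fast_state << 13;
       fast_state ^= fast_state >> 17;
       fast_state ^= fast_state << 5;
       return fast_state;
   }
   ```

   The reseed amortizes the 64-bit multiply over RESEED_INTERVAL cheap
   draws, which is the tradeoff described above.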
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   