Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.lang.c    |    Meh, in C you gotta define EVERYTHING    |    243,242 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 243,235 of 243,242    |
|    David Brown to James Kuyper    |
|    Re: srand(0)    |
|    21 Feb 26 11:09:12    |
XPost: sci.math.num-analysis
From: david.brown@hesbynett.no

On 20/02/2026 22:01, James Kuyper wrote:
> On 2026-02-19 14:47, David Brown wrote:
> ...
>> How likely is it that someone would guess a formula that happened to
>> generate the decimal digits of pi, without more knowledge than a part
>> of the sequence? I don't believe it is possible to quantify such a
>> probability, but I would expect it to be very low.
>
> I'm thinking of the kind of software that looks for patterns in
> something, such as compression utilities. A compression utility
> basically converts a long string of numbers into a much shorter string
> that can be expanded by the decompression utility to recover the
> original pattern. If you look at the algorithms such code uses, you
> realize that they do not attempt to recreate the process that
> originally generated the long string, they just, in effect,
> characterize the resulting sequence.
>

One of the characteristics of a good PRNG is that its unpredictability
and its statistical properties (typically it aims to resemble white
noise, though other distributions are sometimes useful) make its output
incompressible by generic algorithms. Since the PRNG's sequence is
generated by an algorithm, it is of course always possible to re-create
that algorithm as a "compressed" form of the sequence. But a generic
algorithm will never manage it.

Indeed, you can /define/ a random sequence as a series x(i) such that
for any given compression algorithm Z, there is an integer n such that
the Z-compressed version of x is larger than x itself for every
sequence length of n or more.

And for any given compression algorithm Z, you can find a PRNG
algorithm whose output Z cannot compress (after a possible initial
segment).
I haven't dug through the details, but I am confident that you could
show this with a diagonalisation argument over Turing machines, or
something akin to the halting-problem proofs.

Or you can just try it yourself:

$ dd if=/dev/urandom of=rand bs=1M count=1
$ cp rand rand1
$ cp rand rand2
$ gzip rand1
$ bzip2 rand2
$ ls -l
-rw-rw-r-- 1 david david 1048576 Feb 20 09:12 rand
-rw-rw-r-- 1 david david 1048760 Feb 20 09:12 rand1.gz
-rw-rw-r-- 1 david david 1053414 Feb 20 09:12 rand2.bz2

Both "compressed" files come out larger than the original megabyte.

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)    |
(c) 1994, bbs@darkrealms.ca