Forums before death by AOL, social media and spammers... "We can't have nice things"
|    alt.os.linux.mint    |    Looks pretty on the outside, that's it!    |    30,566 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 29,968 of 30,566    |
|    Paul to Axel    |
|    Re: LM file transfer/copy issues    |
|    20 Dec 25 05:08:17    |
XPost: aus.computers
From: nospam@needed.invalid

On Sat, 12/20/2025 12:03 AM, Axel wrote:
> Lawrence D’Oliveiro wrote:
>> On Sat, 20 Dec 2025 11:42:19 +1100, Axel wrote:
>>
>>> Lawrence D’Oliveiro wrote:
>>>> On Sat, 20 Dec 2025 09:45:57 +1100, Axel wrote:
>>>>
>>>>> Lawrence D’Oliveiro wrote:
>>>>>> On Fri, 19 Dec 2025 07:32:04 -0500, Paul wrote:
>>>>>>
>>>>>>> That could be a bad SATA cable (or less likely, a bad SATA port
>>>>>>> on the motherboard).
>>>>>>>
>>>>>> File corruption would have been picked up by an rsync
>>>>>> verification pass.
>>>>>>
>>>>> I did get "copied with errors" messages at times, so then I would
>>>>> copy each folder or file within the folder one by one, and that
>>>>> fixed it.
>>>> Did you verify the copies afterwards?
>>> Yes, by byte count.
>> That’s pretty useless. No hashes?
>
> ???
>

Let us make two files:

   AAAAAABBBBCC
   AAAAAABBBBCD

They both have the same byte count. Now do

   sha256sum file1
   sha256sum file2

and the checksums are entirely different. This is also termed "using hashes".

It's why hashdeep was invented. Hashdeep can generate checksums
for all the files in a source tree, then be used to audit
the same files in a destination tree.

   sudo apt install hashdeep

   cd /home/felix
   hashdeep -c md5 -j0 -r Downloads > /tmp/audit.txt

   # Source tree is /home/felix/Downloads. It has our Golden Files.
   #
   # The path value might be relative or absolute, and the reason
   # I am using the crafty "cd" values is to be able to audit a
   # relative path for identical contents: both recursive -r runs
   # point to the same "directory name".
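For what it's worth, the same-byte-count example above can be reproduced
end to end in a few lines (the temp directory and file names here are
mine, not from the thread):

```shell
# Two files, identical size, one byte different at the end.
cd "$(mktemp -d)"
printf 'AAAAAABBBBCC' > file1
printf 'AAAAAABBBBCD' > file2

wc -c file1 file2        # same byte count: 12 each
sha256sum file1 file2    # two completely different hashes
```

A byte-count check passes here; the hash comparison does not, which is
the whole point of auditing copies with checksums rather than sizes.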
   cd /media/mint/WDBLUE

   # The copied files we hope are the same.
   # This is the destination we wish to audit for corruption.
   # The destination is our potentially unreliable copy, as
   # /media/mint/WDBLUE/Downloads, which we made with our rsync.

   hashdeep -c md5 -j0 -k /tmp/audit.txt -a -v -v -r Downloads > /tmp/audit-out.txt

The "md5" is the fastest hash supported by hashdeep.
The -j0 means "run the audit on a single thread, as this is a hard drive
and we really want the file list to be in predictable order".
The -k specifies an audit file to compare against.
The -a is "audit mode", and it expects -k to identify the audit file to use.
The double -v makes the output verbose.
The -r is for recursive descent below the Downloads tree.
The audit-out.txt should identify destination files with a problem.

That's the basic idea, but you can easily "fall into a hole"
while using hashdeep, and it requires a good deal of hand-holding.
(I use this on both Windows and Linux.) You should open both "audit.txt"
and "audit-out.txt" with a text editor and make sure the right things
happened.

There are more utilities than this for comparing file trees.
"Tripwire" would be an example of an old one.

   Paul

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
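If hashdeep feels fiddly, a plain byte-for-byte tree comparison with
diff -qr also catches corruption, at the cost of re-reading both trees
in full and with no saved audit file. A small sketch, using a throwaway
source/destination pair (with real data you would point diff at the
source tree and the copy, e.g. /home/felix/Downloads and
/media/mint/WDBLUE/Downloads):

```shell
# Build a tiny "source" tree and an identical "copy".
d="$(mktemp -d)"
mkdir -p "$d/src" "$d/dst"
printf 'golden' > "$d/src/a.txt"
cp "$d/src/a.txt" "$d/dst/a.txt"

# Silence (exit status 0) means the trees match exactly.
diff -qr "$d/src" "$d/dst" && echo "trees match"

# Flip one byte in the copy: diff now names the damaged file
# and exits non-zero.
printf 'golder' > "$d/dst/a.txt"
diff -qr "$d/src" "$d/dst"
```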
(c) 1994, bbs@darkrealms.ca