home bbs files messages ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   alt.os.linux.ubuntu      I preferred Xubuntu, seemed a bit faster      134,474 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 134,141 of 134,474   
   Paul to Jim   
   Re: New reason for Disk Management ("Shr   
   25 Jan 25 23:14:23   
   
   XPost: alt.comp.os.windows-10   
   From: nospam@needed.invalid   
      
   On Sat, 1/25/2025 9:06 PM, Jim wrote:   
   > On 25/01/2025 15:55, Alan K. wrote:   
   >> More reason to keep a spare Win7 machine /disk around huh.   
   >>   
   >   
   > What is Linux version of disk-check? In windows it takes hours as stated   
   > above but what about Ubuntu?? What command to use to check this?   
   >   
   > I have a 2 TB disk that is slow and showing signs of wear & tear so I   
   > tried to clone it on a new SSD 1TB disk. Acronis states that it is   
   > possible to clone on to a smaller disk as long as the actual data is not   
   > more than the size of the target disk. The source has about 350GB of   
   > data (out of 2TB disk size) but this can't be cloned because the disk   
   > has bad clusters according to Acronis. So how do I correct this in Linux   
   > ubuntu?   
   >   
   > x-posted to ubuntu as well just in case they can respond quickly.   
      
   Unfortunately, you CAN actually clone $BADCLUS, assuming the underlying
   sectors do not blow CRC errors on a read. This is one of the problems with
   the $BADCLUS concept: it marks off storage areas, yet during cloning,
   few if any utilities handle $BADCLUS properly. Cloning under any normal
   conditions would mark off areas of the new 1TB disk as unusable, when
   there is nothing wrong with them.
      
   You need *some* kind of utility that maps what you know about the bad
   clusters to the existing NTFS $MFT file entries, to figure out what got
   damaged when the $BADCLUS were mapped out in the first place. You're
   cloning a disk with (virtual and physical) damage to one or more files
   on the disk. You have to determine which files got hit.
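   As a sketch of that mapping step (the LBA and partition geometry below are
   made-up example values, not from Jim's disk): a bad sector reported by the
   drive can be converted to a partition-relative NTFS cluster number, which
   "ntfscluster" from Ubuntu's ntfs-3g tools can then trace to an $MFT entry.

```shell
# Convert a bad LBA sector to a partition-relative NTFS cluster number.
# Assumed example geometry: partition starts at LBA 2048, 512-byte
# sectors, 4 KiB clusters -- check yours with "sudo fdisk -l /dev/sda".
BAD_LBA=1234567      # example value, e.g. from the drive's SMART error log
PART_START=2048      # first LBA of the NTFS partition
CLUSTER=$(( (BAD_LBA - PART_START) * 512 / 4096 ))
echo "cluster $CLUSTER"
# Then, to name the file owning that cluster (hypothetical device name):
#   sudo ntfscluster -c "$CLUSTER" /dev/sda2
```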
      
   Previously (in an earlier posting some time ago), I had a disk with four
   CRC errors but no $BADCLUS. The file system had not discovered anything
   about the files at that point. I was careful not to let Windows see what
   was going on, and part of my sequence there was done from the Linux side.
   Of the four CRC errors on the hard drive, two were in OS files on the
   disk, and two were in white space (so their handling is less of a problem
   later on). Zero errors were in my Documents folder.
      
   My first step there was to acquire replacement files for the two OS ones,
   and write those to the disk (in a different location).
      
   The four errors were "reallocated" by the disk drive, by attempting writes
   to the sectors. The disk uses a spare sector, and keeps a table of
   which sector is mapped out and which spare is currently taking its
   place. The cache DRAM chip on the hard drive holds that map while the
   disk is running, because they don't want the disk constantly
   rattling, looking up the map off the platter for each sector that
   needs mapping info.
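   The remap only happens on a write, which is why one common trick is to
   overwrite a known-bad sector directly, prompting the drive to swap in a
   spare. (The sector number below is a made-up example; this destroys the
   old contents of that sector, so only do it once you know which file owned
   it.)

```shell
# Overwrite one known-bad 512-byte sector so the drive reallocates it
# to a spare. WARNING: destroys that sector's contents; the sector
# number here is only an example -- take yours from the SMART error log.
sudo dd if=/dev/zero of=/dev/sda bs=512 count=1 seek=1234567 conv=fsync
```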
      
   Your disk: it sounds like you have run out of spare sectors at the
   physical level. Using "smartctl" from the "smartmontools" package
   can give you some information on physical disk health.
      
   You can use "ddrescue" from the "gddrescue" package to copy a defective 2TB
   disk drive to a new 2TB drive. That will recover all the data which is
   readable. The mapfile ("rescue.log" in the commands below) kept across the
   repeated copy attempts gives you a final summary of which sectors could
   not be copied. Then you have to do your best on the "new disk" to manage
   the file content there. If the errors happen to fall in $BADCLUS, then you
   won't be removing $BADCLUS until you've done a CHKDSK /b K: from Windows.
   Even a Windows 7 installer DVD is sufficient to do that. Maybe you could
   do it from a Hirens disc, but I'm not a Hirens user, so I don't know what
   is on that disc. Some home-made discs based on WinPE (Windows Preinstall
   Environment) are available out there, which allow some Windows utility
   commands to run without "having a Windows PC" to do them. They can be run
   from the live media supported by the WinPE files. That's why a Win7
   installer DVD works for this -- it's based on WinPE, there is a cmd.exe
   shell onboard, and CHKDSK is there to be used.
      
   Letting your disk degrade to the point that stuff ends up in $BADCLUS
   is an avoidable "own goal". If you're going to operate a computer, check
   your hardware once in a while, and that will help prevent "a very
   complicated repair/restoration recipe" from being needed. At the very least,
   you'll be weighing the value of the different paths available to you.
   Trying to repair a flat tire by drilling more holes in it won't work
   (sick disks are not going to "work with you" for a successful conclusion).
   Additional errors will crop up if you keep writing to the sick disk.
      
       sudo smartctl -a /dev/sda   # What is my disk health ? "Tell me a story"   
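   The spare-sector bookkeeping described above shows up in the attribute
   table of that output. A quick filter for the three counters that matter
   here (attribute names as smartctl prints them for ATA drives):

```shell
# Out of spares? These counters tell the story: Reallocated_Sector_Ct
# (spares already used), Current_Pending_Sector (bad reads waiting for
# a rewrite), Offline_Uncorrectable (sectors the drive gave up on).
sudo smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
```

   Non-zero raw values in the right-hand column mean the drive is already
   remapping; a rising Current_Pending_Sector count is the bad sign.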
      
   As a result of that, I would not use Acronis (and it says it's not
   going to accept the challenge anyway). I would start with a new 2TB
   HDD in hand, and use the gddrescue package to move the data over.
   Then you do the maths to determine which file(s) are damaged.
   Since the new disk will have no CRC errors, every flat-tire patch
   applied to the new disk is going to work. You could have lost
   some user data files (they could have a $BADCLUS hole punched in them).
   If some OS files are damaged on a Windows OS partition, there are
   DISM and SFC as options.
      
   sudo apt install gddrescue   # perhaps this installs ddrescue in /sbin or /usr/bin ???

   sudo ddrescue -f -n /dev/sda /dev/sdb /root/rescue.log   # Live media, first-pass copy, old to new
                                                            # Keep the rescue.log on a USB stick. Don't lose it!
      
   # Examine the LOG file for details. A large log file means
   # there are many, many CRC errors.

   gedit /root/rescue.log   # text file, terse format, techno-babble
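   The techno-babble decodes easily enough: in a ddrescue mapfile, each data
   line is "pos size status", and a "-" status marks an area that could not
   be read. So you can pull out just the unrecovered areas, and count them
   (path assumes the rescue.log from the commands above):

```shell
# Show only the unrecovered ("bad") areas of the ddrescue mapfile,
# then count them. Data lines look like:  0x00001000  0x00000200  -
grep -E '^0x[0-9a-fA-F]+[[:space:]]+0x[0-9a-fA-F]+[[:space:]]+-' /root/rescue.log
grep -cE '^0x[0-9a-fA-F]+[[:space:]]+0x[0-9a-fA-F]+[[:space:]]+-' /root/rescue.log
```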
      
   # Now, the second pass reads the log and concentrates only on the
   # not-yet-captured sectors. After a couple of these runs, there will
   # be no further progress and the process stops. "Content with damage"
   # is now on /dev/sdb.

   sudo ddrescue -d -f -r3 /dev/sda /dev/sdb /root/rescue.log
      
   But this is not a trivial exercise, and I doubt I would have   
   the stamina to finish one involving this much damage. As it was,   
   I probably sat looking at a screen for 20 hours, just tuning   
   the fucking sequence to make this one problem go away :-)   
   That's what I mean by stamina. You have to be a very determined   
   individual to finish one of these.   
      
   This is a brief overview of the sequence. Have in hand your
   2TB bad drive and a brand-new 2TB good drive. I generally recommend
   people keep two empty drives handy for recovery work, as one
   drive may contain your "golden" recovered copy, while a second
      
   [continued in next message]   
      
   --- SoupGate-DOS v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca