I’m running badblocks in parallel on 2 identical hard drives. The problem is that /dev/sda has already finished the tests without errors, but /dev/sdb is only at 363353920/488386583 (about 75%), also without any errors reported so far.
Is this a sign of HDD failure?
The tests are running from an emergency console and the server has dual E5520 CPUs. Also, both HDDs are members of the same RAID1 array, so the reads and writes were presumably identical.
Thanks for your answers.
It might be. Or might not.
It’s possible your disk is failing, but drives can automatically correct and remap problem sectors. Problems with cables or with the controller (on the disk side or the computer side) can also cause this. A small number of bad blocks does not necessarily mean the disk is failing, even though the probability of failure is higher.
It may also be that one disk (or one badblocks scan) was prioritized higher for some reason. If the test is run through a hardware controller, it might not drive both disks at the same speed.
If it’s a mission-critical system, or if warranty service is available, you should replace the disk anyway.
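If you want more data points before deciding, it’s worth comparing the two drives’ SMART attributes. A minimal sketch, assuming smartmontools is available in the emergency console and that the OS sees the RAID1 members directly as /dev/sda and /dev/sdb (a hardware RAID controller may need smartctl’s -d pass-through option):

# Compare reallocated/pending sector counts between the two drives
smartctl -A /dev/sda
smartctl -A /dev/sdb

# Overall health verdict from each drive's own self-assessment
smartctl -H /dev/sdb

# Optionally start the drive's built-in long self-test and check the log later
smartctl -t long /dev/sdb
smartctl -l selftest /dev/sdb

A growing raw value for Reallocated_Sector_Ct or Current_Pending_Sector on /dev/sdb would point at the drive itself; clean, identical attributes on both drives would point more toward cabling, the controller, or simple scheduling differences.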
The bad blocks you are getting may simply be the console not seeing all of the addressable HDD space: because you are in a RAID1 configuration, the array may reserve some space that reads back as bad blocks if your IDE/ATA array hardware is third-party, such as a PCI card.
If so, the bad blocks are the console failing to read the data through the card, or the OS only being able to see a certain portion of the disk space. This could well be the start of a disk error, but I would blame the ATA controller before you go buying new drives. Your HDD may also become crammed with cache and config files from the RAID card.
Please advise on your hardware status.
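As a quick way to gather that information from the emergency console, assuming lspci, smartctl and lsblk happen to be available there:

# Identify the controller the drives hang off (onboard vs. third-party card)
lspci | grep -iE 'raid|sata|ide'

# Drive identity and capacity as each device reports it
smartctl -i /dev/sda
smartctl -i /dev/sdb

# Block device sizes in bytes, to confirm both members report the same size
lsblk -b /dev/sda /dev/sdb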