Hardware :: Possible To "Stress Test" Hard Drive To Fail?
Nov 14, 2010
This is a general hardware question, but I trust the Linux community to be more clueful than those *other* OS users. I just picked up a new external hard drive. Back in the 8-bit era of the 1980s, there was a general consensus (which may have been an urban legend) that if a piece of hardware were faulty and going to fail, it would fail within the first $NUMBER hours of use, where $NUMBER varied from 100 to 500 depending on who you talked to.
While the above *seems* to make sense, does it actually? Be that as it may: if I subject the new hard drive to big buckets of reads and writes and it does not fail, does anyone have an opinion on whether that is any sort of guarantee that the drive is less likely to be faulty, and that I can feel more secure about its ability not to melt into a pile of slag on my desk?
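The intuition does have some basis: drive failure rates are often described by a "bathtub curve", with elevated infant mortality early in life, so a burn-in pass can flush out early failures — though a clean pass is never a guarantee. A minimal burn-in sketch, assuming smartmontools and e2fsprogs are installed and /dev/sdX stands in for the new drive:

```shell
# Burn-in sketch for a NEW, EMPTY drive -- badblocks -w DESTROYS all data.
# /dev/sdX is a placeholder; confirm the device name with `lsblk` first.
DEV=/dev/sdX
sudo badblocks -wsv "$DEV"        # write+verify four patterns over the full surface
sudo smartctl -t long "$DEV"      # then start the drive's extended self-test

# The attributes that predict failure best are the reallocated/pending
# sector counts -- any nonzero growth is a bad sign:
sudo smartctl -A "$DEV" | awk '/Reallocated_Sector_Ct|Current_Pending_Sector/ {print $2, $10}'
```

After the burn-in, keep an eye on those two counts over the first weeks of real use; growth there tells you more than any single stress run.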
Just put an old desktop to work by installing Puppy on it. I have another HDD that I don't think was working very well, so I'm going to plug it in and test it, but I don't know what to use to stress it.
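For a drive you merely suspect, non-destructive checks are enough to start with. A sketch assuming smartmontools and e2fsprogs are available (Puppy carries both in its repositories), with /dev/sda as a placeholder for the suspect drive:

```shell
# Non-destructive health checks; confirm the device name with `fdisk -l`.
sudo smartctl -t short /dev/sda    # ~2-minute built-in self-test
sudo badblocks -nsv /dev/sda       # non-destructive read-write surface scan

# The drive's own pass/fail verdict:
sudo smartctl -H /dev/sda | awk -F': ' '/overall-health/ {print $2}'   # PASSED or FAILED
```

If the health check already reports FAILED, or the self-test log shows read failures, the drive isn't worth trusting with data.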
After turning off my CPU's overclocking (once I actually turned it off, I realized it hadn't made a noticeable difference) and lowering my PC's fan speeds, I would like to stress test the GPU/CPU, and obviously also check the temperatures while doing so. The fans were previously so loud you couldn't even have a phone conversation in the same room as the computer, and I only now realized how much I can actually lower them... but I don't want to lower them too much, obviously.
So, for these reasons, I would like to stress test my CPU/GPU while monitoring the temperatures. I need software for doing so; Linux or Windows doesn't really matter, as I have both. I also need to know what minimum/maximum temperatures are okay.
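A common free pairing on Linux is stress-ng for load and lm-sensors for temperatures (Prime95 plus HWMonitor is the usual Windows equivalent); glmark2 or the Unigine benchmarks can load the GPU. As a rough rule of thumb, most desktop CPUs are comfortable under load below about 80 °C and will throttle or shut down somewhere past 90–100 °C, but check the published limit for your exact model. A sketch, with Debian/Ubuntu package names assumed:

```shell
# One-time setup: install the tools and let lm-sensors find your chips.
sudo apt-get install stress-ng lm-sensors
sudo sensors-detect                      # answer the prompts to probe for sensor chips

stress-ng --cpu 4 --timeout 10m &       # load 4 CPU workers for 10 minutes
watch -n 2 sensors                      # live temperature readout, refreshed every 2 s

sensors | awk '/^Core/ {print $2, $3}'  # just the per-core readings
```

Run the monitor in a second terminal while the load runs, and watch whether temperatures plateau at a safe level with the fans at their new speeds.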
I noticed a bunch of weird, apparently hard-drive-related error messages on my Linux server:
Code: May 16 19:07:38 ghost kernel: [ 3495.452698] ata3.00: configured for UDMA/33
May 16 19:07:38 ghost kernel: [ 3495.452706] ata3: EH complete
May 16 19:07:40 ghost kernel: [ 3497.380640] ata3.00: configured for UDMA/33
I don't know if this could be an indication that my hard drive(s) are about to fail. Can someone tell me if there's a way to test the drives or understand what's causing this error?
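Those messages by themselves are the kernel's ATA error handler finishing a recovery, and a modern drive negotiated down to UDMA/33 has often been downshifted after CRC errors — which points at a bad or loose cable or marginal power at least as often as at a dying drive. A way to dig further, assuming smartmontools is installed and /dev/sda stands in for the drive on ata3:

```shell
# Look for the errors that *preceded* the "EH complete" recoveries:
dmesg | grep -iE 'ata[0-9]+(\.[0-9]+)?: (exception|failed|error)' | tail -n 20

# Cable/power problems show up as a rising UDMA CRC error count, while a
# dying platter shows up as reallocated or pending sectors:
sudo smartctl -A /dev/sda | grep -iE 'reallocated|pending|crc'
```

If only the CRC count is climbing, reseat or replace the SATA cable before condemning the drive.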
I have a Seagate SATA hard drive that was running a MythTV distro. It had three partitions: EXT3, swap, and XFS. I started getting I/O errors on boot and saw error messages on both the EXT3 and the XFS partitions. I also heard some clunking sounds from the drive while it was reading, so I thought, hell, the drive is dead.
I have since replaced the drive, and everything is back up and running on the replacement. I figured the Seagate drive was toast, but I just want to verify it with some sort of tool. I have the drive in a Vantec NexStar external case (SATA->USB) and found there was a tool called badblocks. I ran badblocks on it, which took about 24 hours and found no bad blocks. I also didn't notice any clunking sounds while it was running.
I ran:
Code: badblocks -n -v /dev/sdb
Is badblocks a proper test to run on external hard drives, or was I just wasting my time? Is there any way I can really test it without pulling it out of the case and connecting it to the motherboard over SATA?
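You weren't wasting your time: badblocks reads (and, with -n, rewrites) the same sectors whether the drive sits behind a USB bridge or on a SATA port, so the surface scan is just as valid. What a USB bridge often hides is SMART data, and smartctl's SAT pass-through usually gets around that. A sketch, using /dev/sdb as in your command:

```shell
# Ask for SMART data through the USB-SATA bridge (SAT pass-through):
sudo smartctl -d sat -a /dev/sdb

# If the drive's contents are expendable, the destructive write-mode test
# is far more thorough than -n: it writes and verifies four patterns
# across every sector. This ERASES the drive.
sudo badblocks -wsv /dev/sdb
```

A clean write-mode pass plus zero reallocated/pending sectors in the SMART output is about as clean a bill of health as software can give; the earlier clunking may have been a loose connection rather than the drive itself.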
I need software that can test my Apache server under high traffic — for example, simulate 1000 user requests to my server and give me good statistics. I found one product, but it is not free. If anyone knows of a program like this, I'd be happy to hear about it.
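ApacheBench (ab) is free and usually already at hand, shipping alongside Apache itself (apache2-utils on Debian/Ubuntu, httpd-tools on Red Hat); siege, httperf, and JMeter are free alternatives with richer statistics. A sketch, with the URL as a placeholder:

```shell
# Fire 10000 requests at the server, 1000 at a time, then pull the
# headline number out of ab's report:
ab -n 10000 -c 1000 http://your.server/path/ \
  | awk '/Requests per second/ {print $4}'   # mean requests per second
```

At 1000 concurrent connections you may bump into the client machine's open-file limit; check `ulimit -n` before blaming the server.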
My original intention was to install Ubuntu onto an external hard drive so I could use it on different computers. I first downloaded and burned a copy of Ubuntu 10.10 and booted my Acer laptop from it. I then plugged in my external hard drive and installed Ubuntu onto it by partitioning the external drive. After I did that, I booted from the external hard drive on my laptop, and it ran the new installation. However, when I tried to boot it from a different computer, it said something like "partition not found." So the next time, I tried to install Ubuntu onto the external hard drive without partitioning it, using the entire drive. This is what started to cause problems.
Now when I start up my laptop without the external hard drive plugged in, I get "error: no such device: xxxx..... grub rescue>". When I start it up with the hard drive plugged in, a GRUB menu comes up with the new installation, my old Ubuntu installation, and my old Windows Vista.
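What likely happened is that the second install wrote its GRUB to the internal disk's boot record while keeping its files on the external drive, so the internal disk alone can no longer find them. A common repair, sketched with placeholder device names (check `lsblk -f` for the real ones), is to boot the live CD and reinstall GRUB on the internal disk:

```shell
# From the Ubuntu live CD/USB -- device names below are placeholders.
PART=/dev/sda1                 # internal root partition
DISK=${PART%[0-9]}             # strip the partition number -> /dev/sda
sudo mount "$PART" /mnt
sudo grub-install --boot-directory=/mnt/boot "$DISK"
```

After that, the laptop should boot its internal system on its own, and the external drive carries its own bootloader with it.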
I got a Dell Inspiron 1501 laptop with an 80GB SATA drive. What is the best way to add data storage space for someone who loves to have multiple operating systems at hand? Note: I use mostly Linux, so I won't need to change my laptop for many years, maybe...
My parents bought a new hard drive for a laptop that I've owned for several years. It's much larger than the current one, so I plan on splitting it up to dual boot with Ubuntu. I have no problem with partitioning a drive (I always keep a LiveCD handy), but my question is this: how can I go about moving the existing partition to the new drive? This is a laptop, so I can't simply plug the new drive into another slot.
Also, even if I manage to move it, will Windows still work on the new drive in a larger partition? I've had this laptop for quite a while, and I've lost the recovery discs that came with it a long time ago. I also have a lot of software without CDs to reinstall them with. This makes not reinstalling Windows a high priority.
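The usual trick is a cheap USB enclosure or USB-SATA adapter: put the new drive in it, clone the old disk onto it byte for byte, swap the drives, then grow the Windows partition into the extra space. A sketch from a live CD, with placeholder device names — getting if= and of= backwards destroys your data, so triple-check with `lsblk` first:

```shell
# Whole-disk clone: old internal disk -> new disk in the USB enclosure.
# if= is the source, of= is the destination (placeholders!).
sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync status=progress
```

Because the clone is byte-for-byte, Windows and all your CD-less software generally come along unchanged. Afterwards, enlarge the NTFS partition with GParted from the live CD and let Windows run its chkdsk on first boot.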
Trying to install Fedora 12 using the 6 CDs on an older x86 box. The problem is that when detecting my hard drive, Fedora 12 recognizes it as an sda hard drive instead of hda. I have no SCSI connected to my computer whatsoever; it's an old-fashioned PATA Western Digital hard drive. If I proceed with the install, Fedora 12 only installs 200MB of the OS, from the first CD only. No options for additional software or anything.
I have a laptop with only 30GB of storage, and I want to install Lubuntu in VirtualBox, but Lubuntu needs 5GB of storage space, which I don't have. Could I use an external 160GB hard drive to act as the hard drive for the virtual machine without affecting the files that are already on the external drive?
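Yes — a VirtualBox guest's disk is just an ordinary file (a .vdi by default), so you can create it on the external drive, and existing files there are untouched as long as there is free space. A sketch with a placeholder path (on VirtualBox releases older than 5.0, the command is `createhd` rather than `createmedium`):

```shell
# Create a 5 GB (5120 MB) dynamically allocated virtual disk on the
# external drive; "dynamic" means it only consumes space as the guest
# actually writes data.
VBoxManage createmedium disk \
  --filename /media/external/Lubuntu/Lubuntu.vdi \
  --size 5120 --format VDI
```

Then attach the .vdi to the VM in the VirtualBox GUI (or with `VBoxManage storageattach`); the VM will simply fail to start when the external drive isn't plugged in.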
I bought a new 1.5TB SATA drive (a WD15EARS), and I intend to test it thoroughly for bad sectors and other issues before it becomes part of a server RAID array.
Are there any good testing tools out there that perform read/write/compare operations over the whole disk surface?
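badblocks in write mode is exactly that: it writes a pattern across the whole surface, reads it back, and compares, repeating for four patterns (0xaa, 0x55, 0xff, 0x00). It is destructive, so run it before the drive joins the array. A sketch with /dev/sdX as a placeholder; -b 4096 matches the WD15EARS's 4 KB physical sectors and speeds the scan up:

```shell
sudo badblocks -wsv -b 4096 /dev/sdX   # full write/read/compare, four patterns
sudo smartctl -t long /dev/sdX         # follow up with the drive's extended self-test
```

Check `smartctl -a` once both finish: any reallocated or pending sectors on a brand-new drive are reasonable grounds for an RMA rather than a RAID slot.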
I recently bought a 320GB Transcend external hard disk, and it was working fine a few days back. Earlier I could copy from and to the hard disk without any issue. I don't know what happened, but now I am not able to write any files to the external hard disk. It is not an NTFS-formatted device. Here is some of the output from the terminal:
Code: sundar@sundar-sundar:~$ fdisk -l
Disk /dev/sda: 120.0 GB, 120034123776 bytes
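A drive that suddenly refuses writes has often been remounted read-only by the kernel after filesystem errors — an unclean unplug is the classic cause on FAT-formatted externals. A way to check and repair, assuming the external disk shows up as /dev/sdb1 with a FAT filesystem (both placeholders):

```shell
mount | grep sdb1              # "(ro,..." in the flags means mounted read-only
dmesg | tail -n 30             # the filesystem/USB errors that triggered it show here

sudo umount /dev/sdb1
sudo fsck.vfat -a /dev/sdb1    # repair a FAT filesystem, then remount it
```

If the filesystem is something else, substitute the matching fsck tool; and if every write attempt errors out even after repair, the drive itself may be failing.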
I have been trying to install CentOS on my HP servers, and when I get to partitioning my hard drives, the OS does not detect any. I have 4 SCSI drives and, I believe, an integrated Smart Array controller.
Is there a way to write/unpack a .qcow2 hard disk image directly to a real hard drive in Linux? (I know it's possible to unpack the .qcow2 to .raw and then dd it to the drive, but I'd like to skip the .raw since it's large.)
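qemu-img can write raw output straight to a block device, so the intermediate .raw file can be skipped entirely. A sketch with placeholder names — everything on the target device is overwritten:

```shell
# Convert the qcow2 image and write the raw bytes directly to the disk:
sudo qemu-img convert -O raw disk.qcow2 /dev/sdX

# Sanity check: the image's virtual size must not exceed the target device.
qemu-img info disk.qcow2 | grep 'virtual size' | grep -oE '[0-9]+ bytes'   # image size
sudo blockdev --getsize64 /dev/sdX                                         # device size
```

The virtual size is what matters here: a 2 GB qcow2 file can easily unpack to a 20 GB raw disk.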
I have a SATA drive that worked fine. Then I installed two more hard drives into my system. When these hard drives are installed, if I try to access the SATA drive in Linux, it will start lightly clicking and then the drive will become unavailable. If I power on the machine without the other two hard drives then it works fine. What could be causing this to happen? I don't think it's heat because the two hard drives are far away from the SATA drive.
It's basically an old SATA hard drive with a Windows XP partition that I was trying to sell. When my computer does its BIOS checks, the drive doesn't pass the SMART test (though I can boot anyway), but I can't boot Linux in any way with this hard disk connected (I even tried live CD distros, like Parted Magic). I can still boot the XP partition on the disk, although I guess it's pretty close to not being able to. Is there any way to "fix" this hard drive?
The disc I obtained from a seller runs fine in live mode (no installation) on my Windows XP machine. I liked what I saw. However, the live CD won't run on my Linux PC. The Linux PC currently has Kubuntu 9.1 installed; previous to that I had Mint 8 installed, but again the Fedora 12 CD would not run. After the initial Fedora startup screen, I get a bunch of text, ending with a screen of about 30 lines with "OK" in green to the right of each line. At the bottom is a blinking cursor. That is where my machine's live run of the CD ends.
Obviously, if I cannot get the CD to run live on the Linux PC, I'm not going to be able to install it. (I should add that Ubuntu, Kubuntu, Mint 8, and PCLinuxOS all ran live successfully on this machine, and three of them installed successfully.) Perhaps Fedora is at war with Ubuntu et al.
I have been attempting to load the latest version of Shotwell, only to find that it is not available on 10.04. Consequently, I attempted to install 10.10 and found that it would not load from my DVD. Similarly, 11.04 fell by the wayside. I tried downloading both OSs again and again, but each time they failed to load.
Having come across a reference to TestDrive, I thought I would give it a go and followed the instructions for it. It appeared to download OK but then would not display the OS. I decided to remove it, but despite it appearing to be removed (going by the information displayed in the terminal), it is still loitering on the menu.
Since that failure — I think it could be related — I am now continuously being asked to authenticate the keyring, something I have never needed to do since my initial install.
I am trying to move a whole bunch of files from a partition on one hard drive to the same-named partition on another hard drive. Can I mount both partitions (same name, different drives, i.e. /data on /dev/hda1 and /data on /dev/hdb1) and copy the files, then shut down the server, take out /dev/hda1, and boot up with the new drive and its /data contents?
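Yes — mount points are arbitrary, so the second drive's partition just needs a different mount point while both drives are attached. A sketch with placeholder device names:

```shell
# Two filesystems can't share one mount point, so mount the new
# partition somewhere temporary:
sudo mkdir -p /data /mnt/newdata
sudo mount /dev/hda1 /data
sudo mount /dev/hdb1 /mnt/newdata

# Copy everything, preserving permissions, ownership, and symlinks:
sudo cp -a /data/. /mnt/newdata/
```

After the copy, update /etc/fstab so /data points at the new drive's partition, shut down, and pull the old disk.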
I have a problem when I use TestDrive: it doesn't boot into the operating system. When I log in, it just looks like DOS. What is wrong? Shouldn't TestDrive let me see how it would look when I use the live CD?