I have a massive ZFS array on my fileserver. Whenever a disk reports bad sectors to smartmontools, I order a replacement and shelve the failing one.
And by "shelving the failing one", I mean that I give it a low-level format if applicable, or a destructive badblocks run to possibly reclaim spare sectors to replace the bad ones, then use it to dump my DVDs (and lately BluRays) on, so that I can use it with my HTPC and bring it with me when going to my friends to watch movies. It's just a really easy and portable way to watch movies with XBMC. I have the stuff on pressed discs already, so I'm not dependent on the drives' reliability, and the dying drive just gets a hospice life serving as quick-access media storage. Keeping in mind Google's report that drives are 39x more likely to die within 60 days of their first SMART error, that window stretches out considerably here, since these drives mostly sit on their shelves and are only plugged into the SATA bay once or twice a year.
I'm just saying this to make clear that I'm not confused about these drives dying, and I'm not looking to prolong their lives ;)
So. Sometimes these drives, after a badblocks run, simply claim fresh sectors from the spare pool, but sometimes there aren't any left, and I face the fact that there are bad sectors in my FS. That's not a problem with a number of Linux filesystems, as mkfs.* often takes a badblocks list as input. But seeing as I sometimes bring a drive or two to my girlfriend's (Mac) or one of my friends (usually Windows), I've decided to use NTFS for these things. Up until now, when a drive had unrelocatable bad sectors, I've just written data to it, re-read it, and moved the files that came back bad into a "BAD_SECTOR_FILES" folder on the drive.
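For the Linux-filesystem case I mentioned, the round trip looks roughly like this. It's sketched here against a scratch image file rather than a real disk; on actual hardware you'd point both commands at your real /dev/sdX, and the block size handed to badblocks has to match the one mke2fs will use, or the block numbers won't line up:

```shell
# Scratch image standing in for the dying drive (use /dev/sdX for real).
dd if=/dev/zero of=/tmp/dying.img bs=1M count=0 seek=64 status=none
# Scan for bad blocks (read-only here; -w runs the destructive write test)
# and record them at the block size the filesystem will use.
badblocks -b 4096 -o /tmp/badlist.txt /tmp/dying.img
# Hand the list to mke2fs so those blocks are never allocated.
mkfs.ext3 -q -F -b 4096 -l /tmp/badlist.txt /tmp/dying.img
```

The scratch image obviously has no bad sectors, so the list comes out empty; on a dying drive the -w write test is the one that actually exercises the surface.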
Sure, it works, but it would be really nice to be able to just mark those sectors bad instead. It's a lot of hassle the other way around.
So I read some posts, most of which quickly change the subject to the (often fair) advice of "replace your drive!", and some suggest SpinRite, but really, I don't see why I should pay that much money for such a trivial task.
The alternative is to use ext3, but I'd like to hear if someone knows how I can feed badblocks output to mkfs.ntfs, so that the bad blocks aren't used. Or if there are other tools (I could use Windows in a VM) that do the same. I'm also confused about chkdsk; is its bad-sector marking FAT-only?
I've been using Knoppix "Live CD" 6.2 and partimage 0.6.7 to back up and restore my Microsoft Windows XP system volumes on various computers. However, partimage seems unwilling to back up one of these NTFS volumes, which has bad sectors with some unreadable data. It hits those and stops. This appears to happen at the same place even after I have used Windows to find, mark and, I assume, remove the bad sectors from use. Hmm. I thought they'd be ignored. It appears I thought wrong.
If so, which of several other Linux-based or other partition backup tools may be suitable for the task, i.e. able to ignore or tolerate bad sectors? The main goal is to be able to update the volume later in a way that may turn out to be a terrible mistake, and in that case to restore the previous version. Sometime not too far in the future, I suppose I'll have to think about replacing the disk.
I run Slackware Linux 13.1. I have a hard drive with an ext2 filesystem, and I would like to mark the filesystem as clean without running fsck on it. I think I can do that with the debugfs command, but in its help there seems to be only a command to mark the filesystem as dirty? Is there a way to manually mark an ext2/ext3/ext4 filesystem as clean?
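For what it's worth, here is what I've been experimenting with on a scratch image. The debugfs 'dirty' command flips the flag one way, and set_super_value (ssv) on the state field seems to flip it back on my e2fsprogs (1 = clean), though I'm not certain every version supports that field, so check debugfs's built-in help before trying it on a real disk:

```shell
# Scratch ext2 image to try this on before touching the real drive.
dd if=/dev/zero of=/tmp/e2.img bs=1M count=8 status=none
mkfs.ext2 -q -F /tmp/e2.img
# Mark the filesystem dirty...
debugfs -w -R dirty /tmp/e2.img
# ...then set the superblock state field back to 1 (clean). Field support
# may vary between e2fsprogs versions; see 'help' inside debugfs.
debugfs -w -R "ssv state 1" /tmp/e2.img
tune2fs -l /tmp/e2.img | grep "Filesystem state"
```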
I'm running a Debian home server with a 3-disk (1GB each) RAID 5 array using mdadm (the OS is on a separate disk). Now smartmontools has noticed some bad sectors on one of the disks, and I'm not sure what to do next (except back up the valuable data). I found some articles on how to fix these sectors, but I don't know what the effect on the whole array will be.
I'm currently using Fedora 12, as seen in the subject, and I'm fairly new to it, but recently I've had a problem with my HDD. The problem is bad sectors; I've read up on how they occur, but not many of the places I searched actually explain how to deal with them. When I start up my laptop (Acer 5610z) I get a SMART error saying "predicted disk failure, please back up data and replace drive", or something along those lines, so I got curious and used Disk Analyzer; this is roughly what it says:
I have a 230GB hard drive whose name I don't know. I have a 207GB Windows Vista partition, and the rest is for Linux (Ubuntu). Today I decided to give all the space to Ubuntu, but I didn't want to lose all my data from the Windows partition. My thought was to delete everything except the folder with my data, leaving enough space to shrink the partition and make room for another partition to hold my data folder. The logic is that I could then format the partition which was previously Windows and use it all for Ubuntu without losing data. After installing Ubuntu I could copy my data folder to /home, then delete the old partition and make /home bigger. The problem is that after I freed the space, when I use GParted to shrink, it says that the partition has bad sectors or the filesystem has problems, so it can't do some operations.
What could have gone wrong? It told me to run chkdsk, but as I deleted all the Windows files I can't boot into it anymore, so I used the Vista DVD to do that. I rebooted twice as it says, and after that, when trying again, nothing had changed. I tried to use ntfsresize with the --bad-sectors argument and also the -f argument, but it's useless. At the end it says it won't do anything until the NTFS filesystem gets repaired, or that it is too risky to continue. Is there any way I could do some superforce command to resize it without losing data? Please don't tell me to put it on external storage, because I have like 70GB of data to save... no, I don't have an external hard drive.
I am wondering if anyone knows how to enable NTFS compression using Paragon NTFS 8.1 Enterprise?
The Professional version comes with a utility mkntfs which allows you to set compression as default for all files, but the Enterprise version is apparently meant to be 'fully featured' and support compression, so how do I enable compression on a drive/folder/file?
To make a full backup I run a live Knoppix DVD and clone the computer's HDD to an external HDD using the dd command. Is there a possible problem with the source being copied onto bad sectors on the destination disk? If so, is there a way to prevent this from happening? A typical dd command I use looks like: dd if=/dev/sda of=/dev/sdb bs=4096 conv=notrunc,noerror. Is this the recommended command for cloning to a disk of equal size?
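One detail worth noting about that command: with noerror but without sync, a failed read simply shortens the output, so everything after the bad spot shifts and the clone is silently misaligned; conv=noerror,sync pads failed reads with zeros so offsets stay intact. A quick way to sanity-check a clone, sketched here on image files standing in for the two disks:

```shell
# Two scratch images standing in for source and destination disks.
dd if=/dev/urandom of=/tmp/src.img bs=1M count=8 status=none
# Same flags as on real disks, plus sync so a failed read is padded with
# zeros instead of silently shifting everything that follows it.
dd if=/tmp/src.img of=/tmp/dst.img bs=4096 conv=notrunc,noerror,sync status=none
# Verify the clone is byte-identical.
cmp /tmp/src.img /tmp/dst.img && echo "clone verified"
```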
I have checked the yum.log file and the last time an update was performed was Jan 6, 2011. I am the only person who administers this server and I do it remotely via SSH. No one has GUI access to this. At a minimum, the kernel version is older than the latest release. This is the first time since I brought this server online in 2009 that a monthly yum update didn't produce at least a dozen package updates. I've tried disabling the Priorities plugin (as well as rearranging priorities) and I get the same result: no updates available. For the record, I did spend quite a bit of time Googling this problem and doing a forum search here. I found nothing applicable to my particular situation. Here's the output of yum update:
[root@copeland2 log]# yum update
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
 * addons: mirror.5ninesolutions.com
 * base: mirrors.easynews.com
 * extras: mirrors.usc.edu
 * rpmforge: ftp-stud.fht-esslingen.de
 * updates: centos.mirror.facebook.net
0 packages excluded due to repository priority protections
Setting up Update Process
No Packages marked for Update
I tried ntfs and ntfs-3g, but the result is the same: I can mount as root, but I would like to be able to mount as a user. When I try to mount as a user I get:
Unprivileged user can not mount NTFS block devices using the external FUSE library. Either mount the volume as root, or rebuild NTFS-3G with integrated FUSE support and make it setuid root. Please see more information at [URL]
Before installing ntfs-3g I was able to mount as a user, but there was no rw permission. Is there any way to mount an NTFS partition as a user without making the binary setuid, as the message suggests?
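In case it helps, the route I'm trying now is an /etc/fstab entry: mount(8) itself is setuid root, so with the 'user' option it can invoke the ntfs-3g helper with the needed privileges even though the helper itself isn't setuid. The device, mountpoint and uid/gid below are just placeholders for a typical setup, not anything from my actual system:

```
# /etc/fstab line (device, mountpoint and ids are assumptions; adjust)
/dev/sdb1  /mnt/ntfs  ntfs-3g  user,noauto,uid=1000,gid=1000  0  0
```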
I recently tried Fedora on my laptop (previously Debian; I was bored one day) and gnome-disk-utility (palimpsest) warned me that my hard drive had numerous bad sectors. I re-installed Debian and found that this software had been installed before, so why had it not warned me?
When I load the Disk Utility, it says SMART is not available. I've got smartmontools installed and I can run a self-test with smartctl, but I don't think that shows bad sectors. I've tried starting smartd on startup, but the Disk Utility never changes from "SMART is not available". It must be possible for it to work with this hardware, since it works under Fedora on this laptop; any ideas?
My issue is with Linux routing tables using iproute2, coupled with the iptables MARK target. Suppose a rule does a lookup in a table that routes an address as type unreachable (or blackhole, or prohibit), while a higher-priority rule, which also matches on an fwmark, does a lookup in another table that routes the address as type unicast. A locally generated packet to that address then never even goes through iptables packet filtering/mangling to get marked, because the lower-priority rule that doesn't match on an fwmark already says it's unreachable. For example, I have 2 rules installed with ip:
10: from all fwmark 0x1000 lookup routeit
20: from all lookup unreach

ip route list table routeit
Now, in the packet filter, I have an iptables rule to mark packets to destination 10.0.0.5 with 0x1000 in the mangle table and OUTPUT chain. When I generate a packet locally to 10.0.0.5, all programs get ENETUNREACH (tested with strace). However, if I take out the route entry that 10.0.0.0/8 is unreachable, it all works fine and the routes in the routeit table get applied to marked packets (I know because my default gateway would not be 18.104.22.168, but wireshark shows packets being sent to the MAC address of 22.214.171.124).
The best I can surmise is that when generating a packet locally, the kernel tests the routing tables in priority order but without any mark to see if it is unreachable/blackhole/prohibit, and doesn't even bother generating the packet and traversing iptables rules to see if it would eventually be marked and thus routed somewhere. Then I assume after that step, it traverses iptables rules, then traverses the routing tables again to find a route. So is there any way around this behavior besides adding fake routes to the routing table (e.g. routing 10.0.0.5 to dev lo in the unreach table in this example)?
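To make the setup concrete, here is roughly what I have, assuming routeit and unreach are already defined in /etc/iproute2/rt_tables; the last line is the fake-route workaround I'd like to avoid:

```shell
# Marked packets consult 'routeit'; everything else falls to 'unreach'.
ip rule add priority 10 fwmark 0x1000 table routeit
ip rule add priority 20 table unreach
ip route add unreachable 10.0.0.0/8 table unreach
# Mark locally generated traffic to 10.0.0.5 in the mangle/OUTPUT chain.
iptables -t mangle -A OUTPUT -d 10.0.0.5 -j MARK --set-mark 0x1000
# Workaround: a dummy route so the pre-mark lookup doesn't return ENETUNREACH.
ip route add 10.0.0.5 dev lo table unreach
```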
I'm new to the forum and to Ubuntu. I have an HP 'tablet' laptop which has no operating system installed, and I was hoping to install Ubuntu. I downloaded Ubuntu and the software to make it possible to boot from USB. I started the laptop up and selected the option to boot from the USB stick. But now it says "marking TSC unstable due to TSC halts in idle", and on the next line "switching to clocksource hpet", and it has been like this for hours. The laptop specs are: Intel Core 2 CPU T5600 @ 1833MHz with 1GB of RAM.
One inconvenience I face now, though, is that I cannot tell whether I have already forwarded certain messages, because a message is not automatically tagged as forwarded. How can I set it up so that the list indicates a message has been forwarded?
I have some errors on my drive and I fear it may be faulty. However, there are a few things I would like to try before replacing it through the manufacturer or buying a new drive of my own, seeing as this is a brand new computer.
Here is my computer and drive:
Acer 5251-1513 laptop
Toshiba MK2565GSX
Running Fedora 13... now
Here is what is going on. I tried several versions of Ubuntu 10.4 (Studio, 64-bit, 32-bit) and was having many errors during startup, having to press F to fix them. Then I lost something with GNOME and the GUI would not function, and I did not know how to restore it. I tried a few other distros but could not get them to work (mostly my own fault, I am sure). Then, after some forum talk, I thought it might just be Ubuntu being unable to handle my drive. Now I'm on Fedora 13 and a warning comes up every time I start up: "Disk has many bad sectors".
In the Disk Utility, under the SMART data, it has 2 of the following warnings:
5 - Reallocated Sector Count, with a value of 72 sectors
197 - Current Pending Sector Count, with a value of 35 sectors
Total bad sectors: 108. The next day that went up to 110.
I have used fsck several times through a live CD, but the problem persists. I'm trying to understand bad blocks and how to write them to a file.
I have an Acer tiny desktop using laptop components and I want to replace its small laptop hdd running Vista with a Kingston SSDNow V Series Boot Drive 30GB and install Ubuntu, since it will support TRIM. I am aware of the current issues on some new hard drives with 512 vs. 4k sector sizes and the necessity to align sectors for those drives. And I know I've seen some posts or discussion of aligning sectors for SSD's.
I'll be doing more searching for info on this, but my previous searches on the 4K sector alignment issue for the new WD hdd's on linux were confusing. Does anyone have definitive information on the necessity of aligning 4k sectors on current Linux kernels, or on whether aligning sectors is necessary for SSD's?
I have some bad sectors on the primary HD and want to move everything to a new HD. What would be the steps to do this? I have 5 running websites on the server. The HDs are the same make and model. My current HD setup is:
1 Linux LVM 232.65 GB 1 30370 LVM VG server1
2 Extended 243.17 MB 30371 30401
5 Linux 243.17 MB 30371 30401
I recently got a bad virus that wouldn't let me reinstall Windows, so I figured I would install Ubuntu and give it a go, but now it says my hard drive has "many bad sectors". A quick Google search shows many ways to fix this in Windows, but how do I do it in Ubuntu? Easily, hopefully, since I'm just getting the hang of things.
For a few days, all of my computers (3) running Ubuntu 9.10 have reported on startup that my external drive has "lots of bad sectors". I have checked this disk on Windows XP with chkdsk and with the SeaTools diagnostic tool downloaded from Seagate. Both report no problems. Does anyone else suspect these Ubuntu "bad sector" warnings are unreliable?
I used to have Windows XP, but recently I started getting a message at startup telling me that my disk might be failing and I should run the test (which would crash and reboot the laptop). Then after a while Windows wouldn't even start. So I tried the Ubuntu netbook live image from a USB stick and it is working fine! I can even access all my data on the hard disk, although it tells me that the hard disk is failing and has 1024 bad sectors. I have only one hard disk and one partition (120 GB). Can I just install Ubuntu and somehow block out the bad sectors? (I don't want Windows anymore.) And is there any way I could keep my old data on the hard disk without backing it up (it is not really important, btw)?
I've just added a second disk to one of my computers. It is a 500GB SATA drive, the second drive according to the BIOS; Fedora calls it /dev/sdb. So far so good. This box is running Fedora 13 final, and there were never any problems until the addition of the new disk. Palimpsest says that this disk has a LOT of bad sectors. This disk is a storage drive. I want to address the problem but don't know what to do first. My thought is to rsync all the data to my external 250GB disk before I do anything else, but I'm not sure if I should just yet. Maybe I should run some diagnostics on the drive first? If so, what? How about the tools Disk Utility offers? Should I use the SMART utilities? What other Linux tools are available, and are they reliable? Maybe I should install XP on the main disk and use Windows' disk tools? If I lost all the data it wouldn't be the end of the world, but I'm not sure how "in sync" the 2 storage drives actually are.
I set up a new server (with mail, Apache and lots of other stuff) and was not aware that the new hard disks, of type WD10EARS-00Y5B1, use 4KB sectors internally. The problem became visible after going live because of the lousy performance of the hard disk drives.
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000efdd0
I already re-partitioned /dev/sdb with parted-2.2 (compiled from source) and set the alignment to start at sector 64 (instead of the default 63), making sure that the sector count of every partition divides by 8. Now comes the tricky part: I must partition /dev/sda as well. I can back up everything to /dev/sdb. What is the recommended course of action here? Make sdb active and boot from it? That would give me all the time I need to deal with sda, and then reverse again. Any backup of /dev/sda will be outdated soon (it's a running system), and the rescue DVD only offers parted-1.9.0.
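The divides-by-8 rule I'm applying (8 x 512-byte sectors = one 4KiB physical sector) can be sanity-checked with a trivial bit of shell. The start sectors below are examples: 63 is the old DOS default, which is exactly what made these WD drives crawl, while 64 and 2048 are aligned choices. On the real disk I'd read the actual values from fdisk -lu /dev/sda:

```shell
# Example start sectors; read real ones from 'fdisk -lu /dev/sdX'.
for start in 63 64 2048; do
  if [ $((start % 8)) -eq 0 ]; then
    echo "sector $start: 4KiB-aligned"
  else
    echo "sector $start: NOT aligned"
  fi
done
```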