When I type the df command, I see /dev/hda1 listed as a filesystem mounted at '/' (root). Is /dev/hda1 a filesystem? I thought it was a partition on my hard disk that contains the root filesystem.
I tarred and gzipped most of the data on one 1 TB partition and stuck the archives on a second 1 TB partition on a separate disk. I then proceeded to format the first partition with NTFS (from Linux). The only problem is that I completely forgot that I had a CD drive and formatted sdc1 instead of sdd1! I began doing a full NTFS format, and after a minute or two I cancelled it and decided to do a quick format. I then realized my mistake. I managed to find a copy of the superblock and began trying to recover the disk. fsck -t ext3 recognized the partition as NTFS, but luckily I didn't have fsck.ntfs installed, so it didn't touch it. I managed to get it working with fsck.ext3 (with -b, -B and -y); fsck.ext3 didn't mind that it was an NTFS partition.
Roughly how long will this take? It's running from Knoppix within a virtual machine to a USB hard drive which is 100% full. Days? Given that I attempted a full format for a few minutes, am I going to end up with a bunch of corrupted archives? If I do end up with file corruption, can anyone recommend a way of recovering the data / sorting it out? Is it likely to be just a few old files that are corrupt? (It's my understanding that filesystems like to keep files in the same area of the disk to minimize the amount of head travel.) This might just be wishful thinking, but as the filesystem fills up, will ext3 put the newer files towards the end of the disk? If so, I'm hoping that a full NTFS format starts at the beginning of the disk.
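Once the recovered filesystem is readable, it is straightforward to find out which of the archives actually suffered: gzip can verify each file's integrity without extracting it. A minimal sketch, assuming the archives live under a hypothetical /mnt/recovered and are named *.tar.gz:

Code:
for f in /mnt/recovered/*.tar.gz; do
    gzip -t "$f" || echo "CORRUPT: $f"    # -t tests the compressed stream without extracting
done

Anything gzip rejects can then be inspected more closely; tar -tzf will show how far into a damaged archive the listing gets before it breaks.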
fsck.ext3: Unable to resolve 'UUID=theUUID', where "theUUID" (without the quotes) is the UUID.
I believe this is caused by my trying to get LVM to use the external /boot, because when I had the external /boot unmounted, it was creating a /boot in the root volume. So I booted a live CD and mounted the external /boot where /boot in the root volume is supposed to be. Basically, I think the problem is that I need to make my /boot (which is the only ext3 partition in the entire system, and I want it that way) "relate itself" to the LVM root so that it boots into the system. As mentioned earlier, from the live CD I mounted the external /boot onto the root volume's /boot, but I don't know how to tell the system to do this on its own while booting, without my assistance. I chrooted from the live CD, which involved a lot of tedious steps, but basically the important things I did were:
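For the actual goal of having the external /boot come up on its own at every boot, the usual glue is simply an entry in /etc/fstab on the LVM root volume. A minimal sketch, with a placeholder UUID (read the real one off the /boot partition with blkid):

Code:
# /etc/fstab on the LVM root (the UUID below is a placeholder)
UUID=xxxx-xxxx-xxxx  /boot  ext3  defaults  1 2

With that in place the initscripts mount /boot automatically; GRUB reads the /boot partition directly at boot, so nothing else has to "relate" the two at runtime.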
After a massive update including grub (not the problem), I cannot mount and boot because of the subject error message. /root (dm-0) and swap (dm-1) are OK; it's just /home (dm-2) that appears broken.
I have a server that said a volume was dirty and should be checked at reboot, so someone did a shutdown -rF now. The only problem is that the other volumes are HUGE and checking them will take forever, which I can't have happen. The volume with the trouble is non-critical, so I could take it offline and check it that way if I can get this to boot quickly. How can I do that if it's going to auto-check every volume on reboot now?
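One way to get the machine back quickly and deal with the dirty volume offline afterwards (a sketch, assuming the troubled volume is a hypothetical /dev/sdb1 mounted at /data): clear the /forcefsck flag that shutdown -rF drops, keep the volume from being mounted and checked at boot, reboot, then fsck it by hand while it is still unmounted.

Code:
rm -f /forcefsck                      # remove the "check everything on next boot" flag, if still present
# in /etc/fstab, mark the troubled volume noauto and set its fsck pass (6th field) to 0:
#   /dev/sdb1  /data  ext3  defaults,noauto  0 0
reboot
fsck.ext3 -f -y /dev/sdb1             # afterwards, with the volume still unmounted

The other (huge) volumes keep their normal settings, so on that reboot they should only get the quick journal replay rather than a full check.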
I have a CentOS 5 box with 3ware 8-port RAID cards. I run fsck.ext3 -y /dev/sdb1 and it shows as clean. But after writing to the filesystem for about 2 minutes, it becomes read-only. When I umount -l /data and run fsck, I get a message that another program is using the filesystem and that I should wait. If I reboot the server, the array comes back as clean.
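A filesystem that flips to read-only a couple of minutes into sustained writes usually means the kernel hit an I/O or journal error underneath it, which a clean fsck result will not reveal. It may be worth catching the kernel messages right after it happens and checking the drives behind the controller (a sketch; the -d 3ware,N syntax and the device node depend on your particular card and driver):

Code:
dmesg | grep -iE 'ext3|sdb|I/O error'      # what triggered the remount read-only
smartctl -a -d 3ware,0 /dev/twa0           # SMART data for port 0 behind the 3ware card

Also note that umount -l only detaches the mount point lazily; the filesystem stays busy until the last process lets go, which is why the follow-up fsck complains.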
I am attempting to run fsck on a number of large ext3 filesystems. I am doing this proactively because I want to minimize reboot time, and the filesystems are past the check interval of 6 months. When I run the command "fsck -f -y device", I get the following error on all of the filesystems:
fsck 1.39 (29-May-2006)
e2fsck 1.39 (29-May-2006)
fsck.ext3: Device or resource busy while trying to open /dev/mapper/mpath0p1
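"Device or resource busy" almost always means the filesystem (or the multipath device under it) is still in use; e2fsck refuses to open a mounted or busy device. A quick sketch of things to check before retrying, using the device name from the error above:

Code:
mount | grep mpath0p1                  # is it still mounted somewhere?
fuser -vm /dev/mapper/mpath0p1         # which processes are holding it open
multipath -ll                          # multipath state, in case device-mapper still has it claimed

If the filesystem is genuinely in use, it has to be unmounted (or checked from a rescue boot or against an LVM snapshot) before e2fsck will touch it.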
I am hosting a few customer servers now, all of which are virtual machines running on a CentOS 5.x host. Each CentOS host has a couple of extra drives. When I formatted them ext3, they automatically got a schedule of a full forced fsck every 6 months. Do I really need to run that check regularly? It results in a fairly large outage, since my disks are 1 TB each and there are up to three extra drives on each server. I try to reboot these servers every 6 months, but this part adds a large amount of time to a routine reboot.
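To see exactly what schedule a given filesystem is on before deciding what to change, tune2fs can print the relevant counters. A sketch, assuming one of the extra drives is /dev/sdb1:

Code:
tune2fs -l /dev/sdb1 | grep -iE 'mount count|check'

This shows the current mount count, the maximum mount count, the check interval, and when the next forced check is due.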
I have a serious problem booting CentOS 5.4 x86, as shown in the attached picture. I tried to back up before using the fsck command, but I could not make a backup of the damaged LVM on the hard drive. First I made a rescue CentOS in VirtualBox and installed CentOS 5.4 x86 on a virtual hard disk. Then I attached the damaged hard drive, so I can see this damaged hard drive's LVM as in the attached picture. Please let me know how to back up my files and how to use "fsck.ext3 --rebuild-tree" from a live CD.
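Before letting any repair tool loose on the damaged LV, it is safest to image it to a file on a healthy disk so there is something to fall back to. A minimal sketch, with hypothetical names (/dev/VolGroup00/LogVol00 for the damaged LV, a backup disk mounted at /mnt/backup):

Code:
dd if=/dev/VolGroup00/LogVol00 of=/mnt/backup/damaged-lv.img bs=4M conv=noerror,sync

One caveat on the command quoted above: --rebuild-tree is a reiserfsck option, not something fsck.ext3/e2fsck provides; the ext3 equivalents are a normal repair run (e2fsck -f -y) and, if the superblock is damaged, e2fsck -b with a backup superblock.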
I have a problem partitioning a hard disk drive on a server, and I hope someone can help me with this. Here is the system configuration: Operating system: Linux localhost.localdomain 2.6.30.8-64.fc11.x86_64 #1 SMP Fri Sep 25 04:43:32 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
Hardware: RAID bus controller: Hewlett-Packard Company Smart Array G6 controllers (rev 01)
The system mounts one hard disk (120 GB) with the OS, and four 1.5 TB hard disks in RAID 10 for a total of ~3 TB. I need to create several partitions on this RAID drive, but I am having some trouble doing it. I need a total of 10 partitions of different sizes:
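A likely stumbling block at this size: a classic MS-DOS (MBR) partition table cannot address a ~3 TB device and only allows four primary partitions, so a GPT label created with parted is the usual way to fit 10 partitions on the array. A sketch, assuming the Smart Array volume shows up as /dev/cciss/c0d1 (substitute whatever device your system actually exposes):

Code:
parted /dev/cciss/c0d1 mklabel gpt
parted /dev/cciss/c0d1 mkpart primary ext3 1MiB 500GiB    # repeat mkpart with your own start/end for each of the 10 partitions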
I know I can force an fsck run at the next reboot using
Code: shutdown -rF now
or
Code: touch /forcefsck
Can I force fsck to do more in-depth checks, such as directory optimisation, bad-block checks, etc., maybe by passing parameters to the fsck call during startup?
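The boot-time check normally runs e2fsck in "preen" mode with very few options (some Red Hat-style init scripts will read extra flags from a /fsckoptions file, but check your distro's rc.sysinit before relying on that), so the deeper checks are easier to run by hand from a live CD or single-user mode with the filesystem unmounted. A sketch with a hypothetical /dev/sda2:

Code:
e2fsck -f -c -D /dev/sda2     # -f force, -c read-only bad-block scan, -D optimise directories
e2fsck -f -cc /dev/sda2       # -cc instead: non-destructive read-write bad-block test (much slower)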
The fsck is quite annoying, since it usually occurs when I reboot my system while performing administration tasks. Do I actually need these checks if I'm using the ext3 filesystem? If not, how would I extend the period between checks, or just turn them off altogether?
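ext3's journal already covers the unclean-shutdown case, so many admins relax or disable the periodic check and run fsck manually at a convenient time instead. The knobs live in tune2fs (a sketch, assuming the filesystem is /dev/sda1):

Code:
tune2fs -i 12m /dev/sda1        # stretch the time-based interval to 12 months
tune2fs -c 0 -i 0 /dev/sda1     # or disable both the mount-count and time-based triggers entirely

If you disable them, it is still worth forcing a check occasionally (touch /forcefsck before a planned reboot), since the journal protects consistency but not against silent corruption.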
After a hardware failure (CPU overheating) and solving the problem, some services (mysqld, openvpn, etc.) fail to start because the "disk is full". After some checks I discovered that hda always reports 100% used when it is not full (I expect about 70% to be in use); deleting some log files (>100 MB) for testing still leaves it reporting 100%. I tried fsck but it didn't correct the problem. Can anyone help me? Running CentOS 5.2 i386 with kernel 2.6.28.7; the fs is ext3.
root@XYZ / # df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hda1             18253680  17604732         0 100% /
tmpfs                   255872         0    255872   0% /dev/shm
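A disk that stays at 100% after deleting large log files is very often space held by deleted-but-still-open files: a daemon keeps the old log's file descriptor, so the blocks are not released until it is restarted. fsck will not fix that. A quick sketch to confirm and free the space:

Code:
lsof +L1                        # list open files with link count 0 (deleted but still held open)
/etc/init.d/mysqld restart      # restart/reload whichever services turn up in that list

Note also that ext3 reserves about 5% of the blocks for root, so "Available" can reach 0 and Use% 100 while Used is still below the raw partition size.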
I installed Debian on my PC and then installed Ubuntu. This worked fine and I could dual-boot between the two. The PATA disk was /dev/hda on Debian and (I think) /dev/sda on Ubuntu. I copied the entire disk to a SATA disk using dd from Knoppix and put the PATA one to one side. Now Ubuntu comes up fine, but when I boot Debian it complains about references to /dev/hda1, which is present in GRUB as root=/dev/hda1; Debian now expects sda references rather than hda references. How do I persuade Ubuntu to write /dev/sda1 into the bootloader rather than /dev/hda1 when using grub-mkconfig?
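Since the device name changed with the move from PATA to SATA, the usual fix is to stop referring to the device at all and use the filesystem UUID, both in the copied Debian's /etc/fstab and in its own boot configuration; grub-mkconfig/os-prober on Ubuntu then generates the Debian entry from what it finds on the Debian partition. A sketch with placeholder values:

Code:
sudo blkid                       # note the UUID of the Debian root partition
# on the Debian partition, replace root=/dev/hda1 in its boot config and the
# /dev/hda1 lines in its /etc/fstab with:  UUID=xxxx-xxxx-xxxx  (placeholder)
sudo update-grub                 # regenerate grub.cfg on the Ubuntu side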
I am running 9.04 Server (standalone). It had been running fine since I installed it last autumn. Upon reboot, an fsck of the root filesystem was forced, and it hangs at the same point (16.5%) every time. I was able to break out somehow with Ctrl-Alt-Del, but the boot came up with a read-only filesystem, so I couldn't disable the forced fsck. Instead, I tried to run fsck there. It started, but hung. I couldn't do e2fsck -v as it needed the device, and although I have worked on UNIX systems for decades, I am not familiar with the /dev/mapper stuff.
Looking at other threads, all involving the desktop GUI Ubuntu, I tried some of the suggestions. I went into the BIOS to see what I could disable; I killed the serial port and similar. (Some said that onboard modems interfered with the checks in /dev.) I also tried to boot from my original installation disk, and that does work. The suggestion is to choose "Try without any change to your computer", but that option is not available on the server installer, apparently only on the desktop (GUI) one. I had install, check CD for defects, test memory, boot from first hard disk, and something like repair or recover a broken disk. I started the last of them, as it seemed to be the only option. It failed because it couldn't get a DHCP address. I could configure it manually (as it has a static address anyway), but I didn't want to start changing configurations without knowing where it was going or what it would try to do, and risk losing months of hard work.
Without help, I think I will be forced to install the OS on a second drive, use that install to fsck the original filesystem on the original disk, edit the fstab (or whichever file holds the config) on the original disk to disable fsck, and return to the original boot. I am building this server for a nonprofit and have put in many hours writing MySQL/Perl Apache CGI code for them as a free service; I would hate to lose it all and set everything back.
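The /dev/mapper entries are just the device-mapper names that LVM (or dm-crypt/software RAID) creates, and e2fsck needs one of those rather than the raw disk. From a rescue or live environment, something along these lines (a sketch; the volume group and LV names will differ) finds the right device and runs the check with a visible progress bar:

Code:
vgchange -ay                              # activate the LVM volume groups
lvs                                       # list logical volumes and their VG names
e2fsck -f -C 0 /dev/mapper/<vg>-<lv>      # placeholders: use the names lvs printed

Running it this way, outside the boot sequence, also makes it easier to see where and why the check hangs.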
After upgrading from openSUSE 11.1 to 11.2, I get the following error messages while booting the system, caused by the initial filesystem check routines:
ERROR: Couldn't open /dev/null (Too many open files)
ext2fs_check_if_mount: Too many open files while determining whether ... is mounted.
fsck.ext3: Too many open files while trying to open ...
I found a new version of e2fsprogs on the OBS claiming to fix this problem, but installing this new version did not solve my problem.
Here is some information about the affected system: Operating system: openSUSE 11.2 (i586); installed e2fsprogs: e2fsprogs-1.41.11-4.1.i586; number of LVs: 35 (all ext3).
I can only boot if I comment out some of the filesystems in my /etc/fstab. It seems that the number of filesystems must be less than or equal to 32.
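That ceiling of roughly 32 fits with the boot-time fsck -A opening and checking all filesystems that share a pass number in parallel. Until a fixed e2fsprogs lands, one workaround is to stagger the pass numbers (the last field) in /etc/fstab so fewer checks run at once; fsck does not start the next pass until the previous one has finished. A sketch:

Code:
/dev/vg0/lv_data01  /data01  ext3  defaults  0 2
/dev/vg0/lv_data02  /data02  ext3  defaults  0 3
/dev/vg0/lv_data03  /data03  ext3  defaults  0 4

The device and mount-point names above are placeholders; only the final pass-number column matters for this.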
16 GB RAM. I've been running the Debian-based Proxmox VE on it for six months or so with no problems. Today I loaded CentOS 5.5 x64. During a reboot, the filesystem crashed and fsck couldn't repair it. I loaded it again, did all the updates, and loaded my applications. On about the third reboot, it crashed again and fsck couldn't fix it. I don't really know where to begin; I seriously doubt that any hardware has gone bad since yesterday.
I have Debian testing installed on my system. I have a separate home partition which I share with Pardus 2009, which I installed later on a different partition.
Now Pardus boots fine without errors, but Debian says "filesystem check failed" because the last mount time of / is in the future. It drops to a maintenance shell, and after a manual fsck corrects the problem, Debian reboots fine, but then says there's a problem with the /home partition as well (which can be ignored).
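"Last mount time is in the future" when dual-booting two Linux systems usually means they disagree about whether the hardware clock is in UTC or local time, so one of them shifts the system time by your UTC offset at boot. On Debian of that era the setting lives in /etc/default/rcS; making it match whatever Pardus assumes normally stops the complaints. A sketch:

Code:
# /etc/default/rcS on the Debian side
UTC=yes       # use "no" instead if Pardus keeps the hardware clock in local time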
I have a bunch of partitions on my Debian server installation. I was experimenting with partitions and saw that if one partition fails fsck at boot time, the system waits for the root password or the Ctrl+D key combination. The problem is that my Debian machine is headless and I only use SSH to reach it. So if fsck fails, I can't log in over SSH (of course, because it is not running at that point), and I have to go to the machine with a monitor and keyboard and press Ctrl+D. One option is to disable disk checking at startup by changing the fstab file, but I don't like that option. Is there any way to automatically continue booting the Debian machine?
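Debian's sysvinit scripts have a knob for exactly this case: with FSCKFIX=yes set, the boot-time fsck runs with -y and repairs what it can automatically instead of dropping to the root-password/Ctrl+D prompt. It will still stop on damage it cannot fix, so it is a trade-off between availability and having a human look first, but on a headless box it is usually the lesser evil.

Code:
# /etc/default/rcS
FSCKFIX=yes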
I want to be able to run fsck at, or near, shutdown at the end of the day, and not have to wait for it when booting (important now that I have 1 TB drives!). As far as I can tell, the only time fsck can be run on the root partition is when it is unmounted, and I believe that only occurs at reboot time.
So I thought of using the /forcefsck file which, when it exists, forces a filesystem check at the next boot. I envision having a script that touches that file, or issues the right shutdown command, and then lets the system reboot, forcing an fsck of the root partition. However, I then want the system to turn right around and shut down, so that when I cold-boot the system in the morning, I won't see the fsck run at that time, ever.
So I think this boils down to being able to run a one-time init script or something like that. Is there an established way or idiom for running an init script only once? I know I can create a non-standard init script that looks for a special file, like is done for /forcefsck, and only shuts down if that file is seen, but surely someone else has already come up with a canned solution or init script for what I want to do.
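I'm not aware of a packaged solution, but the "special file" approach is small enough to sketch. Assuming a hypothetical flag file /halt-after-fsck and an init script linked to run late in the boot sequence (the name and runlevel linking are up to you):

Code:
#!/bin/sh
# /etc/init.d/halt-after-fsck (hypothetical): power off the boot that only existed to run the forced fsck
FLAG=/halt-after-fsck
if [ -f "$FLAG" ]; then
    rm -f "$FLAG"                      # consume the flag so the next (morning) boot stays up
    /sbin/shutdown -h now "halting after scheduled fsck"
fi

The end-of-day job then just needs: touch /forcefsck /halt-after-fsck && shutdown -r now. The one-shot behaviour comes from the script removing its own flag before halting.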
How can I run fsck on a mounted drive? I attempted to unmount it and it refused. I suddenly got an error 4, and when I try to run a check, it aborts because it cannot run on a mounted filesystem.
I am running Wheezy 7.9 (32-bit) and using the GNOME Classic desktop. I have recently had several issues with system "crashes" and such (see some of my recent posts), and for now things seem to be working okay. As part of my attempts to "fix" the problems, I looked at the hard drive using the SMART disk utility and also ran smartctl. The SMART utility reports that there are 3 bad sectors on the drive. When I run fsck from a live CD, it does not report finding bad sectors. So why would fsck not find something that is reported by smartctl? Which one should I believe?
As a precaution I am now making daily backups of my /home directory, and I purchased a new HD just in case. I have not yet installed the new HD, but at least I am prepared.
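The two tools look at different layers, so both can be right: smartctl reports what the drive's own firmware says about the physical media (reallocated or pending sectors), while a plain fsck only checks filesystem metadata for consistency and never reads the whole surface. To make the filesystem side actually scan for bad blocks, it has to be asked explicitly, e.g. from the live CD with the disk unmounted (a sketch, assuming the disk is /dev/sda):

Code:
badblocks -sv /dev/sda        # read-only surface scan of the whole disk (-s progress, -v verbose)

e2fsck's -c option runs the same scan and records any bad blocks it finds in the filesystem's bad-block inode so they are no longer used.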
I've just migrated to Debian from Ubuntu on my server, so this is my first post here. I had some problems and questions when installing and configuring Debian; now everything seems very smooth, but I still can't find the answer to one question. My machine is headless and I always use SSH to access it. I have several partitions on the machine, so if one of them fails fsck at boot time, I need to press Ctrl+D to continue. In that case I have to go to the machine with a monitor and keyboard and press that combination. Is there any way to automatically continue, for example after 30 seconds? Then I could log in to the system and check the log files to see what is wrong. How can I do that?