I have PC/OS Linux 2009 installed and I recently got the following message while trying to boot up my system:
*checking root file system... fsck 1.41.4 (27-Jan-2009)
/dev/sda1 contains a file system with errors, check forced.
/dev/sda1: Inodes that were part of a corrupted orphan linked list found.
/dev/sda1: Unexpected inconsistency; run fsck manually (i.e., without -a or -p options).
fsck died with exit status 4
*An automatic file system check (fsck) of the root filesystem failed. A manual fsck must be performed, then the system restarted. The fsck should be performed in maintenance mode with the root filesystem mounted in read-only mode.
*The root filesystem is currently mounted in read-only mode. A maintenance shell will now be started. After performing system maintenance, press Control-D to terminate the maintenance shell and restart the system.
Give root password for maintenance:
The problem is, when I enter my password I get an "incorrect password" prompt. How can I change my password so that a manual fsck can be started? And why did this error message appear in the first place?
Is it possible to run fsck on the root file system? My Ubuntu 10.04 seems to be checking its fs at boot... fsck warns that the file system is in use and can get severely damaged! Or is the only possibility to run it from a live CD?
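A minimal sketch of the usual workaround, assuming Ubuntu 10.04's sysvinit boot scripts (checkroot.sh honours a /forcefsck flag file): instead of checking the mounted root filesystem live, schedule the check for the next boot, when / is still mounted read-only. The live-CD route also works, since the device is then not mounted at all.

```shell
# Hypothetical helper: create the flag file the boot scripts look for,
# so the next boot runs a full fsck on / while it is still read-only.
schedule_root_fsck() {
    flag=${1:-/forcefsck}          # the path checkroot.sh checks for
    touch "$flag" && echo "root fsck scheduled (created $flag)"
}
# as root:  schedule_root_fsck && reboot
# from a live CD (device unmounted, so a direct check is safe):
#   fsck -f /dev/sda1              # substitute your real root partition
```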
I would like to know if there is a way to do an unattended check on the root file system on my servers, *and* send emails in case of errors.
I know you can schedule a root file system fsck during boot time, but the root file system will be mounted read-only, so if fsck finds any problems it can't email a warning or write the result to a file. Or can it?
Essentially I would like my servers to do a self-check of the root file system periodically - and to email me if it fails. I just can't think of a way to get it done.
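One hedged way to approximate this (the script below is a sketch; the device path, log location, and mail address are assumptions to adapt): e2fsck -n opens the device read-only and never writes, so it can run from cron against the mounted root. A busy filesystem can produce false positives, so treat any mail as a prompt to schedule a real boot-time fsck rather than as proof of damage.

```shell
#!/bin/sh
# Cron-driven read-only check of the root device; mails the log on a
# non-clean result.  DEV/LOG/MAILTO are placeholders.
DEV=${DEV:-/dev/sda1}
LOG=${LOG:-/tmp/fsck-root.log}
MAILTO=${MAILTO:-root}

check_and_report() {
    if e2fsck -n "$DEV" >"$LOG" 2>&1; then
        echo "clean"
    else
        # needs a working MTA; mailx or sendmail are common substitutes
        mail -s "fsck errors on $(hostname): $DEV" "$MAILTO" <"$LOG" 2>/dev/null
        echo "errors reported"
    fi
}
# cron entry (illustrative path):  30 3 * * 0 /usr/local/sbin/fsck-root-report
```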
I have 2 HDDs and had created an LVM volume spanning both of them. When I was installing quota, it said it would be better to run fsck first. When I ran fsck, it warned me that I could lose some of my data, and that is what happened. Actually it is worse: I can't boot my Fedora 11 at all. When I run the installer in rescue mode, it says no Linux partitions were found. When I start a fresh install (just to see the partitions), it shows the LVM volumes on the disks as OK, but the / (root) partition appears to be in an unknown format. How can I save my data? Or can I restore my partitions and the LVM?
UPDATE: This is a bug: [URL] Evidently the problem is with plymouth, because a workaround is to add "rd_NO_PLYMOUTH" to the kernel boot options. I don't get a prompt for my disk encryption passphrase, just a flashing cursor, but that's a small price to pay for being able to run fsck when the root filesystem wasn't unmounted properly.
I have a fully updated Fedora 13 (as of today) on a laptop with all ext2 file systems (it has nothing but flash memory). If it's shut down without unmounting all file systems, it drops to a shell on reboot and asks for the root password to run fsck. Every key press is treated as though it were <enter>, with a response to the effect that the password is incorrect.
We have a server for which the root password had been lost, and there were no other user accounts set up. Yesterday evening I attempted to reset the root password by booting from the install CD and using vi to clear the root password in the passwd and shadow files. I then rebooted, and the system halted with an 'FSCK failed. Please repair manually and reboot' error, with an 'Enter root password' prompt below. But of course the root password isn't known (I had expected it to be blank after editing the passwd and shadow files, but that doesn't work), so I have no way of logging on.
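For reference, the chroot-and-passwd route avoids hand-editing shadow entirely (the device and mountpoint below are assumptions): boot the install CD, mount the root partition, and run passwd inside a chroot. If shadow must be edited by hand, it is the second colon-separated field that has to be emptied, and an empty password only works if PAM/login permit empty passwords; a small helper makes the edit exact rather than manual:

```shell
# From the rescue CD (adapt the device to your layout):
#   mount /dev/sda2 /mnt
#   chroot /mnt passwd root      # set a new password the normal way
#   umount /mnt && reboot
# Hypothetical helper that blanks the password field of one shadow line:
blank_pw() { echo "$1" | awk -F: 'BEGIN{OFS=":"} {$2=""; print}'; }
blank_pw 'root:$6$salt$hash:14610:0:99999:7:::'   # -> root::14610:0:99999:7:::
```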
Loading /usr/bin/teclafsck failed. Please repair manually and reboot. The root file system is currently mounted read-only. To remount it read-write do:
    bash# mount -n -o remount,rw /
Attention: only Control-D will reboot the system in this maintenance mode; shutdown or reboot will not work.
I have a server that said a volume was dirty and needed checking at reboot, so someone did a shutdown -rF now. The only problem is that the other volumes are HUGE and checking them will take forever, which I can't have happen. The volume with the trouble is non-critical, so I could take it offline and check it that way if I can get the machine to boot quickly. How can I do that if it's going to auto-check every volume on reboot now?
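A sketch under sysvinit assumptions (the checkroot.sh/checkfs.sh scripts honour /fastboot to skip checks and /forcefsck to force them; flag names vary slightly between distributions): clear any pending forced check, skip the boot-time pass once, and then check the small troubled volume offline after the system is up.

```shell
# Hypothetical helper: suppress the boot-time fsck for the next boot only.
skip_boot_fsck() {
    rm -f "${2:-/forcefsck}"                # drop any pending forced check
    touch "${1:-/fastboot}" && echo "boot-time fsck will be skipped once"
}
# as root:  skip_boot_fsck && reboot
# afterwards, check only the non-critical volume offline (name is illustrative):
#   umount /dev/vg0/trouble_lv && fsck -f /dev/vg0/trouble_lv
```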
Today I did an update on my Debian 6.0.0 installation, which included a kernel update. Afterward I also did an Nvidia driver update. All was well until, on the third boot, an fsck check was forced (34 mounts). Some odd behavior ensued: at about 19% it exited the check, printed what appeared to be a load of inode numbers with error messages, and "froze" with the caps lock and num lock lights on the keyboard flashing. I did a couple of grub recovery-mode boots, each of which advanced a little further, and all is well again. However, it concerned me. Where in /var/log do I find the record of that particular fsck run?
RHEL 5.4. I'm facing the following error after rebooting the server:

/dev/VolGroup01/u04: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY
*** An error occurred during the file system check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.
Give root password for maintenance:
-Previously I performed an lvreduce command on an LV; after the lvreduce, I rebooted the server.
-After logging in as root I ran: e2fsck -f /dev/VolGroup01/u04
But it shows:

The filesystem size (according to the superblock) is 5218304 blocks
The physical size of the device is 1310720 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? no
Pass 1: Checking inodes, blocks, and sizes
Error reading block 1310722 (Invalid argument) while doing inode scan. Ignore error<y>? yes
-Additionally, lvdisplay fails with: Locking type -1 initialization failed. I have no important data on that LV, but I cannot boot the server properly.
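The numbers in the e2fsck output suggest the usual lvreduce accident: the LV was shrunk without first shrinking the filesystem, so the superblock still describes 5218304 blocks on a 1310720-block device. The common recovery (hedged; it assumes 4 KiB filesystem blocks and enough free space in the volume group) is to grow the LV back to at least the size the superblock expects, repair, and only then shrink properly with resize2fs:

```shell
# Superblock size back into MiB, assuming 4 KiB filesystem blocks:
echo "$(( 5218304 * 4 / 1024 )) MiB needed"     # -> 20384 MiB needed
# as root (sizes/paths from the question above; verify before running):
#   lvextend -L 20384M /dev/VolGroup01/u04
#   e2fsck -f /dev/VolGroup01/u04
# Next time, shrink the filesystem with resize2fs BEFORE lvreduce.
# If lvdisplay keeps failing with the locking error in rescue mode, LVM
# can be told to skip locking:
#   lvm lvdisplay --config 'global { locking_type = 0 }'
```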
I want the filesystem of my external drive to be checked periodically after a number of mounts. I put 2 in the sixth column of fstab for this partition
Code: /dev/sdb1  /mnt/hd  ext3  rw,dev,sync,user,noauto,exec,suid  0  2
and I use tune2fs to set the maximum mount count to 32. Code: tune2fs -c 32 /dev/sdb
Now the mount count is 34 and the date of the last check is not recent, so apparently the automatic fsck has not been performed, probably because this partition is not mounted at start-up; I usually mount it manually.
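This matches how the mechanism actually works: the mount-count check is enforced by the boot-time fsck run from the init scripts, not by mount(8), so a partition that is only ever mounted by hand is never inspected regardless of tune2fs -c. (Note also that tune2fs above was pointed at /dev/sdb, while the filesystem lives on /dev/sdb1.) One hedged workaround is a wrapper that checks before mounting; without -f, fsck itself honours the mount-count interval and is a fast no-op when no check is due:

```shell
# Hypothetical wrapper: fsck (respecting the -c interval), then mount.
mount_checked() {
    dev=$1; mnt=$2
    fsck -C0 "$dev"            # no -f: only checks when the interval is due
    [ $? -le 1 ] || return 1   # 0 = clean, 1 = errors corrected; both fine
    mount "$dev" "$mnt"
}
# usage:  mount_checked /dev/sdb1 /mnt/hd
```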
I want to know how I can make fsck automatically fix HDD errors, without asking the user, when it needs to check the disk during boot. I found some fsck-related scripts in /etc/init/ and /etc/init.d/ but I don't know what to do with them.
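On Debian/Ubuntu-style systems (a hedged guess from the /etc/init.d layout mentioned), the stock boot scripts read a single setting, so the scripts themselves don't need editing:

```shell
# /etc/default/rcS -- read by the boot-time checkroot.sh/checkfs.sh scripts:
#
#   FSCKFIX=yes     # fsck runs with -y: repair automatically, never prompt
#
# Without FSCKFIX support, the fallback is editing the fsck call in the
# init script itself to pass -y (repair everything) or -p (safe fixes only).
```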
I have a server with Debian on it that I regularly reboot after upgrades. Sometimes (on schedule) fsck will check a disk while the computer is booting. Short of sitting in front of the console to observe the fsck, how can I tell the difference between a problematic halt and an fsck (besides waiting out the fsck, hoping that's what it is)? When I send the computer down for a reboot, I usually have a terminal window open pinging it so I know when it has come back up. My first thoughts drifted to fantasizing about hacking fsck to respond to pings with some special magic byte, so you could tell via ping that a computer was fsck-ing, but there have got to be easier ways.
I have two ext3 LVs of 4 GB and 10 GB in my hda8 partition, and they are automounted via /dev/mapper/ entries in the /etc/mtab files of each of the four distros (SUSE 9.3, openSUSE 10.2, Kubuntu 7.04 and Debian Lenny 5.0.3). Since ext3 is a journalled fs, I feel I ought to fsck their integrity every 3 months or so; however, I don't know
a) whether they must be unmounted before running fsck, b) whether I should use a live CD such as Knoppix to run the fsck command, and c) whether I can and/or should run fsck /dev/hda8, or whether I should somehow fsck each LV separately?
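A hedged sketch answering the shape of the question (LV names are illustrative): fsck targets whatever device actually holds the filesystem, so check each mapped LV and never /dev/hda8 itself; hda8 carries LVM metadata, not an ext3 filesystem. The volumes must be unmounted during the check, and a live CD is only required for filesystems you cannot unmount, such as /.

```shell
# Hypothetical helper: check each LV, refusing any that is still mounted.
check_lvs() {
    for lv in "$@"; do
        if grep -q "^$lv " /proc/mounts; then
            echo "skip $lv (mounted; unmount it or use a live CD)"
        else
            fsck -f "$lv"
        fi
    done
}
# usage:  check_lvs /dev/mapper/vg-lv4g /dev/mapper/vg-lv10g
```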
I created a chroot jail in /SECURITY/Jail. But when I used the command 'sudo chroot /SECURITY/Jail' to enter the fake root, I got error messages like: groups: cannot find name for group ID 105, groups: cannot find name for group ID 119.
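A hedged reading of that error: inside the jail, name lookups for GIDs 105 and 119 fail because the jail has no /etc/group (and likely no /etc/passwd), so the groups command cannot map IDs to names. Copying the name databases in is usually enough for this warning (the jail path comes from the question; some lookups additionally want nsswitch.conf and the NSS libraries):

```shell
# Hypothetical helper: give a chroot jail the files that group/user
# name lookups need.
populate_jail_names() {
    jail=$1
    mkdir -p "$jail/etc" &&
    cp /etc/passwd /etc/group "$jail/etc/"
}
# as root:  populate_jail_names /SECURITY/Jail && chroot /SECURITY/Jail
```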
I just installed Mandriva Linux 2009. I set a password for root and created a user account. When I try to log in as root, after logging out as the user, it does not allow me and gives the error "root logins are not allowed"; it does not even show the root account. If I try to become root from a Konsole terminal using su root, it lets me in, but when I try to start the GUI with startx it gives an error. I'm not sure what to do, or why I can't see my root account in GUI mode.