I know if I do a shutdown -rF now, it will perform an e2fsck on all my volumes with the -y switch. But if I just want to check one of the volumes rather than all of them, and have it use the -y switch so it will automatically answer yes to everything, how can I do that? I'm using RHEL, and have a huge volume I need to run a check on, and I don't want to sit there for the next 24 hours hitting the Y key every time it finds a problem ;-)
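To be concrete, is the answer just to unmount that one volume and check it by hand, something like this (the device name is only a placeholder)?

umount /dev/VolGroup00/LogVol02            # placeholder device for the big volume
e2fsck -f -y /dev/VolGroup00/LogVol02      # -f forces the check, -y answers yes to every fix
mount /dev/VolGroup00/LogVol02 /bigshare   # remount when it finishes (placeholder mount point)

Or is there a way to have just that one volume checked at reboot?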
I have two volumes, both with 800GB used on them; let's call them /vol1 and /vol2. /vol2 is just a cron'd rsync copy of a folder on /vol1, which is a live share for many users. If I temporarily suspend the cron job doing the rsyncs from /vol1 to /vol2, is it safe to unmount and e2fsck /vol2, then remount it somehow? (I've sketched the sequence I have in mind at the end of this post.)
Both /vol1 and /vol2 say the filesystem state is not clean when I do a tune2fs -l on them. According to tune2fs, both will check themselves upon restart, but if I can do /vol2 beforehand, since it isn't the live data, that will cut my downtime in half the next time I restart the server. But I also wonder: if I can do this and then remount /vol2, will the "not clean"-ness of /vol1 just be rsync'd back over to /vol2 the next time the rsync runs?
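For reference, the sequence I have in mind looks roughly like this (the device backing /vol2 is just a placeholder):

# comment out the rsync job in the crontab first, then:
umount /vol2               # should work as long as nothing has files open there
e2fsck -f -y /dev/sdc1     # placeholder device backing /vol2
mount /vol2                # remount using the existing fstab entry

Is that all there is to it, or am I missing something?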
I have a heavily used file server that I want to restart, and if any volume needs an e2fsck, have the checks run after it comes back up. The only problem is that the server is rarely rebooted, and I've been told it might kernel panic because it's been up so long. I've heard there's a way to get past the kernel panic if it does happen, but I'm not sure how to do that or the rest of it. If it were a Windows server, I would schedule a shutdown with the force switch and have the chkdsks already scheduled for each volume on reboot, but for RHEL I really don't know. I'm hoping this can be done, so I can have it kick off at, say, 7am; then when I get in at 8am it will probably be near the end of the e2fsck's and I can see what's going on.
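Is the closest equivalent to the Windows approach just scheduling the forced-check reboot ahead of time, something like this?

shutdown -rF 07:00 "Rebooting for filesystem checks"    # reboot at 7:00 and force fsck on every volume

And that still doesn't cover the kernel panic part, which is the bit I'm really unsure about.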
We had to reboot a server in the middle of a production day due to 99% iowait (lots of processes in deep sleep waiting for disk iops). That's never happened to us before. It had been 363 days since the last fsck, so it started automatically on reboot. It hung at 4.8% on a 2TB LVM2 volume for about an hour. I killed the fsck and rebooted the server. The second time, it went past that point and is currently at about 62%. First, what causes e2fsck to hang like that? Second, is there any danger in killing e2fsck, rebooting, and starting it again?
I have some large volumes that I don't want to be automatically e2fsck'd when I reboot the server. Is it safe to change the maximum mount count to -1 and the check interval to 0 while a volume is mounted, or will that cause problems for the file system?
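To be concrete, the exact commands I'm thinking of running on the mounted volume are (the device name is just a placeholder):

tune2fs -c -1 /dev/sdb1    # disable the maximum-mount-count check
tune2fs -i 0 /dev/sdb1     # disable the time-based check interval
tune2fs -l /dev/sdb1 | grep -iE 'mount count|check interval'    # confirm the new settings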
I'm running SUSE 11.3 with GNOME (and PulseAudio, which I tried to get rid of, but for me it's just too much hassle to get audio working without Pulse). The Master volume of my USB headset (second sound card) is reset to 0 after each boot. In order to get it working properly, I have to run the YaST sound module and re-set the Master volume (which is always down to 0). Using alsamixer to do the same doesn't work (alsamixer shows no controls), probably because Pulse grabs the device or does something else with it?
PulseAudio version: 0.9.21-10.1.1.i586
ALSA version: 1.0.23-2.12.i586
Any input on what I can do to make the volume level stick?
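If it helps, would the manual command-line equivalent be something like this (assuming the USB headset is card 1, which is just a guess)?

amixer -c 1 set Master 80% unmute    # set and unmute the headset's Master control
alsactl store                        # save the current mixer levels so they survive a reboot

Though given that alsamixer shows no controls, I suspect amixer would hit the same problem while Pulse holds the device.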
I've been at this for days now and I'm at my wits' end.
I tried to create a RAID-5 volume with three identical 1TB drives. I did so, but I couldn't mount the new volume after a reboot. mdadm --detail told me that of the three drives, two were in use, with one as a hot spare. This isn't what I wanted.
So I deleted the volume with the following commands:
Then I rebooted the machine, and used fdisk to delete the linux raid partitions and re-write an empty partition table on each of the drives. I rebooted again.
Now I'm trying to start over. I purged mdadm, removed the /etc/mdadm/mdadm.conf file, and now I'm back at square one.
Now for whatever reason I can't change the partition tables on the drives (/dev/sdb /dev/sdc /dev/sdd). When I try to make a new ext3 filesystem on one of the drives, I get this error:
"/dev/sdb1 is apparently in use by the sytem; will not make a filesystem here!"
I think the system still believes mdadm has some kind of hold on my drives and won't release them. Never mind that all I want to do, even now, is just make a RAID-5 volume; I've never had such difficulty before.
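From what I've read so far, the commands that are supposed to release the disks are something like this (device names are mine, and I haven't confirmed this is the right procedure):

cat /proc/mdstat                    # see whether any md array is still assembled
mdadm --stop /dev/md0               # stop it if it shows up
mdadm --zero-superblock /dev/sdb1   # wipe any leftover RAID superblocks so nothing auto-assembles them
mdadm --zero-superblock /dev/sdc1
mdadm --zero-superblock /dev/sdd1

Is that the right track, or is something else holding the partitions?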
I am trying to create a link to my Windows XP workgroup where all my data is stored (I was surprised that Linux could even see it!). I mounted a volume on the desktop, apparently; that worked fine until I rebooted and it had disappeared. It was fairly annoying that I had to go back into the network and re-mount the volume. How can I get it to stay put, even after rebooting?
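From the bits I've gathered so far, making it permanent probably means an entry in /etc/fstab along these lines (the server name, share name, mount point, and credentials file are all placeholders I made up):

//WINXPBOX/shared  /mnt/winshare  cifs  credentials=/root/.smbcred,uid=myuser  0  0

where /root/.smbcred would hold the username= and password= lines. Is that the right way to do it, or is there a friendlier desktop way to make the mount stick?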
I'm looking for insight on how it might be possible to grow an existing volume/partition/filesystem while it's in active use, and without having to add additional LUNs/partitions to do it. For example, the best way I can find to do it currently (and am using in production) is this: you have a system using LVM to manage a connected LUN (iSCSI/FC/etc.), with a single partition/filesystem residing on it. To grow this filesystem (while it's active) you have to add a new LUN to the existing volume group and then expand the filesystem. To date I have not found a way to expand a filesystem that is hosted by a single LUN.
For system context, I'm running a 150 TB SAN with over 300 spindles, to which about 50 servers are connected. It is an equal mix of Linux, Windows, and VMware hosts connected via both FC and iSCSI. With both Windows and VMware, the aforementioned task of expanding a single LUN and having the filesystem expand with it is barely a one-minute operation that "just works". If you can show me a clean way to seamlessly expand a LUN and have a Linux filesystem expanded (without a reboot/unmount/etc.), I have cycles to test out any suggested methods or techniques, and I'm more than happy to report the results for anyone else interested. I think this is a subject where many people would love to find that magic method to make all our lives much easier.
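To make the goal concrete, the sequence I keep hoping exists on Linux is roughly this (device, VG, and LV names are placeholders, and it assumes the LVM physical volume sits directly on the LUN rather than inside a partition):

echo 1 > /sys/block/sdb/device/rescan   # have the kernel re-read the grown LUN's size
pvresize /dev/sdb                       # grow the LVM physical volume to match
lvextend -L +100G /dev/vgdata/lvdata    # grow the logical volume
resize2fs /dev/vgdata/lvdata            # grow the ext3 filesystem online, while it stays mounted

With a partition in the middle, I assume there's an extra step to grow the partition that I haven't figured out how to do online.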
I have a volume on my server that according to tune2fs is "clean with errors", so I'm assuming I either need to unmount the volume and e2fsck it, or reboot, drop into maintenance mode, and do it there. There aren't any live Samba shares off that volume, so I'm thinking I could do it without taking the server down, since this server is only for Samba shares, which are on a different volume. Could someone tell me if I'm taking the right approach? I've never done unmounting and mounting by hand before, but I've read it can be done manually without affecting how the volumes are mounted when the server starts; I'll have to look up the commands.
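From what I've read, the commands would presumably be something like this (the mount point and device name are placeholders for the affected volume):

umount /srv/archive               # mount point of the "clean with errors" volume
e2fsck -f -y /dev/vg00/archive    # check it, answering yes to every fix
mount -a                          # remount everything listed in /etc/fstab

but I'd appreciate confirmation before trying it on the server.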
Other than when there are errors in the messages log or when you have file system problems, when should you e2fsck volumes? I have a lot of volumes that have 500GB to 1TB of data on them, and it takes quite a while to e2fsck them, so I'm wondering if it's something that should be done regularly, or only when there are actual problems.
Also, why does an e2fsck restart itself after a while? Does it get to a certain number of errors and then have to start over from the beginning? Are there any tweaks or switches you can use to make it run more efficiently?
The volume keyboard shortcuts on my Asus Eee 1008P reset on reboot (going back to no shortcut at all). They work for the session if I set them, but after a reboot I have to set them again.
I added another disk to the server and created a mount point for it in fstab: /dev/VolGroup00/LogVol00 /opt ext3 defaults 1 2. Everything works perfectly - halt, boot, normal operation - but when I want to reboot with sudo reboot, it hangs at the end of all the shutdown initialization, just showing some number. If I remove the disk from fstab, the reboot works.
I encountered a problem on my new PROD box. I have 300GB of remaining space and I decided to create a /dev/mapper/VolGroup00 volume group using the Red Hat GUI, which was successful. Then I decided to create logical volumes out of it.
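For reference, what I'm trying to end up with would look roughly like this from the command line (sizes and names are just examples):

lvcreate -L 100G -n lv_data VolGroup00                  # carve a 100GB logical volume out of the group
mkfs.ext3 /dev/VolGroup00/lv_data                       # put an ext3 filesystem on it
mkdir -p /data && mount /dev/VolGroup00/lv_data /data   # mount it somewhere useful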
If I umount both of them, can I run an e2fsck on each at the same time through two PuTTY sessions, or will that not really gain me anything over doing them one after the other?
I've got a file server with two RAID volumes. The one in question is six 1TB SATA drives in a RAID6 configuration. I recently thought one drive was becoming faulty, so I removed it from the set. After running some stress tests, I determined my underlying problem hadn't cleared up. I added the disk back, which started the resync. Later on, the underlying problem caused another lockup. After a hard reboot, the array will not start. I'm out of ideas on how to kick this over.
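For anyone willing to suggest something, I assume the relevant commands are along these lines (device and array names are mine):

cat /proc/mdstat                                   # current state of the arrays
mdadm --examine /dev/sd[b-g]1                      # inspect the RAID superblock on each member
mdadm --assemble --force /dev/md1 /dev/sd[b-g]1    # attempt assembly despite the interrupted resync

Is the forced assemble safe here, or is there a better first step?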
I am trying to connect one of our RHEL 5.4 servers to IBM iSCSI storage. The server is equipped with two single-port QLogic iSCSI HBAs (TOE). RHEL detected the HBAs and installed the driver itself (qla3xxx). I have configured the HBA IP addresses in the range of the storage's iSCSI host ports, and each HBA is connected to a different storage controller. I discovered the storage with the iscsiadm -m discovery command for both controllers, and that went through fine. The problem is that whenever the server restarts with both HBAs connected to the storage, it does not detect the volumes mapped to it, and I have to run "mppBusRescan" and "vgscan" each time to detect them. If only one path is connected, it is fine.
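Would it be reasonable, as a stopgap, to append those two commands to /etc/rc.local so they run on every boot?

# appended to /etc/rc.local as a stopgap, not a real fix
mppBusRescan    # rescan the multipath bus so the mapped volumes show up
vgscan          # let LVM find the volume groups on the newly visible LUNs

I'd still like to understand why the volumes aren't detected when both paths are connected.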
I want to perform an e2fsck with the -y switch (so I don't have to answer yes to every question) on two volumes on a server the next time I restart it. I don't want to do a shutdown -rF because 1) I don't want to check the other volumes, and 2) it seems when I do that, the e2fsck doesn't keep restarting itself over and over to fix all the problems. It seems like it runs once, and then if it fails it drops you to the repair console in single-user mode. I'd rather have it start the kind of check that keeps repeating over and over right away, because I know it'll take more than one pass.
If I issue a shutdown -rF now, it will force e2fsck's on all the volumes when it reboots. But once the checks finish automatically, does it restart normally? I want to run e2fsck's on all my volumes, but I don't want to stick around for the probably 5 hours they will run, so I'm hoping someone knows for sure what happens.
I have a Dell PE1750 server which would not boot up after a power failure. I am dropped to a maintenance shell after it shows an error in the file system check. The server was running Red Hat Enterprise Linux AS release 3 (Taroon). Please let me know if I can try to recover from this error by booting from the first CD of a newer version of Linux, like RHEL 5; I ask because I do not have the old media the system was set up with. Can using the newer OS CD cause any problems?
I'm rearranging a bunch of disks on my server at home and I find myself wanting to move a bunch of LVM logical volumes to another volume group. Is there a simple way to do this? I saw mention of a cplv command, but that seems to be either old or not something that was ever available for Linux.
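Is the only option a plain block copy into a new LV in the target group, something like this (names and sizes are made up, and the LV would be unmounted first)?

lvcreate -L 50G -n mylv newvg                     # create a destination LV of the same size as the original
dd if=/dev/oldvg/mylv of=/dev/newvg/mylv bs=4M    # raw copy of the volume contents
lvremove /dev/oldvg/mylv                          # remove the original once the copy checks out

That would presumably work, but it feels clumsy, so I'm hoping there's something more direct.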
I am using sda1 as /, which is a bootable drive. I do not know if my problem is that I did not create a separate /boot partition. After removing the ISO DVD, I tried to reboot and got this back:
-bash: /sbin/reboot: Input/output error
Then it returns me to the terminal prompt.
I've had a look at some similar threads, but as I'm very new to Linux they're already a bit technical for me. Sorry, this calls for someone with patience. I gather from other threads that disconnecting an external drive without unmounting is a no-no, and this seems to be the likely cause. Now the disk is read-only and I'm unable to change any settings through the usual control panel on Ubuntu. I'm just not familiar with the terminal instructions. I tried to cut and paste a few command lines from other threads, but I got some warnings that proceeding could damage data. Like this one: WARNING! Running e2fsck on a mounted filesystem may cause SEVERE filesystem damage.
HP 210 Mini, Fedora 14 Xfce, kernel 2.6.35.11-83.fc14.x86_64
I have inserted my handy drive. However, when I right click and select unmount I get the following message:
An application is preventing the volume "New Volume" from being unmounted
So I try from the command line:
umount /dev/sdb1
And I get the following message:
umount: /media/New Volume: device is busy.
All I have done is copy some files to my handy drive, so I am not sure what process is keeping it busy. Is there any command I can use to see what process, or anything else, is using the handy drive?
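I'm guessing lsof or fuser is what I want here; is the right invocation something like this?

lsof +D "/media/New Volume"      # list every open file under the mount point
fuser -vm "/media/New Volume"    # show the processes keeping the filesystem busy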
I have Squeeze (2.6.32-5-686) installed on sda and have an additional disk, sdb. For some reason 'dmesg' always gives me this message for sda1 (after a reboot):
Feb 14 12:29:03 arkiv-x kernel: [ 448.349949] EXT3 FS on sda1, internal journal
Feb 14 12:29:03 arkiv-x kernel: [ 448.470411] loop: module loaded
Feb 14 12:29:03 arkiv-x kernel: [ 448.653327] kjournald starting. Commit interval 5 seconds
My son's netbook with 10.10 Netbook Remix failed to boot. Using the live install CD and GParted I couldn't repair the EXT4 filesystem. The error reported was:
e2fsck : Device or resource busy while trying to open ...
After trying many solutions and a lot of web searching, I decided to try a different live CD and booted Knoppix 6.4.4.
Using the command line I typed e2fsck -v -f -y /dev/xxxx (xxxx = your device). This worked the first time, and the machine rebooted without hesitation.
We have an old server running, and I decided to run fsck.ext3 -n on the disk to check it (while it was mounted and in use). Turns out it reports lots of errors - not a good thing.
The weird thing is that when booting up a rescue CD and running fsck.ext3 on it, it says there are no problems. The filesystem is marked clean. Forcing a check with -f turns up nothing.
Now, when booting it from disk, fsck complains about an unclean file system that has not been checked for like 50000 days (obviously an error). Running e2fsck -n /dev/sda2 turns up errors again - not necessarily the same ones as the last time.
This makes me wonder: can running e2fsck on a mounted file system cause errors? I ran with -n, which is not supposed to change anything, just do a read-only check. On the other hand, I've heard checking a live file system might throw errors since the files being checked can change while they're being checked, causing false positives.
Could the old version of e2fsprogs (1.38, from around 2005) mean non-existent errors are being shown? Both the rescue CD and the system use this version.
In any case, why would the file system report errors on boot-up when the rescue CD had just said it was OK? It should have been marked clean by then.
For laughs, I shut down the system and booted Knoppix, which has a fairly recent version of e2fsprogs (1.41.12, May 2010). It showed no errors on the file system.
What do you think - are there errors or not on the file system?
The system is actually running SUSE, but this is not about SUSE-specific things, just general Linux tools. (And I use Ubuntu personally.)