Ubuntu Servers :: 10.4.1 Hang On Reboot After Mounting LVM Volume
Jan 4, 2011
I added another disk to the server and created a mount point for it in fstab:
/dev/VolGroup00/LogVol00 /opt ext3 defaults 1 2
Everything works perfectly... halt, boot, the system itself... but when I want to reboot with sudo reboot, it hangs at the end of all the shutdown messages while rebooting, showing some number. If I remove the disk from fstab, reboot works again.
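One way to narrow this down (a diagnostic sketch, not a confirmed fix; it assumes nothing else on VolGroup00 is still in use) is to release the new volume by hand before rebooting, to see whether LVM teardown is what the reboot is waiting on:
Code:
sudo umount /opt
sudo vgchange -an VolGroup00
sudo reboot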
I've got a file server with two RAID volumes. The one in question is 6 1TB SATA drives in a RAID6 configuration. I recently thought one drive was becoming faulty, so I removed it from the set. After running some stress tests, I determined my underlying problem hadn't cleared up. I added the disk back, which started the resync. Later on, the underlying problem caused another lock up. After a hard-reboot, the array will not start. I'm out of ideas on how to kick this over.
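A hedged starting point (the member device names below are placeholders, not the poster's actual drives) is to check what each member thinks the array state is and then attempt a forced assemble:
Code:
cat /proc/mdstat
sudo mdadm --examine /dev/sd[b-g]1
sudo mdadm --assemble --force /dev/md0 /dev/sd[b-g]1
sudo mdadm --detail /dev/md0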
I'm fairly new to Linux and tried Slackware as a way to force myself to learn. I am running -current, and when I issue the reboot command it hangs on "Restarting system". If I use the shutdown -r now command it reboots fine. Any ideas?
Code:
shutdown -h now
reboot
shutdown -r now
halt
init 0
init 6
And all hang on the same line. This is 100% reproducible. I am not actually running a virtual machine. I don't have qemu-kvm installed. I do have separate partitions on my system. I have a /boot, /, swap, and /home partition.
From looking at other posts: [URL] .....
Solutions tend to be across the board: not unmounting properly, acpi settings in grub, using a different shutdown command.
My fstab file is:
Code:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda2 during installation
[Code] ....
and the output of mount is
Code:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=498135,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=800408k,mode=755)
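Since the ACPI-in-GRUB suggestion keeps coming up, here is a sketch of that particular workaround (an assumption, not a confirmed fix for this machine): force a different reboot method via the reboot= kernel parameter.
Code:
# /etc/default/grub - try reboot=acpi, reboot=bios or reboot=pci
GRUB_CMDLINE_LINUX_DEFAULT="quiet reboot=acpi"
# then regenerate the config:
#   sudo update-grub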
This is a school with Linux running on students' laptops, connecting via WLAN to a Debian NFS and LDAP server. Every student logs on to his/her profile residing on the NFS server. The clients are set up with autofs. Earlier, I had set up the wireless network in /etc/network/interfaces, but this time I decided to configure NetworkManager so as to bring up both the wireless and wired networks before logon. This setup has been working for the last four or five years with only minor changes. It also worked with Karmic Koala, but still with the interfaces file instead of NetworkManager. The Vostro is also new here; we've previously used mostly Dell Latitude D505s.
So here is what works:
1: Clients can log on to LDAP and NFS servers both wired and wirelessly. Everything is smooth.
2: While on LAN, shutdown and restart works flawlessly (and quick as a breeze, I'm really impressed by startup/restart/shutdown times, under 25 secs!).
3: Shutdown and restart also works wirelessly when doing it either from a local account or from the GDM chooser.
What doesn't work, however, is shutting down or restarting directly from a networked account while connected only over the wireless network. This is what's displayed on the terminal after it has tried to shut down for a while:
Code:
The system is going down for halt NOW!
acpid: exiting
init: cron main process (1011) killed by TERM signal.
init: tty1 main process (1365) killed by TERM signal.
[code]...
If I try ctrl-alt-del at this stage, it says:
"init: rc main process (3030) killed by TERM signal"
"Checking for running unattended-upgrades: "
And then it will hang again, until I hold the power button for some seconds. The unattended-upgrades part is what seems to be the culprit. I suspect it is about the wireless network no longer being connected or something like that, but I'm not sure how to go about debugging shutdown scripts here. I'd be grateful for pointers. I will try and see how it goes with the old interfaces file setup, but I'd rather make NetworkManager work.
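One thing worth trying (a sketch only; it assumes the hang really is NFS/autofs still holding mounts after the wireless link goes away) is to release the network mounts by hand before shutting down and see whether the hang disappears:
Code:
sudo service autofs stop
sudo umount -a -t nfs,nfs4 -l
sudo shutdown -h now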
This is probably not possible, but just wondering: I restarted CentOS 5.2 remotely and it hung at shutdown. I KVM'ed in to see what was hanging it up, hit Ctrl-C and then Ctrl-Alt-Del just trying to kill the process, and was brought back to the login screen but unable to type anything. Alt-F1 switched to a different virtual console, which was just a blank screen and a blinking cursor, but now I am able to type. I want to avoid a hard shutdown. Is there any way to force a shutdown/reboot of the machine from this console?
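If the magic SysRq interface is enabled, a sketch for forcing a reasonably clean reboot from that console (as root: s syncs, u remounts filesystems read-only, b reboots) would be:
Code:
echo 1 > /proc/sys/kernel/sysrq
echo s > /proc/sysrq-trigger
echo u > /proc/sysrq-trigger
echo b > /proc/sysrq-trigger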
I've been at this for days and I'm getting to the end of my wits.
I tried to create a RAID-5 volume with three identical 1TB drives. I did so, but I couldn't mount the new volume after a reboot. mdadm --detail told me that of the three drives, two were in use with one as a hot spare. This isn't what I wanted.
So I deleted the volume with the following commands:
Then I rebooted the machine, and used fdisk to delete the linux raid partitions and re-write an empty partition table on each of the drives. I rebooted again.
Now I'm trying to start over. I purged mdadm, removed the /etc/mdadm/mdadm.conf file, and now I'm back at square one.
Now for whatever reason I can't change the partition tables on the drives (/dev/sdb /dev/sdc /dev/sdd). When I try to make a new ext3 filesystem on one of the drives, I get this error:
"/dev/sdb1 is apparently in use by the sytem; will not make a filesystem here!"
I think the system still believes that mdadm has some weird hold on my drives and won't release them. Never mind that all I want to do even now is just make a RAID-5 volume; I've never had such difficulty before.
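A hedged way to check whether md (or device-mapper) really is still claiming the disks, and to release them if so (the device names follow the ones mentioned above, and the zero-superblock step assumes the old RAID partitions still exist):
Code:
cat /proc/mdstat
sudo dmsetup table                       # is device-mapper (dmraid/LVM) holding them instead?
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1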
I am trying to create a link to my Windows XP workgroup where all my data is stored (I was surprised that Linux could even see it!). I mounted a volume on the desktop, apparently... that worked fine until I rebooted and it had disappeared. It was fairly annoying that I had to go back into the network and re-mount the volume. How can I get it to stay put, even after rebooting?
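The usual way to make a Windows share survive reboots is a CIFS entry in /etc/fstab. This is a generic sketch, with placeholder share name, mount point and credentials file rather than the poster's actual setup:
Code:
# /etc/fstab
//WINXP-PC/Share  /mnt/winshare  cifs  credentials=/home/user/.smbcredentials,uid=1000,iocharset=utf8  0  0
# /home/user/.smbcredentials (chmod 600):
#   username=youruser
#   password=yourpass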
I have multiple Ubuntu machines and I connect to one through an NFS share. I have done this for a few years without issue. However, since re-installing Ubuntu and upgrading to 10.4 I have a problem with my system hanging when the remote shares are lost.
Basically, I can power down the machine downstairs, and my main machine then has a fit. I cannot open any folders in Ubuntu, nor can I shut down. If I try to shut down, the system hangs; last time it hung for 8 hours before I had to kill the power.
These are the lines in my fstab
I don't know what I've done wrong, or how I can prevent this from hanging. I have googled the heck out of this as well and can't seem to find an answer either.
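Without seeing the exact options in use this is only a guess, but mount options along these lines (soft, bg and a short timeout are the relevant parts) are what usually keeps a vanished NFS server from hanging the client; the server address and paths are placeholders:
Code:
# /etc/fstab
192.168.1.10:/export/data  /mnt/data  nfs  soft,intr,bg,timeo=30,retrans=2  0  0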
I used Wubi to install Ubuntu 10.10 onto my laptop alongside Windows 7. I need to access my Windows hard drive, however, so I used the NTFS Configuration Tool to mount the drive. However, whenever I reboot, it fails to mount and I actually have to go back into the NTFS Config Tool, delete the old mount, and remount it. This is tedious. My /etc/fstab file looks as follows:
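For comparison, a typical ntfs-3g entry looks like the sketch below (this is generic, not the poster's file; the UUID is a placeholder you would replace with the one reported by sudo blkid):
Code:
UUID=0123456789ABCDEF  /media/windows  ntfs-3g  defaults,windows_names,locale=en_US.utf8  0  0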
Does anyone have any good articles on mounting an Ubuntu volume as an iSCSI share on a Windows box? Originally I was just going to use a Samba share, but it turns out Samba has issues with my LAN security. So I thought, since all I really want to do is create the share on my backup server, an iSCSI device would do. I have been using the following article with limited success... [URL]
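As a sketch of one way to export a block device with the tgt target (the package name, IQN and backing device are assumptions, and this may differ from whatever the linked article uses):
Code:
sudo apt-get install tgt
sudo tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2011-01.local.backup:share1
sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb1
sudo tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
# for persistence across reboots, put the same target in /etc/tgt/targets.conf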
Whenever I mount any of the NTFS volumes, instead of opening the volume, Rhythmbox starts automatically and scans the volume for media files. It has nothing to do with media files, as it starts even when there are no media files on the volume.
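On GNOME 2 the autorun behaviour lives in Nautilus's media-handling preferences (Edit > Preferences > Media); a hedged way to switch autorun off entirely from the command line, assuming that is what is launching Rhythmbox here, is:
Code:
gconftool-2 --type bool --set /apps/nautilus/preferences/media_autorun_never true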
Alternate CD on USB - Natty not mounting encrypted volume; I get an initramfs prompt. I have a Dell Inspiron Duo. I've tried to install Natty i386 and AMD64. I set my / (root) and swap under LVM inside an encrypted volume, using manual partitioning. But after reboot, even though I successfully enter the passphrase, swap and root are not mounted.
Now, I've had this working with 10.10. The system seemed a little quirky after upgrading it to Natty, so I wanted a fresh install. I used Unetbootin to run the ISO from USB and also from one of my other partitions. I've tried installing at least 10 times, some repeats, some variations.
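From the initramfs prompt it is sometimes possible to finish the job by hand, which at least shows whether the encrypted volume and the LVM inside it are intact (the partition and mapping names below are assumptions and must match the ones in /etc/crypttab):
Code:
cryptsetup luksOpen /dev/sda5 sda5_crypt
lvm vgchange -ay
exit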
The volume keyboard shortcuts on my Asus Eee 1008P reset on reboot (going back to no shortcut at all). They work for the session if I set them, but after a reboot I have to set them again.
I'm running SUSE 11.3 with gnome (and pulseaudio, which I tried to get rid of, but for me it's just too much hassle to get audio working w/o pulse). The Master volume of my USB headset (2nd soundcard) is reset to 0 after each boot. In order to get it properly working, I have to run the YaST sound module and re-set the Master volume (which always is down to 0). Using alsamixer to do the same doesn't work (alsamixer shows no controls), probably because pulse grabs the device or does something else with it?
Pulseaudio version: 0.9.21-10.1.1.i586 Alsa version: 1.0.23-2.12.i586 Any input on what I can do to make the volume level stick?
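A sketch of the usual approach: set the level once (via the YaST sound module, or amixer if it can see the card) and then store the mixer state so ALSA restores it at boot; whether this survives PulseAudio grabbing the device afterwards is an assumption worth testing.
Code:
amixer -c 1 set Master 80% unmute   # card number is an assumption
sudo alsactl store
alsactl restore                     # run at boot, e.g. from /etc/init.d/boot.local on SUSE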
I know if I do a shutdown -rF now, it will perform an e2fsck on all my volumes with the -y switch. But if I just want to check one of the volumes rather than all of them, and have it use the -y switch so it will automatically answer yes to everything, how can I do that? I'm using RHEL, and have a huge volume I need to run a check on, and I don't want to sit there for the next 24 hours hitting the Y key every time it finds a problem ;-)
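A sketch, assuming the big volume can be unmounted while the rest of the system stays up (the device name and mount point are placeholders):
Code:
umount /bigvol
e2fsck -f -y /dev/vg00/bigvol
mount /bigvol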
I have a RAID 5 with 5 disks. I had a disk failure which took my RAID down; after some struggle I got the RAID 5 up again and the faulty disk was replaced and rebuilt. After the disk rebuilt itself I tried doing a pvscan but could not find my /dev/md0. I followed some steps on the net to recreate the PV using the same UUID, then restored the VG (storage) using a backup file. This all went fine. I can now see the PV, the VG (storage) and the LVs, but when I try to mount it I get a "wrong fs type" error. I know that the LVs are reiserfs filesystems, so I did a reiserfsck on /dev/storage/software, which gives me the following error:
reiserfs_open: the reiserfs superblock cannot be found
Now the next step would be to rebuild the superblock, but I'm afraid that I might have configured something wrong on my RAID or LVM, and by overwriting the superblock I might not be able to go back and fix it once I've figured out what I didn't configure correctly.
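Before touching the superblock it would be worth confirming, non-destructively, that the rebuilt array and the restored LVM metadata actually line up (a sketch; the LV name follows the post, and none of these commands write anything):
Code:
sudo mdadm --detail /dev/md0
sudo pvdisplay
sudo lvdisplay /dev/storage/software
sudo reiserfsck --check /dev/storage/software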
I have a server that has 2 dirty volumes, both of which are very large. One volume contains live data; the other is just an rsync'd copy of that data, which isn't critical to the users. The e2fsck is taking forever in single user mode, so I'm wondering: once the volume with live data comes back clean from e2fsck, is there a way to boot the server and have it skip mounting the other dirty volume (/dev/md1) just this once, so I can get the server up with the live data available to users? Then, with /dev/md1 unmounted and the server up, I should be able to e2fsck it until it comes back clean, then mount /dev/md1. Please let me know how I could do this. I'm running RedHat, if that matters, and I'm quickly running out of time here.
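A sketch of one way to do that: set /dev/md1's fstab entry to noauto with a fsck pass of 0 for this one boot, then check and mount it by hand once the box is serving the live volume again (the mount point is a placeholder):
Code:
# /etc/fstab - temporary entry for this boot only (pass 0, noauto)
/dev/md1  /data2  ext3  defaults,noauto  0  0
# after boot, with users back on the live volume:
#   e2fsck -f -y /dev/md1
#   mount /dev/md1 /data2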
I'm looking for insight on how it might be possible to grow an existing volume/partition/filesystem while it's in active use, and without having to add additional LUNs/partitions to do it. For example, the best way I can find to do it currently, and am using in production, is this: you have a system using LVM managing a connected LUN (iSCSI/FC/etc.) with a single partition/filesystem residing on it. To grow this filesystem (while it's active) you have to add a new LUN to the existing volume group and then expand the filesystem. To date I have not found a way to expand a filesystem that is hosted by a single LUN.
For system context, I'm running a 150 TB SAN that has over 300 spindles, to which about 50 servers are connected. It is an equal mix of Linux, Windows, and VMware hosts connected via both FC & iSCSI. With both Windows & VMware, the aforementioned task of expanding a single LUN and having the filesystem expanded is barely a 1-minute operation that "Just Works". If you can find me a sweet way to seamlessly expand a LUN and have a Linux filesystem expanded (without reboot/unmount/etc.), I have cycles to test out any suggested methods/techniques, and am more than happy to report the results for anyone else interested. I think this is a subject that many people would like to find that magic method for, to make all our lives much easier.
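For what it's worth, a sketch of the sequence that usually works once the array has grown the existing LUN in place, run as root (device, VG and LV names are placeholders, and it assumes the PV sits directly on the LUN rather than inside a partition; ext3/ext4 can be grown online with resize2fs):
Code:
echo 1 > /sys/block/sdb/device/rescan   # make the kernel notice the new LUN size
pvresize /dev/sdb                       # grow the PV to fill the LUN
lvextend -l +100%FREE /dev/vg_data/lv_data
resize2fs /dev/vg_data/lv_data          # online grow, no unmount needed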
After having problems with LXDE crashing while running Jessie, and re-installing Wheezy, I am not able to mount my WinXP drive. In the past I was able to run pcmanfm and mount the drive from there. It would ask for my root password and then would mount the drive. Now, however, when I click on the drive icon it gives me an error message saying authentication is required.
One thing is that when I installed Wheezy I had the WinXP drive disconnected so as to not inadvertently install Wheezy on the wrong drive (I have two identical drives). After installing I connected the WinXP drive and then did a grub update. I can boot either drive, as expected, but I cannot mount the WinXP drive from pcmanfm. Do I need to change PolicyKit?
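Possibly. A sketch of a local PolicyKit override (the exact action ID is an assumption, since it depends on whether this pcmanfm goes through udisks or udisks2) would be a .pkla file like this:
Code:
# /etc/polkit-1/localauthority/50-local.d/55-mount-internal.pkla
[Allow mounting internal drives without the root password]
Identity=unix-group:plugdev
Action=org.freedesktop.udisks2.filesystem-mount-system;org.freedesktop.udisks.filesystem-mount-system-internal
ResultAny=yes
ResultInactive=yes
ResultActive=yes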
I encountered a problem on my new PROD box. I have 300GB of remaining space and decided to create a /dev/mapper/VolGroup00 volume group using the Red Hat GUI. That was successful. Then I decided to create logical volumes out of it...
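For reference, a sketch of the same step from the command line, with the size, LV name and mount point as placeholders:
Code:
lvcreate -L 100G -n lv_data VolGroup00
mkfs.ext3 /dev/VolGroup00/lv_data
mkdir -p /data
mount /dev/VolGroup00/lv_data /data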
I was having this issue with my server when I tried upgrading (fresh install) to 10.04 from 9.04. But to test it out after going back to 9.04 I installed 10.04 server in a virtual machine and found the same issues. I was using the AMD64 version for everything. In the virtual machine I chose openssh and samba server in the initial configuration. After the install I ran a dist-upgrade and installed mdadm. I then created an fd partition on 3 virtual disks and created a RAID5 array using the following command:
This is the same command I ran on my physical RAID5 quite a while ago which has been working fine ever since. After running a --detail --scan >> mdadm.conf (and changing the metadata from 00.90 to 0.90) I rebooted and found that the array was running with only one drive which was marked as a spare. On the physical server I kept having this issue with one drive which would always be left out when I assembled the array and would work fine after resyncing until I rebooted. After I rebooted the array would show the remaining 6 drives (of 7) as spares.
I updated mdadm to 3.1.1 using a deb from debian experimental and the RAID was working fine afterward. But then the boot problems started again. As soon as I added /dev/md0 to the fstab the system would hang on boot displaying the following before hanging:
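As a stopgap while the assembly problem is chased down (an assumption, not a fix for the underlying issue): mountall on Ubuntu 10.04 understands a nobootwait option, which keeps a missing /dev/md0 from blocking the boot.
Code:
# /etc/fstab - mount point and filesystem type are placeholders
/dev/md0  /srv/raid  ext4  defaults,nobootwait  0  2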
I am trying to connect one of our RHEL 5.4 servers to an IBM iSCSI storage unit. The server is equipped with 2 single-port QLogic iSCSI HBAs (TOE). RHEL detected the HBAs and installed the driver itself (qla3XXX). I have configured the HBA IP addresses in the range of the storage's iSCSI host ports. The two HBAs connect to the two different controllers of the storage. I have discovered the storage using the iscsiadm -m discovery command for both controllers and it went through fine. But the problem is that whenever the server restarts with both HBAs connected to the storage, it does not detect the volumes mapped to it, and to detect them I have to run "mppBusRescan" and "vgscan" each time. If only one path is connected, it is fine.
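As a workaround sketch (the paths are assumptions, and this papers over the multipath detection problem rather than fixing it), the rescan could be run automatically late in boot, e.g. from /etc/rc.local:
Code:
# /etc/rc.local
/usr/sbin/mppBusRescan
/sbin/vgscan
/sbin/vgchange -ay
mount -a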
So this is really the result of another problem. There seems to be an issue where the CPU spikes to 99% forever (until reboot) if I run apt-get, Synaptic, or Update Manager while an external USB drive is plugged in. Note: other USB peripherals are no problem, just an external HD.
So my workaround was to eject the drive when doing apt-get or other installation work, then reattach it to remount. Now, on to the present problem. I'm using the basic backup script (the rotating one) found in the Ubuntu Server manual. It uses tar and gzip to store a compressed version of the desired directories on my external USB drive (which sits in a fireproof safe - this is for a business).
However, it seems tar and gzip, which run nightly 6 days a week via cron as root, don't ever want to die, and they don't release the drive. I have to reboot the system (I can't log off) to release the drive and unplug it; then I can do update/install work.
Of course, if apt etc. would work fine without conflicts with the external device, I'd not care about the tar/gzip problem other than it generally isn't a proper way for them to function and it chews up some CPU cycles. (they run about 0.6 and 1.7 percent respectively) I also can't kill them via kill or killall. They seem undead.
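A diagnostic sketch (the mount point is an assumption): check whether the processes are stuck in uninterruptible I/O on the USB drive, which would explain why kill does nothing, and try a lazy unmount instead of a reboot:
Code:
ps -o pid,stat,cmd -C tar,gzip   # STAT 'D' = uninterruptible sleep, unkillable
sudo lsof +D /media/backup       # what is still holding the mount
sudo umount -l /media/backup     # lazy unmount: detach now, clean up later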
I have some large volumes that I don't want to automatically be e2fsck'd when I reboot the server. Is it safe to change maximum mount count to -1 and check interval to 0 while a volume is mounted, or will that cause problems to the file system?
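Changing those two settings with tune2fs is generally considered safe on a mounted ext2/3/4 volume, since it only touches the superblock fields that control scheduled checks; a sketch, with the device name as a placeholder:
Code:
sudo tune2fs -c 0 -i 0 /dev/md0                           # disable mount-count and interval checks
sudo tune2fs -l /dev/md0 | grep -Ei 'mount count|check'   # confirm the new values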
I have 2 hard drives in my box. The 1st one is a 500.0 MB ext4 volume where I have my system (FC 13), and the 2nd one, where I put my database files, is a 78.1 GB ext4 volume (usage = filesystem, format = ext4).
In the file browser, I can see an icon for an 80GB hard drive, but whenever I double-click it I get the following:
Quote:
Error mounting volume: An error occurred while performing an operation on "data" (Partition 1 of ATA Maxtor): the operation failed
clicking details
Quote:
Error mounting: mount exited with exit code 1: helper failed with: mount: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error
when I type
Quote:
dmesg | tail
I get
Quote:
# dmesg | tail
[drm] nouveau 0000:01:00.0: Allocating FIFO number 3
[drm] nouveau 0000:01:00.0: nouveau_channel_alloc: initialised FIFO 3
[drm] nouveau 0000:01:00.0: Allocating FIFO number 4
What is very strange is that MySQL works fine. In Disk Utility, it indicates that the disk is healthy, but when I click "check file system" I get:
File system check on "data" (Partition 1 of ATA MAXTOR STM380215A) completed
File system is NOT clean
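"File system is NOT clean" usually just means the partition needs an fsck while unmounted; a sketch, assuming the partition really is ext4 as described and using the device name from the mount error above:
Code:
sudo umount /dev/sda1            # make sure it is not mounted
sudo fsck.ext4 -f -y /dev/sda1
sudo mount /dev/sda1 /mnt/data   # mount point is a placeholder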
I have a problem on my Ubuntu 10.01 system: it can't mount a drive/volume. When I try, it says: "Unable to mount location. Error mounting: mount: /dev/sda1: can't read superblock". And when I boot my PC into Windows, I get "UNMOUNTABLE_BOOT_VOLUME" on a blue screen. What can I do to solve this problem?
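Since Windows itself reports UNMOUNTABLE_BOOT_VOLUME, the NTFS volume is probably damaged. The usual hedged advice (assuming /dev/sda1 is that same NTFS volume) is to let Windows repair it first and only then retry from Ubuntu, where ntfsfix can clear the dirty flag:
Code:
# from the Windows recovery console or a repair CD:
#   chkdsk /r c:
# then, from Ubuntu:
sudo ntfsfix /dev/sda1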