Red Hat :: RHEL AS4 Not Booting Up - Volume Group Not Found
Jan 18, 2010
I have run into some serious problems trying to start up RHEL AS4. I am trying to install Oracle on this box, which, by the way, is running as a guest OS under VirtualBox. It was running fine yesterday. I was working through the Oracle prerequisites, updating /etc/sysctl.conf with additional kernel parameters. Before that I added another SCSI virtual disk, extended a PV, and left it at that for the night. I have since come back today, and when I try to boot I get the following errors:
Volume group "VolGroup00" not found;
Couldn't find all physical volumes for volume group VolGroup00;
Couldn't find device with UUID "Some UUID of the device";
ERROR: /bin/lvm exited abnormally! (pid 318);
mount: error 6 mounting ext3;
mount: error 2 mounting none;
switchroot: mount failed: 22
umount /initrd/dev failed: Z
kernel panic - not syncing: Attempted to kill init!
I have tried running in rescue mode, and it can find volume group VolGroup00. I have also installed Storage Foundation, which includes Veritas Volume Manager and various other Veritas components. Does anyone have a clue what I am supposed to do here to get RHEL up and running again? I am running kernel version 2.6.9-42.EL, if that helps.
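Since rescue mode can see VolGroup00, the usual suspect is an initrd that no longer matches the disk layout or lacks the LVM (or Veritas) modules it now needs. A minimal recovery sketch, assuming the stock RHEL 4 tools and that rescue mode mounted the installed system at /mnt/sysimage:

Code:
# boot the install media with "linux rescue", then:
chroot /mnt/sysimage
# rebuild the initrd for the failing kernel; -f overwrites the old image
mkinitrd -f /boot/initrd-2.6.9-42.EL.img 2.6.9-42.EL
exit
# then reboot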
Dual PII 400, 512MB, with a Promise SuperTrak 100 IDE array controller. At present I have only one drive on the controller, configured as a single JBOD array. I installed FC9 with no problem. The new partition is created and formatted, GRUB is installed, and then... GRUB is found and boots, but then I get:
Reading all physical volumes. This may take a while...
No volume groups found
Volume group "VolGroup00" not found
Unable to access resume device (/dev/VolGroup00/LogVol01)
mount: could not find filesystem '/dev/root'

I can boot in rescue mode and chroot to the installed system. I changed the kernel boot parameter to "root=/dev/VolGroup00/LogVol00".
Before creating this topic I googled a lot and found plenty of forum topics and blog posts describing a similar problem, but none of them helped me fix it, so I decided to describe it here. I have a virtual machine with CentOS 5.5, and it was working like a charm. But then I turned it off to make a backup copy of the virtual machine, and since then it has had a boot problem. If I just turn it on, it shows the following error message:
Activating logical volumes
Volume group "VolGroup00" not found
Trying to resume from /dev/VolGroup00/LogVol01
Unable to access resume device (/dev/VolGroup00/LogVol01)
...
Kernel panic ...!

During boot I can see 3 kernels, and if I select the 2nd one the virtual machine starts fine; it finds the volume group, etc. (But there is another problem there: it cannot connect the network adapters.) So it is not possible to boot with the newest kernel (2.6.18-194.17.1.el5), but it is possible with an older one (2.6.18-194.11...)
I looked into GRUB's menu.lst and it seems to be fine. I also tried "mkinitrd /boot/initrd-2.6.18-92.el5.img 2.6.18-92.el5", with no luck! Yes, I can insert the DVD .iso and boot from it in "linux rescue" mode.
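One detail worth checking: mkinitrd refuses to overwrite an existing image unless you pass -f, and the image you rebuild has to match the kernel that actually panics (2.6.18-194.17.1.el5 above, not the older 2.6.18-92.el5). A sketch, assuming "linux rescue" mounted the system at /mnt/sysimage:

Code:
chroot /mnt/sysimage
# -f overwrites the existing image; the version must match the failing kernel
mkinitrd -f /boot/initrd-2.6.18-194.17.1.el5.img 2.6.18-194.17.1.el5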
This looks like a problem with the 3.16.0-4-amd64 kernel and LVM. I decided to write it up here in case other users who installed their Debian system with encryption enabled run into this problem after a recent kernel upgrade.
I use Debian jessie. Today I ran the command:
Code: apt-get upgrade
There was a linux-kernel upgrade to 3.16-0-4-amd64 among other packages to be upgraded.
After this upgrade my computer cannot boot anymore.
I get the following error:
Code:
Volume group "ert-debian-vg" not found
Skipping volume group "ert-debian-vg"
Unable to find LVM volume ert-debian-vg/root
Volume group "ert-debian-vg" not found
Skipping volume group "ert-debian-vg"
Unable to find LVM volume ert-debian-vg/swap_1
Please unlock disk sd3_crypt:
And it does not accept my password.
I used the rescue environment on the Debian jessie netinst ISO, decrypted the partition, and took a backup of my /home. Now I don't have much to lose if I reinstall my system, but I would still like to fix this problem if possible.
I have reinstalled the kernel using the Debian jessie netinst rescue ISO, but nothing changed.
I have Timeshift snapshots located at /home/Timeshift, but the timeshift --rescue command cannot find a backup device; it sees the device as encrypted. If I could restore a snapshot it would be very easy to go back in time and get rid of this problem. It would not be a real solution, however.
There is no old kernel option in the GRUB menu, so removing the latest kernel does not seem to be an option.
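For what it's worth, the usual repair path for this class of failure is to rebuild the initramfs from the rescue environment after confirming that /etc/crypttab still names the encrypted device correctly. A sketch, assuming the netinst rescue shell is started inside the root filesystem (the installer's "Execute a shell in ..." option, which sets up the needed mounts):

Code:
# check that crypttab still matches the real device, e.g.
#   sd3_crypt  UUID=<uuid-of-the-luks-partition>  none  luks
cat /etc/crypttab
# regenerate the initramfs for every installed kernel
update-initramfs -u -k all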
I am trying to extend my / size as it's full. The volume group is VolGroup00 and the logical volume is LogVol00, but when I run the command vgextend VolGroup00 /dev/sda8, it says volume group not found. Could it be because I have Windows XP on /dev/sda1, which falls under the same VolGroup?
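For reference, the usual workflow initializes the new partition as a PV before extending the VG, and the VG name has to match vgs output exactly. A sketch, assuming /dev/sda8 is a fresh partition of type 8e and the +10G size is an arbitrary example:

Code:
vgs                            # confirm the exact VG name
pvcreate /dev/sda8             # initialize the partition as a PV
vgextend VolGroup00 /dev/sda8  # add it to the VG
lvextend -L +10G /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00   # grow the filesystem (ext2online on older RHEL)

A Windows partition cannot be part of a VG, so /dev/sda1 is not the cause; a misspelled VG name or an uninitialized PV is the more likely reason for "volume group not found".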
I have a dual-boot system running FC8. I had a problem with an uninstaller on the Windows side that affected the drives on both the Windows and Linux sides. It froze my Linux archive drive. I removed the drive and was able to boot the Windows side from a recovery disk, but not through GRUB. I can't access GRUB, but I can start the Linux boot process with Super Grub Disk. The boot process gives the following errors:
No volume groups found
Volume group "VolGroup00" not found
mount: could not find filesystem '/dev/root'
This is my development virtual machine; I had CentOS 5.3 on it and a lot of stuff I really can't afford to lose.
First off, I'd like to say that if anyone knows of a method for recovering information off a VMware disk, I'd love to hear it. I've already tried the drive mapping in VMware Workstation, but only the boot partition comes up, and I need /home and /etc.
I was screwing around with the LVM size of the disk and totally screwed it up. I can see the two partitions on /dev/sda: /dev/sda1 as the small partition and /dev/sda2 as my large data partition (the one I need access to).
If anyone could guide me into getting my data off or rebuilding my LVM to get it booted again that'd be amazing.
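Before anything destructive, it may be worth booting a live CD inside the VM and checking whether /dev/sda2 is still an intact PV; if it is, the VG can often be activated and mounted read-only. A sketch (the VG/LV names are assumptions):

Code:
pvscan                     # does /dev/sda2 still show up as a PV?
vgscan
vgchange -ay               # activate whatever VGs are found
lvs                        # list the logical volumes
mount -o ro /dev/VolGroup00/LogVol00 /mnt   # mount read-only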
I already know what my problem is; I am just having difficulty fixing it. I recently upgraded our company's server to an HP ML150 and decided to upgrade to FC10, hoping it would go smoothly, and it is not. It does not detect the SATA drives after the installation.
I get:

Volume group "VolGroup00" not found
Unable to access resume device (/dev/VolGroup00/LogVol01)
mount: error mounting /dev/root on /sysroot as ext3: No such file or directory
I know the problem is that my SATA controller is not enabled in the kernel or GRUB, but I don't know how to fix this. My internet searches are coming up short, and live discs are not working, so I am having trouble figuring this out.
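If the initrd simply lacks the SATA controller's driver, one approach from rescue mode is to rebuild it while forcing the module in. A sketch; the ahci module name is an assumption (check lspci for the real controller), and FC10's actual kernel version must be substituted:

Code:
chroot /mnt/sysimage
lspci | grep -i sata       # identify the controller and its driver
# force the driver into the initrd (module name assumed)
mkinitrd -f --with=ahci /boot/initrd-<kernel-version>.img <kernel-version>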
I don't know much about LVM, and I've managed to screw up a drive. I had a 500GB drive with FC14 on it, and I wanted to copy a load of data over to the new 1TB drive that was replacing it. I set up my new install the same way as the old, including the same volume names (error number 1, I think). I successfully mounted the old 500GB drive (using vgscan and vgchange -a y, etc.) on a laptop (running FC13) with an external HDD cradle. I could access the files I wanted, but this wasn't the machine I wanted to copy them to (I was doing this while waiting for the install to finish on the new drive).
When I tried the same process on the new install, I found that having two LVM volume groups with the same name meant I couldn't mount the external one. So I opened the disk utility (Palimpsest) and was going to change the name of the old volume group, but it wouldn't let me do that. I then thought maybe I could get away with just changing the name of the logical volume where the files were, and perhaps add it to the mounted group or something, so I changed it to lv_home2. This changed the name of my new 1TB drive's lv_home to lv_home2 as well. So, deciding that wasn't the answer, I just changed the name of the new lv_home2 back to lv_home.
From that point on I haven't been able to see the old drive's partitions (the new volume group still works so far). It has a physical volume, but the volume group and volume names are gone from view. When I try vgscan on my main computer, or on the laptop where I had it working earlier, I get:
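When two VGs carry the same name, the standard escape is to rename one of them by UUID, which sidesteps the ambiguity entirely. A sketch, assuming the old drive is attached and vgs can still see its metadata:

Code:
vgs -o vg_name,vg_uuid         # note the UUID of the old drive's VG
vgrename <old-vg-uuid> vg_old  # rename by UUID to avoid the name clash
vgchange -ay vg_old
mount /dev/vg_old/lv_home /mnt # LV name assumed from the post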
I'm rearranging a bunch of disks on my server at home, and I find myself in the position of wanting to move a bunch of LVM logical volumes to another volume group. Is there a simple way to do this? I saw mention of a cplv command, but this seems to be either old or not something that was ever available for Linux.
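There is no direct "move LV between VGs" command in Linux LVM2 (cplv is an AIX tool). The two usual routes are vgmerge, if the source VG can be absorbed wholesale, or creating a matching LV and block-copying. A sketch of the copy route, with names and sizes assumed:

Code:
# create a same-sized LV in the destination VG
lvcreate -L 20G -n mylv vg_dest
# block-copy while nothing writes to either LV
dd if=/dev/vg_src/mylv of=/dev/vg_dest/mylv bs=4M
# once verified, drop the original
lvremove /dev/vg_src/mylv

The vgmerge route ("vgmerge vg_dest vg_src", with vg_src deactivated first) keeps the LVs intact but pulls in the whole source VG.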
I have one HD and a VG spanning its entirety. I resized an LV and freed up 10GB within the VG, but I want the 10GB outside the boundary of the VG so I can install another OS for testing purposes. For some reason I'm not able to do this. I don't know if I understand LVM correctly. Maybe there can be only one VG on a HD?
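To answer the last part: a disk can hold any number of VGs, but each VG needs its own PV, which means its own partition; space freed inside a VG is still inside the PV, so it never becomes visible to a partitioning tool. A sketch of two independent VGs on one disk, assuming partitions /dev/sda5 and /dev/sda6 exist:

Code:
pvcreate /dev/sda5 /dev/sda6
vgcreate vg_main /dev/sda5
vgcreate vg_test /dev/sda6   # a second, independent VG on the same disk

Getting the 10GB out of the existing VG means shrinking the PV itself (pvresize) and then the underlying partition.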
I created two physical volumes, added a volume group on top of them, then created a logical volume, formatted it, and mounted it on a directory. Now I want to split the volume group, but I am unable to do it. When I try, an error message says the existing volume group is active and that I have to deactivate it.
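vgsplit moves whole PVs into a new VG, and any LVs living on the PVs being split off must be inactive first, which is what the error is asking for. A sketch with assumed names:

Code:
umount /mnt/data                # unmount the filesystem first
lvchange -an /dev/vg0/lvol0     # deactivate the LV on the PV being split
vgsplit vg0 vg_new /dev/sdb1    # move /dev/sdb1 into a new VG
vgchange -ay vg_new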
After months of using Lenny & Lucid Lynx without issues, I come back to the good existential questions.
I'd like a completely encrypted disk (/ and swap) in addition to the XP partitions (not that safe, but I'll switch completely to Linux once I have solved everything).
1. I create an ext4 partition for /boot.
2. Another partition (/dev/sda7) that I set up for encryption.
3. On top of that, I create a PV for LVM2.
4. I add it to a VG.
5. I create / & swap in the VG.
However, if I add a hard drive, I will have to encrypt its main partition, add it to the VG, and then expand /. So I'll need two passwords at boot time to decrypt.
So I'd like to:
-Encrypt the VG directly; it would solve everything, but no device file appears for the VG, only the PV and the LVs.
-After hours of searching, I couldn't find a solution for a single password...
Maybe there is hope with a filesystem like btrfs providing encryption in the future, but I'll still have to create a swap partition outside of it (or use a swap file, but then hibernation isn't possible).
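For the single-password goal, Debian ships a keyscript for exactly this case: a second LUKS container is unlocked with a key derived from the first, already-open one, so only one passphrase is typed at boot. A sketch, with device and mapping names assumed:

Code:
# add a key derived from the already-open mapping (sda7_crypt here)
/lib/cryptsetup/scripts/decrypt_derived sda7_crypt > /tmp/key
cryptsetup luksAddKey /dev/sdb1 /tmp/key
shred -u /tmp/key

# /etc/crypttab:
#   sda7_crypt  /dev/sda7  none        luks
#   sdb1_crypt  /dev/sdb1  sda7_crypt  luks,keyscript=decrypt_derived

update-initramfs -u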
I am not familiar with LVM at all, although I have successfully got it up and running in Slackware. What I would like to know is: could I create one volume group on a physical volume consisting, at the moment, of just one disk, and install separate Linux releases into logical volumes in this solitary VG? So, for example:
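That layout works: each release gets its own LV for /, they can share a swap LV, and each bootloader entry points at its own root LV. A sketch, with names and sizes assumed:

Code:
pvcreate /dev/sda2
vgcreate vg0 /dev/sda2
lvcreate -L 20G -n slackware vg0   # / for Slackware
lvcreate -L 20G -n fedora    vg0   # / for a second release
lvcreate -L 2G  -n swap      vg0   # shared swap

/boot is usually kept on a plain partition so every release's kernel and bootloader files stay readable without LVM.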
I have RHEL3 on one of my old systems. The OS does not start because the volume group does not exist. Single-user mode does not work either; the only shell I have is via rescue mode, and even there the sysimage does not exist! Should I use the vgimport command? How? I have never used this command before. Part of the error I receive while starting the OS is:
vgscan -- ERROR "vg_read_with_pv_and_lv(): current PV" can't get data of volume group "vg00" from physical volume(s)
vgchange -- no volume groups found
I'm running Debian and used mdadm to set up a RAID 6 array with 4x1TB drives, giving roughly 1.86TB available, with LVM on top. Then I added 4 more 1TB drives to the array. So now I have an 8-drive RAID 6 array with 5+TB available, and the array sees all the available space. The question is how to extend the volume group so that it uses the whole RAID and not just half of it. As of right now the volume group is only 1.86TB.
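Since the VG already sits on the md device, nothing needs to be added to it; the PV just has to be grown to fill the enlarged array. A sketch, assuming the array is /dev/md0:

Code:
pvresize /dev/md0   # grow the PV to the array's new size
pvs                 # PSize should now show the full ~5.4T
vgs                 # the VG's free space grows to match

After that, lvextend plus a filesystem resize hands the new space to whichever LV needs it.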
I'm trying to do a disk upgrade on some servers. They are using LVM with DRBD on top, and each LVM volume contains a Xen image. I have already created identical volumes on another volume group, copied the data, and pointed DRBD at the new source (which seems to have worked).
What I am unsure of is how to safely remove the disks. The disks are an Areca RAID 1 array and support hotswap. Can I just pull them out of the machine, or is some sort of command needed to tell LVM or the kernel to disconnect from the physical array device? Is removing the RAID array from the Areca management GUI first a good idea?
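A possible order for detaching cleanly, top down, assuming the old VG is vg_old on /dev/sdb1 and nothing uses its LVs any more:

Code:
vgchange -an vg_old      # deactivate all LVs in the old VG
vgexport vg_old          # mark the VG inactive/foreign (optional)
# or strip the LVM metadata entirely:
#   lvremove vg_old && vgremove vg_old && pvremove /dev/sdb1
echo 1 > /sys/block/sdb/device/delete   # detach the disk from the kernel

Deleting the array in the Areca GUI only after the kernel has let go of the device is the safer order; hotswap hardware tolerates a plain pull, but the software layers above it are happier with a clean detach.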
I decided to install Ubuntu on my PC. I downloaded the .ISO image and installed it on my USB stick. After trying it out and all that, I found I really liked it, and I decided to formally install it on my computer's hard drive. When I reached the partitioning step, I selected the option to dual boot with Vista and choose between them at every startup. When I clicked Forward it gave me an error, which I did not read (because, again, I'm a noob), so I clicked Cancel.
Today I wanted to go through the process again and really install it. So again I went through to the time zone step and clicked Forward, but then, instead of taking me straight to the partitioning phase, a window appeared saying "The installer has detected that the following disks have mounted partitions: /dev/sda ....". I clicked Yes to unmount these partitions, and it took me to the partitioning step. Once there I selected the option to install Ubuntu alongside Vista and choose between them at each startup, clicked Forward, and went on to the username/computer name step. When I finished I continued to the next part, the installation, but first I selected to import all of my Windows Vista default user data. After that I clicked Forward and the installation began; I went downstairs to eat something while it finished. When I came back it was done, and it asked me to reboot, so I clicked Restart Now.
When it tried to boot, an error appeared saying: "Error: no such device found: #################### GRUB load" (or something like that), followed by a "grub rescue:" command line. Since then I haven't been able to boot into Vista or Ubuntu. I'm really scared, because this is the first OS install I've ever done. So I booted my USB stick and ran the trial session, and right now I'm trying to figure out what to do from that trial version. I went to the Install Ubuntu 10.04 LTS application under the System > Administration menu and found the same "install and allow selecting between both systems at each startup" option in the partitioning phase; I don't know what to do. I found out that my HD still has all its data (music/videos/folders/programs/etc.); it's just that I cannot boot from it. Also, in GParted it appears as /dev/sda1 with a warning icon beside it, and when I go into its information, there's this warning: [URL]
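From the live session, one standard recovery is to reinstall GRUB 2 into the disk's MBR, pointed at the partition that holds Ubuntu; a sketch, assuming Ubuntu landed on /dev/sda1 (worth confirming in GParted first):

Code:
sudo mount /dev/sda1 /mnt
sudo grub-install --root-directory=/mnt /dev/sda
sudo umount /mnt
# after rebooting into Ubuntu, "sudo update-grub" should re-add the Vista entry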
1) Why would I create a new volume group to add a new hard drive to a system, rather than add the drive to an existing volume group?
2) If I created a new volume group and added a new hard drive to it, would I see the total free space (I see 30 GB now via the file browser)? For example, if I have 30 GB free on the main drive (with the OS) and I add a new drive of, say, 40 GB in a new volume group (using LVM), would I see 70 GB of free space? That doesn't seem to happen.
Prior to upgrading some of my hardware I had 4 drives used just as storage. Now I'm trying to mount the drives as an LVM volume, but I don't have enough slots to connect all the drives at once, because they use an outdated type of cable. I can connect three of the four. So, can I somehow move these to a new group, or remove the missing drive from the existing group? The error is:

Couldn't read all logical volumes for volume group VolGroup.
Couldn't find device with uuid 'yQtrVB-5jCk-vF10-05c2-AcDL-GNn1-ivdxxh'.
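If the data that lived on the missing drive is expendable, LVM can drop the absent PV from the group; any LVs with extents on it are lost. A sketch:

Code:
vgreduce --removemissing VolGroup   # drop the absent PV from the VG
# add --force if LVs still reference the missing PV (those LVs are lost)
vgchange -ay VolGroup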
I plan to install a server using LVM. I have in mind a partition scheme where /boot would be on an ext4 partition while /, /usr, /var, /home and /opt would be in the LVM. My question is: if I'm putting / into the LVM, is it necessary to divide /usr, /var, /home and /opt into different logical volumes? If I divide them, would it become harder to maintain when new disk space has to be added to the volume group?
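For what it's worth, splitting into several LVs doesn't make growth harder: once the VG gains space, each LV grows independently, and ext4 grows online. A sketch, with names assumed:

Code:
vgextend vg0 /dev/sdb1          # a new disk joins the VG once
lvextend -L +20G /dev/vg0/var   # then grow whichever LV is tight
resize2fs /dev/vg0/var          # ext4 resizes while mounted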
Fedora 11. I am trying to set up kickstart so that it lays out a mirrored volume group. I have 2 disks, sda and sdb. I want a primary partition on each disk, 200MB in size, for /boot, mirrored onto RAID device md0 (RAID 1). The rest of each disk is to be a partition which grows to use the remaining space, also mirrored (RAID 1) as md1. Onto md1 I want an LVM volume group called rootvg, with logical volumes set up there for /, /home, /usr, /tmp etc. I can lay this out manually, and it works fine. However, the code below, which is slightly amended from a previous anaconda-ks.cfg file, doesn't work.
Code:
clearpart --linux --drives=sda,sdb --initlabel
part raid.11 --asprimary --size=200 --ondisk=sda
part raid.12 --grow --size=1 --ondisk=sda
part raid.21 --asprimary --size=200 --ondisk=sdb
part raid.22 --grow --size=1 --ondisk=sdb
raid /boot --fstype=ext3 --level=RAID1 --device=md0 raid.11 raid.21
raid pv.1 --level=1 --device=md1 raid.12 raid.22
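As posted, the snippet stops after defining pv.1 and never creates the VG or any LVs, which by itself would make the layout fail. A sketch of the missing tail, with sizes assumed:

Code:
volgroup rootvg pv.1
logvol /     --vgname=rootvg --name=root --size=8192
logvol /home --vgname=rootvg --name=home --size=4096
logvol /usr  --vgname=rootvg --name=usr  --size=4096
logvol /tmp  --vgname=rootvg --name=tmp  --size=2048
logvol swap  --vgname=rootvg --name=swap --size=2048 --fstype=swap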
OK, so I have one drive: /boot, lv_root and lv_swap.
At the end of the drive I have 32GB of free space still contained in the logical volume group. I want to remove it from the LVG, but this is all on one device. Supposedly there is a way to do this with pvresize and fdisk.
[URL]
Quote:
Originally Posted by source
#I've tried to shrink the PV with pvresize which didn't throw errors -
Good.
#but fdisk still shows me the same LVM partition size as before.
That's normal. pvresize "just" updates the PV header and VG metadata.
#So I guess the partition table has to be modified somehow?
Yes. That was mentioned in my reply: "Then shrink the partition in the partition table."
You can use fdisk or any other partition table editor for this. Some don't support resizing a partition. In that case, you can delete and create a smaller one. If doing the delete/create dance, you *must* create the new partition on the same cylinder boundary as the current one to preserve the current data.
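Putting the quoted advice together, a possible end-to-end sequence, assuming the PV is /dev/sda2, all allocated extents fit inside the new size, and a verified backup exists:

Code:
pvs -v                        # check PE size and where extents sit
# if extents live in the region being cut off, pvmove them first
pvresize --setphysicalvolumesize 200G /dev/sda2   # size is an example
fdisk /dev/sda                # delete sda2, recreate it smaller,
                              # starting at the SAME first sector
partprobe /dev/sda            # reread the partition table
pvs                           # verify the PV still checks out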
I've read from every source on LVM that it's not possible to do this. Why on earth would any Linux developer put LVM on a single-drive system by default? Were they even paying attention? I don't mean to go off on a rant, but if there are multiple drives, LVM makes sense. However, if you only have one large drive, LVM holds your system hostage, and you have to crawl through the pit of hell to get it back.
I understand you have a choice in the matter when you install Fedora, but it's really the worst possible choice for a default. Many newcomers to Linux run into this problem with LVM. If you cannot resize LVGs, the software should never have been put into a Linux distro in the first place.
I'm hoping to work out, now that I've recovered my partition tables, how to rebuild my LVM volume group. The trouble is that one of the volumes lost its partition table, and after rebuilding the table, LVM can no longer identify the drive. I'm trying to rebuild the 'fileserver' volume group.
pvscan produces the following:

Couldn't find device with uuid 'jsZAMq-LSa1-87Zb-WoGs-oi6v-u1As-h7YZMl'.
PV /dev/sdc5        VG dev        lvm2 [232.64 GiB / 0 free]
PV unknown device   VG fileserver lvm2 [1.36 TiB / 556.00 MiB free]
PV /dev/sda1        VG fileserver lvm2 [1.36 TiB / 556.00 MiB free]
Total: 3 [2.96 TiB] / in use: 3 [2.96 TiB] / in no VG: 0 [0 ]
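The standard repair when a PV label was wiped but the data area survived: recreate the PV with its old UUID from an LVM metadata backup, then restore the VG configuration. A sketch; the device /dev/sdb1 and the archive filename are assumptions (look under /etc/lvm/archive/ for the real one):

Code:
# recreate the PV label with the UUID pvscan is missing
pvcreate --uuid jsZAMq-LSa1-87Zb-WoGs-oi6v-u1As-h7YZMl \
         --restorefile /etc/lvm/archive/fileserver_00000.vg /dev/sdb1
# restore the VG metadata and reactivate
vgcfgrestore -f /etc/lvm/archive/fileserver_00000.vg fileserver
vgchange -ay fileserver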
I want to extend an LVM2 volume group over the remaining free space available on a physical volume. My Linux box runs Ubuntu Karmic 9.10 64-bit; the 60GB hard disk has 2 Windows partitions totalling about 19GB, a 1.5GB ext3 boot partition, and finally a 36GB LVM partition (/dev/sda4), on which I created a volume group (volgrp) 10GB smaller than the 36GB physical volume (/dev/sda4). What I want now is to extend the volume group up to the end of the physical volume. I tried "vgextend volgrp /dev/sda4", but the system answers with the following output:
me@pc:~> sudo vgextend volgrp /dev/sda4 Physical volume '/dev/sda4' is already in volume group 'volgrp' Unable to add physical volume '/dev/sda4' to volume group 'volgrp'.
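vgextend only adds new PVs to a VG, which is why it refuses here. Growing an existing PV into the rest of its partition is pvresize's job; a sketch:

Code:
sudo pvresize /dev/sda4   # grow the PV to fill the whole partition
sudo pvs                  # PSize should now read ~36G
sudo vgs                  # the VG gains the ~10G as free space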
I have a relatively new server (Ubuntu Server 10.04.1) with a "/backup" partition on top of LVM, on top of an MD RAID 1. Everything generally works, except that it freezes during the fsck phase of bootup, with no errors. I've given it 20 minutes or so. If I press 'S' to skip an unavailable mount (documented here), it reports that /backup could not be mounted. There are no LVM-related messages in /var/log/messages, syslog, or dmesg.
When I try to mount /backup manually, it reports that the device (/dev/vg0/store) does not exist. Apparently the volume group was never activated, though all the documentation seems to claim it should happen automatically at boot. When I run "vgchange vg0 -a y", it activates the volume group with no issue, and then I can mount /backup. /etc/lvm/lvm.conf is unchanged from the defaults. I've seen posts mentioning the existence of a /etc/udev/rules.d/85-lvm2.rules, but no such file exists on my server, and I'm not sure how I would go about creating it manually, or how to force udev to create one. There are some open bugs describing similar problems, but surely it doesn't happen to everyone or there'd be many more. [URL]
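As a workaround until the activation bug is pinned down, the mount can be taken out of boot's critical path and the VG activated explicitly afterwards; a sketch for Upstart-era Ubuntu 10.04 (paths assumed):

Code:
# /etc/fstab: stop the mount from blocking boot
#   /dev/vg0/store  /backup  ext4  defaults,noauto  0  2

# /etc/rc.local, before "exit 0": activate and mount by hand
vgchange -a y vg0
mount /dev/vg0/store /backup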