Installation :: FC10 - Volume Group "VOLGroup00" Not Found - Unable To Access Resume Device
Feb 2, 2009
I already know what my problem is, but I am having difficulty fixing it. I recently upgraded our company's server to an HP ML150 and decided to upgrade to FC10, hoping it would go smoothly, but it is not. It does not detect the SATA drives after the installation.
Volume group "VOLGroup00" not found
Unable to access resume device (/dev/VolGroup00/LogVol01)
mount: error mounting /dev/root on /sysroot as ext3: No such file or directory
I know the problem is that my SATA is not enabled in the kernel or GRUB, but I don't know how to fix this. My internet searches are coming up a little short, and live discs are not working, so I am having trouble figuring this out.
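One common way out of this kind of problem is to boot the install DVD in rescue mode, chroot into the installed system, and rebuild the initrd with the SATA driver forced in. A sketch only: the kernel version and the ahci module name below are assumptions to check against your own system.

```shell
# Boot the FC10 DVD with "linux rescue"; the installer mounts the
# installed system under /mnt/sysimage if it can find it.
chroot /mnt/sysimage

# Assumed kernel version -- check the real one with: ls /lib/modules
KVER=2.6.27.5-117.fc10.i686

# Force the SATA driver into the initrd; "ahci" is an assumption,
# check "lspci -k" for the driver your controller actually needs.
mkinitrd --with=ahci -f /boot/initrd-${KVER}.img ${KVER}
```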
Dual PII 400, 512MB, with a Promise SuperTrak 100 IDE Array Controller. At present I have only one drive on the controller, configured as one JBOD array. FC9 installs with no problem: the new partition is created and formatted, GRUB is installed, and then... GRUB is found and booted, but then I get:
Reading all physical volumes. This may take a while...
No volume groups found
Volume group "VolGroup00" not found
Unable to access resume device (/dev/VolGroup00/LogVol01)
mount: could not find filesystem '/dev/root'

I can boot in rescue mode and chroot to the installed system. I changed the kernel boot parameter to "root=/dev/VolGroup00/LogVol00".
I'm trying to move an existing FC10 install (created by someone else) from a 160GB WD1600AAJS SATA disk to a 160GB WD1600AAJB PATA disk (cursed trend of horizontally mounting SATA connectors at the end of the motherboard means the latest rev mobo doesn't fit in our enclosure!). I've used DD to copy the disk image from one to the other, but when attempting to boot, I get the following error:
Unable to access resume device (UUID=946f216f-0c24-4b02-a996-f42059970de7)
mount: error mounting /dev/root on /sysroot as ext3: No such file or directory
That particular UUID maps to sda2, which is the swap partition. Interestingly, both the SATA and PATA disks come up as /dev/sda on the motherboard. I kind of grok that the UUIDs are substitutes for directly naming the disks, and that they're referred to in fstab, in initrd-2.6.27.12-170.2.5.fc10.i686.img, and in /dev/disk/by-*. I'm guessing the problem is that the UUIDs (at least the one for swap) are no longer the same. How are they assigned to the partitions during boot?
I tried doing
swapoff -a
mkswap /dev/sda2
swapon -a
and put the new UUID in fstab and into initrd-* (using some steps I found elsewhere on how to gunzip/rezip it). At that point, I get a kernel panic on boot:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
So I'm guessing that the other UUIDs have changed as well and I need to update them. How would I figure out what they are? I suppose I could change the references to /dev/sda*, but I didn't build this original image, and I'm thinking whoever did had a good reason to go with UUIDs.
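One low-risk way to answer that question is to let blkid report the UUIDs the kernel sees right now, fix fstab and the GRUB config to match, and then regenerate the initrd rather than unpacking it by hand. A sketch; the kernel version string is only an example, use whatever is actually in /lib/modules:

```shell
# Show the UUID of every filesystem/swap area as it is now
blkid

# See which UUIDs the boot configuration still expects
grep UUID /etc/fstab /boot/grub/menu.lst

# Regenerating the initrd bakes the current root/resume UUIDs back in;
# the version here is an example -- match your installed kernel
mkinitrd -f /boot/initrd-2.6.27.12-170.2.5.fc10.i686.img 2.6.27.12-170.2.5.fc10.i686
```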
I have a dual boot system running FC8. I had a problem with an uninstaller on the Windows side that affected both the drives on the Windows and Linux sides. It froze my Linux archive drive. I removed the drive, and was able to boot the Windows side from a recovery disk, but not through Grub. I can't access Grub, but can start the Linux boot process with SuperGrub. The boot process gives the following errors:
No Volume groups found. Volume group "VolGroup00" not found. mount: could not find file system 'dev/root'.
This is my development virtual machine; I had CentOS 5.3 on it and a lot of stuff I really can't afford to lose.
First off, I'd like to say that if anyone knows of any method of recovering information off a VMware disk, I'd love to hear it. I've already tried the drive mapping in VMware Workstation, but only the boot partition comes up, and I need /home and /etc.
I was screwing around with the LVM size of the disk and totally screwed it up. I can see the two partitions on /dev/sda, /dev/sda1 as the small partition and /dev/sda2 as my large data partition (the one I need access to).
If anyone could guide me into getting my data off or rebuilding my LVM to get it booted again that'd be amazing.
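If /dev/sda2 is still an intact LVM physical volume, it may only need to be activated from a rescue or live environment before it can be mounted. A sketch; the VG/LV names below are guesses, so read the real ones off the lvs output:

```shell
pvscan                   # is /dev/sda2 still recognised as a PV?
vgscan                   # look for its volume group
vgchange -ay             # activate every VG that was found
lvs                      # list the logical volumes now available
mkdir -p /mnt/recover
# VolGroup00/LogVol00 is an assumption -- use the names lvs printed
mount -o ro /dev/VolGroup00/LogVol00 /mnt/recover
```

If the metadata itself was damaged by the resize, vgcfgrestore with a backup from /etc/lvm/archive on the guest's root filesystem is the usual next step, though that means getting at that file first.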
I'm running CentOS 5.5 with a hardware Areca controller (Raid 10). I initially installed a machine and rsynced the file system to a third machine; I'm then pulling in the file system to new machines and updating grub with a script to make it bootable.
I've done this before with other machines (no RAID sets), but with these machines I'm running into errors...
I have an FC6 system with kernel 2.6.22.14-72.fc6. When I rebooted the system, I got the error message "Unable to access resume device (LABEL=SWAP-sda8)"; then it went to fsck automatically on all the partitions and then stopped (failed).
I have run into some serious problems trying to start up RHEL AS4. I am trying to install Oracle on this box, which, by the way, is running as a guest OS under VirtualBox. I was running with it fine yesterday. I was doing the prerequisites of updating the /etc/sysctl.conf file with additional kernel parameters. Prior to that I added another SCSI virtual disk and extended a PV, and left it at that for the night. I have since come back today, and when I try booting I get the following errors:
Volume group "VolGroup00" not found
Couldn't find all physical volumes for volume group VolGroup00.
Couldn't find device with UUID "Some UUID of the device".
ERROR: /bin/lvm exited abnormally! (pid 318)
mount: error 6 mounting ext3
mount: error 2 mounting none
switchroot: mount failed: 22
umount /initrd/dev failed: 2
Kernel panic - not syncing: Attempted to kill init!
I have tried running in rescue mode, and there it can find volume group VolGroup00. I have also installed Storage Foundation, which includes Veritas Volume Manager and various other Veritas components. Does anyone have a clue what I am supposed to do here to get RHEL up and running again? I am running kernel version 2.6.9-42.EL, if that helps.
Before creating this topic I googled a lot and found lots of forum topics and blog posts with similar problem. But that did not help me to fix it. So, I decided to describe it here. I have a virtual machine with CentOS 5.5 and it was working like a charm. But then I turned it off to make a backup copy of this virtual machine and after that it has a boot problem. If I just turn it on, it shows the following error message:
Activating logical volumes
Volume group "VolGroup00" not found
Trying to resume from /dev/VolGroup00/LogVol01
Unable to access resume device (/dev/VolGroup00/LogVol01)
...
Kernel panic ...!

During the reboot I can see 3 kernels, and if I select the 2nd one the virtual machine starts fine; it finds the volume group, etc. (But there is also a problem: it cannot connect the network adapters.) So it is not possible to boot with the newest kernel (2.6.18-194.17.1.el5), but it is possible with an older one (2.6.18-194.11...)
I looked into GRUB's menu.lst and it seems to be fine. I also tried "mkinitrd /boot/initrd-2.6.18-92.el5.img 2.6.18-92.el5", but no luck! Yes, I can insert the DVD .iso and boot from it in "linux rescue" mode.
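Note that the mkinitrd attempt targets 2.6.18-92.el5, while the kernel that fails to boot is 2.6.18-194.17.1.el5, so it rebuilt an image for the wrong kernel. A sketch of rebuilding the initrd for the failing kernel from rescue mode; paths assume the default CentOS layout:

```shell
# Boot the DVD with "linux rescue" and let it mount the system, then:
chroot /mnt/sysimage
# Rebuild the initrd for the kernel that actually fails to boot
mkinitrd -f /boot/initrd-2.6.18-194.17.1.el5.img 2.6.18-194.17.1.el5
exit
reboot
```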
This is a problem with linux kernel 3.16.0-4-amd64 and LVM, I guess. I decided to write this here in case other users who installed their Debian system with encryption enabled experience this problem after a recent kernel upgrade.
I use Debian jessie. Today I ran the command:
apt-get upgrade
There was a kernel upgrade to 3.16.0-4-amd64 among the other packages to be upgraded.
After this upgrade my computer cannot boot anymore.
I get the following error:
Volume group "ert-debian-vg" not found
Skipping volume group "ert-debian-vg"
Unable to find LVM volume ert-debian-vg/root
Volume group "ert-debian-vg" not found
Skipping volume group "ert-debian-vg"
Unable to find LVM volume ert-debian-vg/swap_1
Please unlock disk sd3_crypt:
And it does not accept my password.
I used the rescue environment on the Debian jessie netinst ISO, decrypted the partition, and took a backup of my /home. Now I don't have much to lose if I reinstall my system, but I still want to fix this problem if possible.
I have reinstalled the kernel using the Debian jessie netinst rescue ISO, but nothing changed.
I have Timeshift snapshots located at /home/Timeshift, but the "timeshift --rescue" command cannot find a backup device; it sees the device as encrypted. If I could restore a snapshot it would be very easy to go back in time and get rid of this problem. It would not be a real solution, however.
There is no old-kernel option in the GRUB menu, so removing the latest one does not seem to be an option.
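One recovery path worth trying from the netinst rescue shell is to unlock and mount everything manually, chroot in, and regenerate the initramfs so it carries the cryptsetup and LVM hooks for the new kernel. A sketch only: the device names (sda3, sda1) and the crypt mapping name are assumptions, so substitute your own layout.

```shell
# Unlock the encrypted container (device name is an assumption)
cryptsetup luksOpen /dev/sda3 sda3_crypt
vgchange -ay ert-debian-vg
mount /dev/ert-debian-vg/root /mnt
mount /dev/sda1 /mnt/boot                 # if /boot is a separate partition
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
# Rebuild the initramfs for the new kernel inside the installed system
chroot /mnt update-initramfs -u -k 3.16.0-4-amd64
```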
I am trying to extend my / as it's full. The volume group is VolGroup00 and the logical volume is LogVol00, but when I run the command "vgextend VolGroup00 /dev/sda8", it says the volume group is not found. Could it be because I have Windows XP on /dev/sda1, which falls under the same volume group?
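Two things are worth checking before vgextend can work: the VG name is case-sensitive (vgs shows the exact spelling), and /dev/sda8 has to be initialised as a physical volume first. A sketch of the whole sequence, assuming an ext3 root on LogVol00:

```shell
vgs                        # confirm the exact (case-sensitive) VG name
pvcreate /dev/sda8         # the partition must be a PV before joining a VG
vgextend VolGroup00 /dev/sda8
# Grow the root LV into the new free extents, then the filesystem in it
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00
```

For what it's worth, a Windows partition on /dev/sda1 sits outside LVM entirely, so it shouldn't be the cause of the error.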
I'm hoping to figure out, now that I've recovered my partition tables, how to rebuild my LVM volume group. The trouble is that one of the volumes lost its partition table, and after rebuilding the table, LVM can no longer identify the drive. I'm trying to rebuild the 'fileserver' volume group.
pvscan produces the following:

Couldn't find device with uuid 'jsZAMq-LSa1-87Zb-WoGs-oi6v-u1As-h7YZMl'.
PV /dev/sdc5        VG dev        lvm2 [232.64 GiB / 0 free]
PV unknown device   VG fileserver lvm2 [1.36 TiB / 556.00 MiB free]
PV /dev/sda1        VG fileserver lvm2 [1.36 TiB / 556.00 MiB free]
Total: 3 [2.96 TiB] / in use: 3 [2.96 TiB] / in no VG: 0 [0 ]
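When pvscan reports a PV as "unknown device", LVM can usually be told to stamp the old UUID back onto the rebuilt partition and then restore the VG metadata from its automatic backups. A sketch, and a destructive one: the target device /dev/sdb1 and the archive filename are assumptions, so pick the newest matching file in /etc/lvm/archive and double-check which device the missing PV really is.

```shell
# Recreate the PV label with the UUID pvscan says is missing
# (the archive filename and /dev/sdb1 are placeholders)
pvcreate --uuid jsZAMq-LSa1-87Zb-WoGs-oi6v-u1As-h7YZMl \
         --restorefile /etc/lvm/archive/fileserver_00000.vg /dev/sdb1

# Restore the volume group metadata from the same backup, then activate
vgcfgrestore -f /etc/lvm/archive/fileserver_00000.vg fileserver
vgchange -ay fileserver
```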
I wanted to take a current system with a working CentOS 4.7 LVM and move it to a 3ware 9650SE raid card so that I can mirror the drive and have a RAID1. The current system has 2 hard drives. One hard drive contains the CentOS 4.7 LVM install and the second drive is not part of the LVM. The second drive is being used for basic backups. I took a copy of the OS HD using dd and confirmed it worked by booting it up with the hard drive on the motherboard. When I connect it to the 3ware raid card with a new hard drive for the RAID1 and configure the 3ware raid card for RAID1, I get the "Volume Group "VolGroup00" not found" error. It gets past grub and then starts to load up but that is when the error occurs.
The exact error message is:

Making device-mapper control node
Scanning logical volumes
Reading all physical volumes. This may take a while...
No volume groups found
Activating logical volumes
Volume group "VolGroup00" not found
ERROR: /bin/lvm exited abnormally! (pid 461)
Creating root device
Mounting root filesystem
mount: error 6 mounting ext3
mount: error 2 mounting none
Switching to new root
switchroot: mount failed: 22
umount /initrd/dev failed: 2
Kernel panic - not syncing: Attempted to kill init!
I don't know much about LVM, and I've managed to screw up a drive. I had a 500GB drive with FC14 on it, and I wanted to copy a load of data over to the new 1TB drive that was replacing it. I set up my new install the same way as the old, including the same volume names (error number 1, I think). I successfully mounted the old 500GB drive (using vgscan, vgchange -ay, etc.) on a laptop (running FC13) with an external HDD cradle. I could access the files I wanted, but this wasn't the machine I wanted to copy them to (I was doing this while waiting for the install to finish on the new drive).
When I tried the same process on the new install, I found that having two LVM volume groups with the same name meant I couldn't mount the external one. So I opened the disk utility (Palimpsest) and was going to change the name of the old volume group, but it wouldn't let me do that. I then thought maybe I could get away with just changing the name of the partition where the files were, and maybe add it to the mounted group or something, so I changed it to lv_home2. This changed the name of my new 1TB drive's lv_home to lv_home2 as well. So, thinking that wasn't the answer, I just changed the name of the new lv_home2 back to lv_home.
From that point on I haven't been able to see the old drive's partitions (the new volume group still works so far). It has a physical volume, but the volume group and volume names are gone from view. When I try to run vgscan on my main computer, or on the laptop I had it working on earlier, I get:
I'm rearranging a bunch of disks on my server at home, and I find myself in the position of wanting to move a bunch of LVM logical volumes to another volume group. Is there a simple way to do this? I saw mention of a cplv command, but this seems to be either old or not something that was ever available for Linux.
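Classic LVM2 has no single "move this LV to another VG" command (cplv is an AIX command, which is probably where the mention came from). The two usual approaches are copy-and-remove, or merging the volume groups outright when their extent sizes match. A sketch with placeholder VG/LV names and sizes:

```shell
# Option 1: create an equal-or-larger LV in the target VG and copy
# the blocks across, then drop the original
lvcreate -L 20G -n mylv newvg
dd if=/dev/oldvg/mylv of=/dev/newvg/mylv bs=4M
lvremove /dev/oldvg/mylv

# Option 2: fold one VG into the other (extent sizes must match);
# the LVs then simply appear in the merged group
vgchange -an oldvg
vgmerge newvg oldvg
```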
I am not well versed in using Ubuntu. I was using Win XP Pro SP3 on my old machine and decided to make the move to Ubuntu. No problem, since all files on the primary drive of the old machine were just OS-related; I kept all of the important stuff (music, documents, reg codes, etc.) on an external drive. The computer had no problem importing the songs on the external drive into the music library. I am playing songs and loving it. Now the problem: when I start Ubuntu, I don't find the drive and I can't mount it.
I get a box stating:

You are not privileged to mount this volume.
DBus error org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
Fedora 11. I am trying to set up kickstart so it lays out a mirrored volume group. I have 2 disks, sda and sdb. I want a primary partition on each disk, 200MB in size, for /boot, mirrored onto RAID device md0 (RAID 1). The rest of each disk is to be set up as a partition which grows to use the remaining space, also mirrored (RAID 1) as md1. Onto md1 I want an LVM volume group called rootvg, with logical volumes set up there for /, /home, /usr, /tmp, etc. I can lay this out manually and it works fine. However, the code below, which is slightly amended from a previous anaconda-ks.cfg file, doesn't work.
clearpart --linux --drives=sda,sdb --initlabel
part raid.11 --asprimary --size=200 --ondisk=sda
part raid.12 --grow --size=1 --ondisk=sda
part raid.21 --asprimary --size=200 --ondisk=sdb
part raid.22 --grow --size=1 --ondisk=sdb
raid /boot --fstype=ext3 --level=RAID1 --device=md0 raid.11 raid.21
raid pv.1 --level=1 --device=md1 raid.12 raid.22
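One thing the snippet never does is build anything on pv.1: there are no volgroup/logvol lines, so anaconda has no rootvg or logical volumes to create, which by itself can make the layout fail. A hedged completion; the LV names and sizes here are examples, not taken from the original config:

```
volgroup rootvg pv.1
logvol /     --fstype=ext3 --name=rootlv --vgname=rootvg --size=8192
logvol /usr  --fstype=ext3 --name=usrlv  --vgname=rootvg --size=4096
logvol /home --fstype=ext3 --name=homelv --vgname=rootvg --size=4096
logvol /tmp  --fstype=ext3 --name=tmplv  --vgname=rootvg --size=2048
logvol swap  --name=swaplv --vgname=rootvg --size=2048
```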
I have a home network with 2 Win XP machines and one I am trying to load FC10 on. If I get FC10 to work, I will convert the others to FC10. I am trying to access the Win XP machines through my LAN. I have installed Samba and have gone through a lot of guides trying to configure it. Several places say I have to set the security level on the Ethernet card using system-config-securitylevel. When I enter this in a terminal, I get "command not found"; I checked with yum and it is installed. I have tried using the menus but cannot find it anywhere. I can see the FC workgroup from one of the Win XP machines, but I cannot look into it; it says I don't have the right permissions. I would like to set it up so I don't need a userid/password at all. Can I do this?
I must say, the RPM Fusion versions of the Nvidia driver packages work better with FC10 than they did under FC9 for me, which is to say: they work at all. I still get corrupted output after a few hours of work, but I imagine that's just the underlying weakness of the Nvidia driver bubbling up. Either way, now I have other problems:
First off, the Intel driver for my onboard video setup has stopped working entirely. The attempt to bring up X on the Intel setup gets me to the KDE login screen and locks the machine up hard (poweroff required). A look at the xorg.0.log shows that the server seems to be stuck in a loop trying to determine the valid video properties for the session.
Here's my xorg.conf, BTW. I have it set up to perform different configurations depending on which DefaultServerLayout line gets uncommented (lines 4-6). If anybody has a better way of doing this, I'm all ears. While we're on my xorg.conf, please view lines 64-67 and/or 75-77. This is my quickie way of changing drivers (again, all ears), which brings us to the next problem: neither the nv nor the nouveau driver works with my system. Interestingly enough, both drivers tend to fail at the same relative spot in the process, I think. Please refer to line 207 of this Xorg.0.log with the nv driver, or line 140 of this Xorg.0.log from the nouveau setup. Either way, both sessions die with the same message: (EE) Screen(s) found, but none have a usable configuration. Fatal server error: no screens found. Gotta go, X session getting corrupt...
I upgraded to Fedora 10 using yum, but I have a problem. The installation went fine, and after rebooting I got a login screen. I typed in my username and password, and after a second it returns to the login screen. I'm sure the username and password are correct; if I type in another password, I get an error message. So how can I log in?
This is what I did to upgrade to FC10:
Then I got an upgrade wizard, selected Fedora 10, and then rebooted.
In the GRUB menu only one kernel is present, which is the FC10 one, so I couldn't use the old one.
I've tried editing the kernel line with "single" for single-user mode. That works and gives me a command line, but I'm just not experienced enough to figure out how to use the command line to fix the login.
Trying to install FC10 on an older Dell GX150 machine, and I get to the point where the graphical install app loads, but I am not able to see any text on the screen, other than the back/next buttons, and when I click on next, I can see that it goes to a new screen where it looks like it's asking for some text input, but I can't decipher what it's asking for, and if I try to type in the box, nothing happens.
I think I read somewhere on here about using a different boot option to get FC10 to work in a Virtual PC, but not sure if it would apply here (and can't seem to find that thread anyway!) I am booting off the FC10 DVD, and can get to the menu to install/upgrade, rescue, boot locally, or test memory. If I hit TAB, I can edit the boot option - I just need to know what to put in there to get this to work!
There are 2 volumes in a single group. The boot partition is on a physical volume and the system is on a logical volume. The disk has more room, up to 40GB. How can I extend the logical volume? I tried system-config-lvm, but it does not give the option.
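Assuming the usual Fedora layout of VolGroup00/LogVol00 with an ext3 root (both names are assumptions; vgs and lvs show the real ones), the command-line route is to let LVM see any grown partition first, then extend the LV, then the filesystem inside it:

```shell
pvresize /dev/sda2           # only needed if the partition itself was enlarged
vgs                          # check how many free extents the VG now has
# Grow the LV into all free extents, then the filesystem in it
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00   # ext3 can be grown while mounted
```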
I have a Dell Vostro 3300, i5 460 processor, with an NVIDIA 310M graphics card. I'm doing a Kubuntu 10.10 (Maverick) install with the following results. The Live CD boots just fine to "Test" or "Install", and installation goes fine. However, the graphics driver being used is the Intel i915. I have tried installing the NVIDIA drivers directly from the "Additional Drivers" tool, and after the reboot I get through the boot screen to the console. I try to manually startx and I get the errors "no device found", "no screens found". For the second install I tried purging and blacklisting the nouveau drivers and entering safe mode, then using apt-get install nvidia-current and, after that, nvidia-xconfig. Same results.
For the third attempt I re-installed and this time downloaded the drivers from the NVIDIA site (version 256.53): blacklisted nouveau, removed all nvidia packages, updated the initramfs, etc. The install went fine; however, I still end up at a console after boot with the same messages as above: no device found, no screens found. I've searched through the forum and the web and have tried things like adding the nomodeset option along with many other hacks, tips, and fixes. Still no go. I can live with the Intel graphics for now, although I lose 512MB of memory. Unfortunately there is no way to disable or change this setup in the BIOS. I've seen quite a few bug reports at Launchpad:
1. Is this something I should just wait on until a fix comes? Will a fix come? 2. Is there, or will there be, an official updated Ubuntu guide for Maverick on installing NVIDIA drivers with this technology? 3. Lastly, is there anything else I should try?
I just (for the first time ever) installed a version of Ubuntu. It is 10.04, installed off the Live Disk. I was having a great time until the first time I went to boot into it, when I got the message "Error: no such device: <long number>" and a "grub rescue>" prompt.
I have one HD and a VG spanning its entirety. I resized an LV and freed up 10GB within the VG, but I want the 10GB outside the boundary of the VG so I can install another OS for testing purposes. For some reason I'm not able to do this. I don't know if I understand LVM correctly. Maybe there can be only one VG on a HD?
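A disk can certainly hold more than one VG, but each VG needs its own partition (PV), so freeing space inside a VG is not enough on its own. Getting the 10GB outside the VG means shrinking the PV and then the partition underneath it, strictly in that order. A rough and risky sketch (sizes and device names are examples; back up first, and pvmove any extents that sit in the region being cut off):

```shell
pvs -v --segments            # verify the free extents sit at the end of the PV
# Shrink the PV so 10GB falls outside LVM; the target size is an example
pvresize --setphysicalvolumesize 450G /dev/sda1
# Then shrink the /dev/sda1 partition itself (fdisk/parted) to match,
# and create a new partition in the freed space for the other OS
```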