This is a problem with the linux 3.16.0-4-amd64 kernel and LVM, I guess. I decided to write this here in case other users who installed their Debian system with encryption enabled run into this problem after a recent kernel upgrade.
I use Debian jessie. Today I ran:

Code:
apt-get upgrade

Among the packages to be upgraded was a kernel upgrade to 3.16.0-4-amd64.
After this upgrade my computer cannot boot anymore. I get the following error:

Code:
Volume group "ert-debian-vg" not found.
Skipping volume group "ert-debian-vg"
Unable to find LVM volume ert-debian-vg/root
Volume group "ert-debian-vg" not found
Skipping volume group "ert-debian-vg"
Unable to find LVM volume ert-debian-vg/swap_1
Please unlock disk sd3_crypt:
And it does not accept my password.
I used the rescue environment on the Debian jessie netinst ISO, decrypted the partition, and took a backup of my /home. Now I don't have much to lose if I reinstall my system, but I still want to fix this problem if possible.
I have reinstalled the kernel using the Debian jessie netinst rescue ISO, but nothing changed.
I have Timeshift snapshots located at /home/Timeshift, but the timeshift --rescue command cannot find a backup device; it sees the device as encrypted. If I could restore a snapshot it would be very easy to go back in time and get rid of this problem. It would not be a real solution, however.
There is no old-kernel option in the GRUB menu, so removing the latest kernel does not seem to be an option.
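A common cause of this symptom is an initramfs that was generated without the cryptsetup/LVM pieces it needs. A rough rescue-mode sketch of a rebuild; the device names (/dev/sda5 for the LUKS container, /dev/sda1 for a separate /boot) are guesses, so check yours with lsblk or fdisk -l first:

```shell
# All device names below are assumptions -- verify against your own layout.
cryptsetup luksOpen /dev/sda5 sd3_crypt       # unlock the encrypted partition
vgchange -ay ert-debian-vg                    # activate the volume group
mount /dev/ert-debian-vg/root /mnt
mount /dev/sda1 /mnt/boot                     # separate /boot partition, if you have one
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt update-initramfs -u -k all        # regenerate initramfs with crypt+lvm hooks
chroot /mnt update-grub
```

If the regenerated initramfs still fails, comparing it against the previous kernel's initrd contents (lsinitramfs) can show what is missing.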
I don't know much about LVM and I've managed to screw up a drive. I had a 500GB drive with FC14 on it and I wanted to copy a load of data over to the new 1TB drive that was replacing it. I set up my new install the same way as the old, including the same volume names (error number 1, I think). I successfully mounted the old 500GB drive (using vgscan and vgchange -a y etc.) on a laptop (running FC13) with an external HDD cradle. I could access the files I wanted, but this wasn't the machine I wanted to copy them to (I was doing this while waiting for the install to finish on the new drive).
When I tried the same process on the new install, I found that having two VGs with the same name meant I couldn't mount the external one. So I opened the disk utility (palimpsest) and was going to change the name of the old volume group, but it wouldn't let me do that. I then thought maybe I could get away with just changing the name of the volume where the files were and maybe add it to the mounted group or something, so I changed it to lv_home2. This changed the name of my new 1TB drive's lv_home to lv_home2 as well. So, thinking that wasn't the answer, I just changed the name of the new lv_home2 back to lv_home.
From that point on I haven't been able to see the old drive's partitions (the new volume group still works so far). It has a physical volume, but the volume group and volume names are gone from view. When I try to vgscan on my main computer, or on the laptop I had it working on earlier, I get:
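For reference, when two VGs share a name, one of them can usually be renamed by UUID so both become visible again. A hedged sketch; the UUID argument is a placeholder to be filled in from the first command's output:

```shell
vgs -o vg_name,vg_uuid             # list VG names with their UUIDs; find the old drive's VG
vgrename <uuid-of-old-vg> vg_old   # rename by UUID, since the names collide
vgchange -ay vg_old                # activate it under the new name
mount /dev/vg_old/lv_home /mnt/oldhome
```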
I'm rearranging a bunch of disks on my server at home, and I find myself wanting to move a bunch of LVM logical volumes to another volume group. Is there a simple way to do this? I saw mention of a cplv command, but this seems to be either old or something that was never available for Linux.
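As far as I know there is no direct "move LV between VGs" command in Linux LVM2 (cplv is from AIX). One workable approach is a block-for-block copy into the target VG; the names and the 10G size below are purely illustrative:

```shell
# Create a same-sized LV in the target VG, then copy the device contents.
lvcreate -L 10G -n mylv vg_new                      # size must match the source LV
dd if=/dev/vg_old/mylv of=/dev/vg_new/mylv bs=4M conv=fsync
lvremove /dev/vg_old/mylv                           # only after verifying the copy
```

Alternatively, vgmerge can fold one deactivated VG into another (vgchange -an on the source first), after which the LVs simply live in the combined group.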
I have a system with a 2TB RAID 1 array installed (2 x 2TB drives, configured as RAID 1 through the BIOS). I installed CentOS 5.5 and it runs fine. I have now added another 2 x 2TB drives and configured them as RAID 1 through the BIOS.
How do I add this new RAID volume to the existing logical volume?
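A hedged sketch of the usual sequence, assuming the new array shows up as /dev/sdb and the existing names are VolGroup00/LogVol00 (verify all three with lsblk, vgs and lvs before running anything):

```shell
pvcreate /dev/sdb                                # initialize the new RAID volume as a PV
vgextend VolGroup00 /dev/sdb                     # add it to the existing volume group
lvextend -l +100%FREE /dev/VolGroup00/LogVol00   # grow the LV into the new space
resize2fs /dev/VolGroup00/LogVol00               # then grow the ext3 filesystem
```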
1. I want to encrypt my system drive.
2. I want to make my system drive a RAID 1.
3. I have two 1TB hard drives, but I would like to be able to add two more 1TB hard drives, for a total of four 1TB drives, in the future.
How do I do that?
I am using Ubuntu 10, Desktop edition. I have not installed the OS yet; I am building this weekend.
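One commonly used stack for these three requirements is software RAID 1 at the bottom, LUKS on top of the mirror, and LVM on top of the crypto layer. A sketch with assumed device names (sda2/sdb2 as the system partitions):

```shell
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
cryptsetup luksFormat /dev/md0          # encrypt the mirrored device
cryptsetup luksOpen /dev/md0 crypt0
pvcreate /dev/mapper/crypt0
vgcreate vg0 /dev/mapper/crypt0         # the system LVs then live in vg0
# Later, a second mirrored pair (md1) can be encrypted the same way and added:
# vgextend vg0 /dev/mapper/crypt1
```

The future expansion in point 3 then amounts to creating md1 from the two new drives and extending vg0, without touching the existing data.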
Dual PII 400, 512MB, with a Promise SuperTrak 100 IDE Array Controller. At present I have only one drive on the controller, configured as one JBOD array. I installed FC9 with no problem. The new partition is created and formatted, GRUB is installed, and then... GRUB is found and booted, but then I get:
Code:
Reading all physical volumes. This may take a while...
No volume groups found
Volume group "VolGroup00" not found
Unable to access resume device (/dev/VolGroup00/LogVol01)
mount: could not find filesystem '/dev/root'

I can boot in rescue mode and chroot to the installed system. I changed the kernel boot parameter "root=/dev/VolGroup00/LogVol00".
I have one HD with a VG spanning its entirety. I resized an LV and freed up 10GB within the VG, but I want the 10GB outside the boundary of the VG, so I can install another OS for testing purposes. For some reason I'm not able to do this. I don't know if I understand LVM correctly. Maybe there can be only one VG on a HD?
I created two physical volumes, added them to a volume group, then created a logical volume, formatted it, and mounted it on a directory. Now I want to split the volume group, but I am unable to do it. If I try, an error message says that the existing volume group is active and that I have to deactivate it.
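That error is expected: vgsplit refuses to move active logical volumes. A sketch of the usual order of operations, with all names (mount point, VG, LV, PV) as assumptions:

```shell
umount /mnt/data                     # unmount whatever is mounted from the LV
lvchange -an /dev/vg0/lv_data        # deactivate the LV(s) on the PV being moved
vgsplit vg0 vg_new /dev/sdb1         # move PV /dev/sdb1, and its LVs, into vg_new
lvchange -ay /dev/vg_new/lv_data     # reactivate and remount
mount /dev/vg_new/lv_data /mnt/data
```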
I am not familiar with LVM at all, although I have successfully got it up and running in Slackware. What I would like to know is, could I create one Volume Group in a Physical Volume consisting at the moment of just one disk, and install separate Linux releases into Logical Volumes in this solitary VG? So, for example:
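That layout is possible; a sketch of one VG holding a root LV per distro, with all names and sizes purely illustrative:

```shell
pvcreate /dev/sda2
vgcreate vg_multi /dev/sda2
lvcreate -L 20G -n slackware_root vg_multi
lvcreate -L 20G -n fedora_root vg_multi
lvcreate -L 4G  -n swap vg_multi     # one swap LV can be shared between the installs
```

The caveats are that each distro's installer must support installing into an existing logical volume, and that /boot and the bootloader need care (a shared non-LVM /boot partition is the usual arrangement).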
I have RHEL3 on one of my old systems. The OS does not start because the volume group does not exist. Single-user mode does not work either; the only shell I have is in rescue mode, and the sysimage does not exist either! Should I use the vgimport command? How? I have never used this command before. Part of the error I received during OS startup is:
Code:
vgscan -- ERROR "vg_read_with_pv_and_lv(): current PV" can't get data of volume group "vg00" from physical volume(s)
vgchange -- no volume groups found
I'm running Debian and used mdadm to set up a RAID 6 array with 4 x 1TB drives, giving roughly 1.86TB available, with LVM on top. Then I added 4 x 1TB drives to the array. So now I have an 8-drive RAID 6 array with 5+TB available, and the array sees all the available space. The question is: how do I extend the volume group so that it uses the whole RAID and not just half of it? As of right now the volume group is only 1.86TB.
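After growing an md array, the PV sitting on it does not grow by itself; pvresize claims the new space. A sketch, where the md device name and VG/LV names are assumptions:

```shell
pvresize /dev/md0                     # expand the PV to fill the grown array
vgs                                   # the VG should now show the extra space as free
lvextend -l +100%FREE /dev/vg0/data   # optionally grow an LV into it
resize2fs /dev/vg0/data               # and then the filesystem on top
```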
I have run into some serious problems trying to start up RHEL AS4. I am trying to install Oracle on this box, which is running as a guest OS through VirtualBox. I was running with it fine yesterday. I was doing the prerequisites of updating the /etc/sysctl.conf file with additional kernel parameters. Prior to that I added another SCSI virtual disk and extended a PV, and left it at that for the night. I have since come back today, tried booting, and am getting the following errors:
Code:
Volume group "VolGroup00" not found
Couldn't find all physical volumes for volume group VolGroup00
Couldn't find device with UUID "Some UUID of the device"
ERROR: /bin/lvm exited abnormally! (pid 318)
mount: error 6 mounting ext3
mount: error 2 mounting none
switchroot: mount failed: 22
umount /initrd/dev failed: Z
Kernel panic - not syncing: Attempted to kill init!
I have tried running in rescue mode, and it can find volume group VolGroup00. I also have Storage Foundation installed, which includes Veritas Volume Manager and various other Veritas components. Does anyone have a clue what I am supposed to do here to get RHEL up and running again? I am running kernel version 2.6.9-42.EL, if that helps.
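Since adding the new virtual disk is what immediately preceded the failure, one thing worth trying from rescue mode is regenerating the initrd so it probes the current device set. A hedged sketch, using the kernel version from the post:

```shell
chroot /mnt/sysimage
mkinitrd -f /boot/initrd-2.6.9-42.EL.img 2.6.9-42.EL   # rebuild for the installed kernel
exit
# then reboot without the rescue media
```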
Before creating this topic I googled a lot and found lots of forum topics and blog posts with a similar problem, but that did not help me to fix it. So, I decided to describe it here. I have a virtual machine with CentOS 5.5 and it was working like a charm. But then I turned it off to make a backup copy of the virtual machine, and after that it has a boot problem. If I just turn it on, it shows the following error message:
Code:
Activating logical volumes
Volume group "VolGroup00" not found
Trying to resume from /dev/VolGroup00/LogVol01
Unable to access resume device (/dev/VolGroup00/LogVol01)
...
Kernel panic ...!

During reboot I can see 3 kernels, and if I select the 2nd one the virtual machine starts fine; it finds the volume group etc. (But there is also a problem: it cannot connect the network adapters.) So it is not possible to boot with the newest kernel (2.6.18-194.17.1.el5), but it is possible with an older one (2.6.18-194.11...).
I looked into GRUB's menu.lst and it seems to be fine. I also tried "mkinitrd /boot/initrd-2.6.18-92.el5.img 2.6.18-92.el5" — no luck! Yes, I can insert the DVD .iso and boot from it in "linux rescue" mode.
I'm trying to do a disk upgrade on some servers. They are using LVM with DRBD on top, and each LVM volume contains a Xen image. I have already created identical volumes on another volume group, copied the data, and pointed DRBD to the new source (which seems to have worked).
What I am unsure of is how to safely remove the disks. The disks are an Areca Raid 1 array and support hotswap. Can I just pull them out of the machine or is some sort of command needed to tell LVM or the kernel to disconnect from the physical array device? Is removing the raid array from the Areca management GUI first a good idea?
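Before physically pulling the disks, LVM and the kernel can be told to release them. A hedged sketch; the VG name and sdX device are assumptions, and the order (LVM first, then kernel, then the Areca GUI, then the physical pull) is the point:

```shell
vgchange -an old_vg                     # deactivate any remaining LVs on the old VG
vgexport old_vg                         # optional: mark the VG as leaving this system
echo 1 > /sys/block/sdX/device/delete   # detach the block device from the kernel
# After this, deleting the array in the Areca management GUI and then
# hot-pulling the drives should be safe.
```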
1) Why would I create a new volume group to add a new hard drive to a system, rather than add the drive to an existing volume group?
2) If I created a new volume group and added a new hard drive to it, would I see the total free space (I see 30 GB now via the file browser)? For example, if I have 30 GB free on the main drive (with the OS), and I add a new drive of say 40 GB in a new volume group (using LVM) would I see 70 GB of free space? That doesn't seem to happen.
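On the second question: free space only pools within a single VG, and even then it reaches the file browser only after an LV and its filesystem are grown into it. A sketch of the difference (names assumed):

```shell
vgextend vg_main /dev/sdb1                    # same VG: 40GB more *unallocated* space
lvextend -r -l +100%FREE /dev/vg_main/root    # only now does the mounted FS show ~70GB
# versus
vgcreate vg_extra /dev/sdb1                   # separate VG: a separate pool, with its
                                              # own LVs, mounted separately
```

That is why creating a new VG for the new drive does not make the existing mount appear larger.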
Prior to upgrading some of my hardware, I had 4 drives used just as storage. Now I'm trying to mount the drives as an LVM volume, but I don't have enough slots to connect all the drives at once because they use an outdated type of cable. I can connect three of the four. So, can I somehow move these to a new group, or remove the missing drive from the existing group? The error is:

Code:
Couldn't read all logical volumes for volume group VolGroup.
Couldn't find device with uuid 'yQtrVB-5jCk-vF10-05c2-AcDL-GNn1-ivdxxh'.
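If the fourth drive's contents can be written off, the VG can drop the unreachable PV. A hedged sketch; note this permanently discards any LV that had extents on the missing drive:

```shell
vgreduce --removemissing VolGroup   # remove the missing PV from the VG metadata
vgchange -ay VolGroup               # the remaining LVs should now activate
lvs VolGroup                        # check which LVs survived
```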
I plan to install a server using LVM. I have in mind a partition scheme where /boot would be an ext4 partition while /, /usr, /var, /home and /opt would be in the LVM. My question is: if I'm putting / into the LVM, is it necessary to split /usr, /var, /home and /opt into different logical volumes? If I split them, would it become harder to maintain when new disk space has to be added to the volume group?
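Splitting is mostly a policy choice (separate mount options, containing a runaway /var or /home); maintenance-wise, growing any one LV later is only a couple of commands. A sketch with illustrative names and sizes:

```shell
lvcreate -L 10G -n root vg0
lvcreate -L 20G -n home vg0
# later, after adding a disk to the VG with vgextend:
lvextend -r -L +10G /dev/vg0/home   # -r also grows the filesystem on the LV
```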
Fedora 11. I am trying to set up kickstart so it lays out a mirrored volume group. I have 2 disks, sda and sdb. I want a primary partition on each disk, 200MB in size, for /boot; this is to be mirrored onto RAID device md0 (RAID 1). The rest of each disk is to be a partition which grows to use the remaining space and is also mirrored (RAID 1) as md1. Onto md1 I want an LVM volume group called rootvg, with logical volumes set up for /, /home, /usr, /tmp etc. I can lay this out manually, and it works fine. However, the code below, which is slightly amended from a previous anaconda-ks.cfg file, doesn't work.
Code:
clearpart --linux --drives=sda,sdb --initlabel
part raid.11 --asprimary --size=200 --ondisk=sda
part raid.12 --grow --size=1 --ondisk=sda
part raid.21 --asprimary --size=200 --ondisk=sdb
part raid.22 --grow --size=1 --ondisk=sdb
raid /boot --fstype=ext3 --level=RAID1 --device=md0 raid.11 raid.21
raid pv.1 --level=1 --device=md1 raid.12 raid.22
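The snippet stops right after defining pv.1; kickstart still needs volgroup/logvol lines before anaconda can lay out the LVs on it. A hedged completion, with the VG name taken from the post and the sizes purely illustrative:

```
volgroup rootvg pv.1
logvol /     --vgname=rootvg --size=8192 --name=root
logvol /home --vgname=rootvg --size=4096 --name=home
logvol /usr  --vgname=rootvg --size=4096 --name=usr
logvol /tmp  --vgname=rootvg --size=2048 --name=tmp
```

If those lines are already present in the real file, the inconsistent RAID level spelling (`--level=RAID1` versus `--level=1`) would be the next thing to rule out.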
OK, so I have one drive, with /boot, lv_root and lv_swap.
At the end of the drive I have 32GB of free space still contained in the volume group. I want to remove it from the VG, but this is all on one device. Supposedly there is a way to do this with pvresize and fdisk.
Originally Posted by source
#I've tried to shrink the PV with pvresize which didn't throw errors -
#but fdisk still shows me the same LVM partition size as before.
That's normal. pvresize "just" updates the PV header and VG metadata.
#So I guess the partition table has to be modified somehow?
Yes. That was mentioned in my reply: "Then shrink the partition in the partition table."
You can use fdisk or any other partition table editor for this. Some don't support resizing a partition; in that case, you can delete it and create a smaller one. If doing the delete/create dance, you *must* create the new partition on the same starting cylinder boundary as the current one to preserve the current data.
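Putting the whole thread together, the shrink sequence looks roughly like this; the device name and target size are examples, and a backup beforehand is strongly advised:

```shell
pvresize --setphysicalvolumesize 200G /dev/sda2   # shrink the PV inside the partition
fdisk /dev/sda        # delete /dev/sda2 and recreate it smaller, starting at
                      # exactly the same boundary as before
partprobe /dev/sda    # make the kernel re-read the partition table
pvs                   # verify the PV still appears, at its new size
```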
I've read from every source on LVM that it's not possible to do this. Why on earth would any Linux developer put LVM on a single-drive system by default? Were they even paying attention? I don't mean to go off on a rant, but if there are multiple drives LVM makes sense. However, if you only have one large drive, LVM holds your system hostage and you have to crawl through the pit of hell to get it back.
I understand you have a choice in the matter when you install Fedora, but it's really the worst possible choice for a default. Many newcomers to Linux run into this problem with LVM. If you cannot resize VGs, the software should never have been put into a Linux distro in the first place.
I'm hoping, now that I've recovered my partition tables, to work out how to rebuild my LVM volume group. The trouble is that one of the drives lost its partition table, and after rebuilding the table, LVM can no longer identify the drive. I'm trying to rebuild the 'fileserver' volume group.
pvscan produces the following:

Code:
Couldn't find device with uuid 'jsZAMq-LSa1-87Zb-WoGs-oi6v-u1As-h7YZMl'.
PV /dev/sdc5         VG dev          lvm2 [232.64 GiB / 0 free]
PV unknown device    VG fileserver   lvm2 [1.36 TiB / 556.00 MiB free]
PV /dev/sda1         VG fileserver   lvm2 [1.36 TiB / 556.00 MiB free]
Total: 3 [2.96 TiB] / in use: 3 [2.96 TiB] / in no VG: 0 [0 ]
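Since /etc/lvm/backup usually holds the last-known metadata, a lost PV can often be recreated with its original UUID and the VG config restored on top. A hedged sketch; the UUID comes from the pvscan output above, but the target device /dev/sdb1 is an assumption:

```shell
pvcreate --uuid 'jsZAMq-LSa1-87Zb-WoGs-oi6v-u1As-h7YZMl' \
         --restorefile /etc/lvm/backup/fileserver /dev/sdb1
vgcfgrestore -f /etc/lvm/backup/fileserver fileserver   # restore the VG metadata
vgchange -ay fileserver
```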
I am trying to extend my / as it's full. The volume group is VolGroup00 and the logical volume is LogVol00, but when I run the command "vgextend VolGroup00 /dev/sda8", it says the volume group is not found. Could it be because I have Windows XP on /dev/sda1, which falls under the same volume group?
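Two things worth checking here: the exact VG name as LVM sees it, and whether /dev/sda8 has been initialized as a PV at all. A sketch (assumes /dev/sda8 is an empty partition you intend to hand to LVM):

```shell
vgs                        # confirm the VG really is called VolGroup00
pvcreate /dev/sda8         # vgextend needs an initialized PV (destroys data on sda8!)
vgextend VolGroup00 /dev/sda8
```

A Windows partition on /dev/sda1 cannot be part of an LVM volume group, so it should not be the cause.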
I want to extend the size of an LVM2 volume group over the remaining free space available on a physical volume. My Linux box is Ubuntu Karmic 9.10 64-bit; the 60GB hard disk has 2 Windows partitions of about 19GB, a 1.5GB ext3 boot partition, and finally a 36GB LVM partition (/dev/sda4), on which I created a volume group (volgrp) 10GB smaller than the 36GB physical volume (/dev/sda4). What I want now is to extend the volume group up to the end of the physical volume. I tried "vgextend volgrp /dev/sda4", but the system answers with the following output:
Code:
me@pc:~> sudo vgextend volgrp /dev/sda4
  Physical volume '/dev/sda4' is already in volume group 'volgrp'
  Unable to add physical volume '/dev/sda4' to volume group 'volgrp'.
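That message suggests the partition is already a VG member, but the PV inside it was created smaller than the partition; in that case pvresize, not vgextend, grows it:

```shell
sudo pvresize /dev/sda4    # grow the PV (and hence the VG) to fill the partition
sudo vgs volgrp            # the VG size should now match the full 36GB
```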
I have a relatively new server (Ubuntu Server 10.04.1) with a "/backup" partition on top of LVM, on top of an MD RAID 1. Everything generally works, except that it freezes during the fsck phase of bootup, with no errors; I've given it 20 minutes or so. If I press 's' to skip an unavailable mount (documented here), it reports that /backup could not be mounted. There are no LVM-related messages in /var/log/messages, syslog, or dmesg.
When I try to mount /backup manually, it reports that the device (/dev/vg0/store) does not exist. Apparently the volume group was never activated, though all documentation seems to claim this should happen automatically at boot. When I run "vgchange vg0 -a y", it activates the volume group with no issue, and then I can mount /backup. /etc/lvm/lvm.conf is unchanged from the defaults. I've seen posts mentioning the existence of /etc/udev/rules.d/85-lvm2.rules, but no such file exists on my server, and I'm not sure how I would go about creating it manually, or forcing udev to create one. There are some open bugs describing similar problems, but surely it doesn't happen to everyone or there'd be many more. [URL]
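As a stopgap until boot-time activation is fixed, the activation that works manually can be forced from a late boot script; a hedged sketch for /etc/rc.local, placed before its final exit 0 line:

```shell
vgchange -ay vg0      # activate the volume group the boot sequence missed
mount /backup         # /backup is in /etc/fstab, so mounting by mount point works
```

This does not explain the root cause, but it should stop the fsck-phase hang from leaving /backup unmounted.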
I have a CentOS 5.4 storage server with an LVM/RAID1 setup, and I need to change the motherboard from a Gigabyte GA-G31M-ES2C to an Asus P5K-E. After the switch, I got a kernel panic during boot (the following was copied manually from the screen; is this stuff recorded somewhere in the logs? I checked /var/log/messages but didn't find anything):
Code:
Scanning and configuring dmraid support
md: Autodetecting RAID arrays
md: autorun ...
md: ... autorun DONE
I used the CentOS CDROM and entered rescue mode to take a look. Some findings:
1. Initially the drive labels were different from my old system; for example, the old sdb became sdc. I switched the SATA cables around to get this sorted out, except for one IDE drive (originally hda, now hde), but this drive is not used in the RAID.
2. After the labels were fixed, I used mdadm to check the RAID1 devices (md0/1/2/3); it seems each device was missing a drive (originally md0 was created from sda1+sdb1, now it's just sdb1, and thus degraded). I used mdadm to re-add the missing drives, and mdadm --detail shows everything is fine now.
3. Step 2 didn't fix the error, so I did "chroot /mnt/sysimage/" and took a look around; all the directories seem fine, including the ones on the "main" volume group. vgscan/vgdisplay/lvdisplay show all the VGs and LVs as they should be.
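Given the motherboard swap, a likely culprit is an initrd that only contains the old board's SATA controller driver. From rescue mode it can be rebuilt against the installed kernel; <version> below is a placeholder for the real kernel version (visible via ls /lib/modules inside the chroot):

```shell
chroot /mnt/sysimage
mkinitrd -f /boot/initrd-<version>.img <version>   # rebuild with the new board's drivers
exit
# then reboot from the disk
```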