General :: Move / Copy A Logical Volume From One Volume Group To Another?
Dec 1, 2010
I'm rearranging a bunch of disks on my server at home and I find myself in the position of wanting to move a bunch of LVM logical volumes to another volume group. Is there a simple way to do this? I saw mention of a cplv command, but this seems to be either old or not something that was ever available for Linux.
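There is no direct cplv equivalent in LVM2. One common workaround (a sketch only, with vg_src, vg_dst and mylv as hypothetical names) is to create a matching LV in the destination group and copy the data block-for-block:
lvcreate -L 20G -n mylv vg_dst                   # same size as the source LV
umount /mnt/mylv                                 # make sure the source is idle first
dd if=/dev/vg_src/mylv of=/dev/vg_dst/mylv bs=1M
lvremove vg_src/mylv                             # only after checking the copy mounts cleanly
If an LV happens to live entirely on its own physical volume, vgsplit can instead move that PV (and the LV on it) into another group without copying anything.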
Ok so I have one drive. /boot /lv_root and /lv_swap
At the end of the drive I have 32 gigs of free space still contained in the volume group. I want to remove it from the VG, but this is all on one device. Supposedly there is a way to do this with pvresize and fdisk.
Originally Posted by source
#I've tried to shrink the PV with pvresize which didn't throw errors -
#but fdisk still shows me the same LVM partition size as before.
That's normal. pvresize "just" updates the PV header and VG metadata.
#So I guess the partition table has to be modified somehow?
Yes. That was mentioned in my reply: "Then shrink the partition in the partition table."
You can use fdisk or any other partition table editor for this. Some don't support resizing a partition. In that case, you can delete and create a smaller one. If doing the delete/create dance, you *must* create the new partition on the same cylinder boundary as the current one to preserve the current data.
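A rough sketch of that sequence, assuming the PV is /dev/sda2 and the 32 GB at its tail really is unallocated (check pvs -o +pv_free first; the 200G figure is just an example target size):
pvresize --setphysicalvolumesize 200G /dev/sda2   # shrink the PV metadata to the new size
fdisk /dev/sda                                    # delete sda2 and recreate it smaller,
                                                  # starting on exactly the same boundary
partprobe /dev/sda                                # re-read the partition table (or reboot)
If pvresize complains that extents are allocated beyond the new end, pvmove can relocate them toward the front of the PV first.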
I've read from every source on LVM that it's not possible to do this. Why on earth would any Linux developer put LVM on a single-drive system by default? Were they even paying attention? I don't mean to go off on a rant, but if there are multiple drives LVM makes sense. However, if you only have one large drive, LVM holds your system hostage and you have to crawl through the pit of hell to get it back.
I understand you have a choice in the matter when you install Fedora, but it's really the worst possible choice for a default. Many newcomers to Linux run into this problem with LVM. If you cannot resize volume groups, the software should never have been put into a Linux distro in the first place.
I have a system with a 2TB RAID level 1 installed (2 x 2TB drives, configured as RAID1 through the BIOS). I installed Centos 5.5 and it runs fine. I now added another 2x2TB drives and configured them as RAID1 through the BIOS.
How do I add this new RAID volume to the existing logical volume?
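Roughly, the new array gets initialized as a physical volume, added to the existing volume group, and then the LV and filesystem are grown. A sketch, assuming the new array shows up as /dev/sdb and the group/LV are VolGroup00/LogVol00 (the real device name depends on how the BIOS RAID is presented to Linux):
pvcreate /dev/sdb
vgextend VolGroup00 /dev/sdb
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00     # grow the ext3 filesystem; online growth is fine on CentOS 5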
I don't know much about lvm and I've managed to screw up a drive. I had a 500GB drive with FC14 on it and I wanted to copy over a load of data to my new 1TB that was replacing it. I set up my new install the same way as the old...including the same volume names (error number 1 I think) I successfully mounted the old/500GB drive (using vgscan and vgchange -a y etc.) using a laptop (running FC13) and an external hdd cradle. I could access the files I wanted but this wasn't the machine I wanted to copy them to (I was doing this while waiting for the install to finish on the new drive).
When I tried the same process on the new install, I found that having two LVM volume groups with the same name meant I couldn't mount the external one. So I opened the disk utility (Palimpsest) and was going to change the name of the old volume group, but it wouldn't let me do that. I then thought maybe I could get away with just changing the name of the partition where the files were, and maybe I could add it to the mounted group or something, so I changed it to lv_home2. This changed the name of my new/1TB lv_home to lv_home2 as well. So, thinking that wasn't the answer, I just changed the name of the new lv_home2 back to lv_home.
From that point on I haven't been able to see the old drive's partitions (the new volume group still works so far). It has a physical volume, but the volume group and volume names are gone from view. When I try to vgscan on my main computer, or on the laptop I had it working on earlier, I get:
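For the duplicate-name collision itself, vgrename accepts a volume group UUID, so the externally attached group can usually be renamed without touching the running one. A sketch, where the UUID is whatever vgs reports for the old drive and vg_old is just a placeholder name:
vgs -o vg_name,vg_uuid                  # note the UUID of the old (external) group
vgrename <uuid-of-old-group> vg_old     # rename by UUID to dodge the name clash
vgchange -ay vg_old
mount /dev/vg_old/lv_home /mnt/old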
So I have F12 installed on my HD with LVM using the whole extent of the drive. I want to reduce it so I can dual-boot with a Windows system. I managed to reduce the logical volume to free some space, but I can't seem to reduce the physical volume. Is this possible, and how?
Hi. We have a cluster consisting of 10 logical volumes all part of one filesystem. Is there a way to know which logical volume owns a certain file/inode? I have tried what is suggested at this link, but the output is the filesystem and not a specific logical volume.
Dual PII 400, 512Mb with a Promise SuperTrak 100 IDE Array Controller. At present I have only one drive on the controller, configured for 1 JBOD array. I install FC9 with no problem. New partition is created and formatted, Grub is installed, and then... Grub is found and booted, but then I get:
Reading all physical volumes. This may take a while...
No volume groups found
Volume group "VolGroup00" not found
Unable to access resume device (/dev/VolGroup00/LogVol01)
mount: could not find filesystem '/dev/root'
I can boot in rescue mode and chroot to the installed system. I changed the kernel boot parm "root=/dev/VolGroup00/LogVol00"
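A frequent cause with add-on controllers like the Promise SuperTrak is an initrd that lacks the controller module, so the root PV never appears at boot. A hedged sketch from rescue mode (the kernel version in the file name is a placeholder for whatever is installed):
chroot /mnt/sysimage
lvm vgscan && lvm vgchange -ay                    # confirm the VG is visible from here
mkinitrd -f /boot/initrd-<version>.img <version>  # rebuild the initrd so it includes the
                                                  # controller driver and the LVM tools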
I recently resized one of my Logical Volumes that contained 160MB data from 500MB to 6.5GB. After resizing it, I checked the size of the data via 'du -sh' and found that my data had reduced to 143MB.
Fortunately, I backed up the 160MB of data on another partition before resizing the Logical Volume. I ran 'diff' on both directories holding the 160MB and 143MB, but no difference was detected.
How come there is a 17MB difference after resizing?
In case you're wondering how I performed my resize, this is what I did:
Back in the day, I foolishly installed my Fedora server with the default logical volume layout on one physical volume. Knowing now that this is a huge waste of space (the partition is large), I'd like to reduce the logical volume, somehow detach this now-unused space, and mount it as a normal partition. Is this possible? Only 20GB of the 160GB has been used for the OS. The home partition is on a secondary disk.
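It is possible, but every step has to happen in order and the filesystem shrink has to be done offline (boot from rescue/live media, since / cannot be unmounted while running). A rough sketch, with /dev/VolGroup00/LogVol00 on /dev/sda2 and all sizes purely illustrative:
e2fsck -f /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00 25G            # shrink the fs slightly below the target
lvreduce -L 30G /dev/VolGroup00/LogVol00          # then shrink the LV, keeping a safety margin
resize2fs /dev/VolGroup00/LogVol00                # grow the fs back to fill the LV exactly
pvresize --setphysicalvolumesize 40G /dev/sda2    # shrink the PV itself
Then shrink sda2 in fdisk or parted (keeping the same start), create a new partition in the space that is now outside the PV, and mkfs/mount it as a normal partition.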
I am installing Debian for the very first time and having read websites similar to [url] I have come across parts of the installation which I do not understand.
For example, I have created logical volumes using the Logical Volume Manager; however, I am unclear what the message about writing changes to disk before configuring the Logical Volume Manager means.
Once I have created the volume group, I am presented with a window that provides me with the ability to
1. Display configuration details
2. Create volume groups
3. Create logical volume
4. Delete logical volume
5. Extend volume group
Option 2 is pretty self-explanatory; however, I am unsure whether it is advisable to segment directories between 2 or more volume groups. What benefits does it serve?
Option 5 lets me extend a volume group, but I am unsure how this works. Does it mean I can assign free space available on one physical drive to the existing volume group, or does it mean I can assign free space available on a second physical drive, or both? How does it affect security, performance, etc.?
Currently the only way I can see the logical volumes I have created is by selecting Option 4. Is there any other way? How do most people keep track of the logical volumes they have created, e.g. checking off against a checklist?
Next I have the ability to map the logical volumes to mount points; however, I am confused about what purpose the 'none' mount point serves, since I have the option to select it.
What are mount options for?
What do I use labels for?
What are reserved blocks for?
What does typical usage refer to?
How does the option to copy data from another partition work? What is it for?
I have a RHEL4 system with 2 250GB physical volumes. There is a boot partition that is outside LVM and 2 logical volumes (swap and root) within a single volume group. This volume group bridges the 2 physical volumes.
I would like to clone this system onto a single 1 TB physical volume that will replace the 2x250GB currently in use.
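One way to do that without reinstalling is to add the new disk as a PV, migrate all extents onto it with pvmove, and then drop the old disks from the group. A sketch, with /dev/sdc standing in for the 1 TB disk and /dev/sda2 and /dev/sdb1 for the old PVs:
pvcreate /dev/sdc
vgextend VolGroup00 /dev/sdc
pvmove /dev/sda2 /dev/sdc        # move every extent off each old PV (this can take hours)
pvmove /dev/sdb1 /dev/sdc
vgreduce VolGroup00 /dev/sda2 /dev/sdb1
The /boot partition lives outside LVM, so it still has to be copied over separately (e.g. with dd) and GRUB reinstalled on the new disk.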
I am very new to LVM, as well as not especially experienced with Linux, and have some questions about how LVM works. A few months back I set up a server running FC10 and created volume groups during the initial setup. We've realized that we are not using all the available space on the physical drive, and I realized that for some reason (I'm thinking this might have been the default?) we initially created two volume groups (VolGroup00 and VolGroup01), and it appears there are two logical volumes in each (LogVol00 and LogVol01). LogVol00 in VolGroup00 is mapped to /, and the other group was actually unused. I figure it would be simplest to just have all this space mapped to /, so I thought the thing to do would be to simply merge VolGroup01 into VolGroup00. I tried this:
[root@office mapper]# vgmerge VolGroup00 VolGroup01
  Logical volumes in "VolGroup01" must be inactive
So after a bit of research, I tried this:
[root@office mapper]# vgchange -a n VolGroup01
  Can't deactivate volume group "VolGroup01" with 1 open logical volume(s)
So apparently there's an open volume, but I don't know how to go about closing it. I removed LogVol00 from that group, but LogVol01 won't budge.
[root@office mapper]# lvremove VolGroup01
  Can't remove open logical volume "LogVol01"
So how do I go about closing this Volume? At one point, there was some output that told me LogVol01 was being used as swap space. How do I handle that?
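Since LogVol01 in VolGroup01 is the active swap device, it has to be released before the group can be deactivated. A hedged sketch (it assumes nothing in VolGroup01 is worth keeping, and that swap will either be dropped or recreated afterwards):
swapoff /dev/VolGroup01/LogVol01           # stop using the swap LV
lvremove VolGroup01/LogVol01               # remove it so nothing in the group is open
vgchange -a n VolGroup01
vgmerge VolGroup00 VolGroup01              # fold the now-empty group into VolGroup00
lvextend -l +100%FREE VolGroup00/LogVol00  # grow / into the reclaimed space
resize2fs /dev/VolGroup00/LogVol00
Remember to remove or update the old swap entry in /etc/fstab, or recreate swap as a new LV inside VolGroup00 and mkswap/swapon it.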
I have created two physical volumes, then added a volume group on top of them, then created a logical volume, formatted it, and mounted it on a directory. Now I want to split the volume group, but I am unable to do it. If I try, an error message says the existing volume group is active and I have to deactivate it.
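vgsplit refuses to move physical volumes that carry active logical volumes, which is what that error is about. A sketch with purely hypothetical names (myvg, newvg, mylv) and /dev/sdb1 as the PV being split off:
umount /mnt/data
lvchange -a n /dev/myvg/mylv     # deactivate the LV that lives on /dev/sdb1
vgsplit myvg newvg /dev/sdb1     # move that PV, and the LVs on it, into a new group
vgchange -a y newvg
mount /dev/newvg/mylv /mnt/data
Note that vgsplit only works if the LVs being moved sit entirely on the PVs being moved; an LV that straddles both PVs would have to be pvmove'd onto one of them first.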
I'm wondering how easy it would be to read the contents of a physical disk that was part of a larger logical volume. The disk contains a "Linux LVM" partition that spans its entire size. My problem is that one of my disks died and I have to send it back for a warranty replacement. However, since the disk is dead, I can't zero it out. I'm just trying to assess how difficult it would be (or at least how likely) for a tech who's checking out the disk to get at the data.
I am not familiar with LVM at all, although I have successfully got it up and running in Slackware. What I would like to know is, could I create one Volume Group in a Physical Volume consisting at the moment of just one disk, and install separate Linux releases into Logical Volumes in this solitary VG? So, for example:
I have RHEL3 on one of my old systems. The OS does not start because the volume group does not exist. Single-user mode does not work either; the only shell I have is through rescue mode, and the sysimage does not exist either. Should I use the vgimport command? How? I have never used this command before. Part of the error I received while starting the OS is:
vgscan -- ERROR "vg_read_with_pv_and_lv(): current PV" can't get data of volume group "vg00" from physical volume(s)
vgchange -- no volume groups found
Prior to upgrading some of my hardware I had 4 drives used just as storage. Now I'm trying to mount the drives as an LVM volume, but I don't have enough slots to connect all the drives at once because they use an outdated type of cable. I can connect three of the four. So, can I somehow move these to a new group, or remove the missing drive from the existing group? The error is:
Couldn't read all logical volumes for volume group VolGroup.
Couldn't find device with uuid 'yQtrVB-5jCk-vF10-05c2-AcDL-GNn1-ivdxxh'.
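If the data that lived on the missing drive can be written off, the group can usually be brought back without it. A hedged sketch (any LV with extents on the missing PV will be lost):
vgchange -ay --partial VolGroup     # optional: activate what it can, to salvage readable LVs first
vgreduce --removemissing VolGroup   # drop the missing PV from the group
                                    # add --force if an LV still spans the missing disk
vgchange -ay VolGroup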
I plan to install a server using LVM. I have in mind a partition scheme where /boot would be an ext4 partition while /, /usr, /var, /home and /opt would be in the LVM. My question is: if I'm putting / into the LVM, is it necessary to divide /usr, /var, /home and /opt into different logical volumes? If I divide them, would it become harder to maintain when new disk space has to be added to the volume group?
I have a Fedora 8 system that uses LVM on one of its drives (/dev/sdb2). One of the logical volumes (LogVol02) is getting full. There is an unused, unmounted logical volume (LogVol03) available. I can see two possible options.
1) Mount the unused logical volume (LogVol03) on a new mount point (/home2) and create more space there.
2) Delete the LogVol03 logical volume and extend the nearly full volume (LogVol02) into the now-available space.
Option 2 seems like the better approach, since it will seem seamless to the system users. I'm looking for suggestions on how I should go about doing this and what I need to look out for. Is it better to use the command line tools (lvm ...) or the GUI (system-config-lvm) to do this?
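Option 2 is straightforward from the command line, and growing ext3 can be done while LogVol02 stays mounted. A sketch, assuming the group is VolGroup00 (substitute whatever vgs reports) and LogVol02 carries ext3:
lvremove /dev/VolGroup00/LogVol03                 # hand LogVol03's extents back to the VG
lvextend -l +100%FREE /dev/VolGroup00/LogVol02    # grow LogVol02 into the freed space
resize2fs /dev/VolGroup00/LogVol02                # grow the filesystem to match
system-config-lvm can do the same thing, but the command-line route makes it easier to see exactly what changed at each step.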
My computer: (Lenovo T61 Thinkpad, running fc11 for about 2 and half months). Apparently I when I made my partitions I didn't leave quite enough room in my root directory, because I just completely ran out. Here is how my hard drive is partitioned:
The root had about 15 gigs on it, which just filled up. When I restarted to see if that would help, when it rebooted it went fine up to the log-in screen. Instead of the usual fedora blue background, it was black except for the log-in window, which looked very low-res. A little pop-up kept coming up saying the GNOME power configuration settings failed to load or something. When I logged in, the whole screen was black except for the mouse, and I could get no response. I have plenty of space left in home, so I rebooted to rescue mode using the first fedora installation disk, and tried the following command:
lvreduce -L90G /dev/mapper/DRIVE
which only returned:
lvreduce: relocation error: lvreduce: symbol dm_tree_node_size_changed, version Base not defined in file libdevmapper.so.1.02
So I couldn't reduce the size of home, and thus couldn't increase the size of root.
a) Is the lack of space in root the probable cause of my computer not working?
b) Is there a good way to reduce home and increase root while running this live disk?
Note: When I am looking at it now in the logical volume manager, it says that on the whole physical volume there is only 400MB free. However, when I last looked (about 30 mins before I started having problems) it said there were about 100 Gb free.
Edit: Nevermind. I did some more research and it turned out to be more of a gnome power manager thing rather than a memory space thing, although I'm certainly going to increase my root memory now.
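For the record, a rough offline sequence for question b), run from media whose LVM tools actually work (the relocation error above points to mismatched lvm2/device-mapper versions on that rescue disk); VolGroup00/lv_home and lv_root stand in for whatever lvs reports, and the sizes are examples only:
e2fsck -f /dev/VolGroup00/lv_home
resize2fs /dev/VolGroup00/lv_home 80G     # shrink the fs below the new LV size first
lvreduce -L 90G /dev/VolGroup00/lv_home
resize2fs /dev/VolGroup00/lv_home         # grow the fs back to exactly fill the LV
lvextend -l +100%FREE /dev/VolGroup00/lv_root
resize2fs /dev/VolGroup00/lv_root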
I've got a big problem. Earlier this afternoon I tried to unlock my screen, but the password dialog didn't appear (the background did, and I could move the pointer, but no dialog). So I restarted the computer, only for the Fedora bootup icon to get about 3/4 of the way full before the screen blanked out and I got the message "Boot has failed. Sleeping forever." I booted into the liveCD and opened the system installer to see if maybe I could just reinstall the system in place while leaving my data intact. When I got to the partitioning stage, my old partition layout was there...except one LVM volume group was totally missing. And this is the volume group that contained my / and /home, among other things. Another volume group sitting on a different RAID was still there, but ironically it was the one for short-term data.
I have three hard drives, using soft RAID and LVM. Each drive is split into 4 partitions. The first partition of each is part of a RAID-1 where /boot sits. The second of each makes up a RAID-5 on which sits my "Main" volume group for my important data (this is the one that has gone AWOL). The third of each makes up a RAID-0 on which my "Volatile" volume group sits (for /tmp, /var/tmp, and the like). The fourth is swap.
Is there any chance I can restore my volume group so my data can be recovered? I'm not sure if I've got the full layout with volume sizes written down anywhere.
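There is a reasonable chance, as long as the RAID-5 itself assembles, because the VG metadata is stored at the start of each PV. A hedged sketch from a live CD, with /dev/md1 standing in for the RAID-5 array and the backup path purely illustrative:
mdadm --assemble --scan           # bring the RAID sets up first
pvscan                            # does /dev/md1 still show up as a PV?
vgscan && vgchange -ay Main       # try a normal activation
vgcfgrestore -f /path/to/Main_backup.vg Main   # only if the metadata is gone: restore it from
                                               # an archive copy (or one dug out of the PV header)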
To manage them more easily, I tied 2 hard disks together with LVM and made a logical volume, using ext4 for its filesystem.
Today I wanted to format and reinstall the system, so I booted using an Ubuntu CD. But while managing the partitions, I accidentally deleted the logical volume. Because the backup (/etc/lvm) was inside the volume itself, I couldn't restore the old config, so I just created a new logical volume.
As I expected, I couldn't mount it correctly. Mount said something like "Mounting A on B failed: Invalid argument".
I must recover it, because it holds a lot of important data. What should I do?
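If nothing has been written into the new LV, the old data is probably still on disk; the job is to put the old LV definition back exactly as it was. LVM keeps previous copies of the text metadata near the start of each PV, so it can sometimes be recovered even with /etc/lvm gone. A rough sketch, with /dev/sdb1 and myvg/mylv as placeholder names:
dd if=/dev/sdb1 bs=512 count=4096 of=/tmp/pvhead   # copy the PV metadata area
strings /tmp/pvhead | less                         # look for the old volume group description
# save the most recent complete description that still lists the old LV to /tmp/oldvg.conf, then:
vgcfgrestore -f /tmp/oldvg.conf myvg
vgchange -ay myvg
fsck.ext4 -n /dev/myvg/mylv                        # read-only check before trying to mount
This only has a chance if the replacement LV was never formatted or written to; if mkfs was run on it, the old filesystem is at least partially gone.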
There are 2 volumes in a single group. The boot partition is a physical volume and the system is on a logical volume. The disk has more room, up to 40GB. How can I extend the logical volume? I tried system-config-lvm, but it does not give me the option.
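If vgs shows free extents in the group, growing is just two commands; if the spare room is unpartitioned space outside the PV, the partition and PV have to be grown first (fdisk plus pvresize). A sketch with VolGroup00/LogVol00 standing in for the real names and 40G as the example target:
vgs                                        # any free extents in the group?
lvextend -L 40G /dev/VolGroup00/LogVol00   # or lvextend -l +100%FREE to take everything
resize2fs /dev/VolGroup00/LogVol00         # grow the ext3/ext4 filesystem to match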
I just read and learned about logical volume management today. I have a server running RHEL5.4, LVM2. I have 1 physical volume, with one volume group, and 3 logical volumes. I have no free extents, nor do I have any in my volume group (not sure if it's possible to have free in one and not the other anyway), and I am running out of space on one of my logical volumes. Doing a df -h shows 96% of 9.7GB used on /dev/mapper/MainVG-root, mounted at /. So here's the stupid question: how can I find out what directories/files are taking up what space within this logical volume? As I said I have 3 all together, and the other 2 are mapped to /var and a /var pgsql sub-directory. I figured I could get the sizes of the other directories under / and drill down accordingly, but I seem to be missing some basic rule because the commands I am using and the values I am getting don't add up.
For example, it seemed logical to me to do an ls -lsh on / to try and identify the largest directories. Each directory is listed as being ~4-8K in size. That doesn't make sense to me. So I decided to do a du -sh on each directory. Having done this on all of the / sub-directories and added up those values, there is not enough reported usage to equal the 8.9GB of used space that df -h / reports. How would I find out how the 9.7GB here is being allocated? Preferably without scripts, as I am not ready to add a layer of complexity to this yet without understanding some fundamentals.
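ls -l on a directory only reports the size of the directory entry itself (a few KB), never its contents, which is why those numbers look meaningless; du is the right tool. Two commands that usually account for the "missing" space, sketched for this layout:
du -xm --max-depth=1 / | sort -n   # per-directory totals in MB; -x stays on the / filesystem,
                                   # so the separate /var volumes are not double-counted
lsof +L1                           # deleted-but-still-open files: df counts them, du cannot
If lsof +L1 turns up large deleted files, restarting the daemon holding them (or rebooting) releases that space.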