Ubuntu Installation :: LVM2: Resize The Physical Volume
Apr 11, 2011
I have a 500GB hard disk, /dev/sda. On it, there is /dev/sda1 for /boot, /dev/sda2 for an LVM PV (physical volume), and /dev/sda3 for another /boot (multiple Linux distros: one boot partition for GRUB Legacy, another for GRUB 2). So the LVM2 partition, /dev/sda2, is taking up ~465GiB. I want to add another OS (non-Linux), so I resized the *LVM2 physical volume* to 320GiB, successfully, using pvresize.
However, I now need to resize the partition so the LVM2 physical volume only just fits on it, i.e. to 320GiB. My plan of action is to use GParted (the partition table is GUID, so fdisk won't work): first delete the partition from the partition table, then re-add it, but this time with a smaller value (~320GiB). The problem is that I need to know exactly how many MiB/cylinders the physical volume is taking up. So, I run:
Code:
root@sysresccd /root % pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg0
  ...
Which of these values do I need to set the new LVM2 replacement partition to?
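For reference, a sketch of reading the exact figure out of LVM rather than the rounded pvdisplay output; the field names come from pvs, and the arithmetic in the comments states an assumption about the on-disk layout, so round up rather than down:
Code:
pvs --units b -o pv_name,pv_size,pe_start,pv_pe_count,vg_extent_size /dev/sda2
# The PV's on-disk footprint is roughly pe_start + (pv_pe_count * vg_extent_size)
# bytes; divide by 1048576 for the MiB figure gparted expects, and round up so
# the resized PV still fits entirely inside the new, smaller partition.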
I just installed Fedora 14 on my HP ProBook 4320s from the installation CD named Fedora-14-x86_64-Live-Desktop, then installed it to the hard disk. During installation I chose to encrypt the hard disk. When I try to access my hard disk it says "Unable to mount 250GB LVM2 Physical Volume: not a mountable file system". What can I do to get access to my hard disk?
When I installed Ubuntu, I created a 52 GB encrypted partition, which shows up in the Disk Utility and in the window that opens when I click on the "home folder" icon. I see my normal Windows partition, and under that the 52 GB LVM2 partition. But when I try to access it, I get this error:
Unable to mount 52 GB LVM2 Physical Volume - not a mountable file system
This is what fdisk -l shows:
Code:
Device Boot      Start      End      Blocks  Id  System
/dev/sda1   *        1       52      409600  27  Unknown
Partition 1 does not end on cylinder boundary.
/dev/sda2           52    30452   244193280   7  HPFS/NTFS
...
How can I fix this and be able to access that 52 GB partition? This is only my second day working with Ubuntu, so if more information is needed, let me know.
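For reference, the "not a mountable file system" message in both encrypted-disk posts above is expected: neither a LUKS container nor an LVM2 PV is itself a filesystem, so the desktop cannot mount them directly. A minimal sketch of reaching the data inside, with example device and volume names that must be checked against fdisk -l and lvdisplay:
Code:
sudo cryptsetup luksOpen /dev/sda5 cryptlvm   # unlock the LUKS container (device is an example)
sudo vgscan                                   # detect the volume group inside it
sudo vgchange -ay                             # activate its logical volumes
sudo mount /dev/mapper/vg0-home /mnt          # mount a logical volume, not the PV itself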
I recently installed Fedora on my system alongside Windows in a dual boot. Unfortunately, the Fedora partition is too big and is taking 80% of my disk space. LVM2 volumes are not recognised by Windows, so I decided to shrink my Fedora LVM2 partition and create a new FAT32 partition to store common data. I tried GParted from my Ubuntu 10.04 CD, but it was unable to resize the partition. Can someone suggest a GUI tool which could do the resizing of an LVM2 partition?
Can someone help me understand by giving me the commands I need in order to shrink my "debian-home" logical volume by 10 GB and increase the size of my "debian-root" logical volume by that same 10 GB? (Everything on that computer is ext4, including /boot ... physical volume? I think that's what it's called.) I would REALLY appreciate it if someone could just give me the exact or approximate terminal commands that I would need to use. I assure you, I will never forget them.
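A sketch of the usual ext4 shrink/grow sequence, run from a live CD with the filesystems unmounted. The paths assume a VG named "debian" with LVs "home" and "root" (check lvdisplay), and the 40G intermediate size assumes home is comfortably larger than 50 GB:
Code:
e2fsck -f /dev/debian/home         # fsck is mandatory before an offline shrink
resize2fs /dev/debian/home 40G     # shrink the fs safely below its future size
lvreduce -L -10G /dev/debian/home  # take 10 GB away from the LV
resize2fs /dev/debian/home         # regrow the fs to fill the LV exactly
lvextend -L +10G /dev/debian/root  # hand the freed 10 GB to root
e2fsck -f /dev/debian/root
resize2fs /dev/debian/root         # grow root's fs into the new space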
I was trying to remove the physical volume from an old drive. So I opened GParted and told it to rewrite the partition table. The only problem is I targeted the wrong volume: I wiped the partition table on my 4 TB RAID 5 array. This 4 TB array has everything: all my movies, TV shows, music. The only things I have backed up off site are my smaller files like documents. I was about to lose my whole media collection.
I did some research and found a solution that I will post here in the hope that someone will google "I deleted the partition table on my lvm" and find the solution. You should find in your filesystem an /etc/lvm/backup folder. LVM puts a copy of the crucial LVM information there every time you change the volume group.
In this folder you will find a file for each volume group. In this file you will find the UUID for all of the physical volumes that make up that group. The first step is to recreate each physical volume with its original UUID. In my case I had only one physical volume, which was my RAID 5 array. My recreation command looked like this:
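(Reconstructed shape only: the device path below is a stand-in for the RAID array, and the UUID is the one copied out of the backup file.)
Code:
pvcreate --uuid "56ogEk-xxxx-xxxx-xxxx-xxxx-xxxx-Jqfa5K" \
         --restorefile /etc/lvm/backup/raid5 /dev/md0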
Now I have a physical volume with the same UUID it had before. It is essential that you correctly match up the UUIDs with the correct physical devices. The recreated PV is empty, so the volume group needs to be recovered. This is done by using a special tool and the backup file. For me the command looked like this:
vgcfgrestore --file /etc/lvm/backup/raid5 raid5
This tells it to recreate the volume group using the information in the backup file. The restore looks for the UUID of the PV recorded in the backup, which now matches the recreated volume. The coordinates in the backup file match up to the data on the array, and suddenly everything is back!
When I deleted my LVM partition table I did not damage any of the actual volumes on the volume group, I just wiped out the table of contents. The backup file had the information needed to rewrite this table of contents.
So I have F12 installed on my HD with LVM using the whole extent of the drive. I want to reduce it so I can dual boot with a Windows system. I managed to reduce the logical volume to free some space, but I can't seem to reduce the physical volume. Is this possible, and how?
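For reference, a sketch of the steps involved; sizes and device names are examples. The PV can only shrink if the extents at its tail are unallocated, and the partition itself still has to be shrunk afterwards:
Code:
pvs -v --segments /dev/sda2                      # the segments at the end must show "free"
pvresize --setphysicalvolumesize 100G /dev/sda2  # shrink the PV inside the partition
# Then shrink the partition with fdisk/parted, keeping its start sector
# unchanged, so the space freed after it can become the Windows partition.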
We had to reboot a server in the middle of a production day due to 99% iowait (lots of processes in deep sleep waiting for disk iops). That's never happened to us before. It had been 363 days since the last fsck, so it started automatically on reboot. It hung at 4.8% on a 2TB LVM2 volume for about an hour. I killed the fsck and rebooted the server. The second time, it went past that point and is currently at about 62%. First, what causes e2fsck to hang like that? Second, is there any danger in killing e2fsck, rebooting, and starting it again?
I used parted to create a partition inside the logical volume, and then merrily used that partition, which appeared as /dev/pv-whatever/lv-whateverp1
Of course I created the FS as ext3.
So, after a reboot, I can't access anything in that logical volume with standard tools, as /dev/pv-whatever now only has the lv-whatever special file inside.
I can look inside the LV with parted fine, but parted can't copy ext3 filesystems.
Is there any way to get the data out of a partition created INSIDE a logical volume if that filesystem is ext3?
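There is: kpartx (from the multipath-tools package) creates device-mapper entries for partitions found inside any block device, logical volumes included. A sketch using the LV path from above; the exact mapping name under /dev/mapper may differ, so check after running it:
Code:
kpartx -av /dev/pv-whatever/lv-whatever        # maps e.g. /dev/mapper/lv-whateverp1
mount -t ext3 /dev/mapper/lv-whateverp1 /mnt   # mount the inner ext3 partition
cp -a /mnt/. /some/backup/                     # copy the data out
umount /mnt
kpartx -d /dev/pv-whatever/lv-whatever         # tear the mappings down when done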
I was running through a fairly routine Gentoo install on a 160G hard disk. My intention was to have two partitions on the disk: one for boot, and one to be an LVM physical volume. In a stroke of absent-mindedness, however, I forgot to create the boot partition, only created the LVM physical volume, and didn't realize it until the end of the installation. Anyway, I just want to shrink the physical volume partition and add in another partition with fdisk. However, this doesn't seem to be working the way I intend. I ran:
Code:
livecd dev # pvresize --setphysicalvolumesize 159G /dev/hda1
  WARNING: /dev/hda1: Overriding real size. You could lose data.
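That warning is pvresize saying it cannot prove the discarded tail is unused. A way to check before accepting the override, assuming the same /dev/hda1 PV:
Code:
pvs --segments -o pv_name,pvseg_start,pvseg_size,lv_name /dev/hda1
# Every segment beyond the new boundary must show an empty LV column (free).
# If a logical volume owns extents out there, pvmove them inward (or shrink
# the LV with lvreduce) before retrying the pvresize.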
So I've got a bad physical volume in my volume group. I want to do this safely rather than tinkering around: how can I get the bad physical disk out and look at the data on the other two drives to see if I can save anything? It's just the standard Fedora setup where it combines all the disks, nothing fancy.
I have the volume group activated as partial, and now I just want to see the data on the other sections; how could I mount that?
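A sketch of the usual partial-activation route, read-only; Fedora's stock names are stand-ins for the real VG/LV:
Code:
vgchange -ay --partial VolGroup00          # activate despite the missing/bad PV
mount -o ro /dev/VolGroup00/LogVol00 /mnt  # mount read-only; regions that lived
                                           # on the bad disk will return I/O errors
cp -a /mnt/. /some/backup/                 # salvage whatever is still readable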
Let's assume I have a volume group (VG) with six physical volumes (PV): sdb1, sdb2, sdb3, sdc1, sdc2, sdc3. I want to remove one of the PVs from the group in order to use its space elsewhere. How can I know if it's safe? How can I do that without losing data and without first "pvmove"ing it elsewhere? Reading a bit more, my guess is using the result of pvscan, but I thought I'd ask before removing anything, to keep it safe, as I'm not an LVM expert.
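For reference, a sketch of the safety check: a PV can leave the VG without pvmove only if none of its extents are allocated to any LV (its Used column is 0). The PV picked below is an example:
Code:
pvs -o pv_name,pv_size,pv_used,pv_free  # look for a PV whose Used value is 0
vgreduce myvg /dev/sdc3                 # detach it from the VG (refuses if in use)
pvremove /dev/sdc3                      # wipe the LVM label so it can be reused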
What is the physical volume in LVM? Why do we need to create a physical volume first before creating LVs? I mean, LVs are created from physical disks, so why do we need to specify it? I didn't get it. Anybody want to help me with this?
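The short version: a PV is just a block device that has been labelled so LVM can pool it; that label and its metadata are what let several devices act as one pool. A sketch of the three layers, with hypothetical device and names:
Code:
pvcreate /dev/sdb1              # write an LVM label: the partition becomes a PV
vgcreate datavg /dev/sdb1       # pool one or more PVs into a volume group
lvcreate -n data -L 50G datavg  # carve a logical volume out of the pooled extents
mkfs.ext4 /dev/datavg/data      # filesystems live on LVs, never on raw PVs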
How do I find the OID code for a physical volume? I managed to get it to work with our SNMP monitoring software to alert me when disk space was < 10%, but the computer which was running the SNMP monitoring died. For the life of me I can't remember how I got it to work. I have 4 partitions:
1. /dev/mapper/volgroup00, 88% free
2. /boot, 21% free
3. nfsd, 0 bytes
4. sunrpc, 0 bytes
Here is a copy of the OID I'm using: 1.3.6.1.2.1.25.2.3.1.5.1. I change the last number to match the drive, but when I test using 8% they each return a "drive space low" error, which is what the VB script tells it to do. I know the script works, as I use it on Windows servers with no problems. I did an SNMPWALK on the server and it validates the above OID as HOST-RESOURCES-MIB::hrStorageSize, so I know that's valid. But that's where I'm stuck. What value should I see if I were to use the OID 1.3.6.1.2.1.25.2.3.1.6.1, which is for free disk space?
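For reference, in HOST-RESOURCES-MIB the .6 column (hrStorageUsed) is used space, not free space, and both .5 and .6 are counted in allocation units (.4), so free space has to be computed. A sketch with a placeholder host and community string:
Code:
snmpget -v2c -c public myserver 1.3.6.1.2.1.25.2.3.1.4.1  # hrStorageAllocationUnits
snmpget -v2c -c public myserver 1.3.6.1.2.1.25.2.3.1.5.1  # hrStorageSize (total, in units)
snmpget -v2c -c public myserver 1.3.6.1.2.1.25.2.3.1.6.1  # hrStorageUsed (used, in units)
# free bytes = (hrStorageSize - hrStorageUsed) * hrStorageAllocationUnits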
I have kind of a test partition, but I need lv_root on it. So I have:
Code:
Using physical volume(s) on command line
  PV         VG       Fmt  Attr PSize  PFree   Start SSize LV      Start Type   PE Ranges
  /dev/sda6  VolGroup lvm2 a-   29.73g 440.00m     0  6859 lv_root     0 linear /dev/sda6:0-6858
  /dev/sda6  VolGroup lvm2 a-   29.73g 440.00m  6859   128 lv_swap     0 linear /dev/sda6:6859-6986
  /dev/sda6  VolGroup lvm2 a-   29.73g 440.00m  6987   512 lv_root  6859 linear /dev/sda6:6987-7498
  /dev/sda6  VolGroup lvm2 a-   29.73g 440.00m  7499     2 lv_swap   128 linear /dev/sda6:7499-7500
  /dev/sda6  VolGroup lvm2 a-   29.73g 440.00m  7501   110             0 free
I want to move "lv_swap" to the end of the VG. I want to delete its segment and use the rest of the VG for "lv_root".
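pvmove does accept explicit extent ranges (source range first, destination range second, with --alloc anywhere when both sit on the same PV), but in the output above the free tail is only 110 extents while the first swap segment is 128, so it would not fit. Since swap holds no data worth preserving, rebuilding it is simpler; a sketch using the VG name and extent counts from the output:
Code:
swapoff -a
lvremove VolGroup/lv_swap                  # frees 130 extents (128 + 2)
lvextend -l +110 VolGroup/lv_root          # grow root by the old free-tail amount
resize2fs /dev/VolGroup/lv_root            # assuming an ext3/ext4 root filesystem
lvcreate -l 100%FREE -n lv_swap VolGroup   # recreate swap in the remaining extents
mkswap /dev/VolGroup/lv_swap && swapon -a  # update /etc/fstab if swap is listed by UUID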
I have a 7.9 TB logical volume I've created from eight 1 TB RAID 0 devices. The volume is formatted with XFS so I can resize when ready. However, I think I want to do something that is not possible. I have 2.5 TB free on my logical volume. I'd like to shrink the volume down to 6 TB by getting rid of two of the 1 TB devices in the volume group. However, pvmove seems to require free extents in order to work. Do I need to add 6 TB of storage, pvmove everything onto it, and then decommission the original eight 1 TB physical devices from the volume?
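One wrinkle first: XFS can be grown but not shrunk, so the 2.5 TB free inside the filesystem cannot be handed back in place; the data would have to be recreated on a smaller filesystem (e.g. with xfsdump/xfsrestore). Once the LV is smaller than the remaining PVs, though, pvmove needs no new storage: it moves extents into the free extents the shrink created. A sketch with example names:
Code:
lvreduce -L 6T bigvg/media           # only after the data is on a fs <= 6 TB
pvmove /dev/sdg1                     # empty one 1 TB PV into the freed extents
pvmove /dev/sdh1                     # and the second
vgreduce bigvg /dev/sdg1 /dev/sdh1   # detach the two emptied devices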
How do I create multiple volume groups out of a single physical volume? Here is the physical volume I have created:
Code:
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda9
  VG Name               myVG1
  PV Size               54.88 MB / not usable 2.88 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              13
  Free PE               11
  Allocated PE          2
  PV UUID               bon4Ao-vmgC-aP1h-EC9X-w3tN-YXNu-0N2dAw
This is how I am creating a volume group out of the above physical volume:
Code:
# vgcreate myVG1 -s 4m /dev/sda9
Display:
Code:
# vgdisplay
  --- Volume group ---
  VG Name               myVG1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               52.00 MB
  PE Size               4.00 MB
  Total PE              13
  Alloc PE / Size       2 / 8.00 MB
  Free PE / Size        11 / 44.00 MB
  VG UUID               O6ljYC-bflz-EUTd-nf34-8gYe-Fh39-Bh3cOg
But I am unable to create one more volume group out of this physical volume. Can we accomplish that? Or do we always extend our current volume group to utilize the available space of a physical volume?
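LVM2 ties each physical volume to at most one volume group, so a second group needs a PV of its own; extending the current VG is the only way to use the same PV's free space. A sketch, assuming a spare partition /dev/sda10 exists to become the second PV:
Code:
pvcreate /dev/sda10              # a second partition becomes the second PV
vgcreate myVG2 -s 4m /dev/sda10  # and gets its own volume group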
Ubuntu 10.04.1. I was just tidying up my panel when I noticed the Volume icon is taking up the space of two icons. I can right-click and Move it to the left or right.
I'm wondering how easy it would be to read the contents of a physical disk that was part of a larger logical volume. The disk contains a "Linux LVM" partition that spans its entire size. My problem is that one of my disks died, and I have to send it back for a warranty replacement. However, the disk is dead, and I can't zero it out. I'm just trying to assess how difficult it would be (or at least how likely it would be) for a tech that's checking out the disk to get at the data.
I'm sure many of you here have worked with disk quotas and LVM2, and my problem involves both. Basically, what I want is this: whenever a logical volume gets below a certain constraint (10 GB free), I want to automatically resize it to add 20 GB. Obviously this can be done rather easily manually, and with a bit of Python hacking it can be done programmatically, but since this is for production use I was wondering if there was something a bit more fluid. Since this server is I/O intensive, ZFS implemented via FUSE is not an option, and neither is the still-unstable Btrfs.
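In the absence of a stock daemon for this, a cron-able sketch of the check-and-grow loop; the LV path, mount point, and thresholds are assumptions to adjust:
Code:
#!/bin/sh
LV=/dev/vg0/data    # hypothetical LV and its mount point
MNT=/srv/data
FREE_KB=$(df -Pk "$MNT" | awk 'NR==2 {print $4}')
if [ "$FREE_KB" -lt $((10 * 1024 * 1024)) ]; then  # below 10 GB free?
    lvextend -L +20G "$LV" && resize2fs "$LV"      # grow the LV, then the fs
fi                                                 # (ext3/ext4 grow online)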
OK, so I have one drive: /boot, /lv_root and /lv_swap.
At the end of the drive I have 32 GB of free space still contained in the volume group. I want to remove it from the VG, but this is all on one device. Supposedly there is a way to do this with pvresize and fdisk.
[URL]
Quote:
Originally Posted by source
#I've tried to shrink the PV with pvresize which didn't throw errors -
Good.
#but fdisk still shows me the same LVM partition size as before.
That's normal. pvresize "just" updates the PV header and VG metadata.
#So I guess the partition table has to be modified somehow?
Yes. That was mentioned in my reply: "Then shrink the partition in the partition table."
You can use fdisk or any other partition table editor for this. Some don't support resizing a partition; in that case, you can delete it and create a smaller one. If doing the delete/create dance, you *must* create the new partition on the same boundary as the current one (i.e. the start sector must not change) to preserve the current data.
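A sketch of a delete/create that provably keeps the start in place, using sfdisk's dump format (recent sfdisk versions also understand GPT); the disk name is an example:
Code:
sfdisk -d /dev/sda > table.txt   # dump the current partition table as text
# Edit table.txt: reduce only the size= field of the LVM partition,
# leaving its start= value untouched, then write the table back:
sfdisk /dev/sda < table.txt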
I've read from every source on LVM that it's not possible to do this. Why on earth would any Linux developer put LVM on a single-drive system by default? Were they even paying attention? I don't mean to go off on a rant, but if there are multiple drives, LVM makes sense. However, if you only have one large drive, LVM holds your system hostage and you have to crawl through the pit of hell to get it back.
I understand you have a choice in the matter when you install Fedora, but it's really the worst possible choice for a default. Many newcomers to Linux run into this problem with LVM. If you cannot resize VGs, the software should never have been put into a Linux distro in the first place.
I have installed my Fedora on an LVM2 group and allocated a total of 10 GB, which of course is absurdly and ridiculously low space. As a matter of fact I did an even more stupid thing: I allocated 4 (four) gigabytes for swap!
I am a complete novice in Linux and Fedora, but I want to extend my root LVM volume by at least 20 GB.
I burned Parted Magic to a CD and tried to manage the LVM2 group, but it said LVM2 was not supported in Parted Magic. So I tried the Fedora partition manager and got lost in what and how. I tried reducing the swap space and increasing the / space, but failed: I could only select zero megabytes for swap space, and had the only option of decreasing space for /, which is really not what I want to do.
What I want to do is extend the space for the whole LVM2 volume group, which is now 10 GB total for / and swap. Or at least I'd like to reduce my swap size and increase my / size.
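A sketch of the swap-for-root trade, run from a live CD with the VG active and nothing mounted; Fedora's stock names are stand-ins (check lvdisplay). Growing the whole VG instead would mean growing the underlying partition and then running pvresize:
Code:
swapoff /dev/VolGroup00/LogVol01                 # LogVol01 assumed to be swap
lvreduce -L 1G /dev/VolGroup00/LogVol01          # shrink swap from 4 GB to 1 GB
mkswap /dev/VolGroup00/LogVol01                  # reinitialize the smaller swap
lvextend -l +100%FREE /dev/VolGroup00/LogVol00   # give the freed space to /
e2fsck -f /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00               # grow the root fs into it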
This is my specific solution to my specific problem. After updating to Squeeze from my prior Lenny install (amd64 with whole-disk encryption using LVM2, dm-crypt, LUKS), everything went well - at first. I was duped like so many, thinking that all was well and that I could remove legacy GRUB (aka GRUB 1) and just use grub-pc (aka GRUB 2). As soon as I removed legacy GRUB and rebooted my laptop, I was confronted with:
Code:
GRUB Loading stage1.5
GRUB loading, please wait...
Error 15
At this point I wasn't sure if it was a GRUB problem or a deeper encryption problem, especially after reading that some people had missing packages in Squeeze (lvm2, dmsetup, initramfs-tools, etc.).
Okay, the solution for me.
1. download and burn to disc: debian-live-6.0.0-amd64-rescue.iso [URL]
2. scroll to and press enter/return on: text rescue
3. choose a root directory - for example: /dev/blah/root (I wrote down the list of possible /dev/.... for reference - this helped me remember where and what I had partitioned in Lenny)
4. choose: Execute a shell in /dev/blah/root
5. once in the shell, I discovered I needed to mount a few of those partitions that I had written down in order to get access to grub-probe, update-grub, grub-install, etc. You may not have to if your partitions are minimal. If you need to use other partitions, type (for example):
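(Example only; the partitions and disk below are stand-ins for whatever was noted in step 3.)
Code:
mount /dev/sda1 /boot     # a separate /boot holds the grub files
mount -t proc proc /proc  # grub-probe and friends want /proc and /sys
mount -t sysfs sys /sys
grub-install /dev/sda     # reinstall grub2 to the disk's MBR
update-grub               # regenerate grub.cfg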
I need to install an LVM2 group with encryption and have the /boot files stored within. Is this possible in Fedora's graphical installer? I know it can be achieved in Arch. (I know I'll need GRUB 2; I assume that's coming in Fedora 12. I can always install it separately.)
I am planning to install 10.4 when it arrives, and am not going to upgrade, because I upgraded from 9.04 to 9.10 and now I need to refresh the system. But I have all my partitions except root using LVM2 logical volumes. My question is: what is the safest procedure to install 10.4 on an existing LVM2 setup without losing my files/partitions?
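Whatever the installer ends up doing, one cheap insurance step is an off-disk copy of the LVM metadata, so the layout can be put back with vgcfgrestore if the installer mangles it; the VG name and destination below are placeholders:
Code:
vgcfgbackup -f /media/usb/lvm-backup-%s myvg   # %s is replaced with the VG name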