What is a physical volume in LVM? Why do we need to create a physical volume first before creating logical volumes? I mean, logical volumes are created from physical disks, so why do we need to specify it? I didn't get it. Would anybody help me with this?
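A minimal sketch of the usual order of operations, assuming a spare partition /dev/sdb1 and the names "myvg"/"mylv" (all hypothetical): the physical volume step simply writes an LVM metadata label on the device so the higher layers can track it.
Code:
# label the partition as an LVM physical volume (writes the LVM metadata header)
pvcreate /dev/sdb1
# pool one or more PVs into a volume group
vgcreate myvg /dev/sdb1
# carve a logical volume out of the volume group's free extents
lvcreate -n mylv -L 10G myvg
mkfs.ext4 /dev/myvg/mylv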
Let's assume I have a volume group (VG) with six physical volumes (PVs): sdb1, sdb2, sdb3, sdc1, sdc2, sdc3. I want to remove one of the PVs from the group in order to use its space elsewhere. How can I know if it's safe? How can I do that without losing data and without first "pvmove"-ing its contents elsewhere? Reading a bit more, my guess is to use the result of pvscan, but I thought I'd ask before removing it, just to keep it safe, as I'm not an LVM expert.
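One hedged way to check, assuming the VG is called "myvg" (hypothetical name): a PV is only safe to drop with vgreduce when it has no allocated extents, which pvdisplay reports directly.
Code:
# show allocated vs. free extents on the PV you want to remove
pvdisplay /dev/sdc3
# if "Allocated PE" is 0, the PV holds no data and can be dropped from the VG
vgreduce myvg /dev/sdc3
# if it is non-zero, its extents must be migrated first (pvmove) before vgreduce will succeed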
How do I find the OID for a physical volume? I managed to get it to work with our SNMP monitoring software to alert me when disk space was under 10%, but the computer that was running the SNMP monitoring died. For the life of me I can't remember how I got it to work. I have 4 partitions:
1. 88% free, /dev/mapper/volgroup00
2. 21% free, /boot
3. nfsd, 0 bytes
4. sunrpc, 0 bytes
Here is the OID I'm using: 1.3.6.1.2.1.25.2.3.1.5.1. I change the last number to match the drive, but when I test using 8% they each return a "drive space low" error, which is what the VB script tells it to do. I know the script works, as I use it on Windows servers with no problems. I do an SNMPWALK on the server and it resolves the above OID to HOST-RESOURCES-MIB::hrStorageSize, so I know that's valid. But that's where I'm stuck. What value should I see if I were to use the OID 1.3.6.1.2.1.25.2.3.1.6.1, which is for free disk space?
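A hedged sketch for mapping the indexes, assuming a community string of "public" and a host name of "myserver" (both placeholders): in HOST-RESOURCES-MIB the hrStorage table pairs hrStorageDescr (.3), hrStorageAllocationUnits (.4), hrStorageSize (.5) and hrStorageUsed (.6) by index, and note that .6 is the *used* amount rather than free space, so free space has to be computed as (Size - Used) * AllocationUnits.
Code:
# list the storage descriptions with their indexes, to find which index is which mount point
snmpwalk -v2c -c public myserver 1.3.6.1.2.1.25.2.3.1.3
# for a given index N (placeholder), fetch size, used, and allocation unit
snmpget -v2c -c public myserver 1.3.6.1.2.1.25.2.3.1.5.N 1.3.6.1.2.1.25.2.3.1.6.N 1.3.6.1.2.1.25.2.3.1.4.N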
I'm wondering how easy it would be to read the contents of a physical disk that was part of a larger logical volume. The disk contains a "Linux LVM" partition that spans its entire size. My problem is that one of my disks died, and I have to send it back for a warranty replacement. However, the disk is dead, so I can't zero it out. I'm just trying to assess how difficult it would be (or at least how likely) for a tech who's checking out the disk to get at the data.
I have Fedora 12 installed on my hard drive with LVM using the whole extent of the disk. I want to reduce it so I can dual-boot with a Windows system. I managed to reduce the logical volume to free some space, but I can't seem to reduce the physical volume. Is this possible, and how?
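A hedged sketch of the shrink, assuming the PV is /dev/sda2 and the target size is 60G (both hypothetical) and the logical volumes have already been reduced: pvresize can shrink the PV as long as no allocated extents lie beyond the new size, and only then is it safe to shrink the partition itself.
Code:
# check how much of the PV is actually allocated and where the extents sit
pvs -v --segments /dev/sda2
# shrink the PV to the size you want to keep (all allocated extents must fit inside it)
pvresize --setphysicalvolumesize 60G /dev/sda2
# then shrink the partition in fdisk/parted to match, and finally run pvresize again
# without --setphysicalvolumesize so the PV fills the (now smaller) partition exactly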
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID-5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array. Degraded and can't create RAID, auto stop RAID [md1]
Code: mdadm --create --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

As I don't want to ruin what may be the small chance I have left to rescue my data, I would like to hear the input of this wise community.
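Before any --create attempt, a cautious first step (a sketch, not a recipe) is to record what each member thinks about the array; note also that mdadm --create normally needs the array device itself (e.g. /dev/md1) as its first argument, which the command above omits.
Code:
# capture the existing superblocks before doing anything destructive
mdadm --examine /dev/sd[abcd]2 > /root/md-examine.txt
# compare event counters and update times; members that largely agree can often be
# brought back with "mdadm --assemble --force" instead of recreating the array
grep -E 'Events|Update Time' /root/md-examine.txt
cat /proc/mdstat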
I have a system with a 2TB RAID 1 installed (2 x 2TB drives, configured as RAID 1 through the BIOS). I installed CentOS 5.5 and it runs fine. I have now added another 2 x 2TB drives and configured them as RAID 1 through the BIOS.
How do I add this new RAID volume to the existing logical volume?
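A hedged sketch, assuming the new BIOS RAID volume shows up as a single block device (the name /dev/mapper/isw_raid2 below is hypothetical; check what actually appears with fdisk -l) and that the existing VG/LV are called VolGroup00/LogVol00 with an ext3 filesystem:
Code:
# turn the new RAID volume into an LVM physical volume
pvcreate /dev/mapper/isw_raid2
# add it to the existing volume group
vgextend VolGroup00 /dev/mapper/isw_raid2
# grow the logical volume into the new free extents, then grow the filesystem
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00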
I was running through a fairly routine Gentoo install on a 160 GB hard disk. My intention was to have two partitions on the disk: one for /boot, and one to be an LVM physical volume. In a stroke of absent-mindedness, however, I forgot to create the boot partition and only created the LVM physical volume, and I didn't realize it until the end of the installation. Anyway, I just want to shrink the physical volume's partition and add another partition with fdisk. However, this doesn't seem to be working the way I intend. I ran:
Code:
livecd dev # pvresize --setphysicalvolumesize 159G /dev/hda1
  WARNING: /dev/hda1: Overriding real size. You could lose data.
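Before forcing the size down, it may help to confirm that no allocated extents sit beyond the target size; a hedged sketch (the extent ranges below are made up purely for illustration):
Code:
# show where each LV's extents live on the PV (Start/SSize columns are in extents)
pvs -v --segments /dev/hda1
# if any allocated extents sit beyond the new size, move them into free space
# nearer the start of the PV first
pvmove --alloc anywhere /dev/hda1:38000-38200 /dev/hda1:100-300
# only then should the pvresize above be safe to run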
So I've got a bad physical volume inside my logical volume. I want to do this safely rather than tinkering around: how can I get the bad physical disk out and look at the data on the other 2 drives to see if I can save anything? It's just the standard Fedora setup where it combines all the disks, nothing fancy.
I have the volume group activated as partial, and now I just want to see the data on the other sections. How can I mount that?
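Since the VG is already activated with --partial, a hedged sketch of looking at what survived (the VG name "VolGroup" and LV name "lv_home" are assumptions; substitute what lvs actually shows):
Code:
# list the logical volumes and see which ones became available and on which devices
lvs -o lv_name,lv_attr,devices VolGroup
# mount a surviving LV read-only so nothing gets written to a degraded group
mkdir -p /mnt/recovery
mount -o ro /dev/VolGroup/lv_home /mnt/recovery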
I have a kind of test partition, but I need lv_root on it. So I have:
Code:
Using physical volume(s) on command line
  PV         VG       Fmt  Attr PSize  PFree   Start SSize LV      Start Type   PE Ranges
  /dev/sda6  VolGroup lvm2 a-   29.73g 440.00m     0  6859 lv_root     0 linear /dev/sda6:0-6858
  /dev/sda6  VolGroup lvm2 a-   29.73g 440.00m  6859   128 lv_swap     0 linear /dev/sda6:6859-6986
  /dev/sda6  VolGroup lvm2 a-   29.73g 440.00m  6987   512 lv_root  6859 linear /dev/sda6:6987-7498
  /dev/sda6  VolGroup lvm2 a-   29.73g 440.00m  7499     2 lv_swap   128 linear /dev/sda6:7499-7500
  /dev/sda6  VolGroup lvm2 a-   29.73g 440.00m  7501   110                   0 free

I want to move lv_swap to the end of the VG. I want to delete its segment and use the rest of the VG for lv_root.
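Since swap holds nothing that needs preserving, one hedged approach (a sketch only, assuming lv_root is ext3/ext4 and that a slightly smaller 110-extent swap on the free tail is acceptable) is to recreate swap instead of pvmove-ing segments around:
Code:
# stop using and remove the old swap LV
swapoff /dev/VolGroup/lv_swap
lvremove -f /dev/VolGroup/lv_swap
# recreate swap explicitly on the free extents at the end of the PV
lvcreate -l 110 -n lv_swap VolGroup /dev/sda6:7501-7610
mkswap /dev/VolGroup/lv_swap   # this changes the swap UUID; update /etc/fstab if it references swap by UUID
# give all remaining free extents to lv_root and grow the filesystem
lvextend -l +100%FREE /dev/VolGroup/lv_root
resize2fs /dev/VolGroup/lv_root
swapon /dev/VolGroup/lv_swap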
I have a 500GB hard disk, /dev/sda. On it, there is /dev/sda1 for /boot, /dev/sda2 for an LVM PV (physical volume), and /dev/sda3 for another /boot (multiple Linux distros: one boot partition for GRUB legacy, another for GRUB 2). So the LVM2 partition, /dev/sda2, is taking up ~465 GiB. I want to add another OS (non-Linux), so I resized the *LVM2 physical volume* to 320 GiB, successfully, using pvresize.
However, I now need to resize the partition so the LVM2 physical volume only just fits on it, i.e. to 320 GiB. My plan of action is to use GParted (the partition table is GUID, so fdisk won't work) to first delete the partition from the partition table, then re-add it, but this time with a smaller size (~320 GiB). The problem is that I need to know exactly how many MiB/cylinders the physical volume is taking up. So, I run:
Code:
root@sysresccd /root % pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg0
[code]....
Which of these values do I need to set the new LVM2 replacement partition to?
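A hedged way to get an exact figure without guessing at cylinders: ask LVM for the PV size in bytes (or MiB) and make the new partition at least that big.
Code:
# exact size of the physical volume in bytes (or use --units m for MiB)
pvdisplay --units b /dev/sda2 | grep 'PV Size'
# the partition must be at least this large; a little extra slack is harmless,
# since the PV simply won't use space beyond its own recorded size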
When I installed Ubuntu, I created a 52 GB encrypted partition, which shows up in the Disk Utility and in the window that opens when I click on the "Home Folder" icon. I see my normal Windows partition, and under that the 52 GB LVM2 partition. But when I try to access it, I get this error:
Unable to mount 52 GB LVM2 Physical Volume - not a mountable file system
This is what fdisk -l shows
   Device Boot      Start     End      Blocks  Id  System
/dev/sda1   *           1      52      409600  27  Unknown
Partition 1 does not end on cylinder boundary.
/dev/sda2              52   30452   244193280   7  HPFS/NTFS
[Code]....
How can I fix this and be able to access that 52 GB partition? This is only my second day working with Ubuntu, so if more information is needed, let me know.
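An "LVM2 Physical Volume" isn't mountable by itself; if the 52 GB partition is the encrypted LVM container, a hedged sketch of unlocking it by hand (the device /dev/sda5 and the VG/LV names are guesses; check with fdisk -l and lvs):
Code:
# unlock the encrypted container (only applies if it is LUKS-encrypted)
cryptsetup luksOpen /dev/sda5 cryptlvm
# scan for and activate the LVM volumes inside it
vgscan
vgchange -ay
# list the logical volumes and mount the one holding the data
lvs
mount /dev/mapper/<vgname>-<lvname> /mnt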
I have a 7.9 TB logical volume I've created from eight 1 TB RAID 0 devices. The volume is formatted with XFS so I can resize it when ready. However, I think I want to do something that is not possible. I have 2.5 TB free on my logical volume. I'd like to shrink the volume down to 6 TB by getting rid of two of the 1 TB devices in the physical volume. However, pvmove seems to require free extents in order to work. Do I need to add 6 TB of storage, pvmove everything onto it, and then decommission the original eight 1 TB physical devices from the volume?
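For reference, a hedged sketch of how a PV is normally retired when the VG itself has enough free extents (the names "myvg" and /dev/sdh1 are hypothetical). The catch here is that the 2.5 TB is free inside the filesystem, not free in the VG: the LV would have to be shrunk first, and XFS only grows, so the data would likely need to be copied off and the volume recreated smaller.
Code:
# check how many extents are free in the VG as a whole
vgs
# migrate all allocated extents off the PV being removed (needs enough free extents elsewhere)
pvmove /dev/sdh1
# once the PV is empty, drop it from the VG and remove the LVM label
vgreduce myvg /dev/sdh1
pvremove /dev/sdh1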
How do I create multiple volume groups out of a single physical volume? Here is the physical volume I have created:
Code:
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda9
  VG Name               myVG1
  PV Size               54.88 MB / not usable 2.88 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              13
  Free PE               11
  Allocated PE          2
  PV UUID               bon4Ao-vmgC-aP1h-EC9X-w3tN-YXNu-0N2dAw
This is how I am creating a volume group out of the above physical volume:
Code: # vgcreate myVG1 -s 4m /dev/sda9

Display:
Code:
# vgdisplay
  --- Volume group ---
  VG Name               myVG1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               52.00 MB
  PE Size               4.00 MB
  Total PE              13
  Alloc PE / Size       2 / 8.00 MB
  Free PE / Size        11 / 44.00 MB
  VG UUID               O6ljYC-bflz-EUTd-nf34-8gYe-Fh39-Bh3cOg
But I am unable to create a second volume group out of this physical volume. Can this be done? Or do we always have to extend the current volume group to use the available space of a physical volume?
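For what it's worth, a physical volume can belong to only one volume group at a time, so a hedged workaround (a sketch only; /dev/sda10 is a hypothetical second partition) is to split the space into two PVs and give each its own VG:
Code:
# one PV per volume group; a single PV cannot be shared between two VGs
pvcreate /dev/sda10
vgcreate myVG2 -s 4m /dev/sda10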
I just installed Fedora 14 Linux on my HP ProBook 4320s from the installation CD named Fedora-14-x86_64-Live-Desktop, then installed it to the hard disk. During installation I chose to encrypt the hard disk. When I try to access my hard disk it says "unable to mount 250GB LVM2 Physical Volume, not a mountable file system". What can I do to get access to my hard disk?
I was trying to remove the physical volume from an old drive, so I opened GParted and told it to rewrite the partition table. The only problem is I targeted the wrong volume: I wiped the partition table on my 4 TB RAID-5 array. This 4 TB array has everything: all my movies, TV shows, and music. The only things I have backed up off-site are my smaller files like documents. I was about to lose my whole media collection.
I did some research and found a solution that I will post here in the hope that someone will google "I deleted the partition table on my LVM" and find it. You should find on your filesystem an /etc/lvm/backup folder. LVM puts a copy of the crucial LVM information there every time you change the volume group.
In this folder you will find a file for each volume group, and in that file you will find the UUID of every physical volume that makes up the group. The first step is to recreate each physical volume with its original UUID. In my case I had only one physical volume, which was my RAID-5 array. My recreation command looked like this:
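A hedged sketch of the usual form of that command (the UUID string and the /dev/md0 device below are placeholders; the real UUID comes from the backup file for the volume group):
Code:
# recreate the PV on the array with the UUID recorded in the backup file
pvcreate --uuid "xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx" \
         --restorefile /etc/lvm/backup/raid5 /dev/md0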
Now I have a physical volume with the same UUID it had before. It is essential that you correctly match up the UUIDs with the correct physical devices. The recreated PV is empty, so the volume group needs to be recovered. This is done with a special tool and the backup file. For me the command looked like this:
Code: vgcfgrestore --file /etc/lvm/backup/raid5 raid5
This tells it to recreate the volume group using the information in the backup file. The backup file looks for the UUID of the PV, which now matches the correct volume. The coordinates in the backup file match up to the data on the array, and suddenly everything is back!
When I deleted my LVM partition table I did not damage any of the actual volumes in the volume group; I just wiped out the table of contents. The backup file had the information needed to rewrite that table of contents.
My RAID array has failed. I have two disks, /dev/sda and /dev/sdb. /dev/sdb has failed and I could not rebuild the array (mdadm returned that the device is busy), so I rebooted the machine. After that, the whole sdb disk went missing; fdisk -l now only shows sda. Did the disk die completely, or did my RAID glitch?
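A hedged first check, before assuming the drive is dead: see whether the kernel detects the disk at all and what the array currently looks like (the /dev/md0 name below is an assumption).
Code:
# does the kernel still see a second disk at all?
dmesg | grep -i 'sdb\|ata'
# current state of the md array
cat /proc/mdstat
mdadm --detail /dev/md0
# if the disk is visible again, check its health (requires smartmontools)
smartctl -a /dev/sdb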
How can I create RAID 1+0 using two drives (one with data on it, the second one new)? Is it possible to synchronize the data drive with the empty drive and create RAID 1+0?
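RAID 1+0 normally needs four devices, so with two drives the closest equivalent is plain RAID 1. A commonly used (but not risk-free) hedged sketch, with hypothetical device names, is to build a degraded mirror on the empty drive, copy the data over, and only then add the original drive as the second half:
Code:
# create a RAID 1 array with one member deliberately missing (on the NEW, empty drive)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
mkfs.ext4 /dev/md0
# copy the data from the old drive onto the degraded array
mkdir -p /mnt/newraid
mount /dev/md0 /mnt/newraid && cp -a /olddata/. /mnt/newraid/
# once verified, repartition the old drive and add it; it will resync automatically
mdadm --add /dev/md0 /dev/sda1
cat /proc/mdstat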
I have three hard drives in my computer that I want to make into a RAID 0 array. All of them already have partitions and data on them. What I want to know is whether I can add the disks to the RAID and then merge the partitions without losing data. All the partitions are of the same type. Or would it be easier/better/possible to do this with LVM? Even if I'd have to shrink partitions and copy data to a new LVM one to get it set up properly, would it be better than RAID 0?
I'm installing Ubuntu to be used as an NFS storage server for my VMware ESX servers. I've got a server that has two 2 TB drives in it. The hardware RAID controller isn't an option, because it only sees up to 1 TB of each drive. So I'm trying to figure out how to do this using either LVM or Parted. I don't know much about doing this, and LVM was the first thing I tried, but it didn't seem to do much. It looks like it just created a smaller partition to install Ubuntu on; it didn't ask me what I wanted to do with the rest of the drive space. I've messed around with Parted and am not sure what to do, to be honest. I found a few blog posts, but most started off assuming that I knew how to get to where they were starting from.
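A hedged sketch of spanning the two drives with LVM after the install (the partition names and export subnet below are assumptions; note this simply concatenates the drives, so a single drive failure loses the whole volume):
Code:
# make each data partition an LVM physical volume
pvcreate /dev/sda2 /dev/sdb1
# pool them into one volume group and create one big logical volume
vgcreate vgstore /dev/sda2 /dev/sdb1
lvcreate -n nfsdata -l 100%FREE vgstore
mkfs.ext4 /dev/vgstore/nfsdata
# mount it and export it over NFS (entry for /etc/exports)
mkdir -p /srv/nfs && mount /dev/vgstore/nfsdata /srv/nfs
echo '/srv/nfs 192.168.1.0/24(rw,no_root_squash,sync)' >> /etc/exports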
I have v10.10 running the following (and I still have no idea how I got this all working): Myth, Squeeze server, an X10 control system for the house, VirtualBox (where I run some Windows stuff), and a general file and print service. It has 5 disk drives attached:
1 - system disk
5 - external USB drive to back up data to
2, 3, 4 - 2 TB drives in a RAID config
I just restarted the server and the RAID volume has not mounted. I looked in the Webmin interface: the device is there, but it doesn't seem to have the partitions attached. Again in Webmin, the partitions seem to be present on each of the drives. I used CrashPlan to back up the data and music, but not the video, so I would prefer not to have to rebuild the lot if I don't have to; I really don't want to rebuild it all, given my lack of experience with Linux. Is there an easy(ish) way of putting the RAID back together, to see if it has just dropped its config and the data is intact (or can be rebuilt from two of the three drives)?
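If it is a Linux software RAID (mdadm) set, a hedged sketch of a non-destructive look before rebuilding anything (member partition names are assumptions):
Code:
# what does the kernel think the array looks like right now?
cat /proc/mdstat
# inspect the RAID superblocks on the member partitions
mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1
# try to reassemble the array from the superblocks without writing anything new
mdadm --assemble --scan
# if that brings the array up, mount it read-only first to check the data
mount -o ro /dev/md0 /mnt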
I'm trying to restore RHEL 5.3, which was previously installed on LVM. I have completely new disks.
1. I started the RHEL install DVD with the option: linux rescue
2. Set up the network
3. Connected to the NFS server where the backup is stored
4. Restored the partition image: dd if=/mnt/source/layout.bin of=/dev/sda bs=1024 count=1 (this will prepare the partitions like they were before the backup; after that it is necessary to run fdisk /dev/sda and choose option "w" - write)
Now, I want to create Logical Volumes, but I have no "pvcreate" command in RHEL rescue mode.
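In the RHEL rescue environment the LVM tools are usually reachable through the single "lvm" binary rather than as separate commands; a hedged sketch (device, VG, and LV names are assumptions):
Code:
# run pvcreate and friends as subcommands of lvm
lvm pvcreate /dev/sda2
lvm vgcreate VolGroup00 /dev/sda2
lvm lvcreate -n LogVol00 -L 20G VolGroup00
# or drop into the interactive lvm shell and type the subcommands there
lvm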
After months of using Lenny & Lucid Lynx without issues, I come back to the good existential questions.
I'd like a completely encrypted disk (/ and swap) in addition to the XP partitions (not that safe, but I'll switch completely to Linux once I have solved everything).
1. I create an ext4 partition for /boot.
2. I create another partition (/dev/sda7) that I set up for encryption.
3. On top of that, I create a PV for LVM2.
4. I add it to a VG.
5. I create / and swap in the VG.
However, if I add a hard drive, I will have to encrypt its main partition, add it to the VG, and then expand /. So I'll need two passwords at boot time to decrypt.
So I'd like to:
-Encrypt the VG directly; it would solve everything, but no device file appears for the VG, only for the PV and the LVs.
-After hours of searching, I couldn't find a solution for a single password...
Maybe there is hope with a filesystem like btrfs providing encryption in the future, but I'll still have to create a swap partition outside of it (or create a file for swap, but then no hibernation is possible).
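One hedged way to get down to a single passphrase with a second LUKS device (paths and mapping names below are assumptions): keep a key file for the second drive on the already-encrypted root, so only the first device asks for a password at boot and the second is unlocked automatically from /etc/crypttab.
Code:
# generate a key file on the encrypted root and add it as an extra key for the second LUKS device
dd if=/dev/urandom of=/root/luks-sdb.key bs=512 count=4
chmod 0400 /root/luks-sdb.key
cryptsetup luksAddKey /dev/sdb1 /root/luks-sdb.key
# tell the boot scripts to unlock the second device with that key (entry for /etc/crypttab)
echo 'cryptdata2 /dev/sdb1 /root/luks-sdb.key luks' >> /etc/crypttab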