CentOS 5 :: Use E2label To Label One Of Logical Volumes
Jan 6, 2011. I am trying to use e2label to label one of my Logical Volumes. The labeling is done successfully, but my findfs output is like this:
/dev/mapper/VolGroup00-TEST
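For reference, a minimal sketch of the label and lookup steps, assuming an ext3 filesystem on the LV shown above (the label name here is hypothetical). findfs returning the /dev/mapper path is expected behaviour for an LVM logical volume:

Code:
# Label the filesystem on the logical volume (ext3 assumed)
e2label /dev/mapper/VolGroup00-TEST mydata

# Look the device up by label; for an LVM volume, findfs prints
# the device-mapper path, which is normal
findfs LABEL=mydata
/dev/mapper/VolGroup00-TEST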
Does everybody do major upgrades in place on production servers? Am I over-engineering by creating a new logical volume, syncing my working root volume to it and upgrading the new volume to test services? Then, after a week or 2 or 4, removing the old LV...
I inherited a 3ware 9550SX running a version of Gentoo with a 2.6.28.something kernel. I started over with CentOS 5.6 x86_64. tw_cli informs me that the 9-disk RAID 5 is healthy. The previous admin used lvm (?) to carve up the RAID into a zillion tiny pieces and one big piece. My main interest is the big piece. Some of the small pieces refused to mount until I installed the CentOS Plus kernel (they are reiserfs). The remainder seem to be ext3; however, they are not mounted at boot ("refusing activation"). lvs tells me they are not active. If I try to make one active, for example:

root> lvchange -ay vg01/usr

I get:

Refusing activation of partial LV usr. Use --partial to override.

If I use --partial, I get:

Partial mode. Incomplete logical volumes will be processed.

and then I can mount the partition, but not everything seems to be there.
Some of the directory entries look like this:

?--------- ? ? ? ? ? logfiles

Is it possible that the versions of the kernel and lvm that were on the Gentoo system are causing grief for an older kernel (and possibly older lvm) on CentOS 5.6, and that I might have greater fortunes with CentOS 6.x? Or am I missing something fundamental? This is my first experience with lvm, so it's more than a little probable.
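As an aside, a "partial LV" warning usually means LVM cannot find one or more of the physical volumes backing the volume group, which would also explain the unreadable directory entries. A hedged set of read-only checks, with the VG name taken from the post:

Code:
pvscan              # list the PVs LVM can actually see
vgdisplay -v vg01   # shows every PV the VG expects, including missing ones
lvs -a -o +devices  # which underlying devices each LV maps onto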
Does anyone know what the maximum number of logical volumes is for a volume group in LVM? Is there any known performance hit for creating a large number of small logical volumes vs. a small number of large volumes?
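For what it's worth, with the lvm2 metadata format the limit is configurable and defaults to unlimited. A hedged sketch; the VG names and the cap value are illustrative:

Code:
# 0 under "MAX LV" means no fixed limit for this volume group
vgdisplay vg00 | grep -i "max lv"

# A cap can optionally be set when creating a volume group
vgcreate -l 128 vg01 /dev/sdb1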
I have done a recent install of Debian squeeze on a laptop. I set up LVM with 3 LV's, one for the root filesystem, one for /home, and another for swap. I then used lvextend to increase the size of the LV's. This additional space is shown if I enter lvdisplay (shortened for clarity):
--- Logical volume ---
LV Name /dev/auriga/swap
LV Size 4.66 GiB
--- Logical volume ---
LV Name /dev/auriga/root
LV Size 15.97 GiB
--- Logical volume ---
LV Name /dev/auriga/home
LV Size 169.01 GiB
However, if I use df, it still shows the previous size.
/dev/mapper/auriga-root 14G 8.0G 5.2G 61% /
/dev/sda1 221M 16M 193M 8% /boot
/dev/mapper/auriga-home 147G 421M 139G 1% /home
I have even tried restarting. I do not understand why df still shows /home as 147GB when I extended it to 169GB using lvextend; similarly for the root, which was extended by 2GB from 14GB to 16GB.
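The likely missing step: lvextend only grows the logical volume, not the filesystem inside it, and df reports the filesystem. For ext3/ext4 the filesystem can be grown while mounted; a hedged sketch using the LV names above:

Code:
# Grow each filesystem to fill its (already extended) logical volume
resize2fs /dev/auriga/root
resize2fs /dev/auriga/home
# df should now report the new sizes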
I have a system where the logical volumes are not being detected on boot and would like some guidance as to how to cure this. The box is a Proliant DL385 G5p with a pair of 146 GB mirrored disks. The mirroring is done in hardware by an HP Smart Array P400 controller. The mirrored disk (/dev/cciss/c0d0) has 4 partitions: boot, root, swap and an LVM physical volume in one volume group with several logical volumes, including /var, /home and /opt.
The OS is a 64-bit RHEL 5.3 basic install with a kernel upgrade to 2.6.18-164.6.1.el5.x86_64 (to cure a problem with bonded NICs) plus quite a few extras for stuff like Oracle, EMC PowerPath and HP's Proliant Support Pack. The basic install is OK and the box can still be rebooted OK after the kernel upgrade. However, after the other stuff goes on it fails to reboot.
The problem is that the boot fails during file system check of the logical volume file systems but the failure is due to these volumes not being found. Specifically the boot goes through the following steps:
Red Hat nash version 5.1.19.6 starting
setting clock
starting udev
loading default keymap (us)
setting hostname
No devices found <--- suspicious?
Setting up Logical Volume Management:
fsck.ext3 checks then fail with messages:

No such file or directory while trying to open /dev/<volume group>/<logical volume>

There are also messages about not being able to find the superblock, but this is clearly due to the device itself not being found. If I boot from a rescue CD, all of the logical volumes are present, with correct sizes; dmsetup shows them all to be active and I can access the files within. fdisk also shows all the partitions to be OK and of the right type. I am therefore very sure that there is nothing wrong with the disk or logical volumes....
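One plausible culprit, offered as a guess: the extra storage software (PowerPath, the Proliant Support Pack) can leave the initrd out of step with the installed storage stack, so the volume group is never activated before fsck runs. Rebuilding the initrd from rescue mode is a common first step on RHEL 5; the kernel version is taken from the post:

Code:
# From the rescue CD, with the installed system mounted at /mnt/sysimage
chroot /mnt/sysimage
mkinitrd -f /boot/initrd-2.6.18-164.6.1.el5.img 2.6.18-164.6.1.el5
exit
# then reboot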
I have let the Debian installer set up with separate partitions for root, usr, var, home and tmp. It ended up with a huge home partition and little space for the others. So I wanted to give some of home's space to the others and did lvreduce on home and lvextend on the others. Following some info on the net, which tells you to run e2fsck -f partition1 followed by resize2fs partition1. But when I try to fsck the reduced home partition I get the following error:

The filesystem size (according to the superblock) is 73113600 blocks
The physical size of the device is 20447332 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? yes

Is there any way to save this?
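For context, a hedged sketch of the order that would have been safe: when shrinking, the filesystem must be reduced before the LV, and this error indicates the LV was cut down first, truncating the filesystem. If the original LV size is known, growing the LV back with lvextend before writing anything may make the filesystem checkable again. VG/LV names and sizes here are illustrative:

Code:
umount /home
e2fsck -f /dev/vg/home
resize2fs /dev/vg/home 18G     # shrink the filesystem first
lvreduce -L 20G /dev/vg/home   # then the LV, leaving a safety margin
mount /home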
How would I go about encrypting my lvm2 logical volumes on Debian Squeeze? Is it possible without backing everything up to a different drive and restoring afterwards?
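A hedged sketch of the usual LUKS-on-LVM approach. Note that luksFormat destroys the existing contents, and squeeze shipped no in-place re-encryption tool I'm aware of, so the data does have to be copied off each volume first. VG/LV and mapping names are assumptions:

Code:
cryptsetup luksFormat /dev/mapper/vg-data   # erases the LV
cryptsetup luksOpen /dev/mapper/vg-data data_crypt
mkfs.ext3 /dev/mapper/data_crypt
mount /dev/mapper/data_crypt /mnt/data      # restore the backup here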
I've read the first 40% of the RHEL 5 Logical Volume Manager Administrator's Guide, but still have one outstanding, burning question.
During the installation of Centos 5.6, I set up LVM physical volumes, volume groups and logical volumes. I can list these using pvdisplay, vgdisplay and lvdisplay commands.
How would I list what filesystems I have that are using my logical volumes?
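Any of the standard tools will map mounted filesystems back to their logical volumes; a minimal sketch (the LV path is the typical CentOS default and is illustrative):

Code:
df -h                           # mounted filesystems and their devices
mount | grep mapper             # mounted device-mapper (LVM) devices
blkid /dev/VolGroup00/LogVol00  # filesystem type and label of one LV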
Is there any maximum number of logical volumes an LVM2 volume group can contain?
I upgraded aaa_elflibs as per Eric's advice here: URL.. Now I can't access anything. It boots into tty1 and won't mount any of my logical volumes.
I cannot resize mounted LVM volumes with reiserfs using YaST, like I could in a previous version 10.x!?
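Whatever YaST's behaviour, the underlying tools can still grow a mounted reiserfs by hand; shrinking still requires unmounting. A hedged sketch with an illustrative VG/LV name:

Code:
lvextend -L +2G /dev/system/data
resize_reiserfs /dev/system/data   # grows the mounted fs to the LV size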
I would like to ask if it is possible to boot Slackware with the installation CD when in a pinch with a system on logical volumes? For the usual fdisk partitions the procedure is known:
Code:
boot: root=/dev/sda1 noinitrd ro

or something like that. This way, the system boots with mounted basic partitions. My question is whether there is an option to achieve the same if the system is installed on logical volumes? I need to do this on a machine dual-booting Windows + Linux. The Windows needs to be reinstalled, but as is well known, the boot sector will then be overwritten. So after the Windows reinstallation I will need to boot Slackware with the installation CD and run lilo.
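A hedged sketch of the usual workaround: the install CD's kernel cannot take an LVM root directly on the boot: line the way a plain partition can, but you can boot the CD normally, activate the volume group, and chroot in to rerun lilo. VG/LV and partition names are illustrative:

Code:
vgscan --mknodes           # create the /dev nodes for the volume group
vgchange -ay               # activate all logical volumes
mount /dev/myvg/root /mnt
mount /dev/sda1 /mnt/boot  # if /boot is a separate partition
chroot /mnt
lilo
exit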
So today I needed to switch from openSolaris to a viable OS on my workstation and decided to install openSUSE after having good experiences with it on my personal laptop. I ran into some problems partitioning one of the two hard disks installed on the system. I was limited on the amount of time I could spend at the office doing the install so I decided to use LVM on the one hard disk that seemed to work okay.
I picked LVM because although I don't know much at all about LVM, I at least know enough that it would allow me to expand the root and home partitions once I get the 2nd hard drive working correctly. So now that I've gotten the 2nd disk working okay, I've created two physical volumes on the 2nd drive, one to expand the root partition and one to expand the home partition. So, my question is: can I expand the root and home partitions while they are mounted, or should I boot into a live CD environment before I expand them? If I could expand them without booting into a different environment, that would be great, as I don't want to have to drive out to the office again before Monday. BTW, I am a new openSUSE user and an ex-Ubuntu user. I loved the Ubuntu forums but had to switch because I do not agree with the direction that Ubuntu is taking.
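Growing is generally safe online: ext3/ext4 (and reiserfs) can be enlarged while mounted, so no live CD should be needed. A hedged sketch, assuming the new PVs are joined to the existing volume group and the filesystems are ext3/ext4; device and VG/LV names are illustrative:

Code:
vgextend system /dev/sdb1          # add a new PV to the volume group
lvextend -L +20G /dev/system/home  # grow the LV
resize2fs /dev/system/home         # grow the mounted filesystem online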
I have a setup that looks like this
[Code]....
and I'm dumped into recovery mode. However, if I remove these mounts from /etc/fstab via comments, I can wait for the system to boot (which it does very quickly) then mount the mapper devices myself. So what is going on? Has something changed wrt logical volumes, or is this just systemd? I can live with manual mounting, but any advice on resolving the automatic mounting situation would be great.
[Code]....
I have Fedora Core 8 installed. I would like to reinstall it so as to get back commands that have been lost. To preserve my user data that has been stored in logical volumes, what selections should I make in the installation process? Are these selections essentially the same for Fedora Core 10?
I have a Fedora 11 server which can't access the ext4 partitions on LVM logical volumes on a RAID array during boot-up. The problem manifested itself after a failed preupgrade to Fedora 12; however, I think the attempt at upgrading to FC12 might not have anything to do with the problem, since I last rebooted the server over 250 days ago (sometime soon after the last Fedora 11 kernel update). Prior to the last reboot, I had successfully rebooted many times (usually after kernel updates) without any problems. I'm pretty sure the FC12 upgrade attempt didn't touch any of the existing files, since it hung on the dependency checking of the FC12 packages. When I try to boot into my existing Fedora 11 installation, though, I get the following screen. A description of the server filesystem (partitions may be different sizes now due to the growing of logical volumes):
Code:
- 250GB system drive
250MB          /dev/sdh1                     /boot   ext3
rest of drive  LVM partition                 VolGroup_System
10240MB        VolGroup_System-LogVol_root   /       ext4
[code]....
except he's talking about fake RAID and dmraid, whereas my RAID is Linux software RAID using mdadm. This machine is a headless server which acts as my home file, mail, and web server. It also runs MythTV with four HD tuners. I connect remotely to the server using NX or VNC to run applications directly on the server. I also run an XP Professional desktop in a QEMU virtual machine on the server for times when I need to use Windows. So needless to say, it's a major inconvenience to have the machine down.
Is there a limit to the number of partitions/logical volumes you can create using the partman-auto recipes? If not, any thoughts on why my preseed using the values included below results in only a /boot partition and logical volumes root, swap, and user? Is there another way to achieve putting /, /tmp, /var, /usr, /usr_local /opt, etc on their own logical volumes with preseeding?
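I'm not aware of a hard cap in partman-auto itself; a more common cause is a recipe stanza missing the $lvmok{ } flag, which partman needs on every volume that should become an LV. A hedged fragment showing the pattern; the sizes and mountpoints are illustrative, not the poster's values:

Code:
d-i partman-auto/method string lvm
d-i partman-auto/expert_recipe string               \
      multi ::                                      \
        256 512 512 ext3                            \
          $primary{ } $bootable{ }                  \
          method{ format } format{ }                \
          use_filesystem{ } filesystem{ ext3 }      \
          mountpoint{ /boot } .                     \
        4096 8192 -1 ext3                           \
          $lvmok{ } method{ format } format{ }      \
          use_filesystem{ } filesystem{ ext3 }      \
          mountpoint{ / } .                         \
        1024 2048 4096 ext3                         \
          $lvmok{ } method{ format } format{ }      \
          use_filesystem{ } filesystem{ ext3 }      \
          mountpoint{ /var } .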
I'm having trouble umounting partitions.
This is the entry I have in /etc/fstab for backup:
I can mount it ok:
But can't umount it:
Quick question: Can anyone tell me if a logical volume will persist through a CentOS 5 re-install? I hosed my system, and am working to reinstall CentOS (once I work through another problem).When I re-install, I am planning on keeping the same partitions and structure, but hope to not lose the information on my logical volumes (not the LogVolGrp, a different Logical Volume I created). Anybody know if keeping this data intact through a re-install is realistic?
I have to configure "Oracle Ent Linux 5" on two different servers.
After installing the servers, I observed that the grub loader entries are different, like:
Machine 1:
Machine 2:
Here, I don't understand the difference between 'LABEL=/1' and 'LABEL=/'.
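A hedged explanation: LABEL=/ in grub.conf or fstab refers to a filesystem label, and the RHEL/OEL installer typically labels the root filesystem "/", appending a digit (giving "/1") when that label is already in use, for example from a previous install it could see. The labels themselves can be inspected or changed; the device name below is illustrative:

Code:
e2label /dev/sda2     # print the current filesystem label
e2label /dev/sda2 /   # set the label to "/" (update fstab/grub to match)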
We have a CentOS 5.6 server mounting two iSCSI volumes from an HP P2000 storage array. Multipathd is also running, and this has been working well for the few months we have been using it. The two volumes presented to the server were created in LVM, and worked without problem. We had a requirement to reboot the server, and now the iSCSI volumes will no longer mount. From what I can tell, the iSCSI connection is working OK, as I can see the correct sessions, and if I run 'fdisk -l' I can see the iSCSI block devices, but the O/S isn't seeing the filesystems. Any LVM command does not show the volumes at all, and 'vgchange -a y' only lists the local boot volume, not the iSCSI volumes. My concern is that the output of 'fdisk -l' says 'Disk /dev/xxx doesn't contain a valid partition table' for all the iSCSI devices. Research shows that performing the vgchange -a y command should automatically mount any VGs that aren't showing, but it doesn't work.
There's a lot of data on these iSCSI volumes, and I'm no LVM expert. I've read that some have had problems where LVM starts before iSCSI and things get a bit messed up. I don't know if this is the case here (I can't tell), but if there's a way of switching this round that might help, I'm prepared to give it a go. There was absolutely no indication there were any problems with these volumes, so corruption is highly unlikely.
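Two hedged observations: first, the fdisk warning is probably a red herring, since a PV created on a whole disk has no partition table and fdisk complains about exactly that; second, if the iSCSI devices appeared only after LVM's boot-time scan, a manual rescan may bring the VGs back:

Code:
pvscan         # do the iSCSI devices show up as physical volumes?
vgscan         # rescan all devices for volume group metadata
vgchange -ay   # activate whatever was found, then mount as usual
# If pvscan shows nothing, check the filter line in /etc/lvm/lvm.conf;
# a restrictive filter can hide the iSCSI device paths from LVM.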
After fixing drive partition numbers, I got the following error from cfdisk:

Code:
FATAL ERROR: Bad logical partition 6: enlarged logical partitions overlap
Press any key to exit cfdisk

However, I can see all my partitions with fdisk and gparted, and I can mount and use all of them. I used the following guide to fix the drive numbers order: Reorder partition drive numbers in linux | LinkedBits. Does somebody know what cfdisk's problem is and how I can fix it?
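cfdisk is stricter than fdisk about logical-partition ordering inside the extended partition, so a reordered table can trip it even when the kernel is happy. Two hedged read-only checks; the device name is illustrative:

Code:
sfdisk -l /dev/sda      # print the table as sfdisk parses it
parted /dev/sda print   # parted's view, with any overlap warnings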
I am trying to install CentOS 5.3 x86_64 on an old Dell server:
PE 2850 with RAID 1 & 5 volumes on a PERC 4/DC dual channel RAID card.
The installer cannot see the RAID volumes.
The server can still boot with the old OS (RHEL 2.x), and the RAID and RAID volumes are fine.
I'm wondering if there is a way to shrink an ext3 LV mounted as /. I tried to with resize2fs, but it seems that isn't possible if the partition is mounted.
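That matches ext3's limits: it can be grown online but only shrunk while unmounted, and / can't be unmounted on a running system, so the usual route is a rescue/live CD. A hedged sketch; VG/LV names and sizes are illustrative:

Code:
# From a rescue/live CD, with the root LV not mounted
e2fsck -f /dev/vg00/root
resize2fs /dev/vg00/root 10G     # shrink the filesystem first
lvreduce -L 12G /dev/vg00/root   # then the LV, with a safety margin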
I have a huge RAID6 array, 21TB+, already partitioned in GPT. This is to be used as the storage location for my company's backup server, and I want to access it as one logical volume. Is this possible with CentOS 5? I just discovered the product specifications for CentOS 5, and saw that the maximum file system size is 16TB, but LVM2 should support volumes up to 1EB. Is there any way I can make this work in CentOS, or am I going to have to run a different distro?
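The reading of the specs is right as far as it goes: LVM2 itself will happily hold a single 21TB logical volume, and it is the ext3 filesystem on top that carries the 16TB ceiling. A hedged sketch of the LVM side, with device and names illustrative; the filesystem choice on CentOS 5 is the real constraint:

Code:
pvcreate /dev/sdb
vgcreate backupvg /dev/sdb
lvcreate -l 100%FREE -n backuplv backupvg   # one LV spanning the array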
I have a network (192.168.x.x) that I want to keep closed and private for the most part. I need, however, to get access to some files generated on the machines in this private network. So I first tried putting two cards in a machine running CentOS 5.2 and connecting one to the private network and the other to the public network. This worked somewhat, but I was not able to see this bridging machine in the private network because I could not run 2 Samba instances on this machine (I need one for the public network). So I set up Xen on a machine with the 2 NICs and assigned one card to the host dom and the other to the guest dom, which was connected to the private network.
This worked OK, but the only issue was the shared disk space. I couldn't use NFS because each machine operates in a different subnet and I don't know how to export an NFS drive across domains. So I created a logical volume on a disk and mounted this in both domains.
Here comes the question now. This works sometimes, but at other times I copy files from the private machine to the shared volume and can't see them from the other domain. Also, sometimes the guest domain which houses the private network server hangs during boot-up, saying that the logical volume has been assigned and cannot be mounted.
1) Is what I'm doing, using logical volumes across domains, legal (best practice, etc.)?
2) Is there another way for me to achieve what I want (sharing a disk partition across domains).
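On the symptoms: mounting the same non-cluster filesystem (e.g. ext3) in two domains at once is unsafe, since each kernel caches metadata independently, which produces exactly this kind of invisible-files behaviour and risks corruption; a cluster filesystem (GFS/OCFS2) is the supported way to share one block device. On the NFS point, differing subnets alone don't rule NFS out, since /etc/exports accepts multiple client networks. A hedged sketch, with the networks assumed:

Code:
# /etc/exports on the serving domain
/shared  192.168.0.0/24(rw,sync) 10.0.0.0/24(rw,sync)

# then re-export
exportfs -ra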
I'm sure many of you here have worked with disk quotas and lvm2, and my problem involves both. Basically, what I want to do is: whenever a logical volume gets below a certain constraint (10GB free), I want to automatically resize it to add 20GB. Obviously this can be done rather easily manually, and with a bit of Python hacking it can be done programmatically, but since this is for production use I was wondering if there was something a bit more fluid. Since this server is I/O intensive, ZFS implemented via FUSE is not an option, and neither is the still-unstable BtrFS.
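Absent a ready-made daemon, a hedged cron-able sketch of the manual logic; the LV path, mountpoint and threshold are illustrative, and it assumes the volume group has free extents and an ext3/ext4 filesystem that resize2fs can grow online:

Code:
#!/bin/bash
# Hypothetical names: adjust LV path and mountpoint to the real volume
LV=/dev/vg00/data
MNT=/srv/data
# Available space in KB on the filesystem mounted at $MNT
FREE_KB=$(df -P "$MNT" | awk 'NR==2 {print $4}')
# If less than 10GB free, grow the LV by 20GB and the filesystem with it
if [ "$FREE_KB" -lt $((10 * 1024 * 1024)) ]; then
    lvextend -L +20G "$LV" && resize2fs "$LV"
fi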
I ran yum update on my CentOS 5.6 box a couple of days ago, and following this the system would not reboot. I don't recall the exact error and don't seem to be able to find it logged anywhere, but it was something to do with LVM not being able to find a disk.
In the end I have booted to linux rescue and edited my /etc/fstab file so the system does not try to mount the offending volume group. This enables the system to boot but I need to find out what is wrong with the system and get this volume group accessible again. Here is my edited fstab showing the commented out line. code...
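Some hedged first steps, from the partially booted system or the rescue CD, to see what LVM thinks is missing; the failing VG name isn't shown in the post:

Code:
pvscan          # is the PV backing the volume group visible at all?
vgscan          # rescan devices for VG metadata
vgdisplay -v    # look for "partial" VGs or unknown/missing PV UUIDs
vgchange -ay    # try to activate; the error names the missing device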