General :: List Filesystems That Are Using LVM Logical Volumes?
Jun 23, 2011
I've read the first 40% of the RHEL 5 Logical Volume Manager Administrator's Guide, but still have one outstanding, burning question.
During the installation of Centos 5.6, I set up LVM physical volumes, volume groups and logical volumes. I can list these using pvdisplay, vgdisplay and lvdisplay commands.
How would I list what filesystems I have that are using my logical volumes?
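A rough sketch of one way to cross-check this on a CentOS 5 box, assuming the default volume group name VolGroup00 (adjust the names to match your vgdisplay output):

Code:
# show mounted filesystems with their type; LV-backed ones appear
# as /dev/mapper/<vg>-<lv>
df -hT | grep -E 'mapper|Filesystem'

# or list every mount that sits on a device-mapper node
mount | grep /dev/mapper

# for LVs that are not mounted, blkid reports the filesystem type
blkid /dev/VolGroup00/LogVol00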
How would I go about encrypting my lvm2 logical volumes on Debian Squeeze? Is it possible without backing everything up to a different drive and restoring afterwards?
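In-place encryption of an existing LV isn't really supported by the Squeeze-era tools, so the usual route is to create a fresh LUKS-formatted LV in the same volume group, copy the data across, and then retire the old LV; that avoids a separate backup drive but does need enough free extents in the VG. A rough sketch with hypothetical names (vg0, home, home_crypt):

Code:
# create a new LV and put a LUKS container on it
lvcreate -L 50G -n home_crypt vg0
cryptsetup luksFormat /dev/vg0/home_crypt
cryptsetup luksOpen /dev/vg0/home_crypt home_crypt

# make a filesystem inside the container and copy the data over
mkfs.ext3 /dev/mapper/home_crypt
mount /dev/mapper/home_crypt /mnt
rsync -aHAX /home/ /mnt/

# afterwards update /etc/crypttab and /etc/fstab, then remove the old LV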
I have done a recent install of Debian squeeze on a laptop. I set up LVM with 3 LV's, one for the root filesystem, one for /home, and another for swap. I then used lvextend to increase the size of the LV's. This additional space is shown if I enter lvdisplay (shortened for clarity):
However, if I use df, it still shows the previous size:

/dev/mapper/auriga-root   14G  8.0G  5.2G  61%  /
/dev/sda1                221M   16M  193M   8%  /boot
/dev/mapper/auriga-home  147G  421M  139G   1%  /home

I have even tried restarting. I do not understand why df still shows /home as 147GB when I extended it to 169GB using lvextend, and similarly for the root, which was extended by 2GB from 14GB to 16GB.
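For what it's worth, lvextend only grows the block device; the ext3 filesystem inside still has to be grown separately, which resize2fs can do while the filesystem is mounted. A sketch using the mapper names from the df output above:

Code:
# grow each filesystem to fill its (already extended) LV
resize2fs /dev/mapper/auriga-root
resize2fs /dev/mapper/auriga-home

# df should now report the new sizes
df -h / /home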
I have a system where the logical volumes are not being detected on boot and would like some guidance as to how to cure this. The box is a Proliant DL385 G5p with a pair of 146 GB mirrored disks. The mirroring is done in hardware by an HP Smart Array P400 controller. The mirrored disk (/dev/cciss/c0d0) has 4 partitions: boot, root, swap and an LVM physical volume in one volume group with several logical volumes, including /var, /home and /opt.
The OS is a 64-bit RHEL 5.3 basic install with a kernel upgrade to 2.6.18-164.6.1.el5.x86_64 (to cure a problem with bonded NICs) plus quite a few extras for stuff like Oracle, EMC PowerPath and HP's Proliant Support Pack. The basic install is OK and the box can still be rebooted OK after the kernel upgrade. However, after the other stuff goes on it fails to reboot.
The problem is that the boot fails during file system check of the logical volume file systems but the failure is due to these volumes not being found. Specifically the boot goes through the following steps:
Red Hat nash version 5.1.19.6 starting
setting clock
starting udev
loading default keymap (us)
setting hostname
No devices found   <--- suspicious?
Setting up Logical Volume Management:
The fsck.ext3 checks then fail with:

No such file or directory while trying to open /dev/<volume group>/<logical volume>

There are also messages about not being able to find the superblock, but this is clearly due to the device itself not being found. If I boot from a rescue CD, all of the logical volumes are present with correct sizes; dmsetup shows them all to be active and I can access the files within. fdisk also shows all the partitions to be OK and of the right type. I am therefore very sure that there is nothing wrong with the disk or logical volumes....
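One thing worth checking in a situation like this is whether the initrd the new kernel boots from still contains the cciss driver and the LVM pieces; third-party storage packages such as PowerPath sometimes regenerate it with different contents. A hedged sketch of rebuilding it from the rescue environment on RHEL 5 (the kernel version is the one mentioned above; adjust as needed):

Code:
# from the rescue CD, chroot into the installed system
chroot /mnt/sysimage

# rebuild the initrd for the installed kernel, forcing inclusion of
# the cciss driver; mkinitrd pulls in LVM support automatically
mkinitrd -f --with=cciss /boot/initrd-2.6.18-164.6.1.el5.img 2.6.18-164.6.1.el5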
I let the Debian installer set up separate partitions for root, usr, var, home and tmp. It ended up with a huge home partition and little space for the others. So I wanted to give some of home's space to the others, and did lvreduce on home and lvextend on the others. Following some info on the net, which tells you to run e2fsck -f partition1 followed by resize2fs partition1, I tried to fsck the reduced home partition and got the following error:

The filesystem size (according to the superblock) is 73113600 blocks
The physical size of the device is 20447332 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? yes

Is there any way to save this?
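The usual cause of that error is running lvreduce before shrinking the filesystem, so the superblock still describes a filesystem larger than the device. If nothing has been written over the cut-off blocks since, one possible recovery is to grow the LV back to at least its old size, check it, and only then shrink things in the correct order (filesystem first, LV second). A rough sketch with a hypothetical volume group name vg0; 73113600 blocks of 4 KiB is roughly 279 GiB, and the target sizes below are arbitrary examples:

Code:
# grow the LV back so the device is at least as large as the filesystem
lvextend -L 280G /dev/vg0/home

# the filesystem should now check cleanly
e2fsck -f /dev/vg0/home

# shrink the FILESYSTEM first, then the LV (leaving a safety margin),
# then let the filesystem fill the final LV size
resize2fs /dev/vg0/home 70G
lvreduce -L 75G /dev/vg0/home
resize2fs /dev/vg0/home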
I would like to ask whether it is possible to boot Slackware from the installation CD, when in a pinch, with a system on logical volumes. For the usual fdisk partitions the procedure is known:
Code:
boot: root=/dev/sda1 noinitrd ro

or something like that. This way, the system boots with the basic partitions mounted. My question is whether there is an option to achieve the same if the system is installed on logical volumes. I need to do this on a machine dual-booting Windows + Linux. Windows needs to be reinstalled, but as is well known, the boot sector will then be overwritten, so after the Windows reinstallation I will need to boot Slackware from the installation CD and run lilo.
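As far as I know there is no kernel parameter that will mount an LVM root straight from the installer's kernel, but the usual rescue procedure from the Slackware install CD gets to the same place. A sketch, with vg0 as a placeholder volume group name:

Code:
# boot the install CD normally, log in as root, then activate LVM
vgscan --mknodes
vgchange -ay

# mount the root LV (and /boot if it is a separate partition)
mount /dev/vg0/root /mnt
mount /dev/sda1 /mnt/boot

# chroot in and reinstall the boot loader
mount --bind /dev /mnt/dev
chroot /mnt /sbin/lilo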
Does everybody do major upgrades in place on production servers? Am I over-engineering by creating a new logical volume, syncing my working root volume to it, and upgrading the new volume to test services? Then, after a week or two or four, removing the old LV...
I inherited a 3ware 9550SX running a version of Gentoo with a 2.6.28.something kernel. I started over with CentOS 5.6 x86_64. tw_cli informs me that the 9-disk RAID 5 is healthy. The previous admin used LVM (?) to carve up the RAID into a zillion tiny pieces and one big piece. My main interest is the big piece. Some of the small pieces refused to mount until I installed the CentOS Plus kernel (they are reiserfs). The remainder seem to be ext3; however, they are not mounted at boot ("refusing activation"). lvs tells me they are not active. If I try to make one active, for example:

root> lvchange -ay vg01/usr

I get: "Refusing activation of partial LV usr. Use --partial to override." If I use --partial, I get: "Partial mode. Incomplete logical volumes will be processed." and then I can mount the partition, but not everything seems to be there.

Some of the directory entries look like this:

?--------- ? ? ? ? ? logfiles

Is it possible that the versions of the kernel and lvm that were on the Gentoo system are causing grief for an older kernel (and possibly older lvm) on CentOS 5.6, and that I might have greater fortunes with CentOS 6.x? Or am I missing something fundamental? This is my first experience with lvm, so it's more than a little probable.
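A "partial LV" message normally means LVM cannot find one of the physical volumes the LV has extents on, which would also explain the unreadable directory entries. Before blaming kernel or lvm versions, it may be worth confirming whether a PV really is missing; a short read-only diagnostic sketch, using the vg01 name from above:

Code:
# list the PVs LVM can see and the VG they belong to
pvs -o pv_name,vg_name,pv_size,pv_uuid

# vgdisplay warns about missing PVs and shows how many it expects
vgdisplay vg01

# show which devices each LV actually spans
lvs -o lv_name,vg_name,devices vg01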
So today I needed to switch from openSolaris to a viable OS on my workstation and decided to install openSUSE after having good experiences with it on my personal laptop. I ran into some problems partitioning one of the two hard disks installed on the system. I was limited on the amount of time I could spend at the office doing the install so I decided to use LVM on the one hard disk that seemed to work okay.
I picked LVM because, although I don't know much at all about LVM, I at least know enough that it would allow me to expand the root and home partitions once I got the 2nd hard drive working correctly. Now that I've gotten the 2nd disk working okay, I've created two physical volumes on it, one to expand the root partition and one to expand the home partition. So, my question is: can I expand the root and home partitions while they are mounted, or should I boot into a live CD environment before I expand them? If I could expand them without booting into a different environment, that would be great, as I don't want to have to drive out to the office again before Monday. BTW, I am a new openSUSE user and an ex-Ubuntu user. I loved the Ubuntu forums but had to switch because I do not agree with the direction that Ubuntu is taking.
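Growing should not need a live CD: ext3 and ext4 can be enlarged while mounted (only shrinking requires unmounting), and ReiserFS can be grown online with resize_reiserfs instead of resize2fs. A rough sketch, assuming the new PVs are /dev/sdb1 and /dev/sdb2, the volume group is called system, and the filesystems are ext3/ext4 (all placeholders):

Code:
# add the new physical volumes to the existing volume group
vgextend system /dev/sdb1 /dev/sdb2

# grow each LV, then grow the mounted filesystem to fill it
lvextend -L +50G /dev/system/root
resize2fs /dev/system/root

lvextend -L +100G /dev/system/home
resize2fs /dev/system/home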
and I'm dumped into recovery mode. However, if I remove these mounts from /etc/fstab via comments, I can wait for the system to boot (which it does very quickly) then mount the mapper devices myself. So what is going on? Has something changed wrt logical volumes, or is this just systemd? I can live with manual mounting, but any advice on resolving the automatic mounting situation would be great.
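One low-risk workaround while the real cause is being tracked down is to mark those entries in /etc/fstab so systemd does not treat a missing device as fatal; the boot then continues even if the LVs show up late. A sketch of such an entry, with a hypothetical mapper device and mount point:

Code:
# 'nofail' keeps the boot going if the device is absent at mount time;
# the timeout limits how long systemd waits for the device to appear
/dev/mapper/vg0-data  /srv/data  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2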
What is the maximum number of logical volumes for a volume group in LVM? Is there any known performance hit for creating a large number of small logical volumes vs. a small number of large volumes?
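For what it's worth, lvm2-format volume groups have no fixed LV limit unless one was set when the VG was created (the old LVM1 format did have a hard per-VG cap); a nonzero limit can reportedly be changed later with vgchange -l on an inactive group. A quick way to inspect the current limit and count, with vg00 as a placeholder name:

Code:
# 'MAX LV' shows 0 for unlimited on lvm2-format volume groups
vgdisplay vg00 | grep -i 'max lv'

# number of LVs currently defined in the group
lvs --noheadings vg00 | wc -l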
I have Fedora Core 8 installed. I would like to reinstall it so as to get back commands that have been lost. To preserve my user data that has been stored in logical volumes, what selections should I make in the installation process? Are these selections essentially the same for Fedora Core 10?
I have a Fedora 11 server which can't access the ext4 partitions on LVM logical volumes on a RAID array during boot-up. The problem manifested itself after a failed preupgrade to Fedora 12; however, I think the attempt at upgrading to FC12 might not have anything to do with it, since I last rebooted the server over 250 days ago (sometime soon after the last Fedora 11 kernel update). Prior to the last reboot, I had successfully rebooted many times (usually after kernel updates) without any problems. I'm pretty sure the FC12 upgrade attempt didn't touch any of the existing files, since it hung on the dependency checking of the FC12 packages. When I try to reboot into my existing Fedora 11 installation, though, I get the following screen. A description of the server filesystem (partitions may be different sizes now due to the growing of logical volumes):
Code:
- 250GB system drive
    250MB          /dev/sdh1                     /boot  ext3
    rest of drive  LVM partition -> VolGroup_System
    10240          VolGroup_System-LogVol_root   /      ext4
Except he's talking about fakeraid and dmraid, whereas my RAID is Linux software RAID using mdadm. This machine is a headless server which acts as my home file, mail, and web server. It also runs MythTV with four HD tuners. I connect remotely to the server using NX or VNC to run applications directly on the server. I also run an XP Professional desktop in a QEMU virtual machine on the server for times when I need to use Windows. So needless to say, it's a major inconvenience to have the machine down.
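One avenue worth trying, offered only as a hedged sketch, is to check from rescue media that the md array and volume group still assemble by hand, and then rebuild the initrd so it definitely contains the RAID and LVM pieces. This assumes Fedora 11 still uses mkinitrd rather than dracut, and the kernel version string below is just an example:

Code:
# from the rescue environment, confirm the array and VG come up by hand
mdadm --assemble --scan
vgchange -ay

# chroot into the installed system and rebuild the initrd for the
# installed kernel (replace the version string with your actual kernel)
chroot /mnt/sysimage
mkinitrd -f /boot/initrd-2.6.30.10-105.2.23.fc11.x86_64.img 2.6.30.10-105.2.23.fc11.x86_64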
Is there a limit to the number of partitions/logical volumes you can create using the partman-auto recipes? If not, any thoughts on why my preseed using the values included below results in only a /boot partition and the logical volumes root, swap, and user? Is there another way to put /, /tmp, /var, /usr, /usr_local, /opt, etc. on their own logical volumes with preseeding?
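For comparison, this is the general shape of an expert_recipe that puts several mount points on their own logical volumes; it is a generic sketch based on the documented partman-auto format, not the recipe referred to above, and the sizes are arbitrary. Each stanza ends with a bare period, only the stanzas marked $lvmok{ } end up inside the volume group, and further stanzas of the same shape can be added for /usr, /tmp, /opt and so on:

Code:
d-i partman-auto/method string lvm
d-i partman-auto/expert_recipe string                      \
      boot-lvm ::                                          \
        256 512 512 ext3                                   \
          $primary{ } $bootable{ }                         \
          method{ format } format{ }                       \
          use_filesystem{ } filesystem{ ext3 }             \
          mountpoint{ /boot } .                            \
        4096 8192 -1 ext3                                  \
          $lvmok{ }                                        \
          method{ format } format{ }                       \
          use_filesystem{ } filesystem{ ext3 }             \
          mountpoint{ / } .                                \
        1024 2048 4096 ext3                                \
          $lvmok{ }                                        \
          method{ format } format{ }                       \
          use_filesystem{ } filesystem{ ext3 }             \
          mountpoint{ /var } .                             \
        512 1024 200% linux-swap                           \
          $lvmok{ }                                        \
          method{ swap } format{ } .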
After fixing drive partition numbers, I got the following error from cfdisk: Code: FATAL ERROR: Bad logical partition 6: enlarged logical partitions overlap. Press any key to exit cfdisk. However, I can see all my partitions with fdisk and gparted, and I can mount and use all of them. I used the following guide to fix the drive number order: Reorder partition drive numbers in linux | LinkedBits. Does somebody know what cfdisk's problem is and how I can fix it?
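cfdisk appears to be stricter than fdisk about the extended-partition chain: it bails out if a logical partition's recorded boundaries overlap the neighbouring EBR entries, even when the filesystems themselves are fine. One way to look at the raw layout without modifying anything is to dump it with sfdisk; a sketch, assuming the disk is /dev/sda:

Code:
# dump the partition table, sector by sector, for inspection
sfdisk -d /dev/sda > sda-table.txt
cat sda-table.txt

# human-readable listing for comparison with fdisk -l
sfdisk -l /dev/sda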
I want to know what file systems there are for the /dev directory. Apparently different types have been developed for managing devices on Linux, and I am a little confused about all the file systems used for this directory. Another question: which file system is sufficient for managing devices on embedded Linux? Of course, on our embedded system we do not have many pluggable devices, so this directory can be static.
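For reference, the common arrangements are a static /dev populated with mknod, a tmpfs managed by udev (or mdev from BusyBox on small systems), or devtmpfs on newer kernels; a quick, read-only way to see which one a given system uses:

Code:
# show the filesystem type currently mounted on /dev, if any
mount | grep ' /dev '
grep ' /dev ' /proc/mounts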
I'm now using Ubuntu 10.04 with ext4 and, for the second time in my life, I have experienced data loss (not for real: I have backups), and I'm assuming it's a problem with the relatively new ext4 filesystem.
I want to restore all of my configurations (/etc and the like), data and home on reiserFS: is this possible? What to do in order to accomplish that?
I am tired of watching fsck check my filesystem when my eeepc 901 shuts down abruptly due to a crash. I know that with a journaling filesystem I won't have to wait for a check. However, I am well aware of the poor I/O performance of the SSD, so I can imagine a journaling filesystem being even more frustrating, since there would be constant writes to the journal. I will buy a new laptop without such a crummy SSD someday, but is there anything I can do now, on the software side of things?
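If the root filesystem is currently ext2, a journal can be added in place to turn it into ext3, and the journal traffic can be softened with mount options. A rough sketch, assuming the root partition is /dev/sda1 (check with df first), ideally run from a live USB or with the filesystem mounted read-only:

Code:
# add a journal to an existing ext2 filesystem, converting it to ext3
tune2fs -j /dev/sda1

# then mount it as ext3 in /etc/fstab with relaxed journalling to cut
# down on SSD writes (hypothetical fstab line):
# /dev/sda1  /  ext3  noatime,data=writeback,commit=60,errors=remount-ro  0  1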
I need to combine 6 different filesystems into one filesystem using rsync. I am so confused as to which parameters I need to use. The 6 filesystems are:
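As a generic sketch with hypothetical source mount points, the usual incantation for merging several trees into one while preserving ownership, permissions and links looks something like this (add further sources as needed):

Code:
# -a = archive (perms, owners, times, symlinks), -H = hard links,
# -A/-X = ACLs and extended attributes, -x = don't cross mount points,
# -v = verbose
rsync -aHAXxv /mnt/fs1/ /mnt/fs2/ /mnt/fs3/ /target/

# trailing slashes on the sources copy their contents rather than
# the directories themselves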
When I try to start up 9.10 I can get past GRUB but not fsck. A file check is started but no progress is made, and finally I get the 'General error mounting filesystems' message. Trying recovery mode, I get this just before the fsck check:
Quote:
fsck from util-linux-ng 2.16
[8.016378] ACPI: I/O resource piix4_smbs [0xb00-0xb07] conflicts with ACPI region SOR 1 [0x0b00-0xb0f]
/dev/sda1 goes fine

Quote:
/dev/sda3 has been mounted 34 times without being checked, check forced
mountall: canceled
This seems to be slightly different than the other threads I've seen discussing this issue. It just all of a sudden happened, I didn't upgrade anything or have any crashes immediately prior.
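If the forced check is what hangs, one way forward is to boot a live CD (or drop to the recovery-mode root shell) and run the check by hand where its output is visible; a sketch, assuming the filesystem named in the messages really is /dev/sda3 and that it is unmounted:

Code:
# full, non-interactive check
sudo fsck -fy /dev/sda3

# afterwards, inspect the counters that trigger the periodic check
sudo tune2fs -l /dev/sda3 | grep -i 'mount count'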
I want the filesystem of my external drive to be checked periodically after a number of mounts. I put 2 in the sixth column of fstab for this partition:
Code:
/dev/sdb1  /mnt/hd  ext3  rw,dev,sync,user,noauto,exec,suid  0  2

and I used tune2fs to set the maximum mount count to 32:

Code:
tune2fs -c 32 /dev/sdb
Now the mount count is 34 and the date of the last check is not recent, so apparently the automatic fsck has not been performed, probably because this partition is not mounted at start-up; I usually mount it manually.
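For what it's worth, the periodic check is performed by fsck (normally run against fstab entries at boot), never by the mount command itself, so a partition that is only ever mounted by hand will not get checked automatically. Note also that the tune2fs command above was run against /dev/sdb while the fstab entry is /dev/sdb1; the mount-count settings live in the filesystem on the partition. A short sketch, assuming the filesystem really is on /dev/sdb1:

Code:
# inspect the counters and the configured maximum
tune2fs -l /dev/sdb1 | grep -iE 'mount count|check'

# set the limit on the partition rather than the whole disk
tune2fs -c 32 /dev/sdb1

# run the periodic check by hand while the partition is unmounted
fsck.ext3 -fv /dev/sdb1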
We are about to install some RHEL5 servers as DB, web, and application servers. I've been asked to test which of the following file systems work better for disk I/O: ext3, ext4, JFS, Btrfs, and any other ones I can find that work under RHEL. The out-of-box install I have done allows me to format my volumes with ext2/3 but not ext4 or any others. Is it possible to "install" other filesystems for use, and if so, how?
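On RHEL/CentOS 5 the ext4 support shipped as a technology preview, with the userspace tools in a separate e4fsprogs package rather than e2fsprogs (older 5.x point releases used the ext4dev name instead); JFS and Btrfs are not in the stock RHEL 5 package set, so those would need third-party kernels or repositories. A hedged sketch, assuming a 5.4-or-later release and a scratch logical volume:

Code:
# install the ext4 userspace tools (technology preview on RHEL 5.x)
yum install e4fsprogs

# create and mount an ext4 filesystem on a test LV
mkfs.ext4 /dev/VolGroup00/lv_test
mkdir -p /mnt/test
mount -t ext4 /dev/VolGroup00/lv_test /mnt/test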
I know I can simply create a degraded raid array and copy the data to the other drive like this: mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
But I want the specific disk to keep the raw ext3 filesystem so I can still use it from FreeBSD. When using the command above, the disk becomes a RAID member and I can't do a mount /dev/sdb1 anymore. A little background info: the drives in question are used as backup drives for a couple of Linux and FreeBSD servers. I am using the ext3 filesystem to make sure I can quickly recover the data, since both FreeBSD and Linux can read it without problems. If someone has a different solution for that (two drives in RAID 1 that are readable by FreeBSD and writable by Linux), I'm open to suggestions.
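One possibility, offered as a sketch rather than a tested recipe: if the md superblock is placed at the end of the partition (metadata version 0.90 or 1.0) instead of near the start (1.2), the RAID member still begins with an ordinary ext3 superblock, so FreeBSD should be able to mount it read-only as plain ext3 while Linux treats it as a RAID 1 member:

Code:
# create the degraded mirror with the superblock at the end of the device
mdadm --create /dev/md0 --level=1 --metadata=1.0 --raid-devices=2 missing /dev/sdb1

# build the filesystem on the md device and copy the backups onto it
mkfs.ext3 /dev/md0

# on Linux, always mount via /dev/md0; mounting /dev/sdb1 directly while
# the array exists risks the two mirror halves diverging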