OpenSUSE Install :: Expanding Root And Home Logical Volumes
Jul 9, 2011
So today I needed to switch from openSolaris to a viable OS on my workstation and decided to install openSUSE after having good experiences with it on my personal laptop. I ran into some problems partitioning one of the two hard disks installed on the system. I was limited on the amount of time I could spend at the office doing the install so I decided to use LVM on the one hard disk that seemed to work okay.
I picked LVM because, although I don't know much about it, I at least know it would allow me to expand the root and home partitions once I get the 2nd hard drive working correctly. Now that I've gotten the 2nd disk working okay, I've created two physical volumes on it, one to expand the root partition and one to expand the home partition. So my question is: can I expand the root and home partitions while they are mounted, or should I boot into a live CD environment first? If I could expand them without booting into a different environment, that would be great, as I don't want to have to drive out to the office again before Monday. BTW, I am a new openSUSE user and an ex-Ubuntu user. I loved the Ubuntu forums but had to switch because I do not agree with the direction Ubuntu is taking.
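For what it's worth, growing ext3/ext4 filesystems on LVM is supported while they are mounted, so no live CD should be needed. A minimal sketch, assuming a volume group named "system" with LVs "root" and "home" and ext3 filesystems (all names here are hypothetical, substitute your own from vgdisplay/lvdisplay):

```shell
# Add the new physical volumes on the 2nd disk to the volume group:
pvcreate /dev/sdb1 /dev/sdb2
vgextend system /dev/sdb1 /dev/sdb2

# Grow each LV, then grow the filesystem while it stays mounted:
lvextend -L +20G /dev/system/root
resize2fs /dev/system/root        # online grow, no reboot needed

lvextend -L +100G /dev/system/home
resize2fs /dev/system/home
```

Shrinking is a different story and does require the filesystem to be unmounted, but pure growth can be done live.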
I have Fedora Core 8 installed. I would like to reinstall it so as to get back commands that have been lost. To preserve my user data that has been stored in logical volumes, what selections should I make in the installation process? Are these selections essentially the same for Fedora Core 10?
I have done a recent install of Debian squeeze on a laptop. I set up LVM with 3 LV's, one for the root filesystem, one for /home, and another for swap. I then used lvextend to increase the size of the LV's. This additional space is shown if I enter lvdisplay (shortened for clarity):
However, if I use df, it still shows the previous size:

Code:
/dev/mapper/auriga-root   14G  8.0G  5.2G  61% /
/dev/sda1                221M   16M  193M   8% /boot
/dev/mapper/auriga-home  147G  421M  139G   1% /home
I have even tried restarting. I do not understand why df would still show /home as 147GB when I extended it to 169GB using lvextend, and similarly for root, which was extended by 2GB from 14GB to 16GB.
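The likely missing step: lvextend enlarges only the block device, while df reports the size of the filesystem inside it, which stays unchanged until it is grown separately. A sketch, assuming the volumes are ext3/ext4 (device names taken from the df output above):

```shell
# Grow each filesystem to fill its enlarged LV; works online for ext3/ext4:
resize2fs /dev/mapper/auriga-root
resize2fs /dev/mapper/auriga-home

# Recent lvextend can also do both steps at once via fsadm:
lvextend -r -L +2G /dev/auriga/root
```

After resize2fs completes, df should report the new sizes immediately; no reboot is required.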
I have a system where the logical volumes are not being detected on boot and would like some guidance as to how to cure this. The box is a Proliant DL385 G5p with a pair of 146 GB mirrored disks. The mirroring is done in hardware by an HP Smart Array P400 controller. The mirrored disk (/dev/cciss/c0d0) has 4 partitions: boot, root, swap and an LVM physical volume in one volume group with several logical volumes, including /var, /home and /opt.
The OS is a 64-bit RHEL 5.3 basic install with a kernel upgrade to 2.6.18-164.6.1.el5.x86_64 (to cure a problem with bonded NICs) plus quite a few extras for stuff like Oracle, EMC PowerPath and HP's Proliant Support Pack. The basic install is OK and the box can still be rebooted OK after the kernel upgrade. However, after the other stuff goes on it fails to reboot.
The problem is that the boot fails during file system check of the logical volume file systems but the failure is due to these volumes not being found. Specifically the boot goes through the following steps:
Code:
Red Hat nash version 220.127.116.11 starting
setting clock
starting udev
loading default keymap (us)
setting hostname
No devices found   <--- suspicious?
Setting up Logical Volume Management:
The fsck.ext3 checks then fail with messages like:

Code:
No such file or directory while trying to open /dev/<volume group>/<logical volume>

There are also messages about not being able to find the superblock, but this is clearly due to the device itself not being found. If I boot from a rescue CD, all of the logical volumes are present with correct sizes; dmsetup shows them all to be active and I can access the files within. fdisk also shows all the partitions to be OK and of the right type. I am therefore very sure that there is nothing wrong with the disk or logical volumes....
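On RHEL 5 the LVs are activated from the initrd, so "No devices found" at that stage usually points at an initrd that lacks the storage driver or LVM setup — and third-party installers like PowerPath or the Proliant Support Pack commonly rebuild or replace the initrd. A sketch of rebuilding it from the rescue CD, assuming that is the culprit (the kernel version is taken from the post; verify yours with `uname -r` or the /boot contents):

```shell
# From the rescue CD, with the installed system mounted at /mnt/sysimage:
chroot /mnt/sysimage

# Rebuild the initrd so it includes the cciss driver and LVM activation:
mkinitrd -f -v --with=cciss /root/initrd-2.6.18-164.6.1.el5.img 2.6.18-164.6.1.el5

# Inspect the verbose output for the cciss module and lvm lines before
# replacing the one in /boot:
mv /root/initrd-2.6.18-164.6.1.el5.img /boot/initrd-2.6.18-164.6.1.el5.img
```

Comparing the initrd that boots (the original) against the one that fails (e.g. `zcat initrd.img | cpio -t`) would confirm whether one of the add-on packages stripped something out.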
I let the Debian installer set up separate partitions for root, usr, var, home and tmp. It ended up with a huge home partition and little room for the others. So I wanted to give some of home's space to the others and did lvreduce on home and lvextend on the others. Following some info on the net, which tells you to run:

Code:
e2fsck -f partition1
resize2fs partition1

But when I try to fsck the reduced home partition I get the following error:

Code:
The filesystem size (according to the superblock) is 73113600 blocks
The physical size of the device is 20447332 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? yes

Is there any way to save this?
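The error means lvreduce cut the device out from under the filesystem: the superblock says the fs spans 73113600 4KiB blocks (~279 GiB), but the LV is now only 20447332 blocks (~78 GiB). If nothing has been written past the new end, the data past it may still be intact on the freed extents, so the first move is to grow the LV back, not to let fsck "fix" anything. A recovery sketch, with hypothetical VG/LV names:

```shell
# Grow the LV back to at least its original size BEFORE running any
# repair; do not run fsck -y or resize2fs while the device is too small:
lvextend -L 280G /dev/vg/home
e2fsck -f /dev/vg/home

# The safe shrink order, for next time: shrink the filesystem FIRST,
# with a margin, then the LV, then let resize2fs fill the LV exactly:
e2fsck -f /dev/vg/home
resize2fs /dev/vg/home 70G
lvreduce -L 75G /dev/vg/home
resize2fs /dev/vg/home
```

This only works if the extents freed by lvreduce have not been reallocated to another LV and written to; if they have, those blocks are gone.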
I would like to ask if it is possible to boot Slackware with the installation CD, when in a pinch, on a system installed on logical volumes. For the usual fdisk partitions the procedure is known:
Code:
boot: root=/dev/sda1 noinitrd ro

or something like that. This way the system boots with the basic partitions mounted. My question is whether there is an option to achieve the same if the system is installed on logical volumes. I need to do this on a machine dual-booting Windows + Linux. Windows needs to be reinstalled, but as is well known, the boot sector will then be overwritten. So after the Windows reinstallation I will need to boot Slackware with the installation CD and run lilo.
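The `root=` trick does not work directly for an LVM root, because the install CD's kernel has no initrd to activate the volume group. What does work for the lilo-reinstall case is booting the CD to its shell, activating the VG by hand, and chrooting in. A sketch with hypothetical device and VG names:

```shell
# At the install CD shell, make LVM find and activate the volumes:
vgscan --mknodes
vgchange -ay

# Mount the installed system and chroot in to rerun lilo:
mount /dev/myvg/root /mnt
mount /dev/sda1 /mnt/boot          # if /boot is a separate real partition
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
chroot /mnt /sbin/lilo
```

After lilo writes the boot sector, exit the chroot, unmount in reverse order, and reboot.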
Does everybody do major upgrades in place on production servers? Am I over-engineering by creating a new logical volume, syncing my working root volume to it, and upgrading the new volume to test services? Then, after a week or two or four, removing the old LV...
I inherited a 3ware 9550SX running a version of gentoo with a 2.6.28.something kernel. I started over with CentOS 5.6 x86_64. tw_cli informs me that the 9-disk RAID 5 is healthy. The previous admin used lvm (?) to carve up the RAID into a zillion tiny pieces and one big piece. My main interest is the big piece. Some of the small pieces refused to mount until I installed the CentOS Plus kernel (they are reiserfs). The remainder seem to be ext3; however, they are not mounted at boot ("refusing activation"). lvs tells me they are not active. If I try to make one active, for example:

Code:
root> lvchange -ay vg01/usr

I get:

Code:
Refusing activation of partial LV usr. Use --partial to override.

If I use --partial, I get:

Code:
Partial mode. Incomplete logical volumes will be processed.

and then I can mount the partition, but not everything seems to be there.
Some of the directory entries look like this:

Code:
?--------- ? ? ? ? ? logfiles

Is it possible that the versions of the kernel and lvm that were on the gentoo system are causing grief for an older kernel (and possibly older lvm) on CentOS 5.6, and that I might have greater fortunes with CentOS 6.x? Or am I missing something fundamental? This is my first experience with lvm, so it's more than a little probable.
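"Partial LV" has a specific meaning: LVM's metadata says the VG contains a physical volume it cannot find, so LVs with extents on the missing PV activate with holes — which is exactly what the `?` directory entries look like. Before blaming kernel versions, it is worth checking what LVM expects versus what it sees; a sketch (VG name taken from the post):

```shell
# List the PVs LVM can actually find; a missing one shows up as
# "unknown device" in the VG listing:
pvs -o +pv_uuid

# Compare the PV counts the VG metadata expects vs. what is active:
vgdisplay vg01 | grep -i 'cur pv\|act pv'

# The metadata archive records which device each PV lived on:
ls /etc/lvm/archive/ /etc/lvm/backup/
```

If a PV really is absent, the question becomes whether the previous admin had a PV on a device that no longer exists under CentOS naming (e.g. a partition the new install doesn't see), rather than an lvm version mismatch.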
and I'm dumped into recovery mode. However, if I remove these mounts from /etc/fstab via comments, I can wait for the system to boot (which it does very quickly) then mount the mapper devices myself. So what is going on? Has something changed wrt logical volumes, or is this just systemd? I can live with manual mounting, but any advice on resolving the automatic mounting situation would be great.
i have a fedora 11 server which can't access the ext4 partitions on lvm logical volumes on a raid array during boot-up. the problem manifested itself after a failed preupgrade to fedora 12; however, i think the attempt at upgrading to fc12 might not have anything to do with the problem, since i last rebooted the server over 250 days ago (sometime soon after the last fedora 11 kernel update). prior to that reboot, i had successfully rebooted many times (usually after kernel updates) without any problems. i'm pretty sure the fc12 upgrade attempt didn't touch any of the existing files, since it hung on the dependency checking of the fc12 packages. when i try to reboot into my existing fedora 11 installation, though, i get an error screen. a description of the server filesystem (partitions may be different sizes now due to the growing of logical volumes):
Code:
250GB system drive
  250MB            /dev/sdh1                    /boot  ext3
  lvm partition    (rest of drive)              VolGroup_System
    10240          VolGroup_System-LogVol_root  /      ext4
except he's talking about fake raid and dmraid, whereas my raid is linux software raid using mdadm. this machine is a headless server which acts as my home file, mail, and web server. it also runs mythtv with four hd tuners. i connect remotely to the server using nx or vnc to run applications directly on the server. i also run an xp professional desktop in a qemu virtual machine on the server for times when i need to use windows. so needless to say, it's a major inconvenience to have the machine down.
I installed openSUSE 11.2 some 6 months ago as an alternative to Windows 7, on a 44GB partition. Having become my primary OS, I am now looking to expand the ext4 partition from 44GB to the maximum possible. I have some 24GB of unpartitioned space, plus free space on NTFS partitions (one of which could be deleted if necessary). What is the best and safest procedure to perform the partitioning?
Is there a limit to the number of partitions/logical volumes you can create using the partman-auto recipes? If not, any thoughts on why my preseed using the values included below results in only a /boot partition and logical volumes root, swap, and user? Is there another way to achieve putting /, /tmp, /var, /usr, /usr_local /opt, etc on their own logical volumes with preseeding?
I want to install the 11.2 version. My machine config is as follows: Pentium 4 at 1.8 GHz, 512MB RAM and a 15GB hard disk. I want to know what the partition sizes should be, especially for swap, root, home etc., and which desktop, i.e. GNOME or KDE, I should install.
After something happened in SUSE Studio, in any appliance I build the owner of /home/tux/Desktop is root, which makes it impossible to create desktop icons. This happens even in those appliances which previously built normally with normal ownership (i.e. tux as owner of /home/tux/Desktop). Something changed abruptly, and in all these appliances the ownership of this folder changed.
Although I've been dabbling for a while I'm somewhat of a newbie so bear with me: Rather than rebuild my Hardy Server due to root being full, I followed suggestions to create another logical volume on the volume group and put root there. I must have missed some fundamental step. Although the partition appears to be functional, the defined space isn't visible and I am stumped:
Code:
$ sudo df
Filesystem                        1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg--server1-lv--root    4582064  4533388         0 100% /
varrun                              1895524      248   1895276   1% /var/run
varlock                             1895524        0   1895524   0% /var/lock
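Creating a second LV does not by itself give root any more space: either the root filesystem has to be copied onto the new LV (with fstab and the bootloader updated), or, far simpler, the new LV can be dropped and the existing root LV grown into the free space. A sketch of the simpler route, assuming ext3 (the VG/LV names are decoded from the mapper name in the df output, where `--` escapes a literal dash):

```shell
# Confirm the volume group has free extents (remove the extra LV first
# if it swallowed them: sudo lvremove /dev/vg-server1/<new-lv>):
sudo vgdisplay vg-server1 | grep -i free

# Grow the root LV and then the filesystem; ext3 grows online:
sudo lvextend -L +5G /dev/vg-server1/lv-root
sudo resize2fs /dev/mapper/vg--server1-lv--root
```

df should then show the extra space on / without a reboot.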
After fixing drive partition numbers, I got the following error from cfdisk: Code: FATAL ERROR: Bad logical partition 6: enlarged logical partitions overlap Press any key to exit cfdisk However, I can see all my partitions with fdisk and gparted, I can mount and use all of them.I used the following guide to fix the drive numbers order: Reorder partition drive numbers in linux | LinkedBits Does somebody know whet is cfdisks problem and how can I fix it?
I've had everything but /boot on LVM LUKS encryption since I installed 11.4 on my netbook. Suddenly it won't accept my password and boot. Nothing had been updated since the last successful boot. The only possibly different thing that occurred was that I had plugged in my Android phone to charge before it booted up. Anyway, the specific error it gives when I enter the password (and I'm absolutely sure it's the correct password):
Code:
No key available with this passphrase.

Here is everything else on the screen:

Code:
doing fast boot
Creating device nodes with udev
[number (not sure if relevant/unique)] fb: conflicting fb hw usage inteldrmfb vs VESA VGA - removing gen
Volume group "system" not found
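One way to narrow this down is to test the LUKS header and passphrase from a rescue or live system, outside the initrd entirely. A sketch, with a hypothetical partition name (substitute the encrypted partition, e.g. from `fdisk -l`):

```shell
# Is the LUKS header intact, with key slots still enabled?
cryptsetup luksDump /dev/sda2

# Try opening it by hand; success proves the passphrase and header are fine:
cryptsetup luksOpen /dev/sda2 crypttest
cryptsetup luksClose crypttest
```

If the passphrase works here but not at boot, suspect the initrd (e.g. a keymap mismatch: the initrd may read the passphrase under a US layout while it was set under another), not the LUKS volume itself; "Volume group "system" not found" then just follows from the container never being unlocked.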
I'm looking for a way of mounting an encrypted volume - home folder or a separate mount point, using only the standard login authentication (ie KDM or ssh). I thought the pam_mount module provided this, but I still get prompted for a password on the console at boot time. This is inconvenient as both my main desktops are headless HTPCs. I want the login credentials to be passed through, at log in time. I'm guessing this is possible, but to be honest, encryption is one thing in Linux that still completely confuses me.
I have a raid 5 with 5 disks. I had a disk failure which took my raid down; after some struggle I got the raid5 up again, and the faulty disk was replaced and rebuilt. After the rebuild I tried doing a pvscan but could not find my /dev/md0. I followed some steps on the net to recreate the PV using the same UUID, then restored the VG (storage) using a backup file. This all went fine. I can now see the PV, VG (storage) and LVs, but when I try to mount them I get the error "wrong fs type". I know that the LVs are reiserfs filesystems, so I ran reiserfsck on /dev/storage/software, which gives me the following error:

Code:
reiserfs_open: the reiserfs superblock cannot be found

The next step would be to rebuild the superblock, but I'm afraid I might have configured something wrong on my raid or LVM, and by overwriting the superblock I might not be able to go back and fix it once I've figured out what I didn't configure correctly.
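That caution is well placed: a rebuilt superblock on top of a misassembled array or misaligned PV makes things strictly worse. A sketch of how to keep every step reversible and to check the raid geometry first (paths and device names are hypothetical):

```shell
# 1. Image the LV to spare storage so any repair can be undone:
dd if=/dev/storage/software of=/backup/software.img bs=4M

# 2. Verify the array geometry; an md0 re-created with a different
#    chunk size or device order makes every filesystem on it look
#    superblock-less even though the data is there:
mdadm --detail /dev/md0
mdadm --examine /dev/sd[abcde]1

# 3. Only after the geometry checks out (and with the backup in hand):
reiserfsck --rebuild-sb /dev/storage/software
```

It is also worth comparing the PV start offset against the metadata backup used for vgcfgrestore; if pvcreate placed the new PV header at a different offset, every LV is shifted and no superblock will be where the filesystem expects it.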