CentOS 5 :: Grow Volume/filesystem Without Reboot?
Oct 13, 2009
I'm looking for insight on how it might be possible to grow an existing volume/partition/filesystem while it's in active use, and without having to add additional LUNs/partitions to do it. For example, the best way I can find to do it currently, and am using in production, is this: you have a system using LVM managing a connected LUN (iSCSI/FC/etc), with a single partition/filesystem residing on it. To grow this filesystem (while it's active) you have to add a new LUN to the existing volume group, and then expand the filesystem. To date I have not found a way to expand a filesystem that is hosted by a single LUN.
For system context, I'm running a 150 TB SAN that has over 300 spindles, to which about 50 servers are connected. It is an equal mix of Linux, Windows, and VMware hosts connected via both FC & iSCSI... With both Windows & VMware, the aforementioned task of expanding a single LUN and having the filesystem expanded is barely a one-minute operation that "Just Works". If you can find me a sweet way to seamlessly expand a LUN and have a Linux filesystem expanded (without reboot/unmount/etc), I have cycles to test out any suggested methods/techniques, and am more than happy to report the results for anyone else interested. I think this is a subject many people would like to find that magic method for, to make all our lives much easier.
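One approach worth testing (a sketch only; the device, volume group and LV names below are placeholders): if the whole LUN is used directly as an LVM physical volume, with no partition table on it, the resize can be done online after the array-side expansion.
Code:
# tell the kernel the LUN has grown (sdb is an example device)
echo 1 > /sys/block/sdb/device/rescan
# if the PV sits directly on the LUN (no partition table), grow it in place
pvresize /dev/sdb
# extend the LV and the mounted ext3 filesystem online
lvextend -L +100G /dev/VolGroupSAN/LogVolData
resize2fs /dev/VolGroupSAN/LogVolData
If the PV lives on a partition instead, the partition has to be enlarged first, which is the part that usually forces downtime on CentOS 5.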
I installed CentOS 5.5. After the install, I decided to put in 3 identical disks for RAID 5. All of them are IDE disks. Then I put in a SATA disk and partitioned it to add another partition to the RAID 5 array. Everything works fine until I reboot the system. After a reboot, the SATA partition I added to the RAID 5 shows as removed. I had to re-add it using "mdadm --add" to make the RAID 5 array work.
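Two things worth checking (assumptions, not a confirmed fix): that the SATA partition's type is set to "fd" (Linux raid autodetect), and that the array's current layout is recorded in /etc/mdadm.conf so it is assembled completely at boot.
Code:
# after re-adding the member, record the array definition for boot-time assembly
mdadm --detail --scan >> /etc/mdadm.conf
# verify the member list and resync state
cat /proc/mdstat
mdadm --detail /dev/md0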
I have created two logical volumes, /dev/VolGroup00/LogVol00 and /dev/VolGroup02/LogVol02. After a reboot, /dev/VolGroup02/LogVol02 has disappeared. How do I restore the logical volume?
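A sketch of the usual recovery steps, assuming the volume group simply isn't being activated at boot rather than having lost its metadata:
Code:
# look for the volume group and activate it
vgscan
vgchange -ay VolGroup02
lvscan
# if the metadata really is gone, LVM keeps archived copies that can be restored
vgcfgrestore --list VolGroup02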
I have configured a "syslog" server, with the /var directory on a separate ext3 partition, to receive the logs and events from the clients and the firewall as well. The directory needs to grow dynamically as the logs are populated. Is there a way I can make the filesystem grow dynamically as and when the directory is full?
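A plain partition can't grow by itself, but if /var were a logical volume you could extend it online whenever it fills up (the LV name below is a placeholder, and the volume group needs free extents):
Code:
lvextend -L +10G /dev/VolGroup00/LogVolVar
resize2fs /dev/VolGroup00/LogVolVar   # ext3 grows online while mounted
Combined with a cron job that watches df output, that gets close to "dynamic" growth; otherwise logrotate is the usual answer.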
Is growing RAID 6 in CentOS 5.3 possible? I'm getting errors when I run mdadm in --grow mode: failed: 'mdadm --grow /dev/md0 -n 5 2>&1' -> mdadm: Cannot set device size/shape for /dev/md0: Invalid argument. Do I have to build a custom kernel for CentOS?
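A couple of hedged observations: a reshape that changes the number of devices normally wants a --backup-file for the critical section, and as far as I recall the md code in the 2.6.18 kernel used by CentOS 5.3 predates RAID 6 reshape support, so a newer kernel may indeed be required. The command would look something like:
Code:
mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-grow.bak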
I have installed Debian recently and am not able to mount any other volume except "Filesystem". It says "You are not privileged to mount this volume." I have tried everything, including raising the permissions of the user and changing the group to root, but in vain.
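One thing to try (the device, mountpoint and filesystem type below are examples only; an NTFS volume would need different options): give the volume an /etc/fstab entry with the user option so non-root users may mount it, or simply mount it as root.
Code:
# /etc/fstab line allowing ordinary users to mount the volume
/dev/sdb1  /media/data  ext3  user,noauto  0  0
# one-off mount as root
sudo mount /dev/sdb1 /media/data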
I have a system with a 2TB RAID level 1 installed (2 x 2TB drives, configured as RAID1 through the BIOS). I installed Centos 5.5 and it runs fine. I now added another 2x2TB drives and configured them as RAID1 through the BIOS.
How do I add this new RAID volume to the existing logical volume?
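A sketch of the usual LVM path, assuming the new BIOS RAID set shows up as a device-mapper node (the device name below is an example; check fdisk -l or dmraid -s for the real one):
Code:
pvcreate /dev/mapper/isw_newraid
vgextend VolGroup00 /dev/mapper/isw_newraid
lvextend -L +1.8T /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00   # grow the filesystem on the extended LV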
I'm getting an error message that something along the lines of "volume "filesystem root" has only 25mb space remaining". How do I increase the volume size so I never have to worry about it again? This is the 3rd time I've tried ubuntu and it's sticking more and more but this has me thoroughly perplexed. I've got a 320GB HDD partitioned 3 times with a Linux partition being 7GB.
I'm running SUSE 11.3 with gnome (and pulseaudio, which I tried to get rid of, but for me it's just too much hassle to get audio working w/o pulse). The Master volume of my USB headset (2nd soundcard) is reset to 0 after each boot. In order to get it properly working, I have to run the YaST sound module and re-set the Master volume (which always is down to 0). Using alsamixer to do the same doesn't work (alsamixer shows no controls), probably because pulse grabs the device or does something else with it?
Pulseaudio version: 0.9.21-10.1.1.i586 Alsa version: 1.0.23-2.12.i586 Any input on what I can do to make the volume level stick?
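One thing that sometimes helps (a guess, since PulseAudio may still override it): set the Master level where you want it and save the ALSA mixer state so it is restored at boot.
Code:
# save mixer state for the second card (card index 1); stored in /var/lib/alsa/asound.state
alsactl store 1
# it can be restored manually with
alsactl restore 1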
I know if I do a shutdown -rF now, it will perform an e2fsck on all my volumes with the -y switch. But if I just want to check one of the volumes rather than all of them, and have it use the -y switch so it will automatically answer yes to everything, how can I do that? I'm using RHEL, and have a huge volume I need to run a check on, and I don't want to sit there for the next 24 hours hitting the Y key every time it finds a problem ;-)
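A minimal sketch, assuming the volume can be unmounted first (the LV name is a placeholder); e2fsck should never be run on a filesystem that is mounted read-write:
Code:
umount /dev/VolGroup00/LogVolHuge
e2fsck -f -y /dev/VolGroup00/LogVolHuge   # -f forces the check, -y answers yes to all prompts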
An error occurred during the filesystem check. Dropping you to the shell; the system will reboot when you leave the shell. Warning -- SELinux is active Disabling security enforcement for system recovery Run 'setenforce 1' to reenable
It's been days that I've been with this and I'm getting to the end of my wits.
I tried to create a RAID-5 volume with three identical 1 TB drives. I did so, but I couldn't mount the new volume after a reboot. mdadm --detail told me that of the three drives, two were in use with one as a hot spare. This isn't what I wanted.
So I deleted the volume with the following commands:
Then I rebooted the machine, and used fdisk to delete the linux raid partitions and re-write an empty partition table on each of the drives. I rebooted again.
Now I'm trying to start over. I purged mdadm, removed the /etc/mdadm/mdadm.conf file, and now I'm back at square one.
Now for whatever reason I can't change the partition tables on the drives (/dev/sdb /dev/sdc /dev/sdd). When I try to make a new ext3 filesystem on one of the drives, I get this error:
"/dev/sdb1 is apparently in use by the sytem; will not make a filesystem here!"
I think that the system still thinks that mdadm still has some weird control of my drives and won't release them. Never mind that all I want to do even now is just make a RAID-5 volume. I've never had such difficulty before.
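A sketch of how I'd try to release the disks, assuming a stale md array (or its leftover superblocks) is what is still claiming them. Note also that "two active members plus a spare" is what mdadm normally shows while a freshly created RAID-5 does its initial rebuild, so the original array may actually have been fine.
Code:
# stop anything that still claims the disks, then wipe the md superblocks
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1
# confirm nothing else (device-mapper, LVM) is holding the devices
cat /proc/mdstat
dmsetup table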
I am trying to create a link to my Windows XP workgroup where all my data is stored (I was surprised that Linux could even see it!). I mounted a volume on the desktop, apparently... that worked fine until I rebooted and it had disappeared. It was fairly annoying that I had to go back into the network and re-mount the volume. How can I get it to stay put, even after rebooting?
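One way to make it permanent (a sketch; the server name, share, mountpoint and credentials file are all examples) is an /etc/fstab entry for the CIFS share:
Code:
# single /etc/fstab line
//winbox/share  /mnt/windata  cifs  credentials=/etc/samba/cred_winbox,uid=1000,_netdev  0  0
The credentials file holds the username= and password= lines so they don't sit in fstab itself.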
The volume keyboard shortcuts on my Asus Eee 1008P reset on reboot (going back to no shortcut at all). They work for the session if I set them, but after a reboot I have to set them again.
I added another disk to the server and created a mount point in fstab: /dev/VolGroup00/LogVol00 /opt ext3 defaults 1 2. Everything works perfectly... halt, boot, the system... but when I want to reboot with the command sudo reboot, it hangs at the end of the shutdown sequence on some number. If I remove the disk from fstab, then reboot works.
I encountered a problem on my NEW PROD box. I have 300 GB of remaining space and I decided to create a /dev/mapper/VolGroup00 using the Red Hat GUI. That was successful. Then I decided to create logical volumes out of it..
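For what it's worth, the command-line equivalent of carving logical volumes out of that group looks roughly like this (names and sizes are only examples):
Code:
lvcreate -L 100G -n LogVolData VolGroup00
mkfs.ext3 /dev/VolGroup00/LogVolData
mkdir -p /data && mount /dev/VolGroup00/LogVolData /data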
I am fairly new to linux and my laptop froze so I rebooted it and now I have this error. I have tried to load the files needed using the prompt but nothing seems to work so here is my bootinfoscript thing. Any way for me to reinstall ubuntu over the sda5 partition but keep my files?
Code: Boot Info Script 0.55 dated February 15th, 2010 = Boot Info Summary: = => Grub 2 is installed in the MBR of /dev/sda and looks on the same drive in partition #7 for /boot/grub. => Syslinux is installed in the MBR of /dev/sdb .....
I've got a file server with two RAID volumes. The one in question is 6 1TB SATA drives in a RAID6 configuration. I recently thought one drive was becoming faulty, so I removed it from the set. After running some stress tests, I determined my underlying problem hadn't cleared up. I added the disk back, which started the resync. Later on, the underlying problem caused another lock up. After a hard-reboot, the array will not start. I'm out of ideas on how to kick this over.
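A hedged sketch of what I'd try next (the md number and member list /dev/sd[b-g]1 are examples; check the real ones first):
Code:
# compare event counters and states across the members before forcing anything
mdadm --examine /dev/sd[b-g]1 | grep -E "Event|State"
# try a normal assemble, then a forced one, accepting that the interrupted resync restarts
mdadm --assemble /dev/md1 /dev/sd[b-g]1
mdadm --assemble --force /dev/md1 /dev/sd[b-g]1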
I am trying to connect one of our servers, running RHEL 5.4, to IBM iSCSI storage. The server is equipped with 2 single-port QLogic iSCSI HBAs (TOE). RHEL detected the HBAs and installed the driver itself (qla3xxx). I have configured the HBA IP addresses in the range of the iSCSI host ports of the storage. Each HBA connects to a different controller of the storage. I have discovered the storage using the iscsiadm -m discovery command for both controllers and that went through fine. The problem is that whenever the server restarts with both HBAs connected to the storage, it does not detect the volumes mapped to it, and to detect them I need to run "mppBusRescan" and "vgscan" each time. If only one path is connected, it is fine.
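Two things I would check (both are guesses based on typical RHEL 5 RDAC/open-iscsi setups): that the discovered nodes are set to log in automatically, and, failing a cleaner fix, a late-boot rescan. The paths below are the usual RHEL 5 locations, not confirmed for this setup.
Code:
# make all discovered targets log in automatically at boot
iscsiadm -m node -o update -n node.startup -v automatic
chkconfig iscsi on
# crude fallback: rescan the RDAC layer and LVM late in boot (append to /etc/rc.local)
/usr/sbin/mppBusRescan
/sbin/vgscan
/sbin/vgchange -ay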
fsck at boot is quite annoying, since it usually occurs when I reboot my system while performing administration tasks. Do I actually need it if I'm using the ext3 filesystem? If not, how would I extend the period between checks, or just turn it off altogether?
I have some large volumes that I don't want to automatically be e2fsck'd when I reboot the server. Is it safe to change maximum mount count to -1 and check interval to 0 while a volume is mounted, or will that cause problems to the file system?
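As far as I know tune2fs only rewrites superblock fields, so changing these settings on a mounted volume is generally considered safe, but treat that as an assumption rather than a guarantee. The device name below is a placeholder; this also answers the previous question about turning the periodic checks off.
Code:
# disable both the mount-count and the time-based check for one volume
tune2fs -c 0 -i 0 /dev/VolGroup00/LogVolBig    # -c -1 also disables the mount-count check
# verify
tune2fs -l /dev/VolGroup00/LogVolBig | grep -i -E "mount count|check"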
I installed Fedora 13 64-bit and it works great, but I encountered several issues when setting up a guest OS with KVM. The problem seems to be related to SELinux. But let me first ask a question about logical volumes. By default Fedora created these logical volumes:
[Code].....
"If you expect that you or other users will store data on the system, create a separate partition for the /home directory within a volume group. With a separate /home partition, you may upgrade or reinstall Fedora without erasing user data files." seems to suggest I have to create a separate physical partition and assign that to /home. But reading elsewhere it seems to suggest logical volume acts like a partition. My goal is to make it easy in case fedora is hosed and I have to re-install it without affecting /home where my cirtical data resides. Given above do I need to create a separate physical partition or I am just fine?
I have a second hard disk that originally had windows and all my data. Windows is hosed but I can see my data from within Fedora and Windows is gone and I created created new partition in its place which used ot be the C:/ drive appears as 53 Gb filesystem. My data which was originally D drive appears as 215 GB filesystem. As given in [URL] I want to create a new logical volume in 53 Gb filesystem which I want to use as space for virtual disk to install guest OS's in KVM. Currrently 53 GB filesystem is mounted as /media/3467BH89JK789 but this does not work well with KVM. how do I create this logical volume out of 53 Gb filesystem partition and add proper selinux info and do I add to vg_vostrolx volume group and in a different volume group?
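A rough sketch of one way to do it, assuming the 53 GB partition is /dev/sdb1 (check with fdisk -l), that wiping its current contents is acceptable, and that the images will live in a dedicated directory. Extending vg_vostrolx versus creating a new VG is mostly a naming preference.
Code:
umount /media/3467BH89JK789
pvcreate /dev/sdb1
vgextend vg_vostrolx /dev/sdb1              # or: vgcreate vg_vmstore /dev/sdb1
lvcreate -L 50G -n lv_vmimages vg_vostrolx
mkfs.ext4 /dev/vg_vostrolx/lv_vmimages
mkdir -p /vmimages && mount /dev/vg_vostrolx/lv_vmimages /vmimages
# label the directory so libvirt/KVM is allowed to use images stored there
semanage fcontext -a -t virt_image_t "/vmimages(/.*)?"
restorecon -Rv /vmimages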
Summary of issue: an EXT4 filesystem won't mount, with the error "mount: unknown filesystem type 'ext4'". Is no ext4 in the kernel the issue? Or is something corrupted? Really perplexed by this. I updated CentOS 5.5 to 5.6 to get ext4 (5.6 is supposed to have full support for ext4). I built several arrays and put the ext4 filesystem on them. All went well until I tried to mount them. BTW, this array (below) is set up as a RAID6 using partition 1 of 8 2TB drives. Bear with me here; just trying to be complete and not waste your time.
Attempting to mount gives this:
[root]# mount -v /dev/md1 /asc/array1
mount: unknown filesystem type 'ext4'
Note: it does a "fake" mount with the -f option (which apparently does everything except the system call):
[root]# mount -f -v /dev/md1
/dev/md1 on /asc/array1 type ext4 (rw,grpquota,usrquota)
e2fsprogs: Package e2fsprogs-1.39-23.el5_5.1.x86_64 is already installed and the latest version (for CentOS 5.6; CentOS 6.x uses 1.41...)
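A couple of hedged checks: on EL5 the ext4 pieces live in their own kernel module and userspace package, so e2fsprogs 1.39 being "latest" isn't enough on its own.
Code:
uname -r                    # confirm you actually booted the 5.6 kernel, not a leftover 5.5 one
modprobe ext4               # on older 5.x kernels the module was called ext4dev
grep ext4 /proc/filesystems
# ext4-aware mkfs/fsck tools on EL5 come from e4fsprogs, not e2fsprogs
yum install e4fsprogs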
I am a long-time RH release user (from the 6.x days) and am just switching back from Debian/Ubuntu to CentOS on some servers, but I cannot understand the kernel update strategy currently enabled in CentOS. There are two boxes with almost identical installations, but recently there was an automatic kernel update on one box. This auto update also seems to have triggered an automatic reboot of the machine, which is unacceptable on server machines.
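I can't confirm that a stock CentOS install reboots itself after a kernel update, but the auto-update side can be tamed; these are the usual knobs (a sketch, not a prescription):
Code:
# keep yum from pulling new kernels at all: add this to the [main] section of /etc/yum.conf
exclude=kernel*
# or stop the update daemon from applying updates on its own
grep do_update /etc/yum/yum-updatesd.conf    # should read "do_update = no" for notify-only
chkconfig yum-updatesd off                   # or disable the daemon entirely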
I am very new to linux, and I have a question regarding the filesystem check (fsck). The power recently went out and when I tried to restart linux the following error appears:
*/dev/sda1 contains a file system with errors, check forced
It then goes on to say..
*An error occurred during the file system check. Dropping you to a shell; the system will reboot when you leave the shell. Give root password for maintenance (or type Control-D to continue)
I wasn't sure what to do, but checked some other online forums and they suggested running fsck manually - so I typed in the root password - and used the command "fsck -A -V ; echo == $? ==". It then gave the following message:
*WARNING!!! Running e2fsck on a mounted filesystem may cause SEVERE filesystem damage *Would you like to continue (y/n)
Again, I wasn't sure what to do, so I just answered no. I then manually turned off the computer and was prompted at startup to press Alt-3. I was brought to another screen, and it informed me that one of the drives was degraded and suggested rebuilding the array. I tried doing this, but it still brings me back to the original error, "/dev/sda1 contains a file system with errors, check forced," and the process repeats.
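For what it's worth, here is how I'd run the check by hand from that maintenance shell (a sketch; if /dev/sda1 is the root filesystem it is normally mounted read-only at that point, which is the one situation where running e2fsck on a "mounted" filesystem is acceptable):
Code:
# check only /dev/sda1, answering yes to every repair prompt
e2fsck -f -y /dev/sda1
# anything the repair salvages ends up in lost+found on that filesystem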
Also, when I tried to rebuild the array, I didn't backup any of the data on our home directory before doing this (which was probably a big mistake). After being prompted to type the root password, I was able to give the ls command and look at all the directories...the home directory where our data was stored was empty and I am afraid I may have lost some information. Is there a possibility that data was lost when I was trying to rebuild using the old drives?
I've got 2 servers (xen1 and xen2 are their hostnames) with the configuration below: each server has 4 SATA disks, 1 TB each.
16 Gb ddr3 debian squeeze x64 installed: root@xen2:~# uname -a Linux xen2 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux
Storage configuration: the first 256 MB + 32 GB of 2 of the 4 disks are used as RAID1 devices for /boot and swap respectively. The rest of the space, 970 GB on all 4 SATA disks, is used as RAID10. There is LVM2 on top of that RAID10. The volume group is named xenlvm (the servers are meant to be used as Xen 4.0.1 hosts, but the story is not about Xen troubles). /, /var and /home are located on logical volumes of small size (I just realized I got mixed up with the LV names and partitions, but that's not the problem, I think):
I want to ask: is it safe to add more space to my root partition with GParted? I asked friends and they all told me that if I change the root partition, I may have problems starting my Debian.
I find myself in a position to go full time, on at least one computer in my home, to Ubuntu. On a side note, I'm not overly thrilled with the new Unity, but I'm certain that before 11.10 comes about, most of what irritates me about it will be fixed.
My problem is now not really involving Ubuntu, but to commit this computer to full time Ubuntu, I've decided to remove the Windows Partition completely. (I can access what I need through VirtualBox. I don't think anyone really ever gets COMPLETELY away from Windows, but that's a different subject.) Anyway, here is a picture of my partition table.
What I'd like to do is completely remove the windows partitions, which are obviously the first two on here, and then extend my Ubuntu partition to the left to fill in what will then become unallocated space. However, when I try this, I don't have the option to grow my Ubuntu partition to the left. How DO I do that? I know I've heard of it being done before, and I can't be the first to have run into this situation.
I have a machine running Ubuntu Server 9.10 installed on an 80GB RAID1 disk. The system has two arrays (one data, the other backup), each of the same size in RAID6 with ext4 fs, connected to separate 3ware 9690 controller cards. I had to increase the size of the arrays from 8TB to 12TB. No problems - added the drives, migrated the new disks into the array, rebooted the server, and everything is visible. I unmounted the drives and then attempted to grow the partition (it's a single partition), starting with the backup array, using gparted. It sees the unallocated space but when I try to grow the partition into the unallocated space it fails. Here's the gparted error details:
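In case it helps to take gparted out of the picture, the grow can be done by hand (a sketch; the device name, partition number, and the assumption of a GPT label are all mine, and a backup is strongly advised even though recreating a partition entry with the same start sector does not touch the data):
Code:
parted /dev/sdb unit s print              # note the exact start sector of partition 1
parted /dev/sdb rm 1
parted /dev/sdb mkpart primary <same start sector> 100%
e2fsck -f /dev/sdb1
resize2fs /dev/sdb1                       # grow the ext4 filesystem to fill the partition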