So I just had a quick question on logical volumes and such with Ubuntu. I've been looking into setting up a storage array of four 2TB hard drives for media storage in my house, but I've run into a wall over what sort of array I should use: a full RAID setup (RAID 5 or 10 most likely) or LVM striping. The one thing with LVM, however, is that there is no parity or redundancy built into it in case one hard drive fails. I was wondering if it is possible to create something similar to LVM striping, but where the logical volume stores whole individual files instead of striping them across the array. That way, if one drive fails, sure, I lose the contents of that one drive, but the rest of the content isn't lost, and I lose no space because there is no parity.
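From what I've read so far, the difference seems to come down to whether I pass a stripe count to lvcreate. Something like the sketch below is what I think I'd run (the volume group and LV names are placeholders I made up), though I'm not sure a linear volume really guarantees that whole files stay on one drive:

Code:
# striped across all four drives, RAID0-style (no redundancy)
lvcreate -i 4 -I 64 -l 100%FREE -n media_striped vg_media
# or linear/concatenated: extents are filled drive by drive instead of striped
lvcreate -l 100%FREE -n media_linear vg_media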
I've been an intermediate-level Linux user for some years, and now I'm supposed to set up and manage a small network for our research group. We have 5 Linux boxes with Ubuntu 10.04 (actually I'm installing everything we need on one box and will attempt to clone the others with Clonezilla). I have 2 questions:
1) I'd like to manage all user accounts from one PC (a server, so to say), so that every user can log in to any of the 5 machines with the same password etc. What is the best/easiest/most stable application/protocol to manage this?
2) Is it possible to create a network logical volume, based on the individual HDs and visible to all boxes? Something like a RAID0 over Ethernet?
I'm rearranging a bunch of disks on my server at home and I find myself in the position of wanting to move a bunch of LVM logical volumes to another volume group. Is there a simple way to do this? I saw mention of a cplv command, but this seems to be either old or not something that was ever available for Linux.
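The only approach I've come up with myself is to create a matching LV in the target group and copy it block for block, roughly like this (vg_old, vg_new, the LV name and the size are all placeholders):

Code:
lvcreate -L 50G -n mylv vg_new                     # same size as the original LV
dd if=/dev/vg_old/mylv of=/dev/vg_new/mylv bs=4M
lvremove /dev/vg_old/mylv                          # only once the copy is verified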
I have a system with a 2TB RAID level 1 installed (2 x 2TB drives, configured as RAID1 through the BIOS). I installed CentOS 5.5 and it runs fine. I have now added another two 2TB drives and configured them as RAID1 through the BIOS.
How do I add this new RAID volume to the existing logical volume?
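My guess is that the general shape is to turn the new array into a physical volume, add it to the existing volume group, and then grow the LV and filesystem, something like the sketch below (the VolGroup00/LogVol00 names and the device node are guesses on my part):

Code:
pvcreate /dev/mapper/newraid                   # whatever device node the new BIOS RAID1 shows up as
vgextend VolGroup00 /dev/mapper/newraid
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00             # grow the filesystem to match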
So I have F12 installed on my HD with LVM using the whole extent of the disk. I want to reduce it so I can dual boot it with a Windows system. I managed to reduce the logical volume to free some space, but I can't seem to reduce the physical volume. Is this possible, and how?
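What I was hoping exists is something along the lines of pvresize. This is roughly what I was going to try (the size and device name are just examples), though I gather the tail end of the PV has to be free of allocated extents first:

Code:
pvresize --setphysicalvolumesize 80G /dev/sda2   # shrink the PV itself
# then shrink the actual partition with fdisk/parted to match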
What is a logical volume? Why should we have them? Is there no need for grub on such a system? All kinds of problems since I tried to install Slackware!
To manage them more easily, I tied 2 hard disks together with LVM and made a logical volume, using ext4 for its filesystem.
Today I wanted to format and reinstall the system, so I booted the system using an Ubuntu CD. But while managing the partitions, I accidentally deleted the logical volume. Because the backup (/etc/lvm) was stored on the volume itself, I couldn't restore the old config, so I just created a new logical volume.
As I expected, I couldn't mount it correctly. Mount said something like "mount: mounting A on B failed: Invalid argument".
I must recover it, because it has a lot of important data. What should I do?
I have a Fedora 8 system that uses LVM on one of its drives (/dev/sdb2). One of the logical volumes is getting full (LogVol02). There is an unused, unmounted logical volume (LogVol03) available. I can see two possible options:
1) Mount the unused logical volume (LogVol03) on a new mount point (/home2) and create more space there.
2) Delete the LogVol03 logical volume and extend the nearly full volume (LogVol02) into the now available space.
Option 2 seems like the better approach, since it will seem seamless to the system users. I'm looking for suggestions on how I should go about doing this and what I need to look out for. Is it better to use the command line tools (lvm ...) or the GUI (system-config-lvm) to do this?
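If I go with option 2, the command-line version I've pieced together looks roughly like this (the VolGroup00 name is a guess; I'd confirm the exact names with lvdisplay first):

Code:
lvremove /dev/VolGroup00/LogVol03                 # delete the unused LV
lvextend -l +100%FREE /dev/VolGroup00/LogVol02    # grow LogVol02 into the freed extents
resize2fs /dev/VolGroup00/LogVol02                # grow the filesystem to match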
I'm currently on Ubuntu Server 11.04 x86_64 and have configured a logical volume that contains 2x 2TB HDDs, mounted at /data. The OS is installed on the first HDD (a 500GB one), so the system has 3 HDDs in total (1x 500GB OS disk and 2x 2TB data disks). I want to do a fresh install of Ubuntu Desktop on the system without losing the data in the 4TB logical volume currently mounted at /data. Is this possible and if so, how?
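My rough plan, assuming the installer lets me leave the two 2TB disks completely untouched, is just to re-scan and re-mount the volume group after the fresh install, something like this (the VG/LV names are placeholders for whatever I actually called them):

Code:
vgscan                         # find the existing volume group on the data disks
vgchange -ay datavg            # activate it
mount /dev/datavg/data /data   # and mount the existing LV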
HP DL385 with dual RAID controllers (8 disks each). During the install from the ISO, it sees the RAID controllers individually. I tried "One Generic Drive" but it still only partitions one of the RAID controllers. Is it possible, after the OS is installed, to configure the space as one logical volume?
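After the OS is installed, I'm assuming I could turn the second controller's space into a PV and pull both into one volume group, roughly like this (the cciss device names and VolGroup00/LogVol00 are guesses for how things might appear on this box):

Code:
pvcreate /dev/cciss/c1d0                  # the second controller's logical drive
vgextend VolGroup00 /dev/cciss/c1d0
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00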
My computer: Lenovo T61 ThinkPad, running fc11 for about two and a half months. Apparently when I made my partitions I didn't leave quite enough room in my root partition, because I just completely ran out. Here is how my hard drive is partitioned:
The root had about 15 GB on it, which just filled up. When I restarted to see if that would help, it booted fine up to the log-in screen, but instead of the usual Fedora blue background it was black except for the log-in window, which looked very low-res. A little pop-up kept coming up saying the GNOME power configuration settings failed to load or something. When I logged in, the whole screen was black except for the mouse, and I could get no response. I have plenty of space left in home, so I rebooted to rescue mode using the first Fedora installation disk and tried the following command:
Code:
lvreduce -L90G /dev/mapper/DRIVE
which only returned:
Code:
lvreduce: relocation error: lvreduce: symbol dm_tree_node_size_changed, version Base not defined in file libdevmapper.so.1.02

So I couldn't reduce the size of home, and thus couldn't increase the size of root.
IN SUMMARY:
a) Is the lack of space in root the probable cause of my computer not working?
b) Is there a good way to reduce home and increase root while running this live disk? (Rough sketch of what I had in mind below.)
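For b), what I had in mind from the rescue environment was roughly the following (my LV names are guesses, and I understand the filesystem has to be shrunk before the LV, not after):

Code:
umount /home                               # work on home offline
e2fsck -f /dev/VolGroup00/lv_home
resize2fs /dev/VolGroup00/lv_home 80G      # shrink the filesystem first
lvreduce -L 90G /dev/VolGroup00/lv_home    # then the LV, leaving some slack
lvextend -l +100%FREE /dev/VolGroup00/lv_root
resize2fs /dev/VolGroup00/lv_root          # grow root into the freed space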
Note: When I am looking at it now in the logical volume manager, it says that on the whole physical volume there is only 400MB free. However, when I last looked (about 30 minutes before I started having problems) it said there was about 100 GB free.
Edit: Never mind. I did some more research, and it turned out to be more of a GNOME power manager issue than a disk space issue, although I'm certainly going to increase my root partition now.
I've got a big problem. Earlier this afternoon I tried to unlock my screen, but the password dialog didn't appear (the background did, and I could move the pointer, but no dialog). So I restarted the computer, only for the Fedora bootup icon to get about 3/4 of the way full before the screen blanked out and I got the message "Boot has failed. Sleeping forever." I booted into the liveCD and opened the system installer to see if maybe I could just reinstall the system in place while leaving my data intact. When I got to the partitioning stage, my old partition layout was there...except one LVM volume group was totally missing. And this is the volume group that contained my / and /home, among other things. Another volume group sitting on a different RAID was still there, but ironically it was the one for short-term data.
I have three hard drives, using soft RAID and LVM. Each drive is split into 4 partitions. The first partition of each is part of a RAID-1 where /boot sits. The second of each makes up a RAID-5 on which sits my "Main" volume group for my important data (this is the one that has gone AWOL). The third of each makes up a RAID-0 on which my "Volatile" volume group sits (for /tmp, /var/tmp, and the like). The fourth is swap.
Is there any chance I can restore my volume group so my data can be recovered? I'm not sure if I've got the full layout with volume sizes written down anywhere.
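From the live CD I'm hoping the metadata is still findable somewhere. The kind of thing I was planning to try is below ("Main" is what I called the volume group; the device names are from memory and may be wrong):

Code:
mdadm --assemble --scan     # make sure the RAID-5 array is up first
pvscan                      # does LVM still see a PV on the RAID-5 device?
vgscan
vgchange -ay Main           # try to activate the missing volume group
vgcfgrestore --list Main    # if the metadata is gone, look for an archived copy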
There are 2 volumes in a single group. The boot partition is a physical volume and the system is on a logical volume. The disk has more room, up to 40GB. How can I extend the logical volume? I tried system-config-lvm, but it does not give the option.
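From the command line I think it comes down to something like this (my VG/LV names are placeholders, and the resize2fs step assumes an ext3/ext4 filesystem), provided the volume group actually has free extents to grow into:

Code:
lvextend -L 40G /dev/VolGroup00/LogVol00   # grow the LV
resize2fs /dev/VolGroup00/LogVol00         # grow the filesystem to fill it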
Hi. We have a cluster consisting of 10 logical volumes all part of one filesystem. Is there a way to know which logical volume owns a certain file/inode? I have tried what is suggested at this link, but the output is the filesystem and not a specific logical volume.
I just read and learned about logical volume management today. I have a server running RHEL5.4, LVM2. I have 1 physical volume, with one volume group, and 3 logical volumes. I have no free extents, nor do I have any in my volume group (not sure if it's possible to have free in one and not the other anyway), and I am running out of space on one of my logical volumes. Doing a df -h shows 96% of 9.7GB used on /dev/mapper/MainVG-root, mounted at /. So here's the stupid question: how can I find out what directories/files are taking up what space within this logical volume? As I said I have 3 all together, and the other 2 are mapped to /var and a /var pgsql sub-directory. I figured I could get the sizes of the other directories under / and drill down accordingly, but I seem to be missing some basic rule because the commands I am using and the values I am getting don't add up.
For example, it seemed logical to me to do an ls -lsh on / to try and identify the largest directories, but each directory is listed as being ~4-8K in size, which doesn't make sense to me. So I did a du -sh on each directory instead. Having done this on all of the / sub-directories and added up those values, there is not enough reported usage to account for the 8.9GB of used space that df -h / reports. How would I find out how the 9.7GB here is being allocated? Preferably without scripts, as I am not ready to add a layer of complexity yet without understanding some fundamentals.
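For reference, this is the sort of thing I've been running, restricted to the root filesystem so the /var volumes don't get mixed in (which may be part of my confusion):

Code:
df -h /
du -xsk /* 2>/dev/null | sort -n   # sizes in KB, largest last; -x stays on this filesystem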
I'm wondering if there is a way to shrink an ext3 LV mounted as /. I tried with resize2fs, but it seems that isn't possible while the partition is mounted.
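What I gather is that it has to be done offline from a live/rescue environment, roughly like this (the LV name and sizes are placeholders; the filesystem gets shrunk before the LV):

Code:
# from a live CD, with / not mounted
e2fsck -f /dev/vg0/root
resize2fs /dev/vg0/root 20G    # shrink the ext3 filesystem first
lvreduce -L 25G /dev/vg0/root  # then the LV, leaving a safety margin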
I have a huge RAID6 array, 21TB+, already partitioned with GPT. This is to be used as the storage location for my company's backup server, and I want to access it as one logical volume. Is this possible with CentOS 5? I just looked at the product specifications for CentOS 5 and saw that the maximum file system size is 16TB, but LVM2 should support volumes up to 1EB. Is there any way I can make this work in CentOS, or am I going to have to run a different distro?
I've run into trouble again. For the last two months I was trying out the Fedora Lovelock nightly build in VirtualBox; my host is Ubuntu 11.04 (previously 10.10, I upgraded yesterday). It had worked fine, just slowly, because I allotted it only 512 MB of RAM. I have now uninstalled VirtualBox and deleted .virtualbox in my home folder, but I couldn't recover my 8 GB of space. When I installed, I made the virtual disk fixed-size storage (my mistake, I guess); I don't know much about it either, I just wanted to try out Fedora 15, so I experimented with it using VirtualBox.
I'm running out of disk space. I don't have the money to buy an external hard disk, otherwise I wouldn't have cared, and I also don't have time to do a reinstallation of Ubuntu or some other OS (maybe Fedora) on my laptop.
I've only got 24 GB left, and since I intend to store a few more movies and songs, I would run out of disk space fairly quickly. That 8 GB would be a useful buffer if I could get it back.
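I suspect the fixed-size disk image is still sitting somewhere in my home directory, so before anything else I was going to hunt for it with something like this (the paths are just the usual VirtualBox locations, which may not match my setup):

Code:
find ~ -name '*.vdi' -size +1G -exec ls -lh {} \;
du -sh ~/.VirtualBox ~/VirtualBox\ VMs 2>/dev/null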
I have a RAID 5 with 5 disks. I had a disk failure which took my RAID down; after some struggle I got the RAID 5 up again, the faulty disk was replaced, and the array rebuilt itself. After the rebuild I tried doing a pvscan but could not find my /dev/md0. I followed some steps on the net to recreate the PV using the same UUID, then restored the VG (storage) using a backup file. This all went fine. I can now see the PV, the VG (storage) and the LVs, but when I try to mount them I get a "wrong fs type" error. I know that the LVs are reiserfs filesystems, so I ran reiserfsck on /dev/storage/software, which gives me the following error: "reiserfs_open: the reiserfs superblock cannot be found". The next step would be to rebuild the superblock, but I'm afraid that I might have configured something wrong on my RAID or LVM, and by overwriting the superblock I might not be able to go back and fix it once I've figured out what I didn't configure correctly.
I recently resized one of my Logical Volumes that contained 160MB data from 500MB to 6.5GB. After resizing it, I checked the size of the data via 'du -sh' and found that my data had reduced to 143MB.
Fortunately, I backed up the 160MB of data on another partition before resizing the Logical Volume. I ran 'diff' on both directories holding the 160MB and 143MB, but no difference was detected.
How come there is a 17MB difference after resizing?
In case you're wondering how I performed my resize, this is what I did:
Back in the day, I foolishly installed my Fedora server with the default logical volume layout on one physical volume. Knowing now that this is a huge waste of space (partition is large) I'd like to reduce the logical volume and somehow detach this now unused space and mount as a normal partition. Is this possible? Only 20GB of the 160GB has been used for the OS. Home partition is on a secondary disk.
I have a CentOS 5 server with a 250GB hard drive running close to the maximum space on one of the partitions: roughly 87% of 200GB (/home). I have a second 250GB hard drive which is completely unused. I recently did some searching through the forums here, found out about LVM, and wanted to implement it. The downside, I believe, is that a drive/partition has to be wiped before it can be added to LVM.
The process I'm considering is:
1. Add this empty 250GB secondary hard drive (sdb) to LVM and copy everything over from the currently filling-up partition on my main hard drive.
2. Have LVM add in the old partition on the primary hard drive (sda).
3. Extend my logical volume to include the old partition, taking my total space out to 450GB.
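In concrete commands, I think the whole thing would look roughly like this (device, VG and LV names are placeholders, and it assumes /home can be taken offline long enough to copy the data and then wipe the old partition):

Code:
# step 1: the new drive becomes the first PV and carries a copy of /home
pvcreate /dev/sdb1
vgcreate homevg /dev/sdb1
lvcreate -l 100%FREE -n home homevg
mkfs.ext3 /dev/homevg/home
mount /dev/homevg/home /mnt/newhome
cp -a /home/. /mnt/newhome/
# step 2: wipe the old /home partition and add it to the volume group
pvcreate /dev/sda3
vgextend homevg /dev/sda3
# step 3: grow the LV and filesystem across both drives
lvextend -l +100%FREE /dev/homevg/home
resize2fs /dev/homevg/home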
I have a 7.9TB logical volume I've created from eight 1TB RAID 0 devices. The volume is formatted with XFS so I can resize when ready. However, I think I want to do something that is not possible. I have 2.5TB free on my logical volume, and I'd like to shrink the volume down to 6TB by getting rid of two of the 1TB devices in the physical volume. However, pvmove seems to require free extents in order to work. Do I need to add 6TB of storage, pvmove everything onto it, and then decommission the original eight 1TB physical devices from the volume?
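For the record, the LVM side of evacuating two of the devices would, as I understand it, look something like the commands below (device and VG names made up). The real blocker is that the LV has to be shrunk first, and I don't believe XFS can be shrunk in place, so I may be stuck adding storage and copying after all:

Code:
pvmove /dev/sdg1                    # migrate extents off the PV onto remaining free space
pvmove /dev/sdh1
vgreduce myvg /dev/sdg1 /dev/sdh1   # drop the emptied PVs from the volume group
pvremove /dev/sdg1 /dev/sdh1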
I encountered a problem on my new production box. I have 300GB of remaining space, and I decided to create a /dev/mapper/VolGroup00 volume group using the Red Hat GUI. That was successful. Then I decided to create logical volumes out of it.
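From the command line, carving LVs out of it should be something along these lines (sizes, names and the filesystem are just examples I'd adjust):

Code:
lvcreate -L 100G -n data VolGroup00
mkfs.ext3 /dev/VolGroup00/data
mkdir -p /data && mount /dev/VolGroup00/data /data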
I have a network (192.168.x.x) that I want to keep closed and private for the most part. However, I need to get access to some files generated on the machines in this private network. So I first tried putting two cards in a machine running CentOS 5.2 and connecting one to the private network and the other to the public network. This worked somewhat, but I was not able to see this bridging machine in the private network because I could not run 2 Samba instances on this machine (I need one for the public network). So I set up Xen on a machine with the 2 NICs and assigned one card to the host dom and the other to the guest dom, which was connected to the private network.
This worked OK, but the only issue was the shared disk space. I couldn't use NFS because each machine operates in a different subnet and I don't know how to export an NFS share across domains. So I created a logical volume on a disk and mounted it in both domains.
Here comes the question now. This works sometimes, but at other times I copy files from the private machine to the shared volume and can't see them from the other domain. Also, sometimes the guest domain which houses the private network server hangs during boot-up, saying that the logical volume has been assigned and cannot be mounted.
1) Is what I'm doing, using logical volumes across domains, legitimate (best practice, etc.)?
2) Is there another way for me to achieve what I want (sharing a disk partition across domains)?
I'm sure many of you here have worked with disk quotas and lvm2, and my problem involves both. Basically, what I want is that whenever a logical volume drops below a certain threshold (10GB free), it is automatically resized to add 20GB. Obviously this can be done rather easily manually, and with a bit of Python hacking it can be done programmatically, but since this is for production use I was wondering if there was something a bit more fluid. Since this server is I/O intensive, ZFS implemented via FUSE is not an option, and neither is the still-unstable Btrfs.
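For context, this is roughly the cron-able shell version I've been getting by with (the mount point, VG/LV names and the ext filesystem are assumptions specific to my setup), but I'd prefer something less hand-rolled:

Code:
#!/bin/bash
# grow the data LV by 20GB whenever less than 10GB is free on its mount point
MOUNT=/srv/data
LV=/dev/vg0/data
THRESHOLD=$((10 * 1024 * 1024))                    # 10GB expressed in KB
FREE=$(df -Pk "$MOUNT" | awk 'NR==2 {print $4}')   # available KB on the mount
if [ "$FREE" -lt "$THRESHOLD" ]; then
    lvextend -L +20G "$LV" && resize2fs "$LV"
fi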
I ran yum update on my CentOS 5.6 box a couple of days ago, and following this the system would not reboot. I don't recall the exact error and don't seem to be able to find it logged anywhere, but it was something to do with LVM not being able to find a disk.
In the end I have booted to linux rescue and edited my /etc/fstab file so the system does not try to mount the offending volume group. This enables the system to boot but I need to find out what is wrong with the system and get this volume group accessible again. Here is my edited fstab showing the commented out line. code...
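Once booted, the first things I was planning to run to see what LVM thinks happened are below (the volume group name is a placeholder, since I can't check the real one right now):

Code:
pvscan                 # does LVM still see all the physical volumes?
vgscan
vgdisplay -v           # is the volume group complete or missing a PV?
vgchange -ay vg_data   # try to activate it
lvs -o +devices        # which devices each LV expects to sit on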