Server :: Differentiating Volume Groups - New RAID
Jul 1, 2009
I'm experimenting with a new 5.7TB RAID we got for one of our servers before it goes into production. I'm carving the space up into Volume Groups and Logical Volumes. Below is some sample output:
[root@server newhome]# vgdisplay
--- Volume group ---
VG Name extraid_sdd1
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.82 TB
PE Size 4.00 MB
Total PE 476804
Alloc PE / Size 476804 / 1.82 TB
Free PE / Size 0 / 0
VG UUID LJPJVE-fekS-crS8-uugk-l13z-0NG0-FWv3M3
--- Volume group ---
VG Name extraid_sdb1
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 3.64 TB
PE Size 4.00 MB
Total PE 953608
Alloc PE / Size 953608 / 3.64 TB
Free PE / Size 0 / 0
VG UUID kzlLN4-PyrX-LYUS-h1Tc-1S9F-jVV0-XU5tcK
Because I created this, I know that the second 3.64TB Volume Group, extraid_sdb1, is composed of two physical volumes, /dev/sdb1 and /dev/sdc1, each one 1.82TB in size. My question is, if I hadn't made this and had to work backward, how could I discover that info? I can see that the second VG is composed of 2 PVs from the "Cur PV" line. But if I didn't know that they are my /dev/sdb1 and /dev/sdc1, how could I break that out, as well as their sizes? If it matters, this system is running FC6.
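For anyone working backward like this, the PV-to-VG mapping is stored in the LVM metadata and can be listed directly; a minimal sketch (commands only, output omitted):
Code:
# list every physical volume with the VG it belongs to and its size
pvs -o pv_name,vg_name,pv_size
# or the long form, one record per PV
pvdisplay
# verbose VG report, including the PVs that make up extraid_sdb1
vgdisplay -v extraid_sdb1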
After adding a kernel parameter to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nodmraid" (not 100% sure it should go there; GRUB 2 is new to me, but that is another story), I noticed the following error when running update-grub:
ERROR: ddf1: wrong # of devices in RAID set "ddf1_Series1" [1/2] on /dev/sdb No volume groups found
Now I haven't a clue where it is getting ddf1_Series1 from. sdf1 is part of a RAID1 group that is layered as mdadm RAID1 > LUKS > LVM.
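That error usually points at leftover DDF fake-RAID metadata on the disk that dmraid still picks up. A hedged sketch for checking it (only run the erase step if you are certain the disk is plain mdadm/LVM and the ddf1 signature is stale, as erasing metadata is destructive):
Code:
# show which disks dmraid thinks carry fake-RAID metadata
dmraid -r
# show the RAID sets it assembles from that metadata
dmraid -s
# list all on-disk signatures without touching anything
wipefs /dev/sdb
# destructive: erase the stale ddf1 metadata from the disk
dmraid -rE /dev/sdb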
I've been using LVM on some of my Linux servers for years without fully "getting" it, doing a lot of things by rote. As I set up a new RAID, though, I realize I don't have to be so rigid. I inherited a mission-critical server with five independent disks
Mainly because the 1-to-1 correspondence is easy for me to understand, and it's what I'm used to. But I realize it doesn't have to be that way, and I could have one VG with all the LVs as parts of it, i.e.
Is there any advantage to one way over the other? Would using one VG with multiple LVs be kind of like "putting all my eggs in one basket"? Do more VGs and LVs introduce unwanted overhead into the LVM layer that should be frowned upon? If both methods are equal, I'll go with method 1; it's just clearer to me. But now that I understand the second, I could go that way if there's a compelling reason.
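For what it's worth, both layouts are expressed with the same commands; a minimal sketch with hypothetical device and volume names, just to make the two methods concrete:
Code:
# method 1: one VG per array, one LV filling it
vgcreate vg_raid1 /dev/sdb1
lvcreate -n lv_data -l 100%FREE vg_raid1

# method 2: one VG spanning several PVs, several LVs carved out of it
vgcreate vg_all /dev/sdb1 /dev/sdc1
lvcreate -n lv_home -L 500G vg_all
lvcreate -n lv_backup -L 800G vg_all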
How to create multiple Logical Groups out of a single Physical Volume? Here is the Physical Volume I have created:
Code:
# pvdisplay
--- Physical volume ---
PV Name /dev/sda9
VG Name myVG1
PV Size 54.88 MB / not usable 2.88 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 13
Free PE 11
Allocated PE 2
PV UUID bon4Ao-vmgC-aP1h-EC9X-w3tN-YXNu-0N2dAw
This is how I am creating a Logical Group out of the above Physical Volume:
Code:
# vgcreate myVG1 -s 4m /dev/sda9
Display:
Code:
# vgdisplay
--- Volume group ---
VG Name myVG1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 52.00 MB
PE Size 4.00 MB
Total PE 13
Alloc PE / Size 2 / 8.00 MB
Free PE / Size 11 / 44.00 MB
VG UUID O6ljYC-bflz-EUTd-nf34-8gYe-Fh39-Bh3cOg
But I am unable to create one more Logical Group out of this Physical Volume. Can that be done? Or do we always have to extend the current Logical Group to use the available space of a Physical Volume?
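A physical volume can only belong to one volume group, so a second VG needs its own PV; within the existing VG, though, you can create as many logical volumes as there are free extents. A hedged sketch with hypothetical names and sizes:
Code:
# carve additional logical volumes out of the existing VG
lvcreate -n lvol1 -L 8M myVG1
lvcreate -n lvol2 -L 16M myVG1

# a second VG would need another PV, e.g. another partition (hypothetical)
pvcreate /dev/sda10
vgcreate myVG2 /dev/sda10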
I believe the server section is the best place for RAID questions...
I have the following situation: we have a Dell T3400 with embedded fake RAID on it. I don't know exactly how the system was set up (I wasn't here at that time), but RAID was enabled in the BIOS, and while booting, the two hard drives were shown as members of Intel RAID Volume0 (RAID 1 mirror). I am not sure whether the software RAID was actually properly configured in Linux (Fedora 9), and whether the OS was reconstructing the whole RAID or it was just the BIOS part that was mirroring /boot or some parts of it. Frankly, I find these hybrid RAIDs very confusing. Some bad disk manipulation on my part caused the server to crash, but I was able to recover and boot with just one hard drive after using fsck.
I decided to get rid of the RAID, as it's not the right solution for the application we need it for, and to go with a traditional single-hard-drive system, using Ghost for Linux to clone to a spare disk when backups are needed. So I installed the latest Fedora 12 distribution onto another hard drive and disabled RAID in the BIOS (changed from RAID ON to autodetect, which is the only other option).
Here is what I have now:
/dev/sda has the newly installed Fedora 12
/dev/sdb is an empty hard drive that I would use as an intermediate
/dev/sdc is the old hard drive, a member of Intel RAID Volume0
sdb was partitioned into sdb1, sdb2, and sdb3, and I created an ext3 filesystem on sdb2. The hard drive belonging to RAID Volume0 (sdc) has a lot of work done on it, and I would like to be able to recover the files to the new disk (sda). I cannot mount that old hard drive from Fedora 12, as it sees an unknown RAID-member filesystem on it, probably assigned by the Intel RAID chip.
So I decided to do it from the other side: boot from RAID Volume0, and from there mount a third, intermediate hard drive (sdb) onto which I would copy the documents, then mount the same hard drive from the newly installed Fedora 12 and copy those documents off it. I can mount /dev/sdb2 from Fedora 12 fine and copy stuff to and from it, but not when I boot from the RAID Volume0 hard drive (sdc) with Fedora 9 on it. It keeps saying that the partition in question (/dev/sdb2) is an invalid block device. I am stuck here, as my knowledge in this sort of thing is very limited. If somebody can show me how to recover the files from that old RAID hard drive onto the new Fedora 12 drive, I would appreciate it a lot.
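One hedged approach from the Fedora 12 side is to let dmraid activate the old Intel (isw) set read-only and copy the files straight off it; the mapper names below are placeholders and will differ on the real system:
Code:
# see whether dmraid recognises the old Intel RAID set
dmraid -s
# activate it; device-mapper nodes appear under /dev/mapper
dmraid -ay
ls /dev/mapper   # e.g. isw_xxxxxxxx_Volume0, isw_xxxxxxxx_Volume0p1 (placeholders)
mkdir -p /mnt/oldraid
mount -o ro /dev/mapper/isw_xxxxxxxx_Volume0p1 /mnt/oldraid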
Dual PII 400, 512MB, with a Promise SuperTrak 100 IDE Array Controller. At present I have only one drive on the controller, configured as one JBOD array. I installed FC9 with no problem. The new partition is created and formatted, GRUB is installed, and then... GRUB is found and booted, but then I get:
Reading all physical volumes. This may take a while...
No volume groups found
Volume group "VolGroup00" not found
Unable to access resume device (/dev/VolGroup00/LogVol01)
mount: could not find filesystem '/dev/root'
I can boot in rescue mode and chroot to the installed system. I changed the kernel boot parameter to "root=/dev/VolGroup00/LogVol00"
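From rescue mode it is worth checking whether the LVM tools can see the volume group at all and, if they can, rebuilding the initrd so the controller driver loads and the LVM scan runs at boot. A hedged sketch (the kernel version is a placeholder to be filled in from /boot):
Code:
# inside the rescue environment
lvm vgscan
lvm vgchange -ay
chroot /mnt/sysimage
# rebuild the initrd so it contains the controller driver and activates LVM
mkinitrd -v -f /boot/initrd-<kernel-version>.img <kernel-version>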
I have a system with a 2TB RAID level 1 installed (2 x 2TB drives, configured as RAID1 through the BIOS). I installed CentOS 5.5 and it runs fine. I have now added another 2 x 2TB drives and configured them as RAID1 through the BIOS.
How do I add this new RAID volume to the existing logical volume?
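Assuming the new BIOS mirror shows up as a single block device (here /dev/sdb, an assumption), the usual sequence is to turn it into a PV, add it to the existing VG, grow the LV, and then grow the filesystem; a hedged sketch:
Code:
pvcreate /dev/sdb                          # new RAID1 volume (device name is an assumption)
vgextend VolGroup00 /dev/sdb               # add it to the existing volume group
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00         # grow the ext3 filesystem to match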
I'm new to LVM. I use Red Hat and CentOS 5. I'm setting up a database server and I want to set up the local drives for performance. My plan is to have three storage locations: the 1st for Linux, the 2nd for the application, and the 3rd for the data files. Each location will be appropriately redundant, and the OS and application drives will be local. Because my goal is to dedicate one spindle to the OS and another to the application, is there a best practice that says I should create two LVM volume groups, each with one logical volume associated with one of the physical partitions, or one LVM volume group with two logical volumes, each associated with one of the physical partitions? I've read that a physical disk can only belong to one volume group. So if I want to add 70GB to both logical volumes, I could add a single 140GB drive to a single volume group and then add half to each logical volume. If I have two volume groups, I would need to add two additional disks. I may be missing an obvious consideration, or a basic concept of LVM.
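The single-VG variant described above would look roughly like this (hypothetical partitions and sizes); the trailing device on each lvcreate pins the LV to one spindle, and the vgextend/lvextend pair shows the one-disk growth path:
Code:
# one VG over both spindles, one LV pinned to each
vgcreate vg_db /dev/sdb1 /dev/sdc1
lvcreate -L 60G -n lv_app vg_db /dev/sdb1
lvcreate -L 60G -n lv_data vg_db /dev/sdc1

# later: add a single 140GB disk and grow both LVs by 70GB
pvcreate /dev/sdd1
vgextend vg_db /dev/sdd1
lvextend -L +70G /dev/vg_db/lv_app
lvextend -L +70G /dev/vg_db/lv_data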
I have a couple of hard drives from different Linux distros. On the HDDs, logical volumes were created. What I want to do is back up the data from those HDDs, but when I connect some of them and boot up the system, it gives me an error that a volume group is duplicated, and it lists the UUID. My host system is CentOS 5. What is the best solution: to rename those volume groups, or to mount the drives from the other systems, so the data will not get lost?
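vgrename accepts a VG UUID, which is the usual way out of a duplicate-name clash; a hedged sketch with placeholder names:
Code:
vgs -o vg_name,vg_uuid                      # find the UUID of the foreign VG
vgrename <UUID-of-foreign-VG> vg_olddisk1   # rename it (placeholder names)
vgchange -ay vg_olddisk1
mkdir -p /mnt/olddata
mount /dev/vg_olddisk1/<lvname> /mnt/olddata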
I am very new to LVM, and not especially experienced with Linux, and have some questions about how LVM works. A few months back I set up a server running FC10 and tried creating volume groups during the initial setup. We've realized that we are not using all the available space on the physical drive, and I realized that for some reason (I'm thinking this might have been the default?) we initially created two volume groups (VolGroup00 and VolGroup01), apparently with two logical volumes in each (LogVol00 and LogVol01). LogVol00 in VolGroup00 is mapped to /, and the other group is actually unused. I figure it would be simplest to just have all this space mapped to /, so I thought the thing to do would be to simply merge VolGroup01 into VolGroup00. I tried this:
[root@office mapper]# vgmerge VolGroup00 VolGroup01
Logical volumes in "VolGroup01" must be inactive
So after a bit of research, I tried this:
[root@office mapper]# vgchange -a n VolGroup01
Can't deactivate volume group "VolGroup01" with 1 open logical volume(s)
So apparently there's an open volume, but I don't know how to go about closing it. I removed LogVol00 from that group, but LogVol01 won't budge.
[root@office mapper]# lvremove VolGroup01
Can't remove open logical volume "LogVol01"
So how do I go about closing this Volume? At one point, there was some output that told me LogVol01 was being used as swap space. How do I handle that?
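Since the open LV is swap, the usual sequence is to stop swapping on it first (and check /etc/fstab so it is not re-enabled at the next boot); a hedged sketch:
Code:
swapoff /dev/VolGroup01/LogVol01    # close the open LV (it is in use as swap)
lvremove /dev/VolGroup01/LogVol01   # optional: drop it entirely before the merge
vgchange -a n VolGroup01            # now the VG can be deactivated
vgmerge VolGroup00 VolGroup01
If the swap space is still wanted, a new LV can be created in VolGroup00 afterwards, initialised with mkswap and enabled with swapon, with /etc/fstab updated to point at it.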
I just imaged my RHEL 4 system that was running on a Dell PowerEdge 2950 server using Acronis software, and I restored the image to a VMware virtual machine.
Dell PowerEdge 2950 - RAID 5
VMware - ESX 4.0
OS - Red Hat Enterprise Linux 4 ES Update 5 x64
Kernel - 2.6.9-55.0.9.ELsmp
I'm getting the following when I try to boot on the new virtual machine. I'm thinking it has to do with the fact that it's new hardware and it's having trouble either finding the right drivers or pointing to the correct place.
"No Volume Groups found Volume Group "Volgroup00" not found ERROR: /bin/lvm exited abnormally! (pid505) mount: error 6 mounting ext3 mount: error 2 mounting none switchroot: mount failed: 22 umount /initrd/dev failed: 2
Kernel panic - not syncing: Attempted to kill init!"
I was able to boot into Linux rescue mode using the boot CD. Then I typed:
# chroot /mnt/sysimage
Here's all the info from the commands I typed:
# ldd /bin/bash
libtermcap.so.2 => /lib64/libtermcap.so.2
libdl.so.2 => /lib64/libdl.so.2
libc.so.6 => /lib64/tls/libc.so.6
/lib64/ld-linux-x86-64.so.2
# uname -a
Linux localhost.localdomain 2.6.9-89.EL x86_64
# df -h
....
I've tried the following:
1. mkinitrd -v -f /boot/initrd-2.6.9-55.0.9.EL.img 2.6.9-55.0.9.EL
2. Modified the device.map to point to /dev/sda3
3. Changed the SCSI controller in VMware to use BusLogic instead of LSI Logic (didn't work because I'm running 64-bit; it gave me an error message)
4. grub-install --recheck /dev/sda
5. Tried booting different OS versions (i.e. 2.6.9-55.0.6, etc.). I tried all of the versions listed in the boot menu.
None of these worked.
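A common cause with P2V moves of this vintage is that the initrd only contains the physical PERC (megaraid) driver, not the driver for VMware's virtual SCSI controller. A hedged sketch from the rescue chroot, assuming the VM uses the LSI Logic adapter (mptbase/mptscsih on 2.6.9 kernels) and that GRUB boots the ELsmp kernel; verify both against grub.conf before running it:
Code:
chroot /mnt/sysimage
# make sure /etc/modprobe.conf names the virtual controller, e.g.:
#   alias scsi_hostadapter mptbase
#   alias scsi_hostadapter1 mptscsih
vi /etc/modprobe.conf
# rebuild the initrd for the kernel GRUB actually boots (version is an assumption)
mkinitrd -v -f /boot/initrd-2.6.9-55.0.9.ELsmp.img 2.6.9-55.0.9.ELsmp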
I made an image of a server (A). It did not work because it did not see /dev/sda. So I used another CD from an identical server (B), a CD which had worked, then mounted the NFS share and copied the image over. Server (B) had a slightly older version of SLES and is not using LVM, but the point is that it had the appropriate driver on the image CD.
Now the situation is: the new image copied everything over (I can mount it using the live CD), but when I boot up the server it won't see the VG volume group. I believe a SCSI driver is missing. After booting from the live CD, I chrooted into the root and used YaST to add the initrd modules qla2xxx, mptbase, and mptscsih (the other server (B) had those). Still, when I reboot it, it gets stuck saying:
Code:
No volume groups found
unable to find volume group
Waiting for device /dev/vg1/root to appear .... not found
device nodes: ..... ...... .....
No root device found. Exiting to /bin/sh
sh: can't access tty; job control turned off
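On SLES the modules YaST adds end up in INITRD_MODULES in /etc/sysconfig/kernel, and the initrd has to be rebuilt from inside the chroot with the usual bind mounts in place, otherwise mkinitrd cannot see the devices. A hedged sketch from the live CD (module list and VG name are assumptions taken from the post):
Code:
vgchange -ay                         # activate the volume group from the live CD
mount /dev/vg1/root /mnt             # mount /boot too if it is a separate partition
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grep INITRD_MODULES /etc/sysconfig/kernel
# e.g. INITRD_MODULES="mptbase mptscsih qla2xxx" for this hardware (an assumption)
mkinitrd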
I'm working on a kickstart install to automate a lot of the monotony of doing installs. When I use the kickstart install, I get an error on boot saying "File based locking initialization failed" and "no volume groups found". Then it proceeds to boot up with no issues and appears to work just fine. Below is the section of my ks.cfg for creating the partitions: %include /tmp/partition.cfg appears where the partition info usually goes. The user is prompted in the %pre script for how large they would like the partitions to be, and then the script creates the included file with the partition info.
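For reference, the usual shape of that pattern is a %pre script that asks the question on a spare console and writes the partitioning directives into the file that %include pulls in; this is only a generic sketch with made-up mount points and sizes, not the poster's actual script:
Code:
%pre
# talk to the user on tty3 during the install
exec < /dev/tty3 > /dev/tty3 2>&1
chvt 3
echo -n "Size of /home in MB: "
read HOMESIZE
cat > /tmp/partition.cfg <<EOF
clearpart --all --initlabel
part /boot --size=200
part pv.01 --size=1 --grow
volgroup vg00 pv.01
logvol / --vgname=vg00 --name=lv_root --size=8192
logvol swap --vgname=vg00 --name=lv_swap --size=2048
logvol /home --vgname=vg00 --name=lv_home --size=$HOMESIZE
EOF
chvt 1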
I have v10.10 running the following (and I still have no idea how I got this all working): Myth, Squeeze server, an X10 control system for the house, VirtualBox (where I run some Windows stuff), and general file and print service. It has 5 disk drives attached:
1 - system disk
5 - external USB to back up data to
2, 3, 4 - 2TB drives in a RAID config
I just restarted the server and the RAID volume has not mounted. I looked in the Webmin interface and the device is there, but it doesn't seem to have the partitions attached. Again in Webmin, the partitions seem to be on each of the drives. I used CrashPlan to back up the data and music, but not the video, so I would prefer not to have to rebuild the lot if I don't have to, especially given my lack of experience with Linux. Is there an easy(ish) way of putting the RAID back together, to see whether it has just dropped its config and the data is intact (or can be rebuilt from two of the three drives)?
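Before rebuilding anything, it is worth asking mdadm what it can still see; if the superblocks on the three drives are intact, the array usually reassembles in place. A hedged sketch with placeholder device names:
Code:
cat /proc/mdstat                                    # is the md device listed at all?
sudo mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1  # placeholder member partitions
sudo mdadm --assemble --scan                        # try to reassemble from the superblocks
sudo mount /dev/md0 /mnt/raid                       # if assembly succeeded (md0 is an assumption)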
I've been using Fedora and Ubuntu on a standard PC/laptop (one HDD) for some time, but just as a user. Recently I built a PC with an Asus P5K WS motherboard with an on-board RAID adapter. I've got five 750GB SATA drives; however, a hardware restriction only allows me to create a logical volume up to 2TB. So I've created a RAID 5 on 3 drives as SysVol and a RAID 0 on 2 drives for data.
I ran the live CD and started to install the OS. However, I get to the point where the system scans for storage devices. It detects the smaller volume, but not the SysVol I want to install the system on. It offers to install the system on the smaller volume. I deleted the smaller volume and left one RAID 5 volume and 2 drives. Again, the same problem: the system detects those 2 drives, but not the volume.
Is there any restriction in Fedora 12 on installing to certain logical volumes? Or is it more of a hardware problem?
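One way to narrow it down from the live environment is to check whether the BIOS RAID set is visible to the fake-RAID tooling at all; a hedged sketch:
Code:
sudo dmraid -r        # which disks carry BIOS-RAID metadata
sudo dmraid -s        # which sets can be assembled from it
ls /dev/mapper        # activated sets would appear here
sudo blkid            # what block devices and filesystems the kernel sees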
I am unable to hibernate my computer while using Ubuntu and I figured out the reason--Ubuntu is not using my swap partition. I would follow the existing tutorials on setting up a swap partition after installing Ubuntu, but since the volume uses hardware RAID 0, the swap partition is not assigned a /dev/ entry (like /dev/sdxx) and I am not sure how I can mount it.
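On a dmraid-activated fake-RAID set the partitions live under /dev/mapper rather than /dev/sdXN, so the swap partition can usually be found and enabled from there; a hedged sketch with a placeholder mapper name:
Code:
ls /dev/mapper                          # find the RAID set and its partitions
sudo blkid | grep swap                  # identify which one is the swap partition
sudo swapon /dev/mapper/<set-name>p2    # placeholder partition name
# to make it permanent: add it to /etc/fstab, point RESUME at it in
# /etc/initramfs-tools/conf.d/resume, then run: sudo update-initramfs -u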
I'm asking for advice about the setup of a large volume: I have 2 disks of 1TB each and I want to merge them into a single volume/partition. I am in doubt about setting up LVM, a RAID 0 device, or both. I know that RAID 0 has no redundancy, but I will keep a backup on other media, so I can take advantage of striping for I/O performance. On the other hand, LVM lets me easily manage and expand the volume in the near future. Am I correct? Anyway, I don't know if I can even set up both, and in which order. First LVM, then RAID, I suppose.
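If both are used, the striped md device is created first and then handed to LVM as the single physical volume; a hedged sketch with placeholder disk names (LVM can also stripe on its own with lvcreate -i 2, which would avoid md entirely):
Code:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
sudo pvcreate /dev/md0
sudo vgcreate vg_big /dev/md0
sudo lvcreate -n lv_data -l 100%FREE vg_big
sudo mkfs.ext4 /dev/vg_big/lv_data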
I already know of a workaround to fix this problem, but I guess my question is why is this not working as expected? I am using a Windows Server 2008 R2 Active Directory for authentication.
I have run auth-client-config for the ldap profile and pam-auth-update. When running getent passwd, I get a list of both the local users and the users in the active directory (with populated information in the Unix schema extension). When running getent group I get a list of both the local groups and the groups in the active directory (with populated information in the Unix schema extension).
Interestingly enough, though, when I run su DOMAINUSER, after the prompt for the password I get an authentication error. In /var/log/auth.log I can see an entry with pam_ldap: missing "host" in file "/etc/ldap.conf". The SRV records on the DNS servers resolve correctly; I've checked this with nslookup and I have seen the records within my zone file. Obviously, if the ldap.conf file works for getent and the LDAP server resolves from the SRV records, that part is working fine.
The interesting part is that the Windows Server 2008 R2 AD machine shows in the Event Viewer that there was a successful authentication, yet the Ubuntu box says no. When I add the host within the ldap.conf file, everything works: getent and the actual authentication, whether initial login or su.
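For completeness, the workaround amounts to a line or two in /etc/ldap.conf; the hostnames here are placeholders:
Code:
# /etc/ldap.conf
uri ldap://dc1.example.com ldap://dc2.example.com
# or, with the older keyword:
# host dc1.example.com
base dc=example,dc=com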
I am currently trying to set up a Samba domain server. In the Samba HOWTO Collection I found an example file. (Point 188.8.131.52) In the explanation of the example, the author says I need to map UNIX groups to NT groups. He provides a shell script showing how one could do it, but when I copy it and then execute it, I get the error:
Bad option: rid=512 Bad option: rid=513 Bad option: rid=514
The other groups do get mapped; just the Domain Admins, Domain Users, and Domain Guests don't. This is the shell script from the HOWTO:
#!/bin/bash
#### Keep this shell script for later use
net groupmap modify ntgroup="Domain Admins" unixgroup=ntadmins rid=512
net groupmap modify ntgroup="Domain Users" unixgroup=users rid=513
net groupmap modify ntgroup="Domain Guests" unixgroup=nobody rid=514
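For what it's worth, on many Samba 3 releases "net groupmap modify" does not take a rid= option (hence the "Bad option" errors), while "net groupmap add" does; a hedged variant of the same script, assuming the well-known groups have not been mapped yet:
Code:
#!/bin/bash
# hedged sketch: create the well-known mappings instead of modifying them
net groupmap add ntgroup="Domain Admins" unixgroup=ntadmins rid=512 type=d
net groupmap add ntgroup="Domain Users" unixgroup=users rid=513 type=d
net groupmap add ntgroup="Domain Guests" unixgroup=nobody rid=514 type=d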
I have a RAID 5 array formatted with NTFS. My Ubuntu OS is not able to recognize the RAID 5 array, but my Windows 7 OS can. I had this working before, when I installed Ubuntu via Wubi, but now that I've installed it as a dual-boot OS I am having issues trying to mount this RAID 5 volume. So far I have tried reinstalling dmraid, the NTFS config manager, and Storage Device Manager; however, nothing seems to help me recognize my RAID 5 array.
P5N32-E SLI Plus motherboard: RAID 5 array (NTFS) on the NVIDIA RAID chipset that is built into the motherboard.
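On that board the array is NVIDIA fake RAID, so from Ubuntu it has to be activated through dmraid before it can be mounted; a hedged sketch with placeholder mapper names (dmraid's RAID 5 support is limited on some chipsets, so this may be exactly where it fails):
Code:
sudo dmraid -r                       # does Ubuntu see the nvidia metadata on the member disks?
sudo dmraid -s                       # can it describe the RAID 5 set?
sudo dmraid -ay                      # try to activate it
ls /dev/mapper                       # e.g. nvidia_xxxxxxxx and nvidia_xxxxxxxxp1 (placeholders)
sudo mkdir -p /mnt/raid
sudo mount -t ntfs-3g /dev/mapper/nvidia_xxxxxxxxp1 /mnt/raid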
I have a software RAID array (in this test case a mirrored set of two 500GB volumes) and I want to move it to another OS installation on the same hardware. (This is testing in preparation for physically moving two arrays onto a single server.) I had the array up and working (surviving reboots) and wrote a test backup onto it in a folder.
Shut the machine down, re-installed Ubuntu, got it up and running, then installed mdadm, rebooted with the array powered up, and ran mdadm --detail --scan, expecting mdadm to at least find the parts of the array. Instead, I get nothing. I even added -vv to get more verbose output. Nothing.
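One detail that may explain the empty output: --detail --scan only reports arrays that are already assembled, whereas --examine reads the superblocks on the member devices themselves. A hedged sketch with placeholder member names:
Code:
sudo mdadm --examine /dev/sdb1 /dev/sdc1       # placeholder members: do the superblocks survive?
sudo mdadm --examine --scan                    # print ARRAY lines for anything found
sudo mdadm --assemble --scan                   # try to assemble from those superblocks
# once it assembles, record it so it comes back after reboot:
sudo sh -c 'mdadm --examine --scan >> /etc/mdadm/mdadm.conf'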
So I am at the stage of being about to install the basic system, and I am using a derivation of the package management provided by Matthias S. Benkmann. To this end I am using his useradd and groupadd scripts to update the files:
My issue is that when I run these commands (created as part of the temporary system when installing coreutils):
/tools/bin/su linux
# then, as that user:
/tools/bin/groups
(here "linux" is the name of the user), this only reports the user being in the group named after the user, but not in the additional group 'install'. Also, prior to logging in as the user, if I use this command as root:
linux install
This then reports that the user is in the correct groups. Lines from the relevant files look like:
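For comparison, supplementary membership is just a line in /etc/group, and id shows what the system resolves for the user without logging in; a minimal sketch with a made-up GID:
Code:
# expected entry in /etc/group (GID is hypothetical):
#   install:x:1001:linux

# check what actually resolves for the user
id linux
grep '^install:' /etc/group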
I've got a file server with two RAID volumes. The one in question is six 1TB SATA drives in a RAID 6 configuration. I recently thought one drive was becoming faulty, so I removed it from the set. After running some stress tests, I determined my underlying problem hadn't cleared up. I added the disk back, which started the resync. Later on, the underlying problem caused another lock-up. After a hard reboot, the array will not start. I'm out of ideas on how to kick this over.
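When a RAID 6 set refuses to start after an interrupted resync, the usual first step is to compare the event counts in the member superblocks and, only if they are close, force assembly; a hedged sketch with placeholder device names (--force can lose data if the wrong members are picked):
Code:
cat /proc/mdstat
mdadm --examine /dev/sd[b-g]1 | egrep 'Event|State'   # compare event counts across members
mdadm --stop /dev/md0                                 # make sure nothing half-started is holding it
mdadm --assemble --force /dev/md0 /dev/sd[b-g]1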
Right now I have an openSUSE 11.1 server running on a single hard drive. I want to install the HighPoint RocketRAID 1740 card and utilize RAID 10. I wanted to know if the following process would work OK:
1. Image the current hard drive using clonezilla and remove the drive.
2. Install the RAID card with 4 hard drives of the same make and model as the current drive
3. Create the logical volume
4. Restore the image to that volume
Since I am restoring the image to a RAID volume, is that completely transparent to the OS? Or do I need to do a clean install on that volume and reconfigure everything?
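It is usually not fully transparent: the restored system's initrd and bootloader still expect the old single-disk layout, so at minimum the RAID card's kernel module has to be added and the initrd rebuilt before the clone will boot. A hedged sketch of the post-restore fix-up from a rescue system (module name and device are assumptions):
Code:
# after restoring the image and mounting the restored root at /mnt
chroot /mnt
# add the RocketRAID card's module to INITRD_MODULES in /etc/sysconfig/kernel
vi /etc/sysconfig/kernel
mkinitrd
grub-install /dev/<boot-device>    # placeholder: however the new volume is presented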
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with an LSI 8880EM2 RAID card. The external RAID box is being recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID 0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. The result was success, and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising: one logical RAID 0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
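A few checks that may help narrow down where the LUN disappears between the controller and CentOS; the host number is a placeholder:
Code:
lsmod | grep megaraid_sas              # is the LSI driver actually loaded?
dmesg | grep -i megaraid               # did it attach to the 8880EM2?
ls /sys/class/scsi_host/               # find the controller's host number, e.g. host2
echo "- - -" > /sys/class/scsi_host/host2/scan   # force a LUN rescan (host2 is an assumption)
cat /proc/scsi/scsi                    # the DX60 LUN should appear here if detected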