CentOS 5 :: /dev/mapper And CentOS 5 Installation With RAID1?
Sep 28, 2009
I am using CentOS 5.2. I am installing from disc on a machine with Intel Embedded Server Raid Technology. It has two 500 GB SATA drives. During the initial boot process, it sees that these two devices exist. However, after getting into the screen to partition and configure RAID, it just shows this:
Drive /dev/mapper/ddf1_MegaSR R! #0 (475879 MB) (Model: Linux device-mapper)
I want to do a RAID1 so that the disks are mirrored. However, I would expect to see both drives listed. I can select RAID to create RAID partitions, but I think I need to be able to see both drives in order to do this correctly.
I am trying to set up a H/W RAID-1 array but I am unsuccessful. I am trying to get the partitions installed as /dev/md0, /dev/md1, but it keeps going for /dev/mapper/isw...The reason is that I have R1Soft backup and it needs to hook the partitions as seen in /proc/partitions from /dev, not /dev/mapper/isw. I have tried to boot the installation with various options but nothing works!
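What usually causes this is the installer picking up the controller's BIOS-RAID (isw/dmraid) metadata. One commonly suggested workaround, assuming plain Linux software RAID (/dev/mdX) is acceptable instead of the fake-RAID mapping, is to turn off dmraid detection at the installer boot prompt, and if the signature is still picked up, to erase it from the disks (destructive to the BIOS-RAID set; device names here are examples):
boot: linux nodmraid
# if the isw metadata is still detected, it can be erased from a rescue shell
# (this removes the BIOS-RAID signature from the disks):
dmraid -r -E /dev/sda
dmraid -r -E /dev/sdb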
I am trying to prevent dm_mod and friends from loading during install. I am installing by PXE boot with NFS and kickstart files over HTTP. This works fine for other machines. OK, I have a Dell PowerEdge 2900 III with 8GB RAM and a PERC 6/i RAID controller. The RAID has 8 x 1TB drives in it, so the total size of the device is >2TB. There is also a SATA drive, which is NOT on the RAID controller.
The problem is, I want to install CentOS on the SATA drive. The SATA drive comes up as a device-mapper drive with a big crazy device name. This is OK, but the problem is, when the system goes to boot, it just says 'missing operating system'. I can't boot from the RAID because it is large enough to require a GPT partition table, and CentOS says it can't boot from a GPT partition. The SATA drive would work FINE if I could just prevent the damn device mapper from loading, so that I could install on the SATA drive in ordinary SATA mode.
I have tried re-squashing stage2.img with an /etc/modprobe.conf containing alias dm_mod off and such, with no luck. I also tried adding an /etc/modprobe.d/blacklist file; that did not work either. I also tried putting blacklist=dm_mod in the kickstart file, but that seems to be a Fedora-only option and it kills my install. Any ideas on preventing dm_mod et al. from loading at install time?
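Rather than blacklisting dm_mod itself (anaconda needs device-mapper for other things), a commonly suggested approach is to pass nodmraid on the installer's kernel command line so it ignores the BIOS-RAID metadata on the SATA drive; with PXE that goes in the append line of the pxelinux config. A sketch only - the label, paths and kickstart URL are placeholders:
# pxelinux.cfg entry (illustrative; paths and URL are placeholders)
label centos5
  kernel vmlinuz
  append initrd=initrd.img ks=http://server/ks.cfg nodmraid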
Trying to do a yum update to get everything to the latest; towards the end it says this:
[Code]...
How do I get around this? I tried yum clean all, then yum update again, but it did the same thing. I had other deps missing on other servers, but yum clean all fixed them. I can't find anyone else who's had this specific issue, nor an rpm called 'device-mapper-event' or the other things that are missing - I'm kinda stuck!
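Hard to say more without the full output, but two things worth checking are whether device-mapper-event is visible in an enabled repo at all, and whether an exclude line or stale metadata is hiding it. A sketch (package and repo names assumed to be the stock CentOS ones):
yum clean all
yum list device-mapper-event lvm2            # should show up in base/updates
yum --disableexcludes=all update device-mapper\* lvm2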
I installed NFS and portmap to export a folder, /usr/local, to another PC. ftp is the server's hostname and ws01 is the client's hostname. I edited /etc/exports with the following line: /usr/local ws01(rw,root_squash) *(ro)
I restarted the portmap and nfs services. From the client, I try to check the connection to the server with the command showmount -e ftp, and the result is: mount clntudp_create: RPC: Port mapper failure - RPC: Unable to receive
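That RPC error usually means the client cannot reach portmap on the server at all - typically a firewall or a stopped service rather than a problem with /etc/exports. A short checklist, using the hostnames from the post:
# on the server (ftp)
service portmap status
service nfs status
exportfs -v                 # confirm /usr/local is actually exported
iptables -L -n              # port 111 (portmap) must be reachable from ws01
# on the client (ws01)
rpcinfo -p ftp              # should list portmapper, mountd and nfs
showmount -e ftp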
I run many 5.2 virtual machines inside VMware ESX 3.5. I've updated a couple of test VMs from 5.2 to 5.3. The thing that stands out is that during boot time, the sequence gets to this stage:
device-mapper: dm-raid45: initialized v0.2429
Waiting for driver initialization.
Here it takes about three times longer than the previous CentOS 5.2 (about 9 seconds instead of 3), whereas inside a physical box the wait in 5.3 is the same as it was in 5.2.
Today I made a Samba share out of /Video/Rorschach to easily put files in there from my Windows 7 machine (the plan is to stream from my CentOS server to my HTPC, which hasn't arrived yet). I started to put movies in there. It went just fine for a while, but then I got this message:
[URL]
How is that even possible when df -h looks like this?:
[root@Rorschach Rorschach]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  446G   70G  354G  17% /
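One hedged guess: df -h only shows block usage, so writes can still fail if the filesystem has run out of inodes, or if the share actually lives on a different (smaller) filesystem than /. Worth checking:
df -i                      # inode usage per filesystem
df -h /Video/Rorschach     # confirm which filesystem the share really sits on
ls /var/log/samba/         # the smbd log for that client should show the real error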
I'm new to CentOS and very new to RAID/HD setup. I have an old HP ProLiant G3 ML150. I have no driver CD or anything else, only the server. I created a RAID1 array (named SYSTEM) with 2 HDs of 250GB from the controller and installed CentOS 5.2 (updated afterwards to 5.6). The installation is OK. Now I have added 2 HDs of 1TB each and created another RAID1 array (named DATI) from the controller. This RAID is to store data files. (Next I have to add another RAID1 for backup, but that is for next week.) How can I format and add it to CentOS so I can use it?
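Assuming the controller presents the new DATI array to Linux as a second block device (on many ProLiant controllers it appears as /dev/cciss/c0d1, on others as /dev/sdb - check dmesg or fdisk -l to see which; the names below are examples), a minimal sketch of formatting and mounting it would be:
fdisk -l                                  # identify the new 1TB device
fdisk /dev/sdb                            # create one partition on it (here /dev/sdb1)
mkfs.ext3 /dev/sdb1
mkdir /dati
mount /dev/sdb1 /dati
echo "/dev/sdb1  /dati  ext3  defaults  1 2" >> /etc/fstab   # make it permanent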
I have software RAID 1 on two physical discs. There are now 4 md partitions (md0 ... md3), which are used for / and /home, among others. /home (md3) is now getting full, and since / (md1) has plenty of free space, I decided to fix the situation by shrinking the / (md1) partition to free 40 GB of space and then growing the /home (md3) partition by those 40 GB.
I already checked for some info using mdadm and got the following:
Now I would need some support on HOW exactly I should do this resizing, since it is on RAID partitions.
Would it be good to use resize2fs to modify the filesystem sizes and mdadm to configure the partition sizes? Or could I perhaps get through this even more easily by using GParted (in case it supports my RAID)? Has anyone here done similar resizing on software RAID1 partitions?
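Roughly, yes: resize2fs handles the filesystems and mdadm --grow handles the array sizes, but the order matters and the repartitioning step in between is the risky part. A very rough sketch, to be run from a rescue/live environment with both filesystems unmounted and a backup in hand (the 20G figure is only an example):
e2fsck -f /dev/md1
resize2fs /dev/md1 20G                     # 1. shrink the / filesystem first
mdadm --grow /dev/md1 --size=20971520      # 2. then shrink the md1 array (size in KiB)
# 3. repartition the underlying disks so the md3 member partitions gain the freed
#    space; if that space sits before them this means moving partitions, which
#    GParted can do but plain fdisk cannot - this is the risky step
mdadm --grow /dev/md3 --size=max           # 4. grow md3 into the enlarged partitions
resize2fs /dev/md3                         # 5. grow the /home filesystem to fill it
e2fsck -f /dev/md3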
The motherboard currently installed on my PC has a RAID Utility (Ctrl+I) at the startup that allow creating RAID1. But I already have a system installed with CentOS 5.4. In order to protect my data, I need RAID1. Can I add another Hard Drive now and have the data mirrored and synced onto both hard drives as if it was in RAID1 right from the beginning?
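The BIOS RAID utility generally cannot mirror an already-installed single disk non-destructively. What people usually do instead is Linux software RAID: partition the new disk to match, build degraded RAID1 arrays on it, copy the system across, and finally add the original disk into the mirrors. A very rough outline, assuming the existing disk is /dev/sda and the new one /dev/sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb        # clone the partition table to the new disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # degraded mirror
mkfs.ext3 /dev/md0
# copy the system onto /dev/md0, point fstab and grub at it, reboot onto the array,
# then pull the original partition into the mirror and let it sync:
mdadm /dev/md0 --add /dev/sda1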
Not sure what is going on here. The server is RAID1 through hardware RAID. It was running an unusually high load, so I rebooted it. Now it won't boot up. I am getting these errors after the CentOS boot screen:
sda: Current [descriptor]: sense key: Medium Error
Add. Sense: Address mark not found for data field
end_request: I/O error, dev sda, sector 3040555357
device-mapper: raid1: A read failure occurred on a mirror device.
device-mapper: raid1: All sides of mirror have failed.
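Those messages point at real media errors on sda rather than a software problem; if the "hardware RAID" is actually the motherboard's fake RAID, both halves of the device-mapper mirror sit behind that same failing disk path. A hedged first step from a rescue CD:
smartctl -a /dev/sda      # check reallocated/pending sector counts and the error log
smartctl -a /dev/sdb      # check the second disk too, if it shows up separately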
OS Version: CentOS 5.3
Motherboard: ASUS M3A78-CM (BIOS v.2003)
I have a single disk running the base OS and just installed 2 x Seagate 500GB SATA 3.0 drives in a RAID1 set that I would like to use for data storage. The OS sees the drives individually but not the RAID. Has anyone worked with a similar board and has any ideas what I need to do to get the OS to recognize the RAID1 array?
and I verify that /dev/md2 and /dev/md3 exist. But if I reboot the computer, these two devices are gone. I'm sure I am overlooking a step but I can't seem to find what it is. Could someone tell me what I need to do next?
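When md devices vanish after a reboot, the usual causes on CentOS 5 are missing ARRAY lines in /etc/mdadm.conf and member partitions whose type is not set to "fd" (Linux raid autodetect). A sketch of what is often enough:
mdadm --detail --scan >> /etc/mdadm.conf      # record md2/md3 with their UUIDs
# set each member partition's type to 'fd' with fdisk (t, then fd), and if the
# arrays are needed early in boot, rebuild the initrd so it knows about them:
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)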
I'm trying to load CentOS 5.5 on a new server with an Intel S5500BC motherboard using RAID 1. This board has a known problem with RHEL 5.x, and the supplied driver disk has a fix. Here is the download for the driver: [URL] The ESRT2_RHEL4-5_SLES9-1--11_v.13.21.2010_README file has directions in Section 3.1.3 on how to install the RHEL5.x megasr driver. This works. The last step replaces the AHCI driver with the megasr driver (paragraph 15) by putting megasr.13.21.0614.2010-1-rhel50-u4-all.img in a temp directory and then typing "./replace_achi.sh". This step doesn't work, and it is the critical one, as it replaces ahci with megasr in the initrd image.
I installed a distro based on CentOS 5.5 (the FreePBX distro, FYI). It used an automated kickstart script to create md RAID1 arrays out of all the hard drives connected to the machine. Well, I installed from a thumb drive, which the script interpreted as a hard drive and thus included in the arrays. So, I ended up with three md arrays (boot, swap, data) that included the thumb drive. Even better, it used the thumb drive for the grub boot, so I couldn't start up without it. I was able to mark the USB drive as 'failed', remove it from each array, and even change grub around to boot without the USB drive, but now each of the arrays is marked as degraded:
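If the arrays were created with three members and now only have two, they will stay flagged as degraded until md is told that two devices is the intended count. A sketch - the md device names here are assumptions, so substitute whatever /proc/mdstat shows for the boot, swap and data arrays:
mdadm --grow /dev/md0 --raid-devices=2
mdadm --grow /dev/md1 --raid-devices=2
mdadm --grow /dev/md2 --raid-devices=2
cat /proc/mdstat              # should now show [UU] with no missing member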
I've got a mail server set up in a RAID1 array. I shut down the system to install a CD-ROM drive but forgot to change the master/slave settings (I know, don't say anything) and didn't realize it before CentOS started booting up, so it booted the hdc drive from the array. I rebuilt the array using mdadm without any apparent issues, but on subsequent bootups I get the following error:
There don't seem to be any side effects to this, but since it didn't happen before, I figure there's probably something I didn't do properly, since I'm fairly new to the Linux world. My RAID array was originally set up by the CentOS install software and is laid out like this:
hda1 + hdc1 = md0 (boot)
hda3 + hdc3 = md2 (/)
The other partitions are of the same size on each drive and are swap partitions.
PS : The drive is SMART capable and no errors appear during a self-test.
edit: Clonezilla also fails to boot properly, although I don't know if it's due to the software RAID array in the first place or to the errors in the filesystem. When only one drive was detected because of the jumpers, it booted properly.
I have installed a 2TB drive in my dual PIII 866 with 750MB RAM. The drive is properly installed and I have configured it with one partition in RAID1. The array loads fine, but when I add the fstab entry to mount /dev/md2 on /data/repository, the following error occurs:
The filesystem size according to the superblock is 488378000 blocks
The physical size of the device is 488377986 blocks
Either the superblock or the partition table is likely corrupt
I have run fsck manually with no errors reported. I have removed the partition and rebuilt the array. The array assembles properly and I can manually mount /dev/md2, but as soon as I add the entry to fstab I get dropped to a shell after a reboot. Not sure where to go now?
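The complaint is that the filesystem thinks it is 14 blocks bigger than the device it sits on (488378000 vs 488377986). A commonly suggested fix, assuming the data on /dev/md2 is backed up or expendable, is to shrink the filesystem to the real device size and only then add the fstab entry:
e2fsck -f /dev/md2
resize2fs /dev/md2 488377986      # resize the fs to the device's actual block count
e2fsck -f /dev/md2
# then in /etc/fstab:
/dev/md2  /data/repository  ext3  defaults  1 2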
I have 2 WD20EARS hard drives on the way (2 TB green WD disks with 4k sectors) and I'll be installing Centos 5.5 in RAID1 on them (2 partitions, one 16 GB / at the beginning and the rest in its own partition). I read the following thread: [URL]
and it seems that I might be having problems with the 4k sectors (Advanced Drive Format in WD lingo). I'm confused as to what exactly to do. I was thinking of downloading Fedora 14 Live CD and partitioning there and then switching to Centos 5.5 to install. Will that work? Seems I want the md 0.9 metadata because it doesn't have the space limit for me (2 TB) and it's stored at the end of the partition so it avoids alignment issues. Will I be able to make that happen with Fedora 14?
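Partitioning from a newer live CD such as Fedora 14 is indeed one way to get the partitions started on 1 MiB (sector 2048) boundaries, which keeps the 4k drives aligned, and the arrays can then be created with 0.90 metadata either there or from the CentOS side - CentOS 5's mdadm defaults to 0.90 anyway. A sketch of the checks and commands (device names /dev/sda and /dev/sdb are assumptions):
fdisk -lu /dev/sda            # each partition's Start sector should be divisible by 8
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2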
I've run into a problem where the server freezes on heavy writes.
System
CentOS 5.5 x86_64 with latest updates and kernel (2.6.18-194.32.1). Also tried 2.6.18-194.26.1 and 2.6.37-2 from ELRepo with the same results.
CPU: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
Memory: 3 x 2GB DDR3
HDDs: 2 x Western Digital WDC WD1002FBYS-02A6B0
Yesterday I installed a new server with a large partition for my Xen images. This partition is about 930GB. The installation took ages, and after it finished I looked into why: the SoftRAID1 I configured was rebuilding the large partition.
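That is expected: the initial sync of a ~930GB mirror simply takes a while, and it continues in the background after the install. It can be watched, and the rebuild speed ceiling raised within reason (the value below is only an example):
cat /proc/mdstat                              # shows rebuild progress and an ETA
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
sysctl -w dev.raid.speed_limit_max=200000     # KB/s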
We have had a hard disk crash in our RAID1 webhosting server running CentOS 5 and Plesk. We first realized something was wrong when our main site didn't load but showed MySQL errors. We then found out that the system was in a read-only state, something that had also happened the day before yesterday but which we could fix with an fsck. The system then worked well until around 18 hours later, when it crashed with the same symptoms. So we rebooted the server and wanted to do a filesystem check again, but the HDD wouldn't even load. It was gone. Unfortunately nobody had realized that the second disk in the system had also not been working for some time. Fortunately we had our main site backed up externally. So we installed a fresh box and mounted the two drives on the system. We checked the hard disks. One is practically empty (the older one); the other has almost only files in 'lost+found', but these are all "numbered", with no real filenames.
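The numbered entries in lost+found are orphaned inodes that fsck recovered; the filenames are gone but the contents may still be identifiable. A small sketch, assuming the old disk is mounted read-only under /mnt/old and "example.com" stands in for a string you know was in your data:
file /mnt/old/lost+found/* | less                      # guess file types from content
grep -rl "example.com" /mnt/old/lost+found 2>/dev/null # find files containing a known string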
I just tried to install with Ubuntu 10.04 AMD64 Alternate on RAID1 and Encryption but after reboot the screen just stays black.
My system is an AMD Athlon 64 X2 Dual Core 4200+ on an Abit AN-M2HD motherboard, with 2 HDs of 250.1 GB each. I split the HDs into:
* 50GB for /
* 200GB for /home
* 1GB for swap
All get RAID1. /home is encrypted with a passphrase (Twofish 256, cbc-essiv:sha256), and swap is encrypted with a random key (Blowfish 128, cbc-essiv:sha256).
Where can I check RAID and hardware compatibility?
I just did a complete reinstall of my fileserver. After the installation the system cannot autostart my RAID1. For some reason it seems that only one of the identical disks is found by mdadm; Disk Utility states "not running, partially assembled". If I immediately stop the RAID in Disk Utility, I can restart it and mount it. Some diagnostics - mdadm.conf:
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
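Worth checking whether both member partitions still carry matching RAID superblocks and, if so, whether the array simply needs the out-of-date member re-added. A sketch, with /dev/md0, /dev/sda1 and /dev/sdb1 standing in for the actual array and members:
cat /proc/mdstat
mdadm --examine /dev/sda1 /dev/sdb1     # both should report the same array UUID
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
mdadm /dev/md0 --re-add /dev/sdb1       # if one member is just out of date, let it resync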
I'm trying to create a new initrd image to get my server to load the raid1 module on startup. I was following the Red Hat documentation and it suggested using the following command: mkinitrd --with=raid1 inited-raid1-$(uname -r).img $(uname -r). However, after running this command I'm getting this message: No kernel available for 'inited-2.6.18-128.el5'
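For reference, the form the Red Hat documentation describes is the one below; mkinitrd prints "No kernel available for ..." whenever its last argument does not exactly match a directory under /lib/modules, so it is worth comparing the two:
mkinitrd --with=raid1 /boot/initrd-raid1-$(uname -r).img $(uname -r)
ls /lib/modules/              # the kernel-version argument must match one of these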
I fail to install openSUSE 11.3 with RAID-1. What I do: Partitioning -> Expert. sda1 type RAID, do not format/mount; sda2 / (root) type RAID, do not format/mount; sda3 /home type RAID, do not format/mount. Clone disk to sdb. RAID -> add md126p1 type swap, mount to swap; add md126p2 type ext4, mount to / (root); add md126p3 type ext4, mount to /home. Bootloader: GRUB, boot from MBR enabled, boot from / disabled. After installation the system does not boot, and GRUB reports an error that the specified filesystem cannot be found.
I recently set up an old desktop computer on my home network for use as a file server. I installed Ubuntu 10.04 Server on it and I'm slowly learning how to make it do what I want (I'm used to the GUI, so this is a bit of a jump for me). My plan is to put two 1.5TB hard drives in the computer (there's already a 160GB with the OS installed) and put them in a RAID1 configuration. This, as I understand it, will write all data to both drives, creating two identical drives. I'll only have 1.5TB total space for backup, but if one of the hard drives dies, I can replace it without losing any data.
Hopefully I understand all that correctly. Now, my questions are about how to actually set a RAID configuration up. I don't have a controller or anything, so I'm assuming I could do it off the two SATA ports on the motherboard (the 160GB is IDE) in software RAID. Once I have the two hard drives set up in RAID1, how do I allow the house to access the file storage? We're all on Windows 7 computers. If one of the hard drives fails, how will I know? Can I set it up so I receive some sort of notification? Also, how does the replacement process work; can I just pop the broken drive out and put a similar one in? Do I need to readjust any settings after replacing one?
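A rough sketch of the pieces, assuming the two new drives show up as /dev/sdb and /dev/sdc: an mdadm RAID1, an ext4 filesystem on it, a Samba share for the Windows 7 machines, and mdadm's built-in mail alerting for failures. Replacing a dead drive is essentially: pull it, fit a similar one, partition it the same way, and --add it back so the mirror resyncs.
# create the mirror (one partition per disk, type 'fd')
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
sudo mkdir /srv/storage && sudo mount /dev/md0 /srv/storage
# share it to Windows - in /etc/samba/smb.conf:
#   [storage]
#   path = /srv/storage
#   read only = no
# get mailed on failures - in /etc/mdadm/mdadm.conf (address is a placeholder):
#   MAILADDR you@example.com
# after swapping in a replacement disk:
sudo mdadm /dev/md0 --add /dev/sdb1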
I just installed Ubuntu Server Edition on my computer (brand new, no OS) and finished the installation. In the terminal I used apt-get to install ubuntu-desktop for a desktop interface. In my rig, I have two 500GB HDDs. I set them up through my computer's BIOS as RAID1 drives, yet as I understand it I still need to configure Ubuntu software RAID for it to work correctly. Unfortunately, I already partitioned my drives! I used the easy way (guided with LVM or whatever) and let it do it for me. Now, RAID1 is very important to me! Is there any way to repartition the disks to use RAID1, or do I need to wipe my computer and reinstall Ubuntu?
So I didn't notice when I set up my CentOS 5.5 server that I left / as RAID 0 on md1. All the rest are RAID 1. Is there a way I can modify the array to RAID 1 without a risk of data loss? I'm glad I caught this before I set up any other services. I've only set up smb so far...
[root@ftpserver ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1         16G  3.0G   13G  20% /
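First check what md1 really consists of; on CentOS 5's mdadm there is no in-place RAID 0 to RAID 1 reshape, so if md1 genuinely stripes across two partitions the usual route is to build a degraded RAID1 elsewhere, copy / across and rebuild, rather than converting. The inspection step:
cat /proc/mdstat
mdadm --detail /dev/md1       # shows the RAID level and which partitions are members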
2 x 300GB HDD in RAID1 - OS partition
2 x 1TB HDD in RAID1 - data partition
Both arrays were created in the BIOS-like setup.
During the installation of Fedora I "checkboxed" only the 300GB OS RAID and used it for the installation. Everything is OK with it. After the installation finished and I rebooted, I tried to initialize my second RAID1, the 1TB one. I was able to; it was OK, I created one max-size filesystem there and put some data in it. After a reboot this filesystem didn't want to mount for some reason, so I commented it out in fstab, and to my surprise, when I got into Linux, the partition for it doesn't exist!!! Here is some data:
My guess would be that I am not fully and properly initializing the second array (it's marked as auto-read-only in cat /proc/mdstat), but I'm not sure how to proceed.
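"auto-read-only" by itself is harmless - it clears on the first write, or with mdadm --readwrite - so the more likely issue is that the array is not being assembled and recorded for the next boot. A sketch, with /dev/md127 standing in for whatever name the data array actually gets:
mdadm --readwrite /dev/md127              # clear the auto-read-only state
mdadm --examine --scan                    # list every array found in the superblocks
mdadm --assemble --scan                   # assemble anything not yet running
mdadm --detail --scan >> /etc/mdadm.conf  # record it so it comes back after a reboot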
I have built a small test server. I am planning on using this machine as an email and web server to test out its hosting capabilities; in the future I will build a larger and better-equipped version.
AMD Athlon X2 2.0GHz
2 x 160GB SATA drives (hardware RAID 1, done through the motherboard)
2GB RAM (dual channel)
Like I said, a small test server. I am trying to install 10.04 Server Edition. When I get to the point of partitioning, it asks me to activate the RAID, so I do. I get through the guided partitioning and get ready to write the filesystem to the drives, and the screen goes red and says that it has failed. On a side note, this works if I install on the same drives without any RAID configuration.
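The red failure screen on fake-RAID setups is often the installer's dmraid layer getting in the way. Two things commonly tried: switch to a console (Alt+F2) during the install and look at what the RAID set reports, and, if motherboard RAID is not strictly required, disable it in the BIOS and let the installer build a Linux software RAID1 during partitioning instead. The check from the installer console:
dmraid -r      # which disks carry BIOS-RAID metadata
dmraid -s      # status of the detected RAID set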