I'm new to CentOS and very new to RAID/hard-drive setup. I have an old HP ProLiant ML150 G3 with no driver CD or anything else, only the server itself. I created a RAID1 array (named SYSTEM) with two 250 GB drives from the controller and installed CentOS 5.2 (later updated to 5.6). The installation is fine. Now I have added two 1 TB drives and created another RAID1 array (named DATI) from the controller. This array is meant to store data files. (Next week I also have to add another RAID1 array for backups.) How can I format the new array and add it to CentOS so I can use it?
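A minimal sketch of one way to do this, assuming the new DATI array shows up as a second Smart Array logical drive such as /dev/cciss/c0d1 (on some controllers it appears as /dev/sdb instead, so check with fdisk -l first):
Code:
fdisk -l                              # find the new logical drive
fdisk /dev/cciss/c0d1                 # create one partition spanning the array
mkfs.ext3 /dev/cciss/c0d1p1           # put an ext3 filesystem on it
mkdir -p /dati
mount /dev/cciss/c0d1p1 /dati
echo '/dev/cciss/c0d1p1 /dati ext3 defaults 1 2' >> /etc/fstab   # mount it at boot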
I have just upgraded to Fedora 14 from an older version. I now have problems mounting my RAID1 array, which was operating correctly until now. This is a software RAID array that was initially built under Fedora 10. The array is md0 and is made of two SATA drives (sdc and sdd), each with only one partition. The underlying filesystem is NTFS. The array is assembled correctly and active, as reported by /proc/mdstat and mdadm -D. When I try to mount the array, I get this:
Code:
[root@Goofy ~]# mount /dev/md0 /mnt/raid
mount: you must specify the filesystem type
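If the filesystem auto-detection broke with the upgrade, explicitly naming the type (and making sure the NTFS driver is installed) may be enough; this is a guess at the fix, not a confirmed one:
Code:
yum install ntfs-3g                   # NTFS driver for Fedora
mount -t ntfs-3g /dev/md0 /mnt/raid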
I tried setting up my own partition table, which apparently didn't go well. I have one CompactFlash disk for Linux and two hard drives for data, which are set up as RAID1. But the RAID drives don't get mounted. This is my first RAID setup.
Code:
me@server:~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
I bought an ICY BOX IB-NAS4220-B a while ago and kept having issues with it (going down and not restarting, being very slow, etc.). Two weeks ago one more issue arose and I couldn't restart or reconnect to the box, so I decided to take the disks out and recover my data to a LaCie 5big. The IcyBox uses software RAID1 and formats the drives as EXT3. Since it is a Linux system, I thought I could easily recover the data from an Ubuntu box, so I installed the latest version, as booting from the CD wouldn't give me satisfactory results. I am now stuck with both 1 TB drives plugged into my Ubuntu machine and can't seem to be able to mount them.
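A rough sketch of how the recovery mount could look on Ubuntu; the member device names (sdb1, sdc1) and the md number are assumptions, and on some NAS boxes the data lives on a later partition, so check the mdadm --examine output first:
Code:
sudo apt-get install mdadm
sudo mdadm --examine /dev/sdb1 /dev/sdc1          # confirm these hold the RAID1 members
sudo mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1
sudo mkdir -p /mnt/icybox
sudo mount -t ext3 /dev/md0 /mnt/icybox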
I have trawled through an extensive number of posts on quite a few forums without getting even a step forward with this.
I have a Fedora 13 x86_64 system with software RAID1 for /boot and / (md0 and md1 respectively); swap is not RAIDed.
I was doing a yum update through the software updater in GNOME and the system froze.
I had to press reset to get any response from the machine.
Since then I have been getting the kernel panic above just after grub starts fedora.
I tried the previous kernel from the previous update and it has the same error.
At worst I am prepared to load the OS again, but there are still some info and configs that I would like to access from the RAID partitions before I go ahead.
Is there any way to access these partitions through a live CD or rescue environment?
Is there a method to bring this install back to life, or am I looking at a reinstall?
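Yes, the arrays can normally be assembled and mounted by hand from a live CD or the installer's rescue mode to copy data off; the mount point below is just an example:
Code:
mdadm --assemble --scan              # assemble md0 and md1 from their superblocks
cat /proc/mdstat                     # confirm both arrays are up
mkdir -p /mnt/sysroot
mount /dev/md1 /mnt/sysroot          # the / array
mount /dev/md0 /mnt/sysroot/boot     # the /boot array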
/dev/md0 (made from sda1 and sdb1): RAID1, the /boot partition
/dev/md1 (made from sda2, sdb2, and sdc2): RAID5, the / partition
Earlier on I had some trouble with my sda drive: it dropped itself from both arrays, breaking the mirroring of the two members that make up the /boot partition. I eventually got everything sorted out and back in sync (I also have GRUB installed to the MBR on both sda and sdb). Things are working fine in that regard, but since then I've had this issue:
During boot-up I get an error message saying it could not mount my /boot partition (whether fstab is set to /dev/md0 or to the UUID). It claims that c9ab814c-47ea-492d-a3be-1eaa88d53477 does not exist!
My fstab:
Code:
[mark@mark-box ~]$ cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Jan 20 16:34:41 2010
[code]....
As far as I know, it isn't necessary for /boot to be mounted at all times, correct? Although, as I understand it, I do need it mounted whenever making kernel changes, correct?
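One quick check is whether the UUID that fstab expects still matches what the filesystem on md0 reports; recreating (rather than reassembling) the array during the earlier repair would have changed it:
Code:
blkid /dev/md0            # filesystem UUID as mount sees it
mdadm --detail /dev/md0   # array UUID, state and members
grep boot /etc/fstab      # compare against the UUID recorded there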
I am using CentOS 5.2 and installing from disc on a machine with Intel Embedded Server RAID Technology. It has two 500 GB SATA drives. During the initial boot process, the installer sees that both devices exist. However, after getting to the screen to partition and configure RAID, it just shows this:
Drive /dev/mapper/ddf1_MegaSR R! #0 (475879 MB) (Model: Linux device-mapper)
I want to do RAID1 so that the disks are mirrored. However, I would expect to see both drives listed. I can select RAID to create RAID partitions, but I think I need to be able to see both drives in order to do this correctly.
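If the intention is to use Linux software RAID rather than the controller's firmware RAID, one option (my assumption, not something from the post) is to delete the BIOS RAID volume and boot the installer with dmraid disabled so both drives show up individually:
Code:
# at the installer boot prompt
linux nodmraid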
Today I made a Samba share out of /Video/Rorschach to easily put files in there from my Windows 7 machine (the plan is to stream from my CentOS server to my HTPC, which hasn't arrived yet). I started to put movies in there. It went just fine for a while, but then I got this message:
[URL]
How is that even possible when df -h looks like this?:
[root@Rorschach Rorschach]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  446G   70G  354G  17% /
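Without seeing the exact error it's hard to say, but two quick checks often explain a "disk full" message despite apparent free space (the path is taken from the post):
Code:
df -h /Video/Rorschach   # confirm which filesystem the share actually lives on
df -i /Video/Rorschach   # check whether that filesystem has run out of inodes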
I have software RAID1 on two physical disks. There are currently four md partitions (md0 ... md3), which are used for things like / and /home. The current size of /home (md3) is starting to fill up, and since / (md1) has plenty of free space, I decided to fix the situation by shrinking the / (md1) partition to free 40 GB of space and then growing the /home (md3) partition by those 40 GB.
I already checked for some info using mdadm and got the following:
Now I need some guidance on HOW exactly I should do this resizing, since it is on RAID partitions.
Would it be best to use resize2fs to modify the filesystem sizes and mdadm to adjust the array sizes? Or could I perhaps handle this even more easily with GParted (in case it supports my RAID)? Has anyone here done similar resizing on software RAID1 partitions?
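A rough outline of the resize2fs/mdadm route, assuming ext3 filesystems and that everything is done from a rescue environment with both filesystems unmounted; the sizes are examples and the repartitioning step only works if the md1 and md3 member partitions sit next to each other on disk, so treat this as a sketch rather than a recipe:
Code:
e2fsck -f /dev/md1
resize2fs /dev/md1 60G                 # example: shrink / to its new, smaller size
mdadm --grow /dev/md1 --size=62914560  # shrink the array to match; --size is in KiB
# repartition sda and sdb so the md1 member partitions shrink and the md3
# members grow, reassemble the arrays, then grow md3 and its filesystem:
mdadm --grow /dev/md3 --size=max
e2fsck -f /dev/md3
resize2fs /dev/md3                     # grow /home to fill the array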
The motherboard currently installed in my PC has a RAID utility (Ctrl+I) at startup that allows creating RAID1. But I already have a system installed with CentOS 5.4. In order to protect my data, I need RAID1. Can I add another hard drive now and have the data mirrored and synced onto both hard drives, as if it had been RAID1 right from the beginning?
Not sure what is going on here. The server is RAID1 through hardware RAID. It was running an unusually high load, so I rebooted it. Now it won't boot up. I am getting these errors after the CentOS boot screen:
sda: Current [descriptor]: sense key: Medium Error
Add. Sense: Address mark not found for data field
end_request: I/O error, dev sda, sector 3040555357
device-mapper: raid1: A read failure occurred on a mirror device.
device-mapper: raid1: All sides of mirror have failed.
OS Version: CentOS 5.3
Motherboard: ASUS M3A78-CM (BIOS v.2003)
I have a single disk running the base OS and just installed two Seagate 500 GB SATA 3.0 drives in a RAID1 set that I would like to use for data storage. The OS sees the drives individually but not the RAID set. Has anyone worked with a similar board and has any idea what I need to do to get the OS to recognize the RAID1 array?
and I can verify that /dev/md2 and /dev/md3 exist. But if I reboot the computer, those two devices are gone. I'm sure I'm overlooking a step, but I can't seem to find what it is. Could someone tell me what I need to do next?
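A common cause on CentOS 5 is that the new arrays were never recorded in /etc/mdadm.conf, so nothing assembles them at boot; one way to record them (check the appended lines by hand afterwards):
Code:
mdadm --detail --scan >> /etc/mdadm.conf
cat /etc/mdadm.conf     # verify the ARRAY lines for md2 and md3 look sane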
I'm trying to load CentOS 5.5 on a new server with an Intel S5500BC motherboard using RAID1. This board has a known problem with RHEL 5.x, and the supplied driver disk has a fix. Here is the download for the driver: [URL]. The ESRT2_RHEL4-5_SLES9-1--11_v.13.21.2010_README file has directions in Section 3.1.3 on how to install the RHEL5x megasr driver. This works. The last step replaces the AHCI driver with the megasr driver (paragraph 15) by loading megasr.13.21.0614.2010-1-rhel50-u4-all.img in a temp directory and then typing "./replace_achi.sh". This step doesn't work, and it is the critical one, as it replaces ahci with megasr in the initrd image.
I installed a distro based on CentOS 5.5 (the FreePBX distro, FYI). It used an automated kickstart script to create md RAID1 arrays out of all the hard drives connected to the machine. Well, I installed from a thumb drive, which the script interpreted as a hard drive and thus included in the arrays. So I ended up with three md arrays (boot, swap, data) that all included the thumb drive. Even better, it used the thumb drive for the GRUB boot, so I couldn't start up without it. I was able to mark the USB drive as 'failed', remove it from each array, and even change GRUB around to boot without the USB drive, but now each of the arrays is marked as degraded:
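If each array was created with three members and now has only two, reducing the expected member count should clear the degraded flag; this assumes a plain three-way RAID1 layout, so check mdadm --detail first:
Code:
mdadm --detail /dev/md0                 # expect "Raid Devices : 3" with one slot removed
mdadm --grow /dev/md0 --raid-devices=2  # repeat for the other two arrays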
I've got a mail server set up on a RAID1 array. I shut down the system to install a CD-ROM drive but forgot to change the master/slave jumper settings (I know, don't say anything) and didn't realize it before CentOS started booting up, so it booted from the hdc drive of the array. I rebuilt the array using mdadm without any apparent issues, but on subsequent boot-ups I get the following error:
There don't seem to be any side effects, but since this didn't happen before, I figure there's probably something I didn't do properly, as I'm fairly new to the Linux world. My RAID array was originally set up by the CentOS installer and looks like this:
hda1 + hdc1 = md0 (boot)
hda3 + hdc3 = md2 (/)
The other partitions are of the same size on each drive and are swap partitions.
PS: The drive is SMART-capable and no errors appear during a self-test.
Edit: Clonezilla also fails to boot properly, although I don't know if that's due to the software RAID array in the first place or to the errors in the filesystem. When only one drive was detected because of the jumpers, it booted properly.
I have installed a 2 TB drive in my dual PIII 866 with 750 MB of RAM. The drive is properly installed and I have configured it with one partition in RAID1. The array loads fine, but when I add the entry to mount /dev/md2 on /data/repository, the following error occurs:
The filesystem size according to the superblock is 488378000 blocks
The physical size of the device is 488377986 blocks
Either the superblock or the partition table is likely to be corrupt
I have run fsck manually with no errors reported. I have removed the partition and rebuilt the array. The array assembles properly and I can manually mount /dev/md2, but as soon as I add the entry to fstab I get dropped to a shell after a reboot. Not sure where to go now.
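One possible reading of the error (an assumption, not a confirmed diagnosis) is that the filesystem ended up 14 blocks larger than the final size of the md device, in which case shrinking it to the device's block count may clear the mismatch; back up anything important first:
Code:
e2fsck -f /dev/md2
resize2fs /dev/md2 488377986   # shrink the filesystem to the device's block count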
I have two WD20EARS hard drives on the way (2 TB green WD disks with 4K sectors) and I'll be installing CentOS 5.5 in RAID1 on them (two partitions: a 16 GB / at the beginning and the rest in its own partition). I read the following thread: [URL]
and it seems that I might have problems with the 4K sectors (Advanced Format in WD lingo). I'm confused as to what exactly to do. I was thinking of downloading the Fedora 14 Live CD, partitioning there, and then switching to CentOS 5.5 to install. Will that work? It seems I want the md 0.90 metadata because it doesn't have the size limit for me (2 TB) and it's stored at the end of the partition, so it avoids alignment issues. Will I be able to make that happen with Fedora 14?
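A sketch of how the partitioning and array creation could look from a newer live environment; the 1 MiB-aligned start sector and the 16 GB split follow the plan above, but the exact numbers and device names are assumptions:
Code:
parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary 2048s 33556479s   # ~16 GB, starts on a 4K boundary
parted -s /dev/sda mkpart primary 33556480s 100%
# repeat for /dev/sdb, then create the arrays with 0.90 metadata
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2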
I've run into a problem with the server freezing on heavy writes.
System
CentOS 5.5 x86_64 with the latest updates and kernel (2.6.18-194.32.1). Also tried 2.6.18-194.26.1 and 2.6.37-2 from ELRepo with the same results.
CPU: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
Memory: 3 x 2 GB DDR3
HDDs: 2 x Western Digital WDC WD1002FBYS-02A6B0
Yesterday I installed a new server with a large partition for my Xen images. This partition is about 930 GB. The installation took ages, and after it finished I looked into why: the software RAID1 I configured was rebuilding the large partition.
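For what it's worth, the rebuild progress and the kernel's resync speed limits can be checked, and the floor raised if the resync itself is the bottleneck; the value below is only an example:
Code:
cat /proc/mdstat                                  # shows resync progress and ETA
cat /proc/sys/dev/raid/speed_limit_min            # current minimum resync speed (KB/s)
echo 50000 > /proc/sys/dev/raid/speed_limit_min   # example: raise the floor to ~50 MB/s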
We had a hard disk crash in our RAID1 web hosting server running CentOS 5 and Plesk. We first realized something was wrong when our main site didn't load but showed MySQL errors. We then found out that the system was in a read-only state, something that had also happened the day before yesterday but that we could fix with an fsck. The system then worked well until around 18 hours later, when it crashed with the same symptoms. So we rebooted the server and wanted to do a filesystem check again, but the HDD wouldn't even load. It was gone. Unfortunately, nobody had realized that the second disk in the system had also stopped working some time ago. Fortunately, we had our main site backed up externally. So we re-installed a fresh box and mounted the two drives on the system. We checked the hard disks: one (the older one) is practically empty, the other has almost only files in 'lost+found', but these are all numbered, with no real filenames.
I'm trying to create a new RAM disk image so that my server loads the raid1 module at start-up. I was following the Red Hat documentation, and it suggested using the following command:
mkinited --with=raid1 inited-raid1-$(uname -r).img $(uname -r)
However, after running this command I'm getting this message:
No Kernel available for 'inited-2.6.18-128.el5'
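For comparison, the command is normally spelled mkinitrd/initrd rather than mkinited/inited; a hedged version of what the documentation appears to describe (whether this explains the error above is a guess) would be:
Code:
mkinitrd --with=raid1 /boot/initrd-raid1-$(uname -r).img $(uname -r)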
So I didn't notice when I set up my CentOS 5.5 server that I left / as RAID0 on md1. All the rest are RAID1. Is there a way I can convert the array to RAID1 without risking data loss? I'm glad I caught this before I set up any other services; I've only set up smb so far...
[root@ftpserver ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1         16G  3.0G   13G  20% /
I created an encrypted volume on top of software RAID1. These are my steps (a sketch of the remaining steps follows the list):
1. Create a logical partition on sda
2. Create a logical partition on sdb (same size)
3. Change the partition type to 'fd' for both partitions
4. Check that both partitions are the same size and type: fdisk -l /dev/sda && fdisk -l /dev/sdb
5. partprobe
6. Make sure there are no remains from previous RAID installations on /dev/sda6 and /dev/sdb6 by running: mdadm --zero-superblock /dev/sda6 and mdadm --zero-superblock /dev/sdb6
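A sketch of how the remaining steps might continue, assuming the array becomes /dev/md3 and the LUKS mapping is named cryptdata (both names are my assumptions, not from the original post):
Code:
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda6 /dev/sdb6
cryptsetup luksFormat /dev/md3           # set the passphrase
cryptsetup luksOpen /dev/md3 cryptdata   # appears as /dev/mapper/cryptdata
mkfs.ext3 /dev/mapper/cryptdata
mkdir -p /mnt/secure
mount /dev/mapper/cryptdata /mnt/secure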
Summary of issue: an EXT4 filesystem won't mount, with the error "mount: unknown filesystem type 'ext4'". Is the issue that ext4 isn't in the kernel? Or is something corrupted? Really perplexed by this. I updated CentOS 5.5 to 5.6 to get ext4 (5.6 is supposed to have full support for ext4). I built several arrays and put the ext4 filesystem on them. All went well until I tried to mount them. BTW, this array (below) is set up as a RAID6 using partition 1 of eight 2 TB drives. Bear with me here; I'm just trying to be complete and not waste your time.
Attempting to mount gives this:
[root]# mount -v /dev/md1 /asc/array1
mount: unknown filesystem type 'ext4'
Note: it does "fake" mount with the -f option (which apparently does everything except the system call):
[root]# mount -f -v /dev/md1
/dev/md1 on /asc/array1 type ext4 (rw,grpquota,usrquote)
e2fsprogs: Package e2fsprogs-1.39-23.el5_5.1.x86_64 already installed and latest version (for CentOS 5.6; CentOS 6.x uses 1.41...)
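A few checks that may narrow this down; whether ext4 support is actually available depends on which kernel is running, so this is a diagnostic sketch rather than a fix:
Code:
uname -r                      # the CentOS 5.6 kernel is 2.6.18-238.x; an older one may lack full ext4
grep ext4 /proc/filesystems   # is ext4 registered with the running kernel?
modprobe ext4 && grep ext4 /proc/filesystems
rpm -q e4fsprogs              # CentOS 5 ships the ext4 userspace tools in a separate package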
We are running CentOS 5 and use two identical Iomega USB disks for backup. We change disks at the end of every week. The first Iomega disk was mounted as /media/Iomega, so all cron jobs refer to this mount point. However, if we (safely) eject this disk and replace it with the second disk, that disk is mounted as /media/Iomega-1. How can I make every disk change automatically mount to /media/Iomega?
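One approach, assuming both drives carry an ext3 filesystem (the device name below is an example): give both disks the same filesystem label and mount by label from fstab, so whichever disk is plugged in ends up on the same mount point:
Code:
e2label /dev/sdb1 IOMEGABACKUP        # run once for each of the two disks
mkdir -p /media/Iomega
echo 'LABEL=IOMEGABACKUP /media/Iomega ext3 defaults,noauto 0 0' >> /etc/fstab
mount /media/Iomega                   # or have the cron job mount it before the backup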
I am trying to mount an NFS share located on my FreeBSD server on a new CentOS 5.4 server, but something is wrong. If I mount the NFS share from bash with "mount -t nfs server:/mnt/fornfs /mnt/nfstemp/", it works properly. But when I put a line in fstab like this: "server:/mnt/fornfs /mnt/nfstemp nfs nfsrsize=8192, wsize=8192, timeo=14, intr, tcp 0 0", it does not mount the share. How can I mount it at startup?
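The quoted fstab entry has spaces inside the options field, which fstab does not allow (each field must be a single whitespace-free token); assuming the first option was meant to be rsize, a corrected line would look something like this:
Code:
server:/mnt/fornfs  /mnt/nfstemp  nfs  rsize=8192,wsize=8192,timeo=14,intr,tcp  0 0
Running "mount -a" afterwards tests the entry without rebooting.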