Hardware :: Dmraid + ICH10R RAID10: RAID Set Was Not Activated?
Jan 29, 2010
I'm trying to mount an existing SATA RAID 10 set on a Gigabyte EX58-DS4 (ICH10R "fake" RAID). dmraid is giving me the following:
Code:
root@CyberShadow:/home/vladimir# dmraid -b
/dev/sde: 1465149168 total, "3QK0782F"
[code]....
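A hedged first step for the "not activated" case is to ask dmraid itself why it skipped the set; the set name below is hypothetical and would come from the dmraid -s listing:
Code:
dmraid -s                             # list discovered sets and their state
dmraid -ay -d -v                      # retry activation with debug/verbose output
dmraid -ay isw_hypothetical_Volume0   # activate a single set by name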
View 1 Replies
Oct 11, 2010
I've googled my problem, but I'm not sure I can find an answer in layman's terms. So here's my noob, simple question; please answer it in semi-noob-friendly terms. I've been trying to install Ubuntu on my desktop PC for a while. I gave it another go with 10.10, but I always have the same problem:
I've got two RAID sets connected to an ICH10R chip and they work fine in Windows (2 Samsung 1TB + 2 Raptors 75GB). During installation, dmraid only sets up the first RAID set (the Samsung array) but not the second one (clean Raptors intended for Ubuntu). I don't have any other installation option; all my SATA connectors are unavailable. So, is there a manual install solution? Can I force dmraid to mount the second RAID set and not the first one? I think I read somewhere that this was a dmraid bug, but I can't find it anymore.
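Subject to the bug mentioned above, a hedged sketch of activating only one set by name (the isw_* names here are hypothetical; dmraid -s prints the real ones):
Code:
dmraid -s                            # list both sets with their names and state
dmraid -ay isw_hypothetical_raptors  # activate only the Raptor set
dmraid -an isw_hypothetical_samsung  # deactivate the Samsung set if it was brought up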
View 8 Replies
Apr 18, 2011
I recently installed openSUSE 11.4 at my desktop (Core i7 930, X58 chipset). All has gone really well, but I have a problem with the RAID 1 array in my system. I have created a RAID 1 array using the ICH10R controller, but in openSUSE I cannot access it. The array only contains one NTFS partition.
In the Partitioner (in YaST) it shows the RAID array as "md126" and the partition as "md126p1". As the mount point it shows "/windows/C *". It's the only storage device that shows the mount point with an asterisk.
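If the array itself is healthy, a hedged manual test from a root terminal (device and mount point as reported by YaST; assumes the ntfs-3g package is installed):
Code:
mkdir -p /windows/C
mount -t ntfs-3g /dev/md126p1 /windows/C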
View 1 Replies
Aug 4, 2010
I want to build a 6xSATA RAID 5 system with one of the disks as a spare. I think this gives me a chance of 2 of the 6 disks failing without losing data. Am I right? Hardware: Intel ICH10R. First I will create a 3xSATA RAID 5, then I will add the spare disk, and after that I will add the other disks. This is what I think I should do:
[code].....
EDIT: I'm thinking of putting LVM on top of the RAID 5. What are the advantages and disadvantages of it (LVM over RAID 5)? Is it safe? My MB (Asus P5Q) has the Intel P45 chipset with ICH10R. What kind of RAID is it? Hardware RAID, fake RAID, BIOS RAID, software RAID? These are the specs of the storage:
[code].....
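If this ends up as Linux software RAID under mdadm rather than an ICH10R BIOS set, a hedged sketch of the create/add/grow sequence (device names are hypothetical; a reshape wants a backup file on a disk outside the array):
Code:
# create the initial 3-disk RAID 5
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
# add the remaining disks; they join as hot spares
mdadm --add /dev/md0 /dev/sdd1 /dev/sde1 /dev/sdf1
# reshape to 5 active devices, leaving one disk as the hot spare
mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-grow.bak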
View 1 Replies
Aug 3, 2010
I have an Intel server board S5000VSA and would like to install Fedora 12 on Intel Embedded RAID 5 (fake RAID). I have already read some stuff about it and found the megasr driver, which seems to be used in RHEL 5. I also managed to compile this driver and load it into the running kernel of the other F12 installation. Afterwards, I created a new initramfs by issuing: dracut --add-drivers megasr <file>
When this initramfs boots and RAID is set in the BIOS, Fedora 12 boots from /dev/dm-1; however, the driver loaded is ahci and *not* megasr (therefore only RAID 1 works). How can I create an initramfs that will load megasr at startup and not ahci? Preferably with dracut...
[Code]....
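A hedged dracut invocation that both adds megasr and keeps ahci out of the image (the image filename is hypothetical; the blacklist boot option exists in newer dracut and its exact name varies by version):
Code:
dracut --add-drivers megasr --omit-drivers ahci /boot/initramfs-megasr.img $(uname -r)
# and/or append to the kernel command line:
#   rd.driver.blacklist=ahci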
View 1 Replies
Nov 27, 2010
I have two Ubuntu 8.04 servers currently. I purchased some 500 GB hard drives and a RAID cage just yesterday and plan to change my current servers over to RAID1 with the new drives. Much like everyone else, I have the nVidia RAID built onto the motherboard that I plan to use. There are many how-tos out there on how to set up RAID using dmraid during the install process - but what would a person do if they are simply changing over to a RAID system? I installed dmraid a few days ago on the server. It seems that even though I have RAID shut off in the BIOS, it saw the one and only drive as a mirrored array. When I rebooted today, the server would not start; it had "ERROR: degraded disks" and so on. Then GRUB tried to start the partition based off the UUID of the drive (which had not changed) and it said "device or resource busy" and would not boot.
This problem was corrected by going into the BIOS and turning on RAID. When I rebooted, the nVidia RAID firmware started and said degraded. I went in there and deleted the so-called mirror, keeping the data intact. I rebooted, disabled the RAID feature in the BIOS, and then the server loaded normally with a message "NO RAID DISKS" - but at least it did boot! So this leads me to believe that trying to turn on a RAID-1 array (just a mirror between two disks) may be a challenge since I already have the system installed. I will be using Partimage to make an image of the current hard drive and then restore it to one of the new drives.
The next question is: is it even a requirement that dmraid is installed? While I understand that the nVidia RAID is fake RAID - if I go into the nVidia RAID controller and set up the mirroring between the two disks before I even restore the files to the new disk, will both drives be mirrored? However, I do think that I'd probably have to have dmraid installed to check the integrity of the array. So I'm just a bit lost on this subject. I definitely do not want to start this server over from scratch - so I'm curious to know if anyone has any guidance on simply making a mirrored array after the install procedure, what kind of GRUB changes will be needed (because of the previous "resource busy" message), and whether installing dmraid is even a requirement.
View 9 Replies
Apr 22, 2011
I'm pretty new to FakeRAID vs. JBOD in a software RAID, and could use some help/advice. I recently installed 11.4 on a brand new server system that I pieced together, using Intel ICH10R RAID on an ASUS P8P67 Evo board with a 2500K Sandy Bridge CPU. I have two RAIDs set up: one RAID-1 mirror for the system drive and /home, and the other consisting of four drives in RAID-5 for a /data mount.
Installing 11.4 seemed a bit problematic. I ran into this problem: [URL]... I magically got around it by installing from the live KDE version with all updates downloaded before the install. When prompted, I specified that I would like to use mdadm (it asked me); however, it proceeded to set up a dmraid. I suspect this is because I have fake RAID enabled via the BIOS. Am I correct in this? Or should I still be able to use mdadm with BIOS RAID set up?
Anyway, to make a long story short, I now have the server mostly running with dmraid installed instead of mdadm. I have read many stories online that seem to indicate that dmraid is unreliable compared to mdadm, especially when used with newer SATA drives like the ones I happen to be using. Is it worth re-installing the OS with the drives in JBOD and then having mdadm configure a Linux software RAID? Are there massive implications one way or another if I do or do not install mdadm or keep dmraid?
Finally, what could I use to monitor the health and status of a dmraid? mdadm seems to have its own monitoring associated with it, from what I saw glancing over the man pages.
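dmraid has no daemon comparable to mdadm --monitor, so monitoring is usually point-in-time queries; a hedged set of checks (smartctl comes from the smartmontools package):
Code:
dmraid -s             # per-set status: ok / broken / inconsistent
dmsetup status        # kernel device-mapper status for each target
smartctl -a /dev/sda  # SMART health of an underlying disk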
View 9 Replies
Oct 10, 2010
About a year ago I bought a new compy and decided to get on-motherboard RAID, and by golly I was gonna use it, even if it wasn't worth it.
Well, after one year, two upgrades, a lot of random problems dealing with RAID support, and a lot of articles read, I have come to my senses.
The problem: I have a fakeraid using dmraid, RAID 1, two SATA hard drives. They are mirrors of each other, including separate home and root partitions. I actually found the method I think I had used here: [URL]
The ideal solution: No need to reinstall, no need of another drive, no need to format.
My last resort: Buy a drive, copy my home directory, start from scratch, copy my stuff over.
PS: if it matters, I'm on 64 bit 10.04 LTS
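One commonly suggested path for exactly this case (hedged: it relies on the RAID 1 members being bootable standalone copies, and assumes current backups): disable RAID in the BIOS, boot a live CD, and erase the fakeraid metadata so the kernel sees a plain disk.
Code:
dmraid -r             # confirm which disks carry fakeraid metadata
dmraid -rE /dev/sda   # erase the metadata block from one member
# leave the second disk untouched until the first one boots cleanly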
View 9 Replies
Sep 9, 2009
I'm planning to set up an Ubuntu file server. I'll be using the 8.04 LTS server edition. The system is probably going to have 4 hard drives; in the end they shall form a software RAID10 system. I'd like to use LVM at some point in order to be able to make snapshots. As I read through some mdadm and LVM docs/tutorials, I could think of two possible setups:
in both cases:
small raid1 of 2 partitions that will form /boot
small raid1 of 2 different partitions as swap space
1. the rest will form 2 large RAID1 arrays, which will be combined into a single virtual drive via LVM
2. make a RAID10 out of the rest with mdadm, then make an LVM volume group consisting of just the one virtual RAID10 device. Are there pros/cons for either solution? Is LVM as powerful as mdadm in striping? Will the first solution produce less overhead?
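A hedged sketch of setup 2 (partition names are hypothetical):
Code:
mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
pvcreate /dev/md2               # turn the array into an LVM physical volume
vgcreate vg0 /dev/md2           # volume group on that single PV
lvcreate -L 100G -n share vg0   # carve volumes; snapshots need free space left in the VG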
View 3 Replies
Nov 17, 2010
I have experienced a failure in one disk of my 4-disk software RAID10 setup, but a straightforward rebuild is thwarted by the fact that two of the other disks are considered non-fresh and hence get kicked out of the array. This of course prevents the array from starting up. Here's more detail about my setup: I have 4 2TB SATA disks in RAID10.
Device Boot Start End Blocks Id System
/dev/sda1 1 243201 1953512001 fd Linux raid autodetect
/dev/sdc1 1 243201 1953512001 fd Linux raid autodetect
[code]....
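The usual hedged recovery for "non-fresh" members is a forced assemble after comparing event counters (member names beyond sda1/sdc1 are hypothetical; stop if the event counts differ wildly):
Code:
mdadm --examine /dev/sda1 /dev/sdc1 /dev/sdd1 /dev/sde1 | grep -i events
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdc1 /dev/sdd1 /dev/sde1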
View 1 Replies
Feb 13, 2010
I am trying to install 11.2 on a system with a RAID-1 /boot partition and a RAID-10 / (root) partition. I have done this before with 10.3 but am hitting a wall with 11.2. In particular, the boot image loads and then halts when it tries to load the RAID-10 root partition, saying that the personality is not loaded. modprobe shows that the raid10 module does not appear to be part of the boot image. I cannot find anything in the installation that will allow me to specify a module to include in the init image. How do I tell the installer to include an additional module?
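On openSUSE of that era, a hedged workaround from a rescue shell chrooted into the installed system is to bake the module in by hand:
Code:
# add raid10 to the INITRD_MODULES line in /etc/sysconfig/kernel, e.g.
#   INITRD_MODULES="... raid1 raid10"
# then rebuild the initrd, or pass the modules directly:
mkinitrd -m "raid1 raid10"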
View 3 Replies
Jun 17, 2010
I am trying to install Ubuntu 10.04 to my RAID 1+0 array. The installer just sees four different HDs:
main-0
main-1
storage-0
storage-1
No size or anything is shown for them. I have set up two RAID arrays, main and storage, each with 4 hard drives (RAID10). The storage array is partitioned into 1TB and 400GB partitions; the 400GB partition is where I want to install Ubuntu. The main array already has Windows. Where do I go next?
root@ubuntu:~/src# dmraid -ay
RAID set "isw_beieefiehj_main" already active
RAID set "isw_beieefiehj_storage" already active
RAID set "isw_beieefiehj_main1" already active
RAID set "isw_beieefiehj_storage1" already active
root@ubuntu:~/src# parted_devices .....
View 2 Replies
Jul 19, 2009
Motherboard: Intel DG45ID
Intel said they support RHEL 5
[URL]
I created a RAID10 and installed the CentOS 5.3 x86 version. It shows an error on the install screen and detects 4 separate hard disks; it cannot detect one logical drive. Does the ICH10R RAID function work on CentOS 5.3?
View 6 Replies
Jan 29, 2010
I haven't tried to install Ubuntu in a while due to some frustrations during a previous install. If memory serves, I couldn't install it onto my ICH10R RAID5 boot array and I didn't want to give that up. So now I am getting the Ubuntu itch again.
here is my hardware:
Asus Maximus II Formula mobo
4GB G.Skill RAM
Intel Q6600 Quad Core
Soundblaster X-Fi Fatal1ty sound card
[code]....
So I downloaded the 64-bit ISO and attempted to install, and very quickly ran into a problem. The installer recognizes the array that has the bootable volume on it, but wants to erase the whole partition and use it all. On my laptop, it was able to install and share the partition with Windows 7.
View 1 Replies
Jun 9, 2010
I plan to install Ubuntu Lucid Lynx 10.04 x64 on my rig:
dual-boot setup, Windows 7 preinstalled
Intel ICH10R RAID0
manual partition setup: 200MB ext3 /boot, 2GB swap, 100GB ext4 /
The GUI installer can see my RAID and allows me to create these partitions manually. But when the install begins I get this error:
Quote: The ext3 file system creation in partition #5 of Serial ATA RAID isw_dgaehbbiig_RAID_0 (stripe) failed
[Code]...
fdisk -l from the live CD shows me the separate HDDs without the RAID0 wrapper (sda and sdb). Also, in the advanced section where I can configure the boot loader, the default is /dev/sda, which is part of the RAID0. I'm choosing isw_dgaehbbiig_RAID_0 as the location for the loader, assuming that this would be the MBR. Am I doing this step right?
View 4 Replies
Feb 1, 2011
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array. Degraded and can't create RAID, auto stop RAID [md1]
I was able to examine the disks though:
Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
code....
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe-small chance I have left to rescue my data, I would like to hear the input of this wise community.
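Before any --create, a hedged sanity pass: --create --assume-clean only recovers data if the device order and layout exactly match the original, so record each member's role and event count first:
Code:
for d in /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2; do
    mdadm --examine "$d" | grep -iE 'events|this'   # 0.90 metadata prints a "this" role line
done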
View 4 Replies
Jul 15, 2010
I have two 150GB WD Raptors striped in an Intel ICH10R RAID0 array. Windows 7 is installed on it in a 100GB partition; there is a 150GB secondary partition, a 100MB system-reserved partition created by Windows 7, and about 25GB of unallocated space to install a Linux distribution on. My problem is the fake ICH10R RAID, which at the moment only works for me with Fedora Core 13: Ubuntu 10.04 breaks it, but I do not like FC13 very much. So I Googled and Googled for installation problems on this fakeraid; dmraid is involved, and there is a bug in it at least in Ubuntu 10.04 and openSUSE 11.2. With 11.3 my RAID is detected -> I get an MD RAID popup.
My question is: how do I properly install 11.3 on the space left on my fake RAID array without breaking anything (although I do have a full backup of my system)? I don't understand anything of the partitioning part of openSUSE in general. Must I activate MD RAID? I tried that once and my NTFS partitions were not displayed, but the RAID was detected with the correct size. I am totally lost with this installer.
View 8 Replies
Mar 12, 2011
I've read many of the postings on ICH10R and grub, but none seem to give me the info I need. Here's the situation: I've got an existing server on which I was running my RAID1 boot/root pair on an LSI-based RAID chip; however, there are system design issues I won't bore you with that mean I need to shift this RAID pair to the fakeraid (which happens to most reliably come up as sda, etc.). So far I've been able to configure the fakeraid pair as 'Adaptec' and build the RAID1 mirror with new drives; it shows up just fine in the BIOS, where I want it.
Using a pre-prepared 'rescue' disk with lots of space, I dd'd the partitions from the old RAID device; then I rewired things, rebooted, fired up dmraid -ay and got the /dev/mapper/ddf1_SYS device. Using cfdisk, I set up three extended partitions to match the ones on the old RAID; mounted them; loopback-mounted the images of the old partitions; then used rsync -aHAX to dup the system and home to the new RAID1 partitions. I then edited /etc/fstab to change the UUIDs, and likewise grub/menu.lst (this is an older system that does not have the horror that is grub2 installed). I've taken a look at the existing initrd and believe it is all set up to deal with dmraid at boot. So that leaves only the grub install. Paranoid that I am, I tried to deal with this:
dmraid -ay
mount /dev/mapper/ddf1_SYS5 /newsys
cd /newsys
[code]....
and I get messages about 'does not have any corresponding BIOS drive'. I tried editing grub/device.map, tried --recheck, and anything else I could think of, to no avail. I have not tried dd'ing an MBR to sector 0 yet, as I am not really sure whether that will kill info set up by the fakeraid in the BIOS. I might also add that the two constituent drives show up as /dev/sda and /dev/sdb, and trying to use either of those directly results in the same error messages from grub. Obviously this sort of thing is in the category of 'kids, don't try this at home', but I have more than once manually put a Unix disk together one file at a time, so much of the magic is not new to me.
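A hedged way to make grub-legacy accept the dmraid node as a BIOS drive is to declare the mapping inside the grub shell instead of relying on device.map (the (hd0,4) partition number assumes ddf1_SYS5 is the fifth entry; adjust to the real layout):
Code:
grub --device-map=/dev/null
grub> device (hd0) /dev/mapper/ddf1_SYS
grub> root (hd0,4)
grub> setup (hd0)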
View 2 Replies
Sep 21, 2010
I have a Dell Inspiron 530 with onboard ICH9R RAID support. I successfully used this with Fedora 9 in a striped configuration. Upon moving to Fedora 13 (fresh/clean install from scratch), I noticed that it is no longer using dmraid; it now appears to be using mdadm. Additionally, I needed to select "load special drivers" (or something close to that) during the install to have it find my array - which I never had to do before with F9. While the install appears to work OK and the system then runs, my array is reporting differences... presumably because the controller is trying to manage what mdadm is also trying to manage. More importantly, I can no longer take a full image and successfully restore it as I could with the dmraid F9 install. Is there any way to force F13 to use the dmraid it successfully used previously?
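Hedged, since the option names shifted between releases: Fedora of that era had installer/boot options intended to keep the newer mdadm/IMSM handling away from Intel BIOS RAID so dmraid picks it up again, e.g.:
Code:
# appended to the installer / kernel command line
noiswmd rd_NO_MDIMSM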
View 1 Replies
Mar 26, 2010
How do I install a Linux distro that doesn't natively support Intel fakeraid, using dmraid and a live disk? The RAID is already set up; it's just that BackTrack can't find it because it doesn't have the right software.
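A hedged outline of the manual route from the live session (package tool and device names depend on the distro):
Code:
apt-get install dmraid   # userland tools, on a Debian-based live CD
modprobe dm_mod          # make sure device-mapper is loaded
dmraid -ay               # activate the BIOS-defined sets
ls /dev/mapper/          # the array nodes to install onto appear here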
View 2 Replies
Nov 13, 2010
1) Use INSSERV to start DMRAID
2) Use MKINITRD to create a new INITRD file which loads the DMRAID module.
Neither solution showed any detail of how to accomplish this, which files to edit, or what order I should use to tackle either. With the second solution, I have another SUSE 11.2 installation on another hard drive. Would it be OK to boot into that and create a new initrd with dmraid activated, or would it be better to break the RAID set, boot into one of the drives, create an initrd for that RAID 1 system, and then recreate the RAID set? The only issue I would see is that fstab, device mapper and grub would need the new pdc_xxxxxxxxxx value, which can be changed from the second installation.
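For reference, a hedged sketch of what the two suggestions usually amount to on SUSE 11.x (the script and feature names are assumptions and may differ by release):
Code:
insserv boot.dmraid   # 1) enable the dmraid boot script, if the package ships one
mkinitrd -f dmraid    # 2) rebuild the initrd with the dmraid feature included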
My system is:
Asus M4A78T-E with fakeraid SB750 controller
AMD Black Edition - AMD Phenom II X4 3.4 GHz Processor
2 x Samsung 1TB drives
Suse 11.2
I have Windows XP on another hard drive purely for overclocking and syncing the 1TB drives, but I can't actually boot into the SUSE system until this issue is sorted out.
View 9 Replies
Apr 2, 2011
After upgrading to 11.04 b1 I cannot activate my softraid. It still works if I boot a 10.10 live CD.
root@bisley:/home/goblin# dmraid -ay
RAID set "pdc_dhifadccdc" was not activated
root@bisley:/home/goblin# dmraid -s
*** Set
[code].....
View 9 Replies
Oct 7, 2010
I'm trying to get a GBB36X (Gigabyte-rebranded JMB36X) RAID0 controller to work in F13, with no success. When I run dmraid, here's what I get:
Quote:
# dmraid -ay -v -d
DEBUG: _find_set: searching jmicron_GRAID
DEBUG: _find_set: not found jmicron_GRAID
DEBUG: _find_set: searching jmicron_GRAID
[code].....
View 2 Replies
Jul 4, 2010
I have 2 partitions on dmraid. I am not able to configure them to mount with YaST; the YaST partitioner gives an error stating that it can't mount a file system of unknown type. I am able to start the dmraid devices manually and mount them manually.
See bug:
https://bugzilla.novell.com/show_bug.cgi?id=619796 for more detailed info.
View 2 Replies
Jul 23, 2010
I've run into problems while installing 11.3 x64. The installer stops at "searching for Linux partitions..."; to solve it I had to go back to 11.2. Anyway, I have installed 11.3 on another HDD (the 3 HDDs in RAID 5 had to be disconnected). When I go into the partitioner (11.3) and its device graph, I see two of the three RAID HDDs fine, with sda1, sda2..., but the third one is shown without any partitions. If I do the same in 11.2, all three HDDs appear with all their partitions in that graph. Does anyone know a solution other than not installing 11.3?
View 2 Replies
Dec 2, 2009
things "seem" to work, first time I've really ever used dmraid (usually mdraid), but I'm worried about this error
dmraid -ay
RAID set "jmicron_STORAGE2 " was activated
The dynamic shared library "libdmraid-events-jmicron.so" could not be loaded:
libdmraid-events-jmicron.so: cannot open shared object file: No such file or directory
Two things: first, why, no matter what I name the RAID in the JMicron BIOS, does it put a bazillion spaces after it? And second, I cannot find the missing lib anywhere, and I've installed all the dmraid* packages.
View 3 Replies
Jan 3, 2010
I have a truecrypt partition on an nvidia fakeraid raid0; therefore there is no data recognizable to Ubuntu's kernel on startup, and I think it is causing a big slowdown every time I boot up, which takes over ten minutes each time. Attached are bootchart results. What I would like to know is how to remove dmraid from the startup process. I can't find any scripts in /etc/rc*.d, and all of the information I can find on these forums is about how to enable dmraid on boot, not disable it.
Manually using dmraid takes nearly no time at all, which is what I did before this kernel upgrade, since Ubuntu didn't put dmraid in the startup process. I can't see why it takes dmraid ten minutes to detect RAID devices now! The only clue I can see is that, in the startup process with the splash screen disabled, it talks about examining inodes on both drives in the array and then goes on to spit out stuff about raid0, raid1, raid2, raid3, raid4 and on and on, so it seems like it is loading RAID arrays that don't exist and never have. If any bootchart gurus have any advice, please feel free to throw me a bone; I am in fear of restarting! Edit: for some reason ubuntuforums shrank my images so that they are unreadable by anybody except those with a large projector.
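Two hedged ways to keep dmraid out of early boot on Ubuntu of that era (verify from a live CD first, given the fear of restarting):
Code:
apt-get remove dmraid   # drop the package and its initramfs/udev hooks
update-initramfs -u     # rebuild the initramfs without them
# or, less invasively, boot with the "nodmraid" kernel parameter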
View 1 Replies
Jan 24, 2011
I'm trying to rescue files from an Iomega NAS device that seems to be corrupted. This is the StorCenter rack-mount server: four 1TB drives, Celeron, 1GB RAM, etc. I'm hoping there's a live distro that would allow me to mount the RAID volume in order to determine whether my files are accessible. Ubuntu 10.10 nearly got me there but reported "Not enough components available to start the RAID Array".
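These StorCenter boxes generally run Linux md underneath, so a hedged sequence from a live session is a degraded, read-only assemble (member names hypothetical):
Code:
mdadm --examine /dev/sd[abcd]1            # inspect the md superblock on each member
mdadm --assemble --scan --readonly --run  # assemble even if degraded, read-only
cat /proc/mdstat                          # check what came up before mounting ro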
View 23 Replies
Oct 18, 2015
I eventually gave up and migrated to mdadm. Works just fine. Having upgraded to jessie and solved one problem
[URL] ....
I find the next one. When I boot into jessie, my RAID device (just a data partition, not /) is not found, causing the boot to fail, as per the problems reported here
[URL] ....
After booting I can mount my RAID device but if it's in the fstab when booting it fails. Also, I notice that some of my lvm device names have changed. After a bit of hunting around I found a couple of solutions pointing to running dmraid as a service during boot and changing the entry for the RAID device in fstab to use the UUID.
[URL] .....
This seems to work. However, it feels like a workaround, and as the LVM device paths for my / and /usr partitions have also changed, I'm wondering if there is a bug here, as mentioned in the second link.
The / and /usr paths changed to /dev/dm-2 and /dev/dm-3 from the /dev/mapper/ form.
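For reference, a hedged example of pinning the data partition by UUID instead of a /dev/dm-* name (mapper name and UUID are placeholders):
Code:
blkid /dev/mapper/your_raid_partition    # prints the filesystem UUID
# /etc/fstab entry keyed to it; nofail keeps a missing array from blocking boot:
#   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  ext4  defaults,nofail  0  2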
View 2 Replies
Mar 29, 2010
I am in one of those terrible situations... My motherboard died and I am left with an nvidia nforce RAID stripe set that I need to get data off. I guess I should have set up that backup regime. Some search results have suggested that dmraid may be my white knight. I have pulled the data off each of the disks (3 of them) into image files (using ddrescue) and created loop devices (/dev/loop[1-3], using losetup).
I am running 32-bit Ubuntu 9.10, btw. When I do a "dmraid -ay" it tells me it found a RAID set but has not activated it. When I do a "dmraid -r" it tells me I have a raid5 array on one of my /dev/sd? devices. dmraid seems to be ignoring my loop devices. Does anyone know if dmraid actually works with loop devices? If it does, is there a way for me to point it directly at the devices and get it to do its auto-magic?
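If dmraid really won't scan loop devices, a hedged fallback is handing the stripe to device-mapper directly, assuming the set is RAID0 as described; every number here is a guess that must match the original BIOS set (disk order, chunk size in 512-byte sectors, nvidia metadata at the end of each disk):
Code:
# usable length: 3 members, each minus a small tail reserved for nvidia metadata
LEN=$(( ( $(blockdev --getsz /dev/loop1) - 2048 ) * 3 ))
# table: start len striped #disks chunk dev off ...  (128 sectors = 64KiB chunk, a guess)
echo "0 $LEN striped 3 128 /dev/loop1 0 /dev/loop2 0 /dev/loop3 0" | dmsetup create nv_stripe
kpartx -av /dev/mapper/nv_stripe   # map any partitions inside the assembled stripe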
View 1 Replies