Ubuntu Installation :: 10.10 Install: Dmraid Doesn't See Second Disk Array On ICH10R?
Oct 11, 2010
I've googled my problem but I'm not sure I can find an answer in layman's terms. So here's my simple noob question; please answer it in semi-noob-friendly terms. I've been trying to install Ubuntu for a while on my desktop PC. I gave it another go with 10.10, but I always have the same problem:
I've got two RAID sets connected to an ICH10R chip and they work fine in Windows (2 Samsung 1 TB drives + 2 75 GB Raptors). Upon installation, dmraid only sets up the first RAID set (the Samsung array) but not the second one (clean Raptors intended for Ubuntu). I don't have any other installation option; all my SATA connectors are unavailable. So, is there a manual install solution? Can I force dmraid to mount the second RAID set and not the first one? I think I read somewhere that this was a dmraid bug, but I can't find it anymore.
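For what it's worth, dmraid can activate a single named RAID set from the command line, so one hedged workaround (the set names below are placeholders; list the real ones first) looks like this:
Code:
sudo dmraid -s                        # show all discovered RAID sets and their names
sudo dmraid -ay isw_xxxxxxxx_Raptors  # activate only the named set (placeholder name)
sudo dmraid -an isw_xxxxxxxx_Samsung  # deactivate the other set if it is already up (placeholder name)
ls /dev/mapper/                       # the activated set and its partitions appear here
The installer's manual partitioning could then be pointed at the /dev/mapper device for the Raptor set.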
I've read many of the postings on ICH10R and GRUB but none seem to give me the info I need. Here's the situation: I've got an existing server on which I was running my RAID1 boot/root pair on an LSI-based RAID chip; however, there are system design issues I won't bore you with that mean I need to shift this RAID pair to the fakeraid (which happens to most reliably come up as sda, etc.). So far I've been able to configure the fakeraid pair as 'Adaptec' and build the RAID1 mirror with new drives; it shows up just fine in the BIOS where I want it.
Using a pre-prepared 'rescue' disk with lots of space, I dd'd the partitions from the old RAID device; then I rewired things, rebooted, fired up dmraid -ay and got the /dev/mapper/ddf1_SYS device. Using cfdisk, I set up three extended partitions to match the ones on the old RAID; mounted them; loopback-mounted the images of the old partitions; then used rsync -aHAX to duplicate the system and home to the new RAID1 partitions. I then edited /etc/fstab to change the UUIDs, and likewise grub/menu.lst (this is an older system that does not have the horror that is GRUB2 installed). I've taken a look at the existing initrd and believe it is all set up to deal with dmraid at boot. So that leaves only the GRUB install. Paranoid as I am, I tried to deal with this:
dmraid -ay
mount /dev/mapper/ddf1_SYS5 /newsys
cd /newsys
[code]....
and I get messages about 'does not have any corresponding BIOS drive'. I tried editing grub/device.map, tried --recheck and anything else I could think of, to no avail. I have not tried dd'ing an MBR to sector 0 yet, as I am not really sure whether that will kill info set up by the fakeraid in the BIOS. I might also add that the two constituent drives show up as /dev/sda and /dev/sdb, and trying to use either of those directly results in the same error messages from GRUB. Obviously this sort of thing is in the category of 'kids, don't try this at home', but I have more than once manually put a Unix disk together one file at a time, so much of the magic is not new to me.
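For the 'no corresponding BIOS drive' error, the usual GRUB-legacy workaround is to map a BIOS drive to the device-mapper node and install from the grub shell rather than grub-install. A hedged sketch (the partition number and mapping are assumptions based on the setup described above):
Code:
# edit /newsys/boot/grub/device.map so that (hd0) points at the array, e.g.:
#   (hd0) /dev/mapper/ddf1_SYS
grub --device-map=/newsys/boot/grub/device.map
grub> root (hd0,4)    # assuming /boot lives on ddf1_SYS5, i.e. the first logical partition
grub> setup (hd0)
grub> quit
Since the grub shell writes through whatever device (hd0) is mapped to, this avoids touching /dev/sda or /dev/sdb directly.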
I'm trying to rescue files from an Iomega NAS device that seems to be corrupted. This is the StorCenter rack-mount server: four 1 TB drives, Celeron, 1 GB RAM, etc. I'm hoping there's a live distro that would allow me to mount the RAID volume in order to determine if my files are accessible. Ubuntu 10.10 nearly got me there but reported "Not enough components available to start the RAID Array".
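That message usually means mdadm found fewer members than it needs for a clean start. A hedged sketch of what is often tried from a live session (the partition numbers and md device name are guesses, and --force should only be used once you are sure which member is stale):
Code:
sudo mdadm --examine /dev/sd[abcd]1                       # see which partitions carry RAID metadata and their event counts
sudo mdadm --assemble --scan --run                        # try a normal assembly first
sudo mdadm --assemble --force /dev/md0 /dev/sd[abcd]1     # force assembly if one member's event count lags
cat /proc/mdstat                                          # confirm the array came up (possibly degraded)
Many of these NAS boxes also put LVM on top of the md device, so vgscan and vgchange -ay may be needed before anything is mountable.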
I had done a new Lucid install to a 1 TB RAID 1 array using the alternate CD a few weeks back. I messed up that system trying to get some hardware working that Lucid doesn't have drivers for yet, so I gave up on it and reinstalled to a single 80 GB disk that I now want to move over to the RAID array.
I moved all of the existing files on the array to a single folder, then copied all of the folders from the 80 GB disk over to the array with permissions and symlinks (minus the contents of /proc and /sys, which I created empty).
These are the commands I used:
Quote:
cp -a -d -R -v -t /media/raid_array /b*
cp -a -d -R -v -t /media/raid_array /d*
cp -a -d -R -v -t /media/raid_array /e*
cp -a -d -R -v -t /media/raid_array /h*
[Code]....
I tried to change fstab to use the 689a... UUID for root, but when I try to boot, it's still trying to open /dev/disk/by-uuid/412d...
So then I booted from the single disk again and chrooted into the array, then ran update-initramfs -u. I got 3 "grep: /proc/modules: No such file or directory" errors, and "cat: /proc/cmdline: No such file or directory"- so I created directory /proc/modules, created an empty file /proc/cmdline, and ran the initramfs update again. Then I tried to shut down, which hung (probably because I was doing all of this from a terminal window in Gnome), so I killed the power after a couple of minutes.
It's still trying to use /dev/disk/by-uuid/412d... to boot.
What am I missing? I assume I just have to change the UUID to mount as root, but I don't know how.
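Chrooting into the array with the kernel filesystems bind-mounted (rather than creating fake /proc entries) is the usual way to get the initramfs and GRUB regenerated with the array's real UUID. A hedged sketch, assuming the array shows up as /dev/md0 (adjust the device name to whatever the alternate-CD install created):
Code:
sudo mount /dev/md0 /mnt
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt
blkid                      # note the array's real UUID (the one starting 689a...)
# make sure /etc/fstab uses that UUID for /
update-initramfs -u        # now succeeds because /proc and /sys are real
update-grub                # regenerates grub.cfg with root=UUID=689a...
grub-install /dev/sda      # or whichever disk(s) the BIOS boots from
exit
The update-grub step is what replaces the old 412d... UUID on the kernel command line.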
I want to build a 6xSATA RAID 5 system with one of the disks as a spare. I think this gives me a chance of 2 of the 6 disks failing without losing data. Am I right? Hardware: Intel ICH10R. First I will create a 3xSATA RAID 5, then I will add the spare disk, and after that I will add the other disks. This is what I think I should do.
[code].....
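A hedged mdadm sketch of that sequence (assuming plain Linux software RAID rather than the ICH10R BIOS RAID, and that the six data disks are /dev/sdb through /dev/sdg):
Code:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mdadm --add /dev/md0 /dev/sde                 # joins as a hot spare
mdadm --add /dev/md0 /dev/sdf /dev/sdg        # two more spares for now
mdadm --grow /dev/md0 --raid-devices=5        # reshape: two spares become active, one stays spare
watch cat /proc/mdstat                        # the reshape takes a long time
# older mdadm versions may want --backup-file=/root/md0-grow.backup on the --grow step
Note the spare only protects against failures that happen one at a time; two simultaneous failures would still lose the RAID 5.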
EDIT: I'm thinking of putting LVM on top of the RAID 5. What are the advantages and disadvantages of that (LVM over RAID 5)? Is it safe? My MB (Asus P5Q) has the Intel P45 chipset with ICH10R. What kind of RAID does it have: hardware RAID, fake RAID, BIOS RAID, software RAID? These are the specs of the storage.
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. It has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8 TB, as per the attached screenshot. Now with the following drives, I expected something more like:
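The shrinking total suggests Disk Utility is building a striped/mirrored set sized by the smallest member. For a true span where the capacities simply add up, a hedged alternative from the command line (device names are assumptions) is an mdadm linear array:
Code:
sudo mdadm --create /dev/md0 --level=linear --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0          # one filesystem across the concatenated capacity
cat /proc/mdstat                 # the reported size should be roughly the sum of the members
LVM (pvcreate/vgcreate/lvcreate) would give the same concatenation with a bit more flexibility for adding disks later.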
I have a RAID 5 array, md0, with three full-disk (non-partitioned) members: sdb, sdc, and sdd. My computer hangs during the AHCI BIOS phase if AHCI is enabled instead of IDE and these drives are plugged in. I believe it may be because I'm using the whole disk, and the AHCI BIOS expects an MBR to be on the drive (I don't know why it would care).
Is there a way to convert the array to use members sdb1, sdc1 and sdd1, partitioned MBR with 0xFD RAID partitions?
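There is an in-place approach, but it only works if the new partition is at least as large as the array's current component size, so check that first; a hedged sketch, one member at a time:
Code:
mdadm --detail /dev/md0                           # note "Used Dev Size": the new partition must be at least this big
mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
# create a single MBR partition of type fd (Linux raid autodetect) spanning /dev/sdb, e.g. with fdisk
mdadm /dev/md0 --add /dev/sdb1                    # the array rebuilds onto the partition
watch cat /proc/mdstat                            # wait for the resync to finish before touching the next disk
If the partition turns out slightly too small, the filesystem and then the array would have to be shrunk first (resize2fs, then mdadm --grow --size), which is beyond this sketch; the array also runs degraded during each rebuild, so a backup beforehand is prudent.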
When I try to install 10.04 on my hard drive, I get all the way to the "Prepare Partitions" menu and there are no disks listed and all buttons are grayed out. I am installing on an EVGA X58 motherboard with Intel ICH10, and I have AHCI enabled. Does Ubuntu support AHCI? Do I need drivers to install?
What is the best way to install Windows and Linux on a two-hard-disk array? With fakeraid there are no problems in Windows, but the Linux installation is almost impossible (I've tried, unsuccessfully...). With software RAID, would it be impossible to share files between Windows and Linux? And finally, hardware RAID is possible, but cheap controllers have low performance. Is there any other way (apart from spending a lot of $$ on an Adaptec controller)?
I'm not sure whether this is the correct forum for this, but it is the best place I can see at the moment, so I'll give it a try. Please redirect me if I'm mistaken. Running SUSE 11.2, I have a RAID-5 device mounted, and a straight disk. I want to copy data from the straight disk to the array, using several methods: with Dolphin, with cpio. Copying runs for some time, and sometimes one or more files are indeed copied, but after a short while (sometimes half a minute, sometimes 10 minutes or more) I get a
Message from syslogd@linux-wrth at Jan 9 22:44:03 ...
kernel:[ 381.602651] Oops: 0000 [#1] PREEMPT SMP
Message from syslogd@linux-wrth at Jan 9 22:44:03 ...
Motherboard: Intel DG45ID. Intel says it supports RHEL 5.
[URL]
I created a RAID10 set and installed the CentOS 5.3 x86 version. It shows an error on the install screen and detects 4 separate hard disks; it cannot detect the one logical drive. Does the ICH10R RAID function work on CentOS 5.3?
I received the following error when I got home from work today. If this was a Windows environment, my first inclination would be to boot off my DVD and then run a chkdsk on the drive to flag any bad sectors that might exist. But there's a complication for me.
Code:
This message was generated by the smartd daemon running on:
host name: LinuxDesktop
DNS domain: [Empty]
The following warning/error was logged by the smartd daemon:
Device: /dev/sdc [SAT], 1 Currently unreadable (pending) sectors
Device info: WDC WD5000AAKS-65V0A0, S/N:WD-WCAWF2422464, WWN:5-0014ee-157c5db9a, FW:05.01D05, 500 GB
For details see host's SYSLOG.
You can also use the smartctl utility for further investigation. The original message about this issue was sent at Sun Feb 14 13:43:17 2016 MST. Another message will be sent in 24 hours if the problem persists.
From gnome-disks:
Code:
Disk is OK, 418 bad sectors (28° C / 82° F)
I did a bit of reading and it seems that most people suggest using badblocks first to get a list of bad blocks from the drive and save it to a file, then using e2fsck to mark the blocks listed in the badblocks file as bad on the hard drive. My problem here is that this drive is part of a RAID5 array that hosts my OS. I wanted to confirm whether this is still the correct process: boot to my live Debian disk, stop the RAID array if it's active, then run the badblocks and e2fsck commands on the drive in question and reboot.
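One caveat worth mentioning: with an md RAID5, the ext filesystem lives on the array device, not on /dev/sdc itself, so e2fsck against the raw member would not do what it does on a plain disk. A hedged alternative that is often used instead (the md device name is an assumption; check /proc/mdstat for yours):
Code:
# as root:
smartctl -t long /dev/sdc                    # full surface self-test of the suspect member
echo check > /sys/block/md0/md/sync_action   # read every sector; md rewrites unreadable blocks from parity
cat /proc/mdstat                             # watch the check's progress
cat /sys/block/md0/md/mismatch_cnt           # non-zero afterwards means parity inconsistencies were found
Rewriting the affected sectors is what normally clears "currently unreadable (pending)" counts, since the drive remaps them on write.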
How do I install a Linux distro that doesn't natively support Intel fakeraid, using dmraid and a live disk? The RAID is already set up; it's just that BackTrack can't find it because it doesn't have the right software.
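A hedged sketch of the usual approach from a Debian/Ubuntu-based live session such as BackTrack (this assumes apt and the dmraid package are available to the live environment):
Code:
sudo apt-get install dmraid       # pull in the fakeraid mapper tools
sudo modprobe dm_mod              # make sure device-mapper is loaded
sudo dmraid -ay                   # activate the BIOS-defined RAID sets
ls /dev/mapper/                   # the array and its partitions should now appear here
The installer (or a manual mount plus chroot) can then be pointed at the /dev/mapper devices instead of the raw sda/sdb members.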
I've planned to install Ubuntu Lucid Lynx 10.04 x64 on my rig: dual-boot setup, Windows 7 preinstalled, Intel ICH10R RAID0, manual partition setup (200 MB ext3 /boot, 2 GB swap, 100 GB ext4 /). The GUI installer can see my RAID and allows me to create these partitions manually. But when the install begins I get this error:
Quote:
The ext3 file system creation in partition #5 of Serial ATA RAID isw_dgaehbbiig_RAID_0 (stripe) failed
[Code]...
fdisk -l from the live CD shows me the separate HDDs without the RAID0 wrapper (sda and sdb). Also, in the advanced section where I can configure the boot loader, the default is /dev/sda, which is part of the RAID0. I'm choosing isw_dgaehbbiig_RAID_0 as the location for the loader, assuming that this would be the MBR. Am I doing this step right?
I have two 150GB WD Raptors striped in an Intel ICH10R RAID0 array. Windows 7 is installed on it in a 100GB partition; there is a 150GB secondary partition, a 100MB system-reserved partition created by Windows 7, and about 25GB of unallocated space to install a Linux distribution. My problem is the fake ICH10R RAID, which at the moment only works for me with Fedora Core 13: Ubuntu 10.04 breaks it, but I do not like FC13 very much. So I Googled and Googled for installation problems on this fakeraid; dmraid is involved, and there is a bug in it at least in Ubuntu 10.04 and openSUSE 11.2. With 11.3 my RAID is detected -> I get an MD RAID popup.
My question is: how do I properly install 11.3 on the space left on my fake RAID array without breaking anything (although I do have a full backup of my system)? I don't understand anything of the partitioning part of openSUSE in general. Must I activate MD RAID? I tried that once and my NTFS partitions were not displayed, but the RAID was detected with the correct size. I am totally lost with this installer.
I have two Ubuntu 8.04 servers currently. I purchased some 500 GB hard drives and a RAID cage just yesterday and plan to change my current servers over to RAID1 with those drives. Much like everyone else, I have the nVidia RAID built onto the motherboard that I plan to use. There are many how-tos out there on setting up RAID using dmraid during the install process - but what would a person do if they are simply changing over to a RAID system afterwards? I installed dmraid a few days ago on the server. It seems that even though I have RAID shut off in the BIOS, it saw the one and only drive as a mirrored array. When I rebooted today, the server would not start; it had "ERROR: degraded disks" and so on. Then GRUB tried to start the partition based off the UUID of the drive (which was not changed) and it said "device or resource busy" and would not boot.
This problem was corrected by going into the BIOS and turning on RAID. When I rebooted, the nVidia RAID firmware started and said degraded. I went in there and deleted the so-called mirror, keeping the data intact. Rebooted, disabled the RAID feature in the BIOS, and then the server loaded normally with a message "NO RAID DISKS" - but at least it did boot! So this leads me to believe that trying to turn on a RAID-1 array (just a mirror between two disks) may be a challenge since I already have the system installed. I will be using Partimage to make an image of the current hard drive and then restore it to one of the new drives.
The next question is: is it a requirement that dmraid even be installed? While I understand that the nVidia RAID is fake-raid - if I go into the nVidia RAID controller and set up the mirroring between the two disks before I even restore the files to the new disk, will both drives be mirrored? However, I do think that I'd probably have to have dmraid installed to check the integrity of the array. So, I'm just a bit lost on this subject. I definitely do not want to start this server over from scratch - so I'm curious to know if anyone has any guidance on simply making a mirrored array after the install procedure, what kind of GRUB changes will be needed (because of the previous "resource is busy" message), and whether installing dmraid is even a requirement.
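As a side note on the phantom "degraded" mirror: dmraid can show which disks still carry fakeraid metadata, which is usually what causes that. A hedged sketch (the erase step only removes the metadata block, but treat it as destructive and double-check the device name):
Code:
sudo dmraid -r               # list disks that carry fakeraid metadata signatures
sudo dmraid -s               # show the sets dmraid would assemble from them
# sudo dmraid -rE /dev/sdX   # erase stale metadata from a disk that should NOT be treated as fakeraid
If the nVidia metadata on both new disks is intact, dmraid (plus the dmraid hook in the initramfs) is what actually assembles and mirrors the pair for Linux; the BIOS firmware alone does not keep the disks in sync.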
2) Use MKINITRD to create a new INITRD file which loads the DMRAID module.
Neither solution showed any detail of how to accomplish this, which files to edit, or in what order I should tackle either. For the second solution, I have another SUSE 11.2 installation on another hard drive. Would it be OK to boot into that and create a new initrd with dmraid activated, or would it be better to break the RAID-set, boot into one of the drives and create an initrd for that RAID 1 system, then recreate the RAID-set? The only issue I would see is that fstab, device mapper and GRUB would need the new pdc_xxxxxxxxxx value, which can be changed from the second installation.
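If the second installation route is chosen, the initrd generally has to be rebuilt from inside the RAID system's own root so it picks up that system's configuration. A hedged sketch (the mapper name keeps the pdc_xxxxxxxxxx placeholder, and the partition suffix is an assumption):
Code:
dmraid -ay                                       # bring up the pdc_ RAID set from the second install
mount /dev/mapper/pdc_xxxxxxxxxx_part2 /mnt      # the RAID system's root partition (placeholder suffix)
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt mkinitrd                             # SUSE's mkinitrd rebuilds the initrd for the installed kernels
That avoids having to break and recreate the RAID-set, so the pdc_ value would stay the same.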
My system is:
Asus M4A78T-E with fakeraid SB750 controller
AMD Black Edition - AMD Phenom II X4 3.4 GHz processor
2 x Samsung 1TB drives
SUSE 11.2
I have Windows XP on another hard drive purely for overclocking and syncing the 1TB drives, but I can't actually boot into the SUSE system until this issue is sorted out.
I have a Dell Inspiron 530 with onboard ICH9R RAID support. I have successfully used this with Fedora 9 in a striped configuration. Upon moving to Fedora 13 (a fresh/clean install from scratch), I've noticed that it is no longer using dmraid. It now appears to be using mdadm. Additionally, I need to select "load special drivers" (or something close to that) during the install to have it find my array - which I never had to do with F9. While the install appears to work OK and then subsequently run, my array is reporting differences... presumably because it is trying to manage what mdadm is also trying to manage. More importantly, I can no longer take a full image and successfully restore it as I could with the dmraid F9 install. Is there any way to force F13 to use the dmraid it successfully used previously?
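One thing that may be worth testing (I have not verified it on F13, so treat it as an assumption): anaconda has had a noiswmd boot option intended to disable the mdadm handling of Intel BIOS RAID and fall back to dmraid.
Code:
# At the installer boot menu, press Tab on the install entry and append:
noiswmd
If the option is honored, the array should come up under /dev/mapper as it did on F9.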
I have 2 partitions on dmraid. I am not able to configure them to mount with YaST; the YaST partitioner gives an error stating that it can't mount a file system of unknown type. I am able to start the dmraid devices manually and mount them manually.
See bug:
https://bugzilla.novell.com/show_bug.cgi?id=619796 for more detailed info.
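Until the bug is fixed, one workaround is to bypass YaST and give the filesystem type explicitly in /etc/fstab (the device names, mount points and types below are hypothetical; list the real ones with dmraid -s and ls /dev/mapper):
Code:
# /etc/fstab -- hypothetical dmraid partition names, mount points and filesystem types
/dev/mapper/isw_xxxxxxxx_Volume0_part1   /data     ext3   defaults   0 2
/dev/mapper/isw_xxxxxxxx_Volume0_part2   /backup   ext3   defaults   0 2
This only helps if the dmraid sets are activated early enough in boot (the boot.dmraid service on openSUSE), which matches being able to start and mount them manually.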
I've run into problems while installing 11.3 x64. The installer stops at "searching for Linux partitions..."; to get past it I had to go back to 11.2. Anyway, I have installed 11.3 on another HDD (the 3 HDDs in RAID 5 had to be disconnected). When I go into the partitioner (11.3) and its device graph, I see two of the three RAID HDDs fine, with sda1, sda2, etc., but the third one shows no partitions. If I do the same in 11.2, all three HDDs appear with all their partitions in that graph. Does anyone know a solution other than not installing 11.3? :)
About a year ago I bought a new compy and decided to get on-motherboard RAID, and by golly I was gonna use it, even if it wasn't worth it.
Well, after one year, two upgrades, a lot of random problems dealing with RAID support, and a lot of articles read, I have come to my senses.
The problem: I have a fakeraid using dmraid, RAID 1, two SATA hard drives. They are mirrors of each other, including separate home and root partitions. I actually found the method I think I had used here: [URL]
The ideal solution: No need to reinstall, no need of another drive, no need to format.
My last resort: Buy a drive, copy my home directory, start from scratch, copy my stuff over.
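Since a fakeraid RAID 1 member carries an ordinary filesystem (the metadata sits at the end of the disk), the in-place route usually boils down to removing the metadata and repointing the boot configuration at the plain disks. A hedged sketch from a live CD, not tested on this exact setup and destructive to the RAID metadata:
Code:
sudo dmraid -an                  # deactivate the fakeraid sets
sudo dmraid -r                   # list the member disks and their metadata format
sudo dmraid -rE /dev/sda         # erase the fakeraid metadata from one member
sudo blkid /dev/sda1 /dev/sda2   # note the UUIDs of the now-plain partitions
# then point /etc/fstab and GRUB at those UUIDs, rebuild the initramfs without dmraid,
# and disable RAID in the BIOS
Keeping the second, still-mirrored disk untouched until the first one boots cleanly gives a built-in backout path.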
I'm pretty new to FakeRAID vs JBOD in a software RAID, and could use some help/advice. I recently installed 11.4 on a brand new server system that I pieced together, using Intel ICH10R RAID on an ASUS P8P67 Evo board with a 2500K Sandy Bridge CPU. I have two RAIDs set up: one RAID-1 mirror for the system drive and /home, and the other consists of four drives in RAID-5 for a /data mount.
Installing 11.4 seemed a bit problematic. I ran into this problem: [URL]... I magically got around it by installing from the live KDE version with all updates downloaded before the install. When prompted, I specified I would like to use mdadm (it asked me); however, it proceeded to set up a dmraid. I suspect this is because I have fake RAID enabled via the BIOS. Am I correct in this? Or should I still be able to use mdadm with BIOS RAID set up?
Anyway, to make a long story short, I now have the server mostly running with dmraid installed vice mdadm. I have read many stories online that seem to indicate that dmraid is unreliable compared to mdadm, especially when used with newer SATA drives like the ones I happen to be using. Is it worth re-installing the OS with the drives in JBOD and then having mdadm configure a Linux software RAID? Are there massive implications one way or another whether or not I install mdadm or keep dmraid?
Finally, what could I use to monitor the health and status of a dmraid? mdadm seems to have its own monitoring associated with it, from what I saw glancing over the man pages.
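For dmraid there is no monitoring daemon equivalent to mdadm --monitor, so checks tend to be manual or scripted; a hedged sketch of what that usually looks like:
Code:
sudo dmraid -s                   # per-set status (e.g. "ok" vs "broken"/"inconsistent")
sudo smartctl -H /dev/sda        # SMART health of each member disk
cat /proc/mdstat                 # only relevant if md/mdadm is managing the set instead of dmraid
Cron-ing the first two and mailing the output is a common low-tech substitute for mdadm's monitoring mode.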
I have trouble installing Ubuntu on my desktop machine. I had Mint before, reinstalled Win7, and wanted to upgrade to the newest release as I hadn't used Linux for some time.
Basically fdisk can see the disk but the installer doesn't.
I found it is impossible to format a partition during the Ubuntu 10.04 installation. My storage configuration is as follows: 1 TB (500 GB x 2) AHCI RAID 0 (so-called fake RAID) covering the 4 partitions below.
/dev/mapper/pdc_dgbbagea1   9621688  5872752   3260168  65%  /
/dev/mapper/pdc_dgbbagea4 945587172 95673304 802259056  11%  /home
/dev/mapper/pdc_dgbbagea3   9698380  1363364   7846240  15%  /opt
Partition 2 is the swap partition, and the root partition was originally ext3.
Since there is not enough space for upgrading, I am trying to format the root partition and install a completely fresh OS. After booting the system from the Live or Alternate disk, I try to switch the root partition from ext3 to ext4 and format it. However, the formatting process always fails after a couple of tries. Even when I quit the installation and use the "Disk Utility" tool to check and adjust partition information, it reports that the device is busy.
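"Device is busy" on a fakeraid partition usually means the live session itself is holding it open (auto-mounted partitions or an activated swap). A hedged checklist before retrying the format (device names taken from the df output above):
Code:
sudo swapoff -a                              # the live session often enables the swap partition on the array
sudo umount /dev/mapper/pdc_dgbbagea1        # unmount anything the desktop auto-mounted
sudo umount /dev/mapper/pdc_dgbbagea3 /dev/mapper/pdc_dgbbagea4
sudo dmsetup ls                              # check which mapped devices still exist
With everything released, the installer or an mkfs.ext4 run against the /dev/mapper partition normally goes through.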
I am trying to build a new array after adjusting TLER on my disks, which permanently changed some of the drives' sizes. I am not sure if the following inconsistencies are related to the newly mismatched drive sizes.
Using:
Code:
mdadm --create --auto=md --verbose --chunk=64 --level=5 --raid-devices=4 /dev/md1 /dev/sdd /dev/sde /dev/sdf /dev/sdg
nets me (build time was two full days):
[Code]....
On a side note, since I'm recreating my array from scratch, I was wondering if anyone here knows of any optimized settings I could use. I've got 3 TB of data to transfer, so lots of test material.
These are Western Digital first-generation 2TB Green drives (WD20EADS-00R6B0) with the WDidle3 fix applied and TLER=ON. They are pre-Advanced Format (i.e. not 4K).
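On the tuning question, a hedged set of commonly suggested knobs for a 4-disk RAID5 with a 64K chunk (the values are starting points, not measured optima):
Code:
echo 8192 > /sys/block/md1/md/stripe_cache_size    # larger stripe cache usually helps RAID5 write throughput
blockdev --setra 8192 /dev/md1                     # bigger read-ahead for large sequential transfers
mkfs.ext4 -E stride=16,stripe-width=48 /dev/md1    # 64K chunk / 4K blocks = 16; 3 data disks x 16 = 48
mdadm --detail /dev/md1                            # confirm the chunk size and layout actually in use
The mkfs values only apply if the filesystem is being created fresh, which fits the rebuild-from-scratch plan.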
Like it says in the title, I am thinking it shouldn't be this hard to install the RAID1 array in my brand new PC. Here is what is happening. I have two brand new 1TB drives that I am attempting a new, fresh install of 10.10 on (in fact, the entire box is new). I am attempting to use the alternate desktop installer so that I can have access to manual partitioning (which is required to set up RAID 1, correct?).
I tried to use the guide here: [URL]... I followed the steps, but when I got to the very end (after selecting and creating the MDs) I get an error message stating that there is no root file system defined. I went back and checked all the steps and I am sure I followed everything in the guide.
Here are some quirks (not sure if they are bugs or not). In step 5 of the disk partitioning, it says to select the bootable flag and set it to yes (I am assuming). I press Enter over that option, the screen flashes really quickly to a progress bar, but then comes back to the options screen and it still says the bootable flag is off. No matter how many times I do it, it says "off".
Also, and here is the bigger problem I think - the guide says to select the free space on each drive and then select "Automatically partition the free space", which I do, and it comes back looking formatted accordingly - a 975.6 GB ext4 / and a 24.6 GB swap. No problem there.
BUT - whenever I do the same thing to the second drive, the partitions on the first seem to disappear. Meaning, it doesn't say free space, and it has two partitions listed, but the / and the swap (the last items in each row) have moved to the second drive's partitions. I am not sure if this is how it is supposed to be, since the pictures in the linked guide do not show what it looks like after that. This is driving me crazy; I have to have it set up in RAID 1 and am unsure as to what it is I am missing.
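For reference, the end state the guide is driving at corresponds roughly to the following, which could also be done by hand from a shell on the installer (tty2); a hedged sketch assuming each disk carries a large partition for / and a small one for swap, both marked "physical volume for RAID":
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # will hold /
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # will hold swap
mkfs.ext4 /dev/md0
mkswap /dev/md1
The "no root file system defined" error typically means ext4 / ended up assigned to the raw partitions rather than to the MD device, so the thing to check in the partitioner is that / and swap are set on the RAID devices, not on sda/sdb.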
I'm trying to switch to a new RAID5 array but can't get it to boot. My disks:
/dev/sda: new RAID member
/dev/sdb: Windows disk
/dev/sdc: new RAID member
/dev/sdd: old disk, currently using /dev/sdd3 as /
The RAID array is /dev/md0, which is comprised of /dev/sda1 and /dev/sdc1. I have copied the contents of /dev/sdd3 to /dev/md0, and can mount /dev/md0 and chroot into it. I did this:
Code:
sudo mount /dev/md0 /mnt/raid
sudo mount --bind /dev /mnt/raid/dev
sudo mount --bind /proc /mnt/raid/proc
[code]....
This completes with no errors, and /boot/grub/grub.cfg looks correct [EDIT: No it doesn't. It has root='(md/0)' instead of root='(md0)']. For example, here's the first entry:
Code:
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Ubuntu, with Linux 2.6.35-25-generic' --class ubuntu --class gnu-linux --class gnu --class os {
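A hedged sketch of finishing the job from inside the chroot, once /sys is also bound (this is the generic procedure, not a confirmed fix for the (md/0) naming issue):
Code:
sudo mount --bind /sys /mnt/raid/sys
sudo chroot /mnt/raid
update-grub                 # regenerate grub.cfg from inside the array
grub-install /dev/sda       # put GRUB on each RAID member's MBR so either disk can boot
grub-install /dev/sdc
exit
Installing to both members (rather than only the old sdd) matters once the old disk is pulled, since the BIOS has to find GRUB on one of the array disks.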