Ubuntu Installation :: Dismantle A Manufacturer's RAID Set?
Jun 16, 2010
When I tried to install fedora13, f13's installer kept seeing my hard drives as a "BIOS RAID set (mirrored)". Ubuntu 10.04's installer had the same problem with my drives, but it was less informative than f13's: it just stalled at the first screen, the one with the word Ubuntu above five dots, giving no hint as to what it didn't like. My pc came from Dell with 2 identical SATA hard drives in a RAID level one array. I changed the CMOS setting from "RAID ON" to "ON" for each hard drive. That did not dismantle the RAID configuration, at least not in a way that satisfied f13's installer.
I reinstalled xp and tried to install f13 after a minimal xp installation. f13's installer detected "BIOS RAID metadata." What is it that f13's installer is detecting? I thought this might have something to do with nVidia's nForce4 Serial ATA RAID controllers. These are installed when you install the version of xp that came with my system, unlike most other drivers, which you install after xp. I contacted nVidia but they couldn't help me with this. Well, it turns out to be Dell's fault. They place this "BIOS RAID metadata" in a special place on each hard drive of a RAID set. It survives even the formatting that accompanies a reinstallation of xp or any other os.
If you want to truly dismantle a manufacturer's RAID set, you must use software like "dban" (www.dban.org) to thoroughly wipe the drives clean. Download dban, burn it to a cd, then boot from that cd. dban's auto??? command didn't work for me but its dod command did the trick. The process took about seven hours for each of my 160gb hard drives.
Hey, I was really impressed by the help I got throughout all the twists and turns that took this problem far away from the original statement. Special thanks to Scotty38 and to the "Troy Polamalu looking" young man working at Best Buy here in Little Rock; he understands how manufacturers are shipping their pcs.
I have a seagate 40G IDE PATA hard disk and 1.5G of RAM, and I would like to add another 40G hard disk manufactured by Maxtor. I have Ubuntu 10.10 and Windows XP on different partitions. Will it be OK if I mix hard disks from different manufacturers, and is there any risk of disk failure in the future?
I have installed Ubuntu on my m1530 since 8.04 and currently dual boot Win7 and 10.10. I would like to dual boot on my PC as well, but I have run into a problem. I am not a pro at Ubuntu, but this is a problem I cannot solve by reading forums like I have in the past.
I realize this is a common problem, but I have noticed people having success.
I have an M4A87TD EVO MB with two Seagate drives in RAID 0. (The RAID controller is the SB850 on that MB.) I used the RAID utility to create the RAID drive that Windows 7 x64 uses. I have 2 partitions and 1 unused space. Partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.
I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.
My problem, like so many others, is that when I load into Ubuntu, gparted detects two separate hard drives instead of the RAID. I read that this is because kpartx is not installed on 10.10. I then went into LiveCD mode and downloaded kpartx from Synaptic Manager. Gparted still reported two drives. I opened a terminal and ran a few commands with kpartx. I received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and see.)
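For reference, the sequence usually suggested for exposing a BIOS/fakeraid set from a live session is roughly the following; whether dmraid actually supports the SB850 set is an assumption, and the set name is a placeholder to be read from dmraid -s:
Code:
sudo apt-get install dmraid kpartx       # dmraid handles BIOS "fakeraid" sets; kpartx maps the partitions inside them
sudo dmraid -s                           # list detected RAID sets and their names
sudo dmraid -ay                          # activate the set; it should appear under /dev/mapper/<set_name>
sudo kpartx -av /dev/mapper/<set_name>   # create /dev/mapper/<set_name>1, <set_name>2, ... for gparted to see
ls /dev/mapper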
Currently I am reflashing the Cruzer with a persistence of 4GB. I am not familiar with this process, but I understand that my LiveCD boot will save information I download to it. I decided to try this method because I was going to install kpartx and reboot to see if this made a difference.
I am looking for any suggestions on a different method, or perhaps someone to tell me that the RAID controller or some hardware isn't supported. I did install ubuntu-10.10-alternate-amd64 on my flash drive, but failed to get past detecting my CD-ROM drive since it's not plugged in. If this method is viable, I will plug it in. I also watched the ..... video where a guy creates RAID 0 with the alternate CD, but it wasn't a dual boot and didn't use a RAID controller from a MB.
I'm writing an app and I need to get information about all devices in the computer, using the minimum of dependencies. OK then... Now I need to get information about memory; I have already selected some information from:
Here is the problem: I need the model and manufacturer of the memory, but none of these is providing this information. dmidecode should, but the field came back blank.
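For reference, this is roughly the dmidecode query I mean (type 17 is the SMBIOS "Memory Device" table). From what I have read, many OEM boards simply leave Manufacturer/Part Number unpopulated there, and decode-dimms (from the i2c-tools package; older releases ship it as decode-dimms.pl) is sometimes suggested as an alternative that reads the SPD EEPROM on the DIMMs directly:
Code:
sudo dmidecode --type 17 | egrep -i "Manufacturer|Part Number|Size|Speed"
# possible fallback, reading the DIMM SPD data instead of the BIOS tables:
sudo modprobe eeprom
decode-dimms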
Is there a linux/unix program that gives the following info: (a) installed hard disks' manufacturer names; (b) disk geometry (number of cylinders, heads and sectors per track)?
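A few commands are usually suggested for this (the device name is just an example, and the CHS figures modern tools print are a legacy translation rather than the physical geometry):
Code:
sudo hdparm -I /dev/sda | grep -i "model\|serial"   # the model string usually includes the manufacturer
sudo fdisk -l /dev/sda                              # prints heads, sectors/track and cylinders
cat /sys/block/sda/device/model                     # model string via sysfs, no root needed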
I would like to send a specific option 15, domain-name, to a few clients on a network from a specific manufacturer. Usually all clients [URL] from the DHCP server, but when a client with a mac address belonging to manufacturer A asks for an IP address I would like to give them [URL]. How would I go about doing this? It feels like it should be possible but I am not sure how. I remember doing something similar in a Microsoft DHCP server using a vendor identifier and passing out a vendor-specific option.
Quote:
class "xxx" { match if substring (hardware, 1, 3) = 00:00:10; option domain-name "yes.this.works.com"; } /Carl
I am trying to introduce a bar code scanner to a Red Hat Ent 5 OS based box. Since the PC is brand new, the manufacturer decided not to install a serial port any more; instead I have 5 USB 1.0 ports. Unfortunately the scanner gives its output on a serial port, which is incompatible with the computer's USB ports. I tried using USB-to-serial and serial-to-USB converters, but all efforts were to no avail and I can capture no reading into the Linux box. I wonder if anyone has configured a bar-code scanner which has a serial port while the computer has only USB ports. PLEASE enlighten me if you can.
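For reference, with a USB-to-serial adapter the usual check is whether the kernel creates a ttyUSB device and whether raw reads show anything at all; the baud rate and framing below are assumptions to be taken from the scanner's manual:
Code:
dmesg | tail                                   # after plugging in the adapter, look for pl2303/ftdi_sio and a ttyUSBx line
ls -l /dev/ttyUSB*
stty -F /dev/ttyUSB0 9600 cs8 -cstopb -parenb  # 9600 8N1 is only a guess; match the scanner's settings
cat /dev/ttyUSB0                               # scan a code; the decoded characters should appear here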
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow, this went terribly wrong and I succeeded in marking two of the drives as faulty, and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code: mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
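For what it is worth, a forced assemble is usually suggested before any --create --assume-clean attempt, since it is far less destructive; a sketch using the device names from above:
Code:
mdadm --stop /dev/md1                                       # make sure the half-started array is stopped
mdadm --examine /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2     # compare event counters and recorded device roles
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
cat /proc/mdstat                                            # if it comes up, even degraded, back the data up before anything else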
It's been a real battle, but I am getting close. I won't go into all the details of the fight that I have had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8 Gig RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1 terabyte WD HDD drives, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into "Try Ubuntu" mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the RAID components, and told it to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, grub2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell grub to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but shows the two component disks of the RAID array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL]. To recap: my problem is that after grub2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do but I have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
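The local-top script described in guides like that one is roughly the sketch below; it assumes the dmraid package is installed so its hook copies the dmraid binary into the initramfs, and the initramfs has to be rebuilt afterwards:
Code:
sudo tee /etc/initramfs-tools/scripts/local-top/dmraid >/dev/null <<'EOF'
#!/bin/sh
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
modprobe dm-mod
dmraid -ay
EOF
sudo chmod +x /etc/initramfs-tools/scripts/local-top/dmraid
sudo update-initramfs -u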
I've just finished booting my system via Live CD and installing 10.04-1 to existing partitions on a hardware RAID. The install went fine, but when I rebooted I didn't get past the BIOS output screens. I used four existing partitions for the install: /home (MyRAID3, which was kept as-is), / (MyRAID2, which was reformatted), /boot (MyRAID1, also reformatted) and swap.
I have a raid5 of 10 disks, 750gb each, and it has worked fine with grub for a long time under ubuntu 10.04 lts. A couple of days ago I added a disk to the raid, grew it and then resized it. BUT, I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer! So the resize process was cancelled in the middle and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it up, finished the resize process, and the raid seems to be working fine, so I tried to boot normally again. Same problem. Rescue CD, updated grub, got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc, grub-common, removed /boot/grub and installed grub again. Same problem.
I have tried to erase the MBR (# dd if=/dev/null of=/dev/sdX bs=446 count=1; note that with /dev/null as the input this actually writes nothing, so /dev/zero is needed to really blank the MBR) on sda (IDE disk, system) and sdb (SATA, new raid disk). Same problem. Removed and reinstalled ubuntu 11.04 and am now getting error: no such device: (hdd id). Again tried to reinstall grub on both sda and sdb, no luck. update-grub is still generating the error about raid version 0.91 and I'm back to a blinking cursor on a normal boot. When you're resizing a raid, mdadm changes the metadata version from 0.90 to 0.91 to flag that a reshape is in progress and guard against exactly the kind of interruption that happened to me. But since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch at [URL], but I can't compile it; I get various errors about dpkg. So my problem is that I can't get grub to work. It just gives me a blinking cursor and "unsupported RAID version: 0.91".
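To narrow down where grub is still seeing 0.91, it may help to compare what the running array reports against what is written on each member; the device names below are placeholders for this setup:
Code:
mdadm --detail /dev/md0 | grep -i version     # version of the assembled array
mdadm --examine /dev/sdb1 | grep -i version   # version stored in this member's superblock; repeat for every member
sudo grub-install --recheck /dev/sda          # once every member shows 0.90, reinstall grub to the boot disc
sudo update-grub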
Proc: Core2Duo 6750
MB: MSI P35 Neo 2
RAM: Corsair 4GB
Video: Gigabyte GTS250
HDD: 2x320GB Seagate in RAID 0 and 1GB WD
I have a Windows 7 installation with a boot partition on the RAID. I also want to dual boot with openSUSE 11.2, but I don't know how to set up my partitions correctly. I have some unallocated space next to the Windows C: partition. When I try to install openSUSE, it suggests creating some partitions that I don't need and don't want, and doesn't even mount them. It also creates / (80GB), /boot (36MB), swap (2GB) and /home (20GB) partitions, which leaves me short of free space.
I don't know how to take screenshots during installation. Maybe I'll try to reinstall later and capture some screens in English, because my system language is Bulgarian.
Is this possible? I was able to do this with Debian 6 no problem. The installation interface is really nice but seems to be lacking any way to do more advanced configurations. Is there some boot option I can pass in?
How long does a hardware RAID card (RAID 1, 2 drives) take to mirror a 1TB drive (500GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Externus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. The result was success and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising: one logical RAID0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
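For reference, these are the checks usually suggested in this situation; the MegaCli install path and the host number are assumptions, and lsscsi may need to be installed first:
Code:
/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL    # does the controller see the enclosure at all?
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL  # logical drives the controller actually exports to the OS
lsscsi                                               # block devices the kernel has created
echo "- - -" > /sys/class/scsi_host/host0/scan       # force a SCSI rescan (repeat for each hostN)
dmesg | grep -iE "megaraid|megasas"                  # confirm the megaraid_sas driver bound to the card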
I have two 1TB hard drives in a RAID 1 (mirroring) array. I would like to add a third 1TB drive and create a RAID 5 with the 3 drives for a 2TB system. I have ubuntu installed on a separate drive. Is it possible to convert my RAID 1 system to a RAID 5 without losing the data? Is there a better solution?
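Assuming this is a Linux software (mdadm) array and a reasonably recent mdadm/kernel, an in-place conversion is reported to work along these lines. Device names are placeholders, a verified backup first is strongly advised, and mdadm may ask for a --backup-file during the reshape:
Code:
mdadm --detail /dev/md0                  # confirm the existing 2-disc RAID1
mdadm --grow /dev/md0 --level=5          # convert the 2-disc RAID1 into a 2-disc RAID5
mdadm /dev/md0 --add /dev/sdc1           # add the new disc (partitioned like the others)
mdadm --grow /dev/md0 --raid-devices=3   # reshape onto all three discs (this takes a long time)
cat /proc/mdstat                         # watch the reshape; then grow the filesystem, e.g. resize2fs /dev/md0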
Just a few short months ago, I embarked on the journey to learn Linux. I have since installed it on multiple systems, got wine working with multiple programs, and replaced most of my need for windows with free alternatives. Software RAID in 9.10... I have set up a virtual machine in virtualbox. I created two 8GB virtual hard disks. I boot off the 9.10 x64 CD. I launch the live CD and at the desktop I use the built-in software to set up the two virtual drives as RAID 0. After creating the RAID, I cannot successfully install Ubuntu. I configure three partitions on the RAID: one for the filesystem, one for swap, and one for /boot. But the install always fails.
I need some advice on configuring RAID, LVM and my physical volumes for a new 10.04 Server install (used mostly as a windows file server and print server). My hardware setup consists of 2 identical 500GB hard drives. My desired end state is:
An ext4 root partition (20GB)
A swap partition (2GB)
A fat32 partition (450GB) (to be accessed via Samba)
The above all to be on RAID1 across the 2 disks
The way I see it, there are a number of possible ways to configure the above, and I am looking for some advice on the best (feasible) option (a sketch of the first one follows below):
1) Create a single md0 RAID volume across the entire two disks, then create a single LVG across this, with 3 separate LVs, one for each partition above.
2) Create 2 physical volumes on each disk and create two RAID volumes on these (md0, md1): one for the LVG with two LVs for the root and swap, the other for the FAT32 partition (this seems like more work?).
3) Other more suitable options?
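A sketch of the first option (one md RAID1 across both discs, LVM on top); partition and volume names are assumptions, and the server installer's partitioner can do the same thing interactively:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 20G -n root vg0
lvcreate -L 2G  -n swap vg0
lvcreate -l 100%FREE -n share vg0
mkfs.ext4 /dev/vg0/root
mkswap /dev/vg0/swap
mkfs.vfat -F 32 /dev/vg0/share   # the FAT32 share; ext4 would also work fine for a Samba export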
I am currently in the process of building a file server at home. My friend suggested that I do LVM (more for learning purposes than anything) instead of RAID. I have a RAID card with 5 HDDs attached to the computer (each one being a 250GB ATA HDD). I am planning on using these for the server.
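As a rough sketch of the plain-LVM route (device and partition names are assumptions; note that simply spanning the discs gives no redundancy, so losing one disc loses the whole volume):
Code:
sudo pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
sudo vgcreate filestore /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
sudo lvcreate -l 100%FREE -n data filestore
sudo mkfs.ext4 /dev/filestore/data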
So, I recently installed Ubuntu on a (now) triple-booting RAID 5 system. However, the setup was not able to install GRUB. This means I cannot currently boot into Ubuntu. The following are acceptable outcomes for me:
1) Installing GRUB as the primary bootloader, allowing me to boot into Linux, Windows 7, or Windows XP.
2) Installing GRUB as the primary bootloader, but allowing me to boot into the Windows 7 Bootloader as well as Ubuntu.
3) Installing GRUB as a secondary bootloader that can be accessed through the Windows 7 Bootloader.
My current config, according to gparted with kpartx installed is:
How can I create RAID 1+0 using two drives (one with data and the second one new)? Is it possible to synchronize the data drive with the empty drive and create RAID 1+0?
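For what it is worth, RAID 1+0 needs at least four drives; with two drives the usual approach is plain RAID1, created degraded on the empty drive first. A sketch (device names and mount points are placeholders, and the old drive's data must be verified on the mirror before that drive is wiped):
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # degraded mirror on the new, empty drive
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
rsync -aHAX /data/ /mnt/           # copy everything from the old drive (mounted at /data here)
mdadm /dev/md0 --add /dev/sda1     # only after verifying the copy: add the old drive and let it resync
cat /proc/mdstat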
I currently am setting up an htpc running Karmic. The problem I am having is getting my Raid 0 to be mountable. My raid is not my boot partition, but is for data storage. My setup is a zotac motherboard with three sata connectors. I have a 300 GB drive, my eSata port, and my DVD attached to these. In the PCIe expansion slot I have installed Syba 2 port Sata PCIe 1a card using the Sil3132 sata II host chipset. Off of this I have 2 1.5Tib Hdds that I am setting up as Raid 0. During the boot I enter the chipset BIOS and establish this as a Raid 0.
When I install Karmic it sees the RAID and my 300 GB drive, so I install to the 300 GB drive and everything works fine. I am able to boot to the HDD and run the OS. I then installed GParted and set up the partition table on my RAID as GPT, since I want one large partition of 2.78TiB. I then formatted it through GParted as ext4 and was able to mount and access it. I then rebooted the system, and could no longer mount the filesystem. What I found interesting is that if I reopen GParted I can then mount it. I traced it down to the fact that until I access GParted, the block special device (sil_bgabagabaedd1) does not appear in /dev/mapper. Every time I reboot I need to go into GParted to restore the block special device; then it can be mounted. I think I am missing something in the RAID setup as to why it is not being retained. What have I missed? What do I need to do to retain the block special device? Is there a boot config setting?
Edit: I did further research and found that if I run kpartx it will appear, just as with GParted, but on reboot it vanishes. I found something similar in this thread but I'm not comfortable updating dmraid: [URL]. I think it is related to GPT, and I will try a smaller partition to see if the behavior changes.
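For reference, the manual workaround looks roughly like this (the sil_ names come from the post above, the mount point is a placeholder, and running it from /etc/rc.local with a noauto fstab entry is only a stop-gap idea rather than a proper fix):
Code:
sudo dmraid -ay                               # make sure the sil set itself is activated
sudo kpartx -a /dev/mapper/sil_bgabagabaedd   # recreate the partition mapping by hand
ls /dev/mapper                                # sil_bgabagabaedd1 should be back
sudo mount /dev/mapper/sil_bgabagabaedd1 /media/raid
# stop-gap: mark the volume noauto in /etc/fstab and run the kpartx + mount lines from /etc/rc.local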
I was recently given two hard drives that were used as a RAID (maybe fakeraid) pair in a Windows XP system. My plan was to split them up, install one as a second HD in my desktop and load 9.10 x64 on it, and use the other for Mythbuntu 9.10. As has been noted elsewhere, the drives aren't recognized by the 9.10 installer, but removing dmraid gets around this, and installation of both Ubuntu and Mythbuntu went fine. On both systems after installation, however, the systems broke during update, giving a "read-only file system" error and no longer booting.
Running fsck from the live cd gives the error:
fsck: fsck.isw_raid_member: not found
fsck: Error 2 while executing fsck.isw_raid_member for /dev/sdb
and running fsck from 9.04 installed on the other hard drive gives an error like:
The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device>
In both cases I set up the drives with the ext4 filesystem. There's probably more that I'm forgetting... it seems likely to me that this problem is due to some lingering issue with the RAID setup they were in. I doubt it's a hardware issue, since I get the same problem with the different drives in different boxes.
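The fsck.isw_raid_member message suggests the discs still carry Intel (isw) fakeraid metadata from their old RAID life, which is what blkid and dmraid keep picking up. The usual suggestion is to erase that signature; this only touches the RAID metadata block, but double-check the device name first:
Code:
sudo dmraid -r             # list discs that still carry BIOS-RAID metadata
sudo dmraid -rE /dev/sdb   # erase the isw metadata from that disc
sudo blkid /dev/sdb1       # the partition should now be identified as plain ext4 again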
I'm a long-time Windows user and IT tech, but I have long felt that my geek levels were too low, so I installed Ubuntu last week (9.10 x64). Hopefully I can make it my primary OS. I have two 80GB drives in RAID-1 from my nforce raid controller, nforce 570 chipset. Then a 320 GB drive where I placed Ubuntu, and that's also where grub placed itself. And also a 1TB drive.
When grub tries to boot XP I get the error message: "error: invalid signature" I checked the forum as much as I could and tried a few things, but no change.
Drives sdc and sdd are the two drives in raid, they are matched exactly, but detected as different here. I really think they should be seen as one drive.
How can I make grub work as it should?
Also, if/when I need to make changes to grub, do I really have to use the live CD?
Code:
============================= Boot Info Summary: ==============================
=> Grub 1.97 is installed in the MBR of /dev/sda and looks on the same drive in partition #1 for /boot/grub.
=> Windows is installed in the MBR of /dev/sdb
=> Windows is installed in the MBR of /dev/sdc
This question is going to flag me as being a bit green to Ubuntu, but I must confess that in 15+ years in I.T., I have never had such a hard time understanding the partitioning scheme. Here's where I'm at.
I installed from the 9.10 live CD and selected the option to use the entire disk. The system has an Intel raid controller built in to the motherboard and two 80GB hard drives in a mirrored configuration. The system has previously been used with both XP and FreeBSD and never had an issue with partitioning or, more importantly, getting the boot manager to work.
So the live CD partitioned my hard drives, installed all the software and mount points, and claims that everything is finished. When I reboot, no boot device is found. If I then boot again from the live CD and select the option to boot from the hard disk, it does and I am in fact typing this message from the system. However, nothing I can do will make the thing boot without the bloody CD.
I've spent hours trying to figure out how to make grub work, or how to fix the MBR but no luck. The drives don't show up as /dev/hda or as anything logical that I can discern, so I can't even construct a workable install-grub command. Doing a df gives me this:
[Code]...
which is not very informative, is it? FreeBSD was never such a pain to get booting. Frankly, I'm not very impressed that a clean install (non dual boot) on such a standard hardware configuration could make it so difficult to get the boot loader working. Documentation on this subject is voluminous but very shabby. I have searched and searched and I cannot find any mention of hardware-mirrored IDE or SATA drives, nor what dev they would show up as. Very frustrating. Every tutorial I've read on installing grub2 or grub just doesn't work, usually because the dev is not right.
Can anyone shed some light on this bizarre behavior and perhaps offer some advice that will allow me to boot this system without the use of a live CD?
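With an Intel BIOS RAID (dmraid) mirror, the mirrored pair is presented as a single device under /dev/mapper rather than as /dev/hda or /dev/sda, and that mapper device is where grub has to be installed. A hedged sketch from the live CD; the isw_ names below are placeholders for whatever dmraid -s actually reports:
Code:
sudo dmraid -s                                        # note the set name, e.g. isw_xxxxxxxxxx_Volume0
ls /dev/mapper                                        # the set and its partitions (...Volume01, ...Volume05, etc.)
sudo mount /dev/mapper/isw_xxxxxxxxxx_Volume01 /mnt   # the root partition of the new install (placeholder name)
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt grub-install /dev/mapper/isw_xxxxxxxxxx_Volume0
sudo chroot /mnt update-grub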
I already have a 300 GB SATA drive with Ubuntu 8.04 installed on it. It is currently running off my mobo's onboard SATA 1.0 RAID controller. I recently purchased a SATA 2.0 RAID PCI controller that I will be putting in the computer, plus 2 new 750 GB Western Digital Caviar Green hard drives. I wish to add the two drives in a RAID 1 configuration to store all my pictures, files, and movies on. Every instruction and tutorial I can find on setting up RAID on Linux assumes you are performing a fresh install of Linux and gives no tips or instructions for existing installations. I already have Ubuntu installed and do not wish to reinstall it. I want to leave my installation on the 300 GB drive and just add in the 2 750GB drives for storage.
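For an existing installation, the usual suggestion is to ignore the PCI card's own fakeraid BIOS and build a Linux software RAID1 on the two new drives instead; a sketch, with device names, filesystem and mount point as assumptions:
Code:
sudo apt-get install mdadm
sudo fdisk /dev/sdb        # create one full-size partition (type fd, Linux raid autodetect); repeat for /dev/sdc
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext3 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo mkdir -p /srv/storage
echo '/dev/md0  /srv/storage  ext3  defaults  0  2' | sudo tee -a /etc/fstab
sudo mount /srv/storage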
I have run Ubuntu in the past and then switched to OpenSUSE several months ago and set up raid 0 on a 500gb hard drive and 700gb hard drive (I went with openSUSE because of the graphical raid setup.)
My whole partition setup looks like this:
500gb Hard Drive:
750gb Hard Drive:
md0 is the two 400gb partitions (one on each drive) for a total of 800gb of space on my /home partition, ext4 filesystem (380gb used). md1 is a 100gb ext4 / partition.
all raid 0
Now I was wondering: if I downloaded the alternate install CD for Ubuntu (as OpenSUSE has crashed for the second time because of bad updates; it starts, but gets to a terminal only), would I be able to keep my raid 0 home partition, wipe the rest of each drive, and set up Ubuntu with all of my files and settings intact, so that I only have to install the programs I need while keeping my old settings (such as Firefox bookmarks, VirtualBox utilities etc.)?
From what I know it's possible, but I don't know much about the Ubuntu alternate install disk (as I have been dealing with dependency hell on OpenSUSE), and in OpenSUSE it won't let me keep the old raid setup (md0). I'm guessing it is possible to set up the home directory on a different hard drive and then go back into the live CD, edit the fstab, and switch it to md0, if this is even possible, or would I need to configure the driver on that system before I did that? Oh, and I forgot to mention that I've only been running 64-bit operating systems.
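For reference, the general approach with the alternate CD is reported to be: assemble the existing arrays, reuse md0 for /home without formatting, and let only / be recreated. A sketch (array names from above; the installer steps and the fstab line are assumptions about the final layout):
Code:
sudo mdadm --assemble --scan   # from the installer's shell or a live session: bring up md0 and md1
cat /proc/mdstat               # both arrays should show as active
# in the alternate installer's manual partitioning: put / on md1 (format), put /home on md0 and choose "do not format"
# afterwards /etc/fstab should contain something like:
#   /dev/md0   /home   ext4   defaults   0   2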
System Specs: AMD dual core at 2.8GHz (overclocked, stable; CPU ran at full bore for a day, only reaching 120F), Nvidia 9600 GSO 368MB RAM, 4GB RAM at 800MHz