Ubuntu Installation :: RAID Array Will Not Boot (x64)?
Sep 15, 2010
I am using the 10.04.1 x64 Kubuntu live CD to install Kubuntu on my FakeRAID 0 array. I tell it not to install GRUB, since I know that is still broken, and the install goes flawlessly. On first boot with my live GRUB CD, however, the machine hangs unless I explicitly point it at the CD (even though it is set to boot from CD first, so I'm not sure why it does). When I tell it to boot Linux, it will not boot, saying the kernel is missing files (too many to list, sadly, and none that I understand), then offers me a terminal where I can type "help" for a list of commands. Windows 7 Pro x64 works just fine. The CD was downloaded via P2P, if that matters.
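One common workaround, sketched below on the assumption that the array is handled by dmraid and that the /dev/mapper names shown are placeholders for whatever dmraid actually reports: boot the live CD, activate the fakeRAID mapping, chroot into the installed system, and install GRUB by hand.
Code:
sudo dmraid -ay                                # map the fakeRAID set under /dev/mapper
sudo mount /dev/mapper/isw_XXXX_Vol01 /mnt     # root partition; name is a placeholder
for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
sudo chroot /mnt
grub-install /dev/mapper/isw_XXXX_Vol0         # the whole mapped disk, not a partition
update-grub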
View 6 Replies
Dec 22, 2010
I installed Debian 5.0.3 (a backport with the .34 kernel), because my server hardware (Dell PowerEdge R210) needs special firmware and drivers. The installation itself went quite smoothly; I put the system on a RAID 1 array with about 500 GB of space. But as I said, although the installation went well, it doesn't boot! No GRUB, nothing.
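If this is Linux software RAID 1, GRUB has to be written to the MBR of each member disk, since the BIOS only reads one physical drive at boot. A minimal sketch, assuming the members are /dev/sda and /dev/sdb and the installed system is reachable via chroot from rescue media (with a hardware PERC controller the array shows up as a single /dev/sda instead, and only that one MBR matters):
Code:
grub-install /dev/sda   # MBR of the first RAID 1 member
grub-install /dev/sdb   # MBR of the second member, so either disk can boot
update-grub             # regenerate the boot menu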
View 4 Replies
View Related
May 28, 2011
I've recently had trouble reinstalling my Ubuntu system, getting various unusual errors as described in my old thread here. I thought it was probably something to do with the RAID-0 array that came pre-installed on my laptop being corrupted or something like that (if that's possible). Not understanding RAID arrays much, I decided to simplify things: I removed the RAID array and installed Windows and Ubuntu on the now-separate hard disks. That worked fine.
I noticed quite a significant performance drop, however, with even Ubuntu boots taking longer than 30 seconds, despite my laptop being high-spec and only a few months old. Windows, as you can imagine, was dreadfully slow. I wasn't entirely convinced this was all due to the loss of the RAID array - even low-spec laptops with presumably no RAID are apparently supposed to boot Ubuntu in under 30 seconds - but I read that RAID-0 arrays...
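For what it's worth, the throughput side of this is easy to measure rather than guess at; hdparm's simple read test gives a per-disk baseline to compare against the old striped setup (device names are examples):
Code:
sudo hdparm -t /dev/sda   # buffered sequential read speed of the first disk
sudo hdparm -t /dev/sdb   # compare with the second disk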
View 8 Replies
View Related
Jan 18, 2010
I have an issue with a RAID array failing on boot; it seems like a filesystem problem. I get past the RAID BIOS (and from what I can see, everything looks alright there - all devices appear), but then the following error messages come up:
Code:
raid5: failed to run raid set md0
mdadm: failed to RUN_ARRAY /dev/md0 input/output error
mdadm: Not enough devices to start the array
and further down:
Code:
fsck.ext3: Invalid argument when trying to open /dev/md0
/dev/md0:
The superblock could not be read or does not describe a correct ext2
[code]....
I then log in with the root password and get a "Repair filesystem" prompt. I tried fsck, but it didn't work. The setup is 4x1TB in RAID 5 on a HighPoint RocketRAID 2300 4-port SATA II/300 controller, running Fedora 9. Not sure what other system info might be needed.
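"Not enough devices to start the array" usually means one or more member superblocks look stale or missing. Before anything destructive, it's worth dumping each member's view of the array and, if most members agree, attempting a forced assembly. A sketch, with the member device names as placeholders for the four RocketRAID-attached disks:
Code:
mdadm --examine /dev/sd[abcd]1                     # compare event counts and array UUIDs per member
mdadm --stop /dev/md0                              # release the half-started array
mdadm --assemble --force /dev/md0 /dev/sd[abcd]1   # pull in a member with a slightly stale event count
cat /proc/mdstat                                   # confirm it is running (possibly degraded)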
View 5 Replies
View Related
Aug 16, 2010
I have a RAID 5 array with metadata 1.2, made with mdadm. I put it in /etc/fstab to mount at boot, but that doesn't work because the array is not detected at boot. My /etc/mdadm.conf looks like this:
Code:
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=0 UUID=afdfe00e:0d18a5eb:29aa54f9:8b422ee0
Just another thing... After the command
Code:
mdadm --detail --scan >> /etc/mdadm.conf
The mdadm.conf is like this:
Code:
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.02 name=0 UUID=afdfe00e:0d18a5eb:29aa54f9:8b422ee0
But I manually changed the metadata version, because 1.02 gives me an error. I don't know if that's a bug or what! Besides this, I have to put a line in /etc/rc.d/rc.local to assemble the array.
Code:
mdadm --assemble --scan --uuid=afdfe00e:0d18a5eb:29aa54f9:8b422ee0
And after that I can mount it. Why isn't the array detected at boot? Is it because the metadata version is newer than the old 0.90 format? Can I put the line I have in /etc/rc.d/rc.local, which assembles the array, into some other file that is executed before /etc/fstab is processed?
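That guess is on the right track: the in-kernel RAID autodetect only understands the old 0.90 superblock format, so 1.x arrays have to be assembled from userspace, which normally happens in the initramfs using the copy of mdadm.conf baked into it. Editing /etc/mdadm.conf alone isn't enough; the initramfs has to be rebuilt afterwards. A sketch (the rebuild command varies by distro and is an assumption here):
Code:
mdadm --examine --scan    # prints ARRAY lines exactly as the on-disk superblocks report them
# fix /etc/mdadm.conf to match, then rebuild the initramfs so early boot sees it:
update-initramfs -u       # Debian/Ubuntu
# dracut -f               # Fedora-style alternative
# mkinitcpio -p kernel26  # Arch-style alternative (preset name is a guess)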
View 5 Replies
View Related
May 9, 2010
I'm running 64-bit Lucid. I recently had a severe problem with my software RAID 5 array and had to recreate the array to fix it. However, this now means something is wrong with GRUB/initramfs: booting times out while waiting for the root device (md0) to be ready. /boot is on a normal partition, not on the RAID array itself. A friend of mine rebuilt my initramfs file with the new UUID, but now I get the message: 'Kernel panic - not syncing: VFS: unable to mount root fs on unknown-block(9,0)'. So my question is either how do I sort out this error, OR how do I rebuild initramfs/GRUB in a way that will boot?
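unknown-block(9,0) is major 9 (md), minor 0: the kernel wants md0 as root but the initramfs never assembled it, which usually means the mdadm.conf inside the initramfs still describes the old array. A sketch of redoing it from a live CD chroot, assuming root is on /dev/md0 and /boot is on a plain partition called /dev/sdXN here (a placeholder):
Code:
sudo mdadm --assemble --scan
sudo mount /dev/md0 /mnt
sudo mount /dev/sdXN /mnt/boot            # the separate /boot partition (placeholder name)
for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
sudo chroot /mnt
# update the ARRAY line in /etc/mdadm/mdadm.conf with the UUID printed by:
mdadm --detail --scan
update-initramfs -u
update-grub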
View 6 Replies
View Related
Feb 20, 2011
I've got a couple of new hard disks that I have partitioned (3 partitions per disk) and set up as mirrored software RAID arrays using mdadm. They've synced, I've put filesystems on them (1 x ext4, 2 x LUKS + ext4), and I can mount them. I've checked the partitions using fdisk and the filesystems using fsck. So far so good. The next step is that I'd like mdadm to assemble them automatically on boot. (I'm not bothered about mounting and crypttab entries yet.)
I've used sudo /usr/share/mdadm/mkconf to generate a new mdadm.conf with the appropriate UUIDs for the new partitions, and I've checked that it matches the output of sudo mdadm --detail --scan.
The new lines in this file are:
ARRAY /dev/md9 level=raid1 num-devices=2 UUID=470fb8a6:45561fe0:ebda4a02:9ba7a1ed
ARRAY /dev/md10 level=raid1 num-devices=2 UUID=f351fbba:c704a4b2:ebda4a02:9ba7a1ed
ARRAY /dev/md8 level=raid1 num-devices=2 UUID=c6ccec17:2274588e:ebda4a02:9ba7a1ed
To check that the mdadm.conf is correct, I have stopped the new arrays:
[Code].....
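A quick way to prove the conf file works before trusting a reboot is to stop the arrays and let mdadm bring them back using only the config (array names per the lines above):
Code:
sudo mdadm --stop /dev/md8 /dev/md9 /dev/md10   # take the running arrays offline
sudo mdadm --assemble --scan                    # reassemble purely from mdadm.conf
cat /proc/mdstat                                # all three should be back and active
sudo update-initramfs -u                        # boot assembles from the copy baked into the initramfs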
View 7 Replies
View Related
Sep 27, 2010
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID 5 array; this is our primary file storage. It has previously been backed up to a hardware RAID 0 array directly attached to our Windows server, but the capacity of that backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives I had kicking around (and a 750), chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue, as this is just for backup, the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu), as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2, with all the disks presented to the VM. I wasn't sure if I would end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order), the final array size comes out at 1.8TB, as per the attached screenshot. With the following drives, I expected something more like:
160 + 250 + 250+ 750 + 250 +200 + 200 + 250 + 320 + 250 + 320 = 3.2TB
Am I missing something or making a false assumption somewhere?
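One likely explanation (an assumption, since Disk Utility doesn't say): it built a striped (RAID 0) set, whose capacity is limited to N times the smallest member. With eleven disks and a 160GB runt, that's 11 x 160GB, about 1.8TB, exactly the figure reported. A concatenated (linear/JBOD) array has no such limit and would give the full 3.2TB sum. A sketch with mdadm, device names being placeholders:
Code:
# linear = simple concatenation: capacities add up regardless of size mismatch
sudo mdadm --create /dev/md0 --level=linear --raid-devices=11 /dev/sd[b-l]
sudo mkfs.ext4 /dev/md0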
View 4 Replies
View Related
Aug 31, 2010
I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced with, to make a RAID 5 array from three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are assembled at boot; I can add the missing drive without any problems later, it's just that this takes hours to sync. Here is some information:
[Code]....
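Separately from why the third disk drops out, the hours-long resync after re-adding it can be mostly eliminated with a write-intent bitmap, which lets mdadm resync only the blocks that changed while the member was missing. A sketch (array and member names taken from the post):
Code:
sudo mdadm --grow /dev/md0 --bitmap=internal   # add a write-intent bitmap to the live array
sudo mdadm /dev/md0 --re-add /dev/sdd          # a re-added member then syncs only dirty chunks
                                               # (falls back to --add/full sync if re-add is refused)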
View 11 Replies
View Related
Jul 11, 2010
After upgrading my Ubuntu install, my RAID array is gone. The drives appear in blkid as "Linux raid member" and both have the same UUID. If I try to mount the array via fstab, I get a message that the drive is not ready or present. If I try to mount each of the two drives individually, one mounts successfully and the other reports serious errors. Issuing cat /proc/mdstat shows md_d0 as inactive. How can I re-establish my RAID array? I have the data backed up, so if I have to wipe the disks and start over, that's an option.
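An inactive md_d0 usually means the array got half-started in partitionable mode during boot. A sketch of tearing that down and reassembling under the normal name (member partition names are placeholders; --examine confirms the real ones):
Code:
sudo mdadm --stop /dev/md_d0                          # release the inactive, half-assembled device
sudo mdadm --examine /dev/sdb1 /dev/sdc1              # verify both report the same array UUID
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
sudo mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # then update-initramfs -u so boot matches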
View 2 Replies
View Related
Aug 6, 2010
I currently have a nice HTPC setup that has been upgraded from distribution to distribution, from 8.xx all the way up to 9.10 now. I just moved to a new place and it feels like the right time to do a fresh install of 10.04 on the HTPC. The problem is that I have a RAID 5 array in the system holding all my pictures, videos, music, etc. The OS is installed on a separate drive that is not part of the RAID array (I have 4 drives in the system: 3 in the array, 1 for the OS). What is the general process I should follow to:
1. a fresh install of 10.04
2. do #1 while not losing my array (I don't think I would anyway);
3. get the array back up, running, and mounted after the install (see the sketch below).
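Since the array metadata lives in superblocks on the member disks themselves, a fresh install on the OS disk shouldn't touch it, provided the installer is pointed only at that drive. A sketch of step 3 once the new 10.04 boots (mount point and filesystem type are assumptions):
Code:
sudo apt-get install mdadm
sudo mdadm --assemble --scan       # finds the RAID 5 from its superblocks
cat /proc/mdstat                   # confirm it came up clean
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
echo '/dev/md0 /media/storage ext4 defaults 0 2' | sudo tee -a /etc/fstab   # assumed mount point/fs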
View 4 Replies
View Related
Nov 27, 2010
I need help repairing the MBR on my RAID array. I have three disks, each with three partitions:
root (sda1 sdb1 sdc1) 59GB
swap (sda2 sdb2 sdc2) 1.12GB
grub/boot (sda3 sdb3 sdc3) 298MB
I was able to get this running and it worked fine for several months. A few days ago, I installed 10.04 to a USB stick but did not disable the hard drives at that point, so the MBR was overwritten. If I leave the USB stick in, it boots fine from the stick; however, I now can't get the boot from the RAID array to work correctly. I can do the following: load 10.04 from the Live CD, install mdadm, and recreate the root partition using
Code:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
I can mount and view the files on md0 with no problems; it's not corrupted in any way. When I installed, I followed the directions to make each of the grub drives bootable. However, I don't know for sure whether grub was installed on each partition separately or only on the assembled partition. I have tried using
Code:
sudo grub-install /dev/sda3
and got warnings, something to the effect of:
Code:
Cannot find a device for /boot/grub
no path or device specified
Auto-detection of a filesystem module failed
specify the module with option '--module' explicitly
I have also been able to get to the grub rescue prompt, but my keyboard (wireless USB) is not recognized there, so I can't type anything in at that point.
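The "Cannot find a device for /boot/grub" warnings suggest grub-install was run against the bare partition without the assembled boot filesystem mounted. One hedged approach from the live CD, assuming the three sdX3 partitions form their own md array for /boot (the md name is an assumption): assemble it, mount it, and point grub-install at it while targeting each disk's MBR.
Code:
sudo mdadm --assemble /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3   # boot array (assumed layout)
sudo mkdir -p /mnt/boot && sudo mount /dev/md1 /mnt/boot
for d in sda sdb sdc; do
    sudo grub-install --root-directory=/mnt /dev/$d   # MBR of every disk so any one can boot
done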
View 8 Replies
View Related
Feb 8, 2011
I have a Fedora 11 server which can't access the ext4 partitions on LVM logical volumes on a RAID array during boot-up. The problem manifested itself after a failed preupgrade to Fedora 12; however, I think the upgrade attempt might not have anything to do with the problem, since I last rebooted the server over 250 days ago (sometime soon after the last Fedora 11 kernel update). Prior to the last reboot, I had successfully rebooted many times (usually after kernel updates) without any problems. I'm pretty sure the FC12 upgrade attempt didn't touch any of the existing files, since it hung on the dependency checking of the FC12 packages. When I try to reboot into my existing Fedora 11 installation, though, I get an error screen. A description of the server filesystem (partitions may be different sizes now due to the growing of logical volumes):
Code:
- 250GB system drive
  250MB    /dev/sdh1                    /boot   ext3
  LVM partition (rest of drive):       VolGroup_System
  10240MB  VolGroup_System-LogVol_root  /      ext4
[code]....
Except he's talking about fakeRAID and dmraid, whereas my RAID is Linux software RAID using mdadm. This machine is a headless server which acts as my home file, mail, and web server. It also runs MythTV with four HD tuners. I connect remotely to the server using NX or VNC to run applications directly on it, and I also run an XP Professional desktop in a QEMU virtual machine for the times when I need Windows. So, needless to say, it's a major inconvenience to have the machine down.
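Since the volume group sits on top of the md arrays, the ext4 logical volumes can't appear until both layers come up in order. From rescue media, a sketch of walking the stack by hand to see which layer actually fails (VG and LV names taken from the listing above):
Code:
mdadm --assemble --scan        # layer 1: bring up the software RAID arrays
vgscan                         # layer 2: look for volume groups on the now-present PVs
vgchange -ay VolGroup_System   # activate the volume group
lvscan                         # the logical volumes should now show ACTIVE
fsck.ext4 -n /dev/VolGroup_System/LogVol_root   # read-only check of the root LV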
View 1 Replies
View Related
Apr 11, 2010
I wanted to merge my 1TB disks into a RAID 5 array. Four of them in RAID 5 is above the 2TB limit of the MS-DOS partition tables that GRUB2 can boot from, so I decided to rebuild the system from scratch on GPT partitions. But it seems GRUB2 won't boot from a GPT partition: it drops to grub rescue, and I can't really do anything from there.
here's my set up:
/dev/md0 (RAID 1) - 100MB total:
- /dev/sda1, /dev/sdb1, /dev/sdc1, /dev/sdd1
/dev/md1 (RAID 5) - 45GB total:
- /dev/sda2, /dev/sdb2, /dev/sdc2, /dev/sdd2
/dev/md2 (RAID 5) - a bit under 3TB total:
- /dev/sda3, /dev/sdb3, /dev/sdc3, /dev/sdd3
Any tips on how to get this system up and running? I've spent something like three days jumping over various problems.
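On a BIOS machine, GRUB2 can boot from GPT, but it needs a small unformatted "BIOS boot" partition (about 1MB, flagged bios_grub) on each disk to hold its core image, since GPT has no post-MBR gap to embed into; the grub rescue drop is the classic symptom of that partition missing. A sketch with parted, assuming the disks are being rebuilt from scratch anyway (mklabel is destructive, and partition numbering here is an assumption):
Code:
# repeat for each of sda..sdd; DESTRUCTIVE: mklabel wipes the partition table
sudo parted /dev/sda mklabel gpt
sudo parted /dev/sda mkpart grub_embed 1MiB 2MiB   # unformatted home for GRUB's core image
sudo parted /dev/sda set 1 bios_grub on
# ...then recreate the md partitions and arrays, and finally:
sudo grub-install /dev/sda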
View 8 Replies
View Related
Jul 8, 2010
What is the best way to install Windows and Linux on a two-disk array? With fakeRAID there are no problems in Windows, but Linux installation is almost impossible (I've tried, unsuccessfully...). With software RAID, would it be impossible to share files between Windows and Linux? And finally, hardware RAID is possible, but cheap controllers have low performance. Is there any other way (apart from spending a lot of $$ on an Adaptec controller)?
View 1 Replies
View Related
Feb 3, 2011
I'm trying to switch to a new RAID 5 array but can't get it to boot. My disks:
/dev/sda: new RAID member
/dev/sdb: Windows disk
/dev/sdc: new RAID member
/dev/sdd: old disk, currently using /dev/sdd3 as /
The RAID array is /dev/md0, which is comprised of /dev/sda1 and /dev/sdc1. I have copied the contents of /dev/sdd3 to /dev/md0, and can mount /dev/md0 and chroot into it. I did this:
Code:
sudo mount /dev/md0 /mnt/raid
sudo mount --bind /dev /mnt/raid/dev
sudo mount --bind /proc /mnt/raid/proc
[code]....
This completes with no errors, and /boot/grub/grub.cfg looks correct. [EDIT: No it doesn't. It has root='(md/0)' instead of root='(md0)'.] For example, here's the first entry:
Code:
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Ubuntu, with Linux 2.6.35-25-generic' --class ubuntu --class gnu-linux --class gnu --class os {
[code]....
However, when I try to boot from /dev/sda, I get:
Code:
error: file not found
grub rescue>
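Whether or not root='(md/0)' is actually the culprit (GRUB2 releases differ in how they spell md devices), one hedged next step is to regenerate GRUB's device map and reinstall from inside the chroot, so its view of the md device is rebuilt from scratch rather than inherited:
Code:
# inside the chroot on /dev/md0, with the bind mounts from the post in place
grub-mkdevicemap                  # rebuild /boot/grub/device.map
grub-install --recheck /dev/sda   # re-embed core.img on one RAID member
grub-install --recheck /dev/sdc   # and on the other, so either disk can boot
update-grub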
View 9 Replies
View Related
Aug 10, 2010
I have an HTPC that was giving me an insane amount of problems after 3 months of good use.
1x 250gig Samsung Drive (OS Drive)
3x 1TB Western Digital Caviar Green (Raid-5)
In 9.10, the RAID was working fine. I decided to do a fresh install of Ubuntu 10.04, and now I can't seem to start the RAID array. In Disk Utility the array shows up, but when I try to start it I get the error "Not enough components to start the array".
I've tried to assemble the array using mdadm and the following:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdd1
This returns the following error: mdadm: failed to create /dev/md0
I have no idea what to do now, and unfortunately I don't have any backups.
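"failed to create /dev/md0" can simply mean the command wasn't run as root, but it's also worth confirming which partitions really carry the superblocks, since device letters commonly shuffle between releases (the sdd1 in the command may be a renamed disk). A sketch:
Code:
sudo mdadm --examine /dev/sd[a-d]1 | grep -E '/dev|UUID|State'   # find the three real members
sudo mdadm --assemble --scan --verbose                           # let mdadm pick them by UUID
cat /proc/mdstat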
View 1 Replies
View Related
Aug 1, 2011
I'm running 10.04 x86 server with a really simple installation on a single 250GB boot disk. I then have a RAID 5 array as /dev/md0 (set up using mdadm with 4 x 2TB disks). All is working well. My mdadm.conf file looks like this:
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
[code]....
If I were to lose the boot disk and need to remount the RAID array on a fresh installation, what steps would I need to go through? My assumption is that the superblocks on the RAID disks will be used and I don't need to keep any additional information - is this right?
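That assumption is essentially right: the array description lives in superblocks on the RAID disks themselves, so a fresh install only needs mdadm and a scan; a saved copy of /etc/mdadm/mdadm.conf is a convenience, not a requirement. A quick way to see exactly what a future rescue would have to work with:
Code:
sudo mdadm --examine --scan   # prints ARRAY lines straight from the disks' superblocks
# after a rebuild: install mdadm, then
sudo mdadm --assemble --scan  # no saved state needed beyond the superblocks themselves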
View 6 Replies
View Related
Dec 2, 2009
I have one hard disk for my root partition and a disk array on a separate mount point. I rebuilt my disk array, but I didn't delete the original mount points beforehand, because I was hoping it would just "pick up". So now when I boot up, the OS tells me that the filesystem check fails because it can't find the array to map to the mount point. I know that I need to edit /etc/fstab and remove the line that defines the mount point on the disk array, but the filesystem appears to be read-only when I am in repair mode, and I can't force the write with vi.
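In repair/single-user mode the root filesystem is deliberately mounted read-only (so fsck can run safely); remounting it read-write lets the fstab edit go through:
Code:
mount -o remount,rw /   # make the root filesystem writable in repair mode
vi /etc/fstab           # remove or comment out the stale array's mount line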
View 3 Replies
View Related
Jan 23, 2011
I'm currently using Windows Vista 32-bit on a RAID 1 array; I'm using the RAID provided by my motherboard, so it's fakeRAID. Anyway, I'd like to do some C development under Linux, but I'm not exactly sure how to go about installing it on a fakeRAID 1 array without messing up Windows. I'm not sure yet which Linux distro I'm going to install, so I'm hoping that information isn't important. Would I just resize my Windows partition and put Linux on the newly created partition? Do I have to worry about where Linux will put its bootloader, or will it manage that on its own? (To be clear: I don't mean software RAID, I mean fakeRAID.)
View 1 Replies
View Related
Nov 26, 2010
I have installed Ubuntu on my M1530 since 8.04 and currently dual boot Win7 and 10.10. I would like to dual boot on my PC as well, but I have run into a problem. I am not a pro at Ubuntu, but this is a problem I cannot solve by reading forums like I have in the past.
I realize this is a common problem, but I have noticed people having success.
I have an M4A87TD EVO motherboard with two Seagate drives in RAID 0 (the RAID controller is the SB850 on that board). I used the RAID utility to create the RAID drive that Windows 7 x64 uses. I have 2 partitions and 1 chunk of unused space: partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.
I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.
My problem, like so many others', is that when I load into Ubuntu, GParted detects two separate hard drives instead of the RAID. I read that this is because kpartx is not installed on 10.10, so I went into LiveCD mode and downloaded kpartx from Synaptic Package Manager. GParted still reported two drives. I opened a terminal and ran a few commands with kpartx, and received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and see.)
Currently I am reflashing the Cruzer with 4GB of persistence. I am not familiar with this process, but I understand that my LiveCD boot will then save information I download to it. I decided to try this because I want to install kpartx and reboot to see if that makes a difference.
I am looking for any suggestions on a different method, or perhaps for someone to tell me that the RAID controller or some other hardware isn't supported. I did put ubuntu-10.10-alternate-amd64 on my flash drive, but failed to get past the step of detecting my CD-ROM drive, since it's not plugged in; if this method is viable, I will plug it in. I also watched a video where a guy creates RAID 0 with the alternate CD, but it wasn't a dual boot and didn't use a motherboard RAID controller.
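For what it's worth, kpartx only maps partitions of an already-assembled device; the tool that assembles a motherboard fakeRAID set in the first place is dmraid. A hedged sequence to try from the live session before reinstalling anything:
Code:
sudo apt-get install dmraid   # in the live session
sudo dmraid -r                # list raw devices and the RAID set they belong to
sudo dmraid -ay               # activate the set; it appears under /dev/mapper
ls /dev/mapper                # GParted pointed at the mapper device should now see one disk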
View 6 Replies
View Related
Apr 4, 2010
I'm about to install Ubuntu on two 250-gigabyte hard drives in a RAID 1 array, but I'm confused about how to partition them. How much space should I give to each partition? How many partitions should I create, and where should I mount them? (I should mention that Ubuntu will be the only OS on this array.)
View 3 Replies
View Related
Jun 24, 2009
I've tried to install Fedora 11, both 32- and 64-bit, on my main machine. It will not install; it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature) and am currently running Intel RAID software under W7. It works fine. But I'm wondering: when I attempt to install F11, is my current RAID set-up causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the array?
View 2 Replies
View Related
Aug 1, 2010
I had done a new Lucid install to a 1TB RAID 1 array using the alternate CD a few weeks back. I messed up that system trying to get some hardware working that Lucid doesn't have drivers for yet, so I gave up on it and reinstalled to a single 80GB disk, which I now want to move over to the RAID array.
I moved all of the existing files on the array into a single folder, then copied all of the folders from the 80GB disk over to the array with permissions and symlinks (minus the contents of /proc and /sys, which I created empty).
These are the commands I used:
Quote:
cp -a -d -R -v -t /media/raid_array /b*
cp -a -d -R -v -t /media/raid_array /d*
cp -a -d -R -v -t /media/raid_array /e*
cp -a -d -R -v -t /media/raid_array /h*
[Code]....
I tried to change fstab to use the 689a... UUID for root, but when I try to boot, it's still trying to open /dev/disk/by-uuid/412d...
So then I booted from the single disk again and chrooted into the array, then ran update-initramfs -u. I got three "grep: /proc/modules: No such file or directory" errors and a "cat: /proc/cmdline: No such file or directory", so I created the directory /proc/modules, created an empty file /proc/cmdline, and ran the initramfs update again. Then I tried to shut down, which hung (probably because I was doing all of this from a terminal window in GNOME), so I killed the power after a couple of minutes.
It's still trying to use /dev/disk/by-uuid/412d... to boot.
What am I missing? I assume I just have to change the UUID used to mount root, but I don't know how.
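Two things stand out: /proc and /sys must be bind-mounted into the chroot (not recreated as empty directories or files, which is why grep and cat complained), and the stale 412d... UUID is baked into both fstab and the initramfs. A sketch of redoing it, booted from the 80GB disk, with paths taken from the post and the boot disk name as a placeholder:
Code:
sudo blkid /dev/md0                      # note the array filesystem's real UUID
sudo mount /dev/md0 /media/raid_array
for d in /dev /proc /sys; do sudo mount --bind $d /media/raid_array$d; done
sudo chroot /media/raid_array
# edit /etc/fstab: the root entry's UUID= must match the blkid output above
update-initramfs -u                      # /proc is real now, so no grep/cat errors
update-grub                              # regenerates grub.cfg with the new root UUID
grub-install /dev/sdX                    # X = the disk the BIOS boots from (placeholder)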
View 2 Replies
View Related
Jun 17, 2010
When I tried to install Fedora 13, F13's installer kept seeing my hard drives as a "BIOS RAID set (mirrored)". Ubuntu 10.04's installer had the same problem with my drives, but that installer was less informative than F13's; U10.04's installer just stalled on the first screen (the one with the word Ubuntu above five dots), giving no hint as to what it didn't like. My PC came from Dell with 2 identical SATA hard drives in a RAID level 1 array. I changed the CMOS settings from "RAID ON" to "ON" for each hard drive. That did not dismantle the RAID configuration, at least not in a way that satisfied F13's installer.
I reinstalled XP and tried to install F13 after a minimal XP installation. F13's installer detected "BIOS RAID metadata." What is it that F13's installer is detecting? I thought this might have something to do with nVidia's nForce4 Serial ATA RAID controllers; these are installed when you install the version of XP that came with my system, unlike most other drivers, which you install after XP. I contacted nVidia but they couldn't help me with this. Well, it turns out to be Dell's fault. They place this "BIOS RAID metadata" in a special place on each hard drive of a RAID set, and it survives even the formatting that accompanies a reinstallation of XP.
If you want to truly dismantle a manufacturer's RAID set, you must use software like dban (www.dban.org) to thoroughly wipe the drives. Download dban, burn it to a CD, then boot that CD. dban's auto??? command didn't work for me, but its dod command did the trick. The process took about seven hours for each of my 160GB hard drives. Hey, I was really impressed by the help I got throughout all the twists and turns that took this problem far away from the original statement. Special thanks to Scotty38 and to the "Troy Polamalu looking" young man working at Best Buy here in Little Rock; he understands how manufacturers are shipping their PCs.
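As a gentler alternative to a seven-hour dban wipe, the same leftover BIOS-RAID metadata can usually be erased in seconds from a Linux live CD, since it lives in a small reserved area near the end of each disk (double-check the device names before running this, and note it removes only the RAID metadata, not the partitions):
Code:
sudo dmraid -r             # show which disks carry fakeRAID metadata, and in whose format
sudo dmraid -rE /dev/sda   # erase just the metadata block on that disk
sudo dmraid -rE /dev/sdb   # repeat for the second mirror member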
View 3 Replies
View Related
Sep 15, 2010
It's been a real battle, but I am getting close. I won't go into all the details of the fight I've had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1TB WD hard drives, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into "try it" mode, then opening a terminal and issuing sudo dmraid -ay, and then performing the install. I told it to install the RAID components and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I had set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, GRUB2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell GRUB to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run dmraid -ay and it recognizes the RAID array, but it shows the two component disks of the array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL]. To recap: my problem is that after GRUB2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run dmraid -ay. I've hunted around for what to do but have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
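For reference, the script approach mentioned above boils down to making the initramfs run dmraid before the root device is looked up. A sketch of the hook, saved as /etc/initramfs-tools/scripts/local-top/dmraid and following the standard initramfs-tools boilerplate:
Code:
#!/bin/sh
# /etc/initramfs-tools/scripts/local-top/dmraid
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
/sbin/dmraid -ay    # map the fakeRAID set before the root device wait begins
Followed by chmod +x on the script and update-initramfs -u so it gets baked into the initramfs.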
View 4 Replies
View Related
Aug 28, 2010
I am using Ubuntu 10.04 x64. I am not trying to install Ubuntu on a RAID 1 drive, which is what all the guides are for; I have a RAID 1 array that I am using for data storage. In Windows it shows up as a single array just fine. In Linux it shows as 2 separate drives. I don't care how they show up, to be honest; I just need data written to one drive to be written to the other automatically as well, so my RAID isn't screwed up. Looking through different articles and forums, I find a lot of stuff saying that it should show up as a device under /dev/mapper. All that shows up there for me is a device called control, which doesn't seem to do anything.
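/dev/mapper holding only "control" means device-mapper is loaded but no RAID set has been mapped, which usually comes down to dmraid not being installed or not activated. A short check (the set name is a placeholder and will differ per controller):
Code:
sudo apt-get install dmraid
sudo dmraid -s                      # does it see the mirror set at all?
sudo dmraid -ay                     # map it; a single /dev/mapper/<set> device should appear
sudo mount /dev/mapper/<set> /mnt   # mount the array device, not /dev/sda or /dev/sdb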
View 1 Replies
View Related
Nov 4, 2010
I just installed Ubuntu 10.10 64-bit and wanted to get access to my nVidia RAID array. This array is working and is NTFS formatted, but it wasn't showing up through normal means in Ubuntu (for example, the NTFS Configuration Tool didn't display it). Here's what the system showed:
Code:
root@hermes:~# ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 59 2010-11-03 22:39 control
lrwxrwxrwx 1 root root 7 2010-11-03 22:42 nvidia_dadijiag -> ../dm-0
[code]....
Is my mirror still in effect, or did I just mount one specific drive from the mirror?
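Since nvidia_dadijiag points at dm-0, mounting it does go through the mirror rather than a single member; dmraid can confirm the set's health (output format varies by metadata flavour):
Code:
sudo dmraid -s   # shows the set, its type (mirror) and its status (ok/degraded)
sudo dmraid -r   # lists both underlying drives as members of nvidia_dadijiag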
View 1 Replies
View Related
Jan 24, 2011
I'm going a little bit crazy: I can't seem to remove my RAID 1 arrays. Any suggestions? I don't need to save the data; the drives are empty. I'm upgrading to 4 x 2TB drives. Running Lucid Lynx server.
Code:
jessica@nas:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb1[1]
976759936 blocks [2/1] [_U]
md0 : active raid1 sdd1[0]
[Code]...
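Dismantling boils down to stopping each array and then zeroing the md superblock on every member partition, so nothing reassembles at the next boot. A sketch for one array (repeat per array and member; note that md2 above is already degraded, with only sdb1 active):
Code:
sudo mdadm --stop /dev/md2                        # take the array offline
sudo mdadm --zero-superblock /dev/sdb1            # wipe the member's RAID metadata
# repeat --stop and --zero-superblock for md0 and its members, then:
sudo sed -i '/^ARRAY/d' /etc/mdadm/mdadm.conf     # drop all the stale ARRAY lines
sudo update-initramfs -u                          # so early boot stops looking for them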
View 5 Replies
View Related
Oct 19, 2010
Consider the following setup: an Ubuntu system installed on a separate SSD for speed, and an Ubuntu software RAID array consisting of X physical HDDs for storage (RAID 6 or RAID 10), with the RAID setup done during system install. If I suffer a total crash of the SSD and lose my system, will I be able, using a new system disk, to "reconnect" to the RAID array even though the "mother system" of the software RAID is lost? If yes, are there any particular config or system files I need to back up to be able to rescue the array, or will it just be recognized out of the box when reinstalling Ubuntu?
View 4 Replies
View Related