I'm going a little bit crazy: I can't seem to remove my RAID 1 arrays. Any suggestions? I don't need to save the data; the drives are empty. I'm upgrading to four 2TB drives. Running Lucid Lynx server.
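A rough sketch of tearing down an mdadm RAID 1 on Lucid, assuming the array is /dev/md0 and its members are /dev/sda1 and /dev/sdb1 (adjust to your actual device names):
Code:
# stop the array, then wipe the RAID superblock from each member
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sda1
sudo mdadm --zero-superblock /dev/sdb1
# remove any matching ARRAY line from /etc/mdadm/mdadm.conf, then rebuild the initramfs
sudo update-initramfs -u
After that the members are plain, empty partitions again and the disks can be repartitioned for the new drives.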
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750GB) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu), as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order), the final array size comes out at 1.8TB, as per the attached screenshot. Now, with the following drives, I expected something more like:
I've tried to install Fedora 11, both 32-bit and 64-bit, on my main machine. It could not install, as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am running the Intel RAID software under W7 currently, and it works fine. But I'm wondering: when I attempt to install F11, is my current RAID set-up causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?
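One way to see whether the Intel (ISW/fakeraid) metadata on the disks is what Anaconda trips over is to check what the disks advertise from a live session. This is only a hedged diagnostic sketch, not a fix:
Code:
# list any BIOS/fakeraid metadata signatures present on the disks
sudo dmraid -r
# a reasonably recent mdadm can also report Intel Matrix (IMSM) containers
sudo mdadm --examine --scan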
I am using Ubuntu 10.04 x64. I am not trying to install Ubuntu on a RAID 1 drive like all the guides describe. I have a RAID 1 array that I am using for data storage. In Windows it shows up as a single array just fine; in Linux it shows up as 2 separate drives. I don't care how they show up, to be honest; I just need data written to one drive to be written to the other automatically as well, so my RAID isn't screwed up. Looking through different articles and forums I find a lot of stuff saying that it should show up under /dev/mapper/dxxx or something under /dev/mapper. All that shows up there for me is a device called control, which doesn't seem to do anything.
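Since this is a motherboard (fake) RAID mirror, the usual way to get the single mapped device under /dev/mapper is to activate it with dmraid. A minimal sketch, assuming the controller's metadata is intact:
Code:
sudo apt-get install dmraid
sudo dmraid -ay           # activate all RAID sets the metadata describes
ls -l /dev/mapper/        # the mirror should now appear alongside "control"
Mounting the /dev/mapper device (rather than either raw disk) is what keeps both halves of the mirror in sync.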
I just installed Ubuntu 10.10 64-bit and wanted to get access to my nvidia RAID array. The array is working and is NTFS formatted, but it wasn't showing up through normal means in Ubuntu (for example, the NTFS Configuration Tool didn't display it). Here's what the system showed.
Code:
root@hermes:~# ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 59 2010-11-03 22:39 control
lrwxrwxrwx 1 root root      7 2010-11-03 22:42 nvidia_dadijiag -> ../dm-0
[code]....
Is my mirror still in effect, or did I just mount one of the specific drives from the mirror?
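A quick way to tell is to check which device actually got mounted and whether the nvidia set is active. A hedged sketch:
Code:
sudo dmraid -s                     # status of the discovered RAID set(s)
mount | grep -E 'mapper|/dev/sd'   # which block device is actually mounted?
The mirror is only in effect if /dev/mapper/nvidia_dadijiag (or a partition of it) is the mounted device; if a raw /dev/sdX partition is mounted instead, writes are landing on just that one disk.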
Consider the following setup: the Ubuntu system installed on a separate SSD for speed, and an Ubuntu software RAID array consisting of X physical HDDs for storage (RAID6 or RAID10). The RAID setup is done during system install. If I suffer a total crash of the SSD and lose my system, will I be able to, using a new system disk, "reconnect" to the RAID array even though the "mother system" of the software RAID is lost? If yes, are there any particular config or system files I need to back up to be able to rescue the array, or will it just be recognized "out of the box" when reinstalling Ubuntu?
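For what it's worth, mdadm keeps the array metadata on the member disks themselves, so a fresh install can normally reassemble the array without anything from the old system disk. A minimal sketch of what that looks like after reinstalling (the device names are assumptions):
Code:
sudo mdadm --assemble --scan                                    # finds the members by their superblocks
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf  # make the assembly persistent
sudo update-initramfs -u
Backing up /etc/mdadm/mdadm.conf and the relevant /etc/fstab line just saves retyping; it isn't required for the rescue.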
I want to make a RAID5 array with four 2TB hard drives. One of the drives is full of data, so I will need to start with 3 disks and then, once I have copied the data from the 4th drive onto the array, add the 4th drive. This will be my first experience with RAID. I've spent a few hours searching for info, but most of what I have found is a bit over my head.
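A rough outline of that create-then-grow sequence with mdadm, assuming the three empty disks are /dev/sdb, /dev/sdc and /dev/sdd and the full one is /dev/sde (check your own device names first):
Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0
# ...mount /dev/md0, copy the data off /dev/sde, then add it and grow the array:
sudo mdadm --add /dev/md0 /dev/sde
sudo mdadm --grow /dev/md0 --raid-devices=4
sudo resize2fs /dev/md0            # grow the filesystem into the new space once the reshape finishes
The initial sync and the later reshape can each take many hours on 2TB drives, so plan for that.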
I'm just about to be given a Power Mac G5 (Late 2005) Dual 2.0GHz. I think this was the last G5 produced. I plan on using it as a file server/NAS and will probably run 10.04 LTS (or maybe 8.04 LTS). I would install a SATA RAID controller and run four 1TB drives in a RAID 5 configuration. The only thing I'm unsure about is choosing a compatible RAID controller. I need to find a RAID controller that
- Is PCIe - Is compatible with both the Power Mac and Ubuntu PPC - Does true hardware RAID - Doesn't cost a fortune!
Am I right in thinking that the card might need to be Open Firmware compatible? If it makes any difference, I plan on running the OS from a separate 5th drive. I've found this on eBay. I asked the seller and he claims it supports true hardware RAID and says the chipset is a Silicon Image SIL3124. It does seem suspiciously cheap though...
After upgrading my Ubuntu install, my RAID array is gone. The drives appear in blkid as "Linux raid member" and both have the same UUID. If I try to mount the array via fstab, I get a message that the drive is not ready or present. If I try to mount each of the two drives individually, one mounts successfully and the other reports serious errors. Issuing cat /proc/mdstat shows md_d0 as inactive. How can I re-establish my RAID array? I have the data backed up, so if I have to wipe the disks and start over, that's an option.
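Before wiping anything, it may be worth stopping the half-assembled device and reassembling from the members' superblocks. A hedged sketch (the member names are assumptions):
Code:
sudo mdadm --stop /dev/md_d0          # clear the inactive, partially assembled device
sudo mdadm --assemble --scan          # or name the members explicitly:
# sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
cat /proc/mdstat                      # check that the array is now active
If that works, appending the output of "mdadm --detail --scan" to /etc/mdadm/mdadm.conf should keep it assembled across reboots.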
I currently have a nice HTPC setup that has been upgraded from distribution to distribution since 8.xx all the way up to 9.10 now. I just moved to a new place and it feels like the right time to do a fresh install of 10.04 on the HTPC. The problem is that I have a RAID 5 array in the system that has all my pictures, videos, music, etc. The OS is installed on a separate drive that is not part of the RAID array (I have 4 drives in the system, 3 in the array, 1 for the OS). What is the general process I should follow to:
1. a fresh install of 10.04
2. do #1 while not losing my array (I don't think I would anyway).
3. what to do after install to get the array back up and running and mounted.
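For step 3, a common sequence after the fresh install looks roughly like the following (the array name and mount point here are assumptions):
Code:
sudo apt-get install mdadm
sudo mdadm --assemble --scan                                    # the existing array is found via its member superblocks
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
echo '/dev/md0  /srv/media  ext4  defaults  0  2' | sudo tee -a /etc/fstab   # adjust device, mount point and fs type
sudo mount -a
A plain reinstall onto the separate OS drive does not touch the three array members, so step 2 mostly comes down to not selecting them in the partitioner.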
I am using the 10.04.1 x64 Kubuntu live CD to install Kubuntu on my FakeRAID 0 array. I tell it not to install GRUB, as I know it is still currently broken, and the install goes flawlessly. However, on first boot using my live GRUB CD, unless I tell my computer to point to the CD it will hang (even though it is told to boot from CD first, so I'm not sure why it does). When I tell it to boot to Linux, it will not boot, saying the kernel is missing files (too much to list, sadly, and none of it I understand), then offers me a terminal to input "help" into for a list of Linux commands. Windows 7 Pro x64 works just fine. The CD was downloaded via P2P, if it matters.
I need help repairing the MBR on my RAID array. I have three disks, each with three partitions: root (sda1, sdb1, sdc1) 59GB; swap (sda2, sdb2, sdc2) 1.12GB; grub/boot (sda3, sdb3, sdc3) 298MB. I have been able to get this running and it has been working fine for several months. A few days ago, I installed 10.04 to a USB stick but did not disable the hard drives at that point, and so the MBR was overwritten. If I leave the USB stick in, it boots fine from that stick. However, now I can't get the boot from the RAID array to work correctly. I can do the following: load 10.04 from the Live CD, install mdadm, and recreate the root partition using
I can mount and view the files on md0 with no problems. It's not corrupted in any way. When I installed, I followed the directions to make each of the grub drives bootable. However I don't know for sure whether grub was installed on each partition separately or if it was installed on the assembled partition only. I have tried using
Code:
sudo grub-install /dev/sda3
and got warnings, something to the effect of:
Code:
Cannot find a device for /boot/grub
no path or device specified
Auto-detection of a filesystem module failed
specify the module with option '--module' explicitly
I have also been able to get to the grub rescue prompt but my keyboard (wireless USB) is not recognized and so I can't type anything in at that point.
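One approach that tends to sidestep the "cannot find a device for /boot/grub" warning is to reinstall GRUB to each disk's MBR from a chroot of the assembled array, rather than running grub-install against a partition. This is only a sketch; the md device names below are assumptions and need to match your layout:
Code:
sudo mount /dev/md0 /mnt                 # the assembled root array
sudo mount /dev/md2 /mnt/boot            # the array holding /boot, if it is separate
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt grub-install /dev/sda   # repeat for /dev/sdb and /dev/sdc so any disk can boot
Installing to the whole disks (/dev/sda etc.) rewrites the MBRs that the USB-stick install clobbered.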
I've created a soft RAID with 2x 1TB drives using Disk Utility (used for MythTV storage, not system, swap or anything else).
If I start the RAID array in Disk Utility and then mount it, everything works as it should, but on reboot the RAID array isn't started, so I have to go into Disk Utility, start it, then run the mount.
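The usual reason a Disk Utility-created md array doesn't come up at boot is that it has no ARRAY entry in /etc/mdadm/mdadm.conf (and no /etc/fstab line). A hedged sketch of making it persistent, assuming the array is /dev/md0:
Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # record the array so it is assembled at boot
sudo update-initramfs -u
# then add a mount to /etc/fstab, e.g. (the mount point here is just an example):
# /dev/md0  /var/lib/mythtv  ext4  defaults  0  2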
I have acquired a Dell PowerEdge 2850 server and installed Ubuntu Server onto it, only to find out we can't use Linux for its intended use, so I need to uninstall/remove Ubuntu. The server has a RAID 5 array on it.
I have an Areca hardware RAID array that I'm trying to format & partition on a fresh Ubuntu 10.04 LTS installation. The OS drive is not on the RAID card, it's entirely separate. The RAID is a 6TB volume so I realize I have to use parted to format it, not fdisk (which I've always relied on).
My problem is that I can't figure out how to get parted to like my settings. It seems like everything I try gives me the warning "Warning: The resulting partition is not properly aligned for best performance." Here's what I'm doing:
Code:
(parted) p
Model: Areca ARC-1280-VOL#00 (scsi)
[code].....
What start/end settings should I use to get a properly aligned partition? How do I know? I have tried a mix and match of 0, 0s, 1, 1s, -0, -0s, -1, -1s, and 100% for my start/end with no success.
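Starting the partition at a 1MiB boundary usually satisfies parted's optimal-alignment check regardless of the underlying stripe size. A sketch, assuming the Areca volume shows up as /dev/sdb:
Code:
sudo parted /dev/sdb
(parted) mklabel gpt
(parted) unit MiB
(parted) mkpart primary 1 100%       # start at 1MiB, end at 100% of the disk
(parted) align-check optimal 1       # confirms partition 1 is optimally aligned
(parted) quit
GPT is needed anyway for a 6TB volume, since an msdos label tops out around 2TB.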
I had some issues with my RAID6 array (with 15 disks), where 5 disks got disconnected (each set of five disks is connected to the motherboard via one SATA cable), which brought down the RAID array. I fixed this by re-adding the disks and running the array:
Code:
mdadm -R /dev/md1
However, after rebooting, the array appears inactive, and I have to go through the same motions to fix it and make it active. The array is present in /etc/mdadm/mdadm.conf, along with 2 other RAID arrays (3 arrays total):
Code:
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
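If the array comes up inactive at every boot even though an ARRAY line exists, it is worth comparing what the member superblocks report against what the config (and the copy baked into the initramfs) says. A hedged sketch:
Code:
sudo mdadm --examine --scan            # ARRAY lines as recorded on the members themselves
grep ^ARRAY /etc/mdadm/mdadm.conf      # ARRAY lines the boot-time assembly will use
# if the md1 entry is missing or its UUID differs, fix the config and rebuild the initramfs:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
A stale or missing entry in the initramfs copy of mdadm.conf is a common way for one array out of three to be left inactive at boot.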
I currently have a mirror raid array with 2x 1TB HD's (Western Digital Caviar Green WD10EADS SATA). They are both the standard 512 byte sector HD's. I just ordered a new 1TB Hard Drive (Western Digital Caviar Green WD10EARS SATA) as I want to wipe the current mirror raid and use the 3 disks in a raid 5 array.
But to my surprise, the new 1TB HD is an "advanced format" HD with 4K byte sectors... Am I screwed, or can this 4K byte sector HD be used in a software RAID 5 array along with the 512 byte sector hard drives?
(The WD hard drive says you can jumper it to work with XP which can only use 512 byte sectors... there's also a utility you can run that does something to make it compatible with XP too.... Makes me wonder if these new drives are backwards compatible?)
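These Advanced Format drives emulate 512-byte sectors, so mixing them with older drives in an md RAID 5 generally works; the main thing is to start the partition on the new drive on a 4K-aligned boundary so writes don't straddle physical sectors. A sketch, assuming the WD10EARS shows up as /dev/sdc (this wipes that drive's partition table):
Code:
sudo parted /dev/sdc mklabel msdos
sudo parted -a optimal /dev/sdc mkpart primary 1MiB 100%   # 1MiB start = sector 2048, 4K-aligned
sudo parted /dev/sdc align-check optimal 1
The jumper mentioned on the label is only an XP workaround and shouldn't be needed under Linux with an aligned partition.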
I wanted to merge my 1TB disks into a RAID 5 array. Four of them in RAID 5 is above the 2TB limit of msdos partition tables that grub2 can boot from, so I decided to set the system up from scratch, building it on GPT partitions. But it seems grub2 won't boot from a GPT partition, because it drops to grub rescue and I can't really do anything from there.
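GRUB 2 can boot from GPT on a BIOS machine, but it needs a tiny "BIOS boot" partition to embed its core image in; without one it typically ends up at the rescue prompt. A sketch, assuming the boot disk is /dev/sda (mklabel wipes the disk, so only run this on the disk you are rebuilding anyway):
Code:
sudo parted /dev/sda mklabel gpt
sudo parted /dev/sda mkpart biosgrub 1MiB 3MiB
sudo parted /dev/sda set 1 bios_grub on     # marks partition 1 as the BIOS boot partition
# create the remaining partitions and install the system, then:
sudo grub-install /dev/sda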
I just restarted my server (Ubuntu 9.04 Server, running on ESXi 4.0), and while copying files onto the server using Samba I got strange problems and the connection was lost. When I rebooted the whole system, both ESXi and Ubuntu Server, I found problems on my RAID disk.
In the directory where the new files were added there are a lot of files, but many of them do not show any info except their name:
Both mirror disks are still functioning and I can still add/delete files, from the server, from other LINUX systems and from other Windows systems via samba.
So my server's 7 HDs in RAID 5 were all working well until one of them died. The HD that died sort of works: it can read about half a file, but it also freezes on the benchmark test in Disk Utility. Unfortunately, when I take it out, on boot it says "The drive for /media_kbt is not ready or present, press S to skip or M for manual recovery." I hit S and then go to Disk Utility, but I can't start or add disks to the array.
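A RAID 5 array can run degraded with one member missing, but it sometimes has to be started explicitly. A hedged sketch of getting it running and then replacing the dead disk (the device names are assumptions):
Code:
cat /proc/mdstat                         # see whether the array is listed as inactive
sudo mdadm --assemble --scan --run       # --run starts the array even though one member is missing
# once a replacement drive is installed and partitioned:
sudo mdadm /dev/md0 --add /dev/sdh1      # the rebuild starts automatically
With 7 drives and one already failing, copying the data elsewhere before the rebuild is probably worth the time.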
I had run out of space on a 30GB raid1 partition so I rebuilt it as I had three 30GB partitions on a pair of 120GB drives. I stopped the array and used gparted to create a single 120GB partition on one of the disks. I then copied the data from all three partitions on the other drive onto this newly cleared space, before zapping that too. So I now had two drives with identical single partitions, one of which had all my data and lots of space. I used mdadm to create a new array from the two disks, which looked good. It immediately started to resync the data onto the second drive. However, when I tried to mount the new array this is what I get:
Code:
bob@zaphod:~$ sudo mount -t ext4 /dev/md0 /media/raid/
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or helper program, or other error
[code]....
I can mount the drives independently and, good news, the data is there! However, how can I get around this problem and mount the RAID array?
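One thing worth checking is where the new array's metadata sits: if mdadm placed its superblock at the start of the partitions, the filesystem seen through /dev/md0 is shifted relative to the raw partition, which would explain a bad superblock on md0 while each disk still mounts on its own. A diagnostic sketch (the member name is an assumption):
Code:
sudo mdadm --detail /dev/md0             # note the metadata version
sudo mdadm --examine /dev/sda1           # superblock location and data offset on a member
sudo dumpe2fs -h /dev/md0 | head         # is there an ext4 superblock where md0 expects it?
Comparing the metadata version of the new array with the old one (0.90 lives at the end of the device, 1.1/1.2 at the start) should show whether that is what happened.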
What is the best way to install Windows and Linux on a two-hard-disk array? With fakeraid there are no problems in Windows, but Linux installation is almost impossible (I've tried, unsuccessfully). With software RAID, would it be impossible to share files between Windows and Linux? And finally, hardware RAID is possible, but cheap controllers have low performance. Is there any other way (apart from spending a lot of $$ on an Adaptec controller)?
I'm currently experiencing some serious issues with WRITE performance on a RAID-1 array. I'm running Ubuntu 10.04 64-bit server with the latest updates. To evaluate the performance I ran the following test: [URL]... (great article btw!). Using dd to measure, write performance is only 8.7 MB/s; read is great though, at 74.5 MB/s. The tests were run straight after rebooting, and I have not (YET!) done any kernel tuning or customization; I'm running the default server package of the Ubuntu kernel. Here's the motherboard in the server: [URL]... with a beta BIOS to support drives over 300GB.
[code]...
As you can see from the bo column, there is definitely something stalling. As per the top output, the %wa (waiting for I/O) is always around 75%, and, as shown above, writes are stalling. The CPU is basically idle all the time. The hard drives are quite new, and smartctl (smartmontools) does not detect any faults.
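For reference, a typical dd-based measurement plus a quick check of whether the array itself is busy looks roughly like this (the mount point and md device names are assumptions):
Code:
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=1024 conv=fdatasync   # write test, forces data to disk
dd if=/mnt/raid/testfile of=/dev/null bs=1M                             # read test
cat /proc/mdstat                                                        # is a resync/recovery running in the background?
sudo mdadm --detail /dev/md0 | grep -i bitmap                           # an internal write-intent bitmap can cost write speed
A resync in progress or a very small bitmap chunk size are two array-level causes of slow writes worth ruling out before any kernel tuning.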
I went to set up my Linux box and found that the OS drive had finally died. It was an extremely old WD Raptor drive in a hot box full of drives, so it was really only a matter of time before it just quit on me. Normally this wouldn't be such a big deal; however, I had just recently constructed an md RAID5 array of three 1TB disks to act as an NFS mount for basically all of my important files. Maybe 2-3 weeks before the failure I had finished moving all of my most important stuff onto that array. Now, I know the array is intact; all the required data is sitting on those disks. Since only the OS-level disk failed on me, I should be able to get a new disk in there, reinstall Ubuntu, and then rebuild that array. How exactly do I go about doing that with mdadm? Do I create the array from the /dev block devices like when I initially built the array?
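The key point is to assemble, not create: re-running --create on disks that already hold data risks overwriting the existing superblocks, whereas --assemble just reads them. A rough sketch after the reinstall:
Code:
sudo apt-get install mdadm
sudo mdadm --assemble --scan                                    # rebuilds /dev/mdX from the superblocks on the members
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf  # so it reassembles on every boot
sudo update-initramfs -u
Then re-add the fstab entry and the NFS export for the mount point and it should be back where it was.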
I am running Ubuntu 10.10 64-bit. I have a RAID array consisting of two 1TB HDDs, controlled by my on-board RAID controller, and a dual-boot of Ubuntu 10.10 and Windows. The RAID array is mapped in /dev/mapper, and here is the output of sudo dmraid -ay:
Code:
RAID set "pdc_dedfhcfdee" already active
RAID set "pdc_dedfhcfdee1" already active
RAID set "pdc_dedfhcfdee2" already active
RAID set "pdc_dedfhcfdee3" already active
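The numbered entries are the partitions inside the mapped array, so using the data from Linux is just a matter of mounting whichever of them holds it. A minimal example (the partition and mount point chosen here are assumptions):
Code:
sudo mkdir -p /mnt/raid
sudo mount /dev/mapper/pdc_dedfhcfdee1 /mnt/raid   # let mount auto-detect the filesystem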
I got my system up and running with GRUB installed on my primary hard drive. I have now installed 2 additional drives and would like to combine them into a RAID 1 array. I can only find tutorials on how to do this during the initial install; I cannot find one that tells me how to do it after the install. Is there a way?
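Creating a RAID 1 from two extra drives on an already-installed system is straightforward with mdadm; the installer is not required. A sketch, assuming the new drives each carry a single partition, /dev/sdb1 and /dev/sdc1 (this destroys anything on them):
Code:
sudo apt-get install mdadm
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
echo '/dev/md0  /mnt/mirror  ext4  defaults  0  2' | sudo tee -a /etc/fstab   # example mount point
sudo mount -a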
In Zentyal, it showed me the active drives, whether the array was degraded, and whether any syncing was happening while the array was building. How can I check that without Zentyal? Is there a terminal command or an application I can install to tell me the same information?
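For mdadm-managed arrays, the same information is available from the kernel and from mdadm itself. A quick sketch (the md device name is an assumption):
Code:
cat /proc/mdstat                     # active members, [UU]/[U_] degraded markers, resync progress
sudo mdadm --detail /dev/md0         # per-array state, failed/spare devices, rebuild percentage
watch -n 5 cat /proc/mdstat          # live view while a rebuild or sync is running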
I have an ubuntu 10.04 machine that I use primarily as a file server. I have a RAID5 array built with mdadm from 3 component disks that worked properly until a recent upgrade (I'm not sure exactly what broke it though). The array is /dev/md0 and is set to mount at /var/media on bootup. *Now*, when the system cold boots it hangs partway through the bootup sequence and throws the following error:
The disk drive for /var/media is not ready yet. Press S to skip ... Once I "S"kip this manually, I can see that lower in the boot sequence mdadm gets called and assembles the drive, and once fully booted into the system I can simply do a "mount -a" and the array mounts properly. So my gut feeling is that some portion of one of the upgrades changed the order in which things are called, and now the mdadm assemble is not triggered until AFTER the system tries to mount the drives. My problem is that I don't know the stuff that controls the boot sequence well enough to dig in the right place.
As a workaround I can remove that entry from /etc/fstab, but then (of course) the system won't auto-mount the array. It's better than the boot process completely hanging, because at least THIS I can fix remotely, but I'd really like to know
1) why this broke in an upgrade and is it a known problem? 2) how to get it back to where it auto-assembles and then auto-mounts the array on bootup.
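A common fix for this ordering problem is to make sure the boot-time copy of the mdadm configuration actually knows about the array, so it is assembled from the initramfs before the fstab mounts are attempted; a nobootwait fstab option is a fallback that at least stops the boot from hanging. A hedged sketch:
Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # ensure an up-to-date ARRAY line exists
sudo update-initramfs -u                                         # bake it into the initramfs used at boot
# fallback: let the boot continue even if /var/media is late (mountall option on 10.04):
# /dev/md0  /var/media  ext4  defaults,nobootwait  0  2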
I am trying to use three 3TB Western Digital drives in a RAID 5 software array. The trouble is that the array is created with only 1.5 TB of capacity, rather than the expected 6 TB.
Here are the commands and output:
$ sudo dmraid -f isw -C BackupFull6 --type 5 --disk /dev/sde,/dev/sdf,/dev/sdg --size=5589G
Create a RAID set with ISW metadata format
RAID name:  BackupFull6
RAID type:  RAID5
RAID size:  5589G (11720982528 blocks)
RAID strip: 64k (128 blocks)
DISKS:      /dev/sde, /dev/sdf, /dev/sdg
About to create a RAID set with the above settings. Continue ? [y/n] :y
$ sudo dmraid -s
*** Group superset isw_cdjhcaegij
--> Subset
name   : isw_cdjhcaegij_BackupFull6
size   : 3131048448
stride : 128
type   : raid5_la
status : ok
subsets: 0
devs   : 3
spares : 0
So I cannot understand why the size of the created array is only 3131048448 blocks, or about 1.5 TB. The first command seemed to imply it was going to create an array of 5589GB.
System is:
Description: Ubuntu 10.04.2 LTS
Release: 10.04
Codename: lucid
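The truncated size suggests the ISW (Intel fakeraid) metadata that dmraid writes is not coping with members larger than 2TB. If booting from this array via the BIOS RAID isn't required, plain Linux software RAID handles 3TB members without that limit; a hedged alternative sketch using the same drives:
Code:
sudo dmraid -r -E /dev/sde /dev/sdf /dev/sdg        # erase the ISW metadata left by the failed attempt
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
cat /proc/mdstat
sudo mdadm --detail /dev/md0 | grep 'Array Size'    # should report roughly 6 TB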