Round two: I am trying to install a RAID 1 array on my system. I already have another RAID 1 array in there. I am using the BIOS RAID option to set up the array. Here's what dmraid -r tells me:
I need to mount my raid array on CentOS 5.2 samba server.
Here are my hardware specs:
Motherboard: Tyan S2510 LE, dual PIII
CPUs: Intel PIII 850 MHz, Socket 370
Memory: 4 GB Crucial 133 ECC SDRAM
OS drives: 2 x IBM Travelstar 6.4 GB 2.5" hard drives (low heat/noise)
Storage: 4 x Seagate 500 GB IDE 7200 rpm
RAID controller: 3Ware 7500-12 (RAID 5, 66 MHz PCI bus)
NIC: 3COM 3C996B-T gigabit NIC (66 MHz PCI bus)
I have the 2 IBMs set as RAID 1 (mirror) and the 4 Seagates as RAID 5 (1.5 TB). I have installed the OS with minor problems (the motherboard doesn't like the 2.6.18-128.1.14.el5 kernel, so I removed it from my grub.conf).
My problem is mounting the RAID array. I have done the following: partitioned with fdisk (fdisk /dev/sdb), then created the filesystem with: mkfs.ext3 -m 0 /dev/sdb
The hard drive was formatted with the ext3 file system, but I have mounted it as ext2 as I don't want journaling to occur. I then edited my /etc/fstab like this: .....
Then: mount -a
When I go into my "home" directory and type ls -l, I get the following:
[root@hydra home]# ls -l
total 24
drwx------  2 zog  zog   4096 Jun 23 15:50 zog
lrwxrwxrwx  1 root root     6 Jun 23 15:46 home -> /home/
drwxrwxrwx  2 root root 16384 Jun 23 15:34 lost+found
drwxr-xr-x  2 root root  4096 Jun 23 17:18 tmp
Why is my home directory showing under home?
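For reference, a minimal sketch of the relevant /etc/fstab line; the device name /dev/sdb and the /home mount point are assumptions taken from the commands above:

Code:
# /etc/fstab -- mount the ext3 filesystem as ext2 so its journal is ignored
/dev/sdb    /home    ext2    defaults    1 2

The "home -> /home/" entry in the listing looks like a symlink that was created inside the newly mounted filesystem and points back at its own mount point; removing it (rm /home/home) should clean up the confusing listing.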
I have an Ubuntu 10.04 machine that I use primarily as a file server. I have a RAID 5 array built with mdadm from 3 component disks that worked properly until a recent upgrade (I'm not sure exactly what broke it, though). The array is /dev/md0 and is set to mount at /var/media on bootup. *Now*, when the system cold boots, it hangs partway through the bootup sequence and throws the following error:
The disk drive for /var/media is not ready yet
Press S to skip ...
Once I "S"kip this manually, I can see that later in the boot sequence mdadm gets called and assembles the drive, and once fully booted into the system I can then simply do a "mount -a" and the array mounts properly. So my gut feeling is that some portion of one of the upgrades changed the order in which things are called, and now the mdadm assemble is not triggered until AFTER the system tries to mount the drives. My problem is that I don't know the stuff that controls the boot sequence well enough to dig in the right place.
As a workaround I can remove that entry from /etc/fstab, but then (of course) the system won't auto-mount the array. It's better than the boot process completely hanging because at least THIS I can fix remotely, but I'd really like to know:
1) Why did this break in an upgrade, and is it a known problem?
2) How do I get it back to where it auto-assembles and then auto-mounts the array on bootup?
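One likely fix, assuming the array metadata is intact: make sure the array is recorded in /etc/mdadm/mdadm.conf and baked into the initramfs, so it is assembled before mountall tries to mount /var/media. A minimal sketch (the md0 device comes from the post; everything else is the stock Ubuntu 10.04 tooling):

Code:
# record the array in mdadm.conf so it is assembled early in boot
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'

# rebuild the initramfs so assembly happens before filesystems are mounted
sudo update-initramfs -u

# optional safety net: add "nobootwait" to the /var/media options in /etc/fstab
# so a missing array no longer hangs the boot under mountall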
I'm running 10.04 x86 server with a really simple installation on a single 250GB boot disk. I then have a RAID5 array as /dev/md0 (set up using mdadm with 4 x 2TB disks). All is working well. My mdadm.conf file looks like this:
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
[code]....
If I were to lose the boot disk and needed to remount the RAID array on a fresh installation, what steps would I need to go through? My assumption is that the superblocks on the RAID disks will be used and I don't need to keep any additional information - is this right?
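For what it's worth, a rough sketch of what recovery on a fresh install usually looks like, assuming standard mdadm superblocks on the member disks (the device name below is a placeholder):

Code:
# the superblocks live on the member disks themselves; inspect one to confirm
sudo mdadm --examine /dev/sdb1

# scan all disks and assemble any arrays found from their superblocks
sudo mdadm --assemble --scan

# print the ARRAY line to put into the new system's mdadm.conf
sudo mdadm --detail --scan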
I have a large RAID array of 12 TB attached to one of my Ubuntu server machines. The RAID volume is formatted with NTFS. The problem is that I cannot mount this volume in Ubuntu. I can read it normally if I attach it to a Windows machine. This is the output from "sudo fdisk -l":
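Two things worth checking on a volume this size, sketched below: fdisk cannot display GPT partition tables (and anything over 2 TB has to use GPT), and NTFS needs the ntfs-3g driver for read/write mounting. The device name and mount point are assumptions:

Code:
# fdisk does not understand GPT; use parted to see the real partition table
sudo parted -l

# make sure the NTFS driver is installed, then mount explicitly
sudo apt-get install ntfs-3g
sudo mkdir -p /mnt/raid
sudo mount -t ntfs-3g /dev/sdb1 /mnt/raid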
I'm not able to spin down the hard disks in my RAID array. I did hdparm -S 240 /dev/sdb and the same for the second HDD. Smaller values like 2 minutes work. I don't have log files on these disks. How can I determine which processes are accessing these disks? I have apache and samba running. Nobody is accessing the server.
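One way to see who is touching the disks, sketched below: block_dump logs every block I/O to the kernel log on kernels of this era, and lsof/iotop show open files and per-process I/O. The mount point path is a placeholder:

Code:
# log every block-device access to the kernel log for a short while
echo 1 | sudo tee /proc/sys/vm/block_dump
# ...wait a few minutes, then see which process and device show up
dmesg | tail -n 50
echo 0 | sudo tee /proc/sys/vm/block_dump

# alternatively: list open files on the array, and watch live per-process I/O
sudo lsof +D /path/to/raid/mount
sudo iotop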
I have a RAID 1 array which I set up on a CentOS box. I can configure the array fine; the problem is that every time I restart the machine I can no longer see the array and have to create it all over again. I tried doing a few searches on it but have come up with nothing so far.
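If this is an mdadm software array, the usual cause is that the array was never recorded in /etc/mdadm.conf, so nothing reassembles it at boot. A minimal sketch for CentOS (the array name is an assumption):

Code:
# append the current array definition to mdadm.conf so it survives reboots
mdadm --detail --scan >> /etc/mdadm.conf

# verify it can be assembled from that config
mdadm --assemble --scan
cat /proc/mdstat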
I've been having trouble with software RAID. In particular, the RAID array can no longer be assembled after reboots. The config is CentOS 5 with 4 SATA disks (one 160 GB containing the OS, no RAID, and three 2 TB disks configured as a RAID 5 array with no spare drive). These drives were configured in anaconda and all seemed to go well (the drive and its LVM partitions worked and it finished rebuilding overnight). A couple of reboots later the drives cannot be assembled anymore and the machine won't boot. The error message says:
mdadm: /dev/md0 assembled from 1 drive and 1 spare - not enough to start the array.
Of course there are 3 drives and no spares in the array as configured. Manually starting the array with mdadm --assemble --scan gives the same message, as does assembling the array by specifying the individual members. /proc/mdstat does recognize the 3 drives, and when I look at the partition tables in fdisk they show as software RAID. What could be wrong, or what steps should I take to diagnose it? I tried configuring the RAID drives manually before going the anaconda route. Also, does anyone know how I can edit the /etc/fstab file to disable them so the machine will at least boot? The (Repair filesystem) shell has the / drive mounted read-only.
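On the fstab point specifically: the repair shell's read-only root can be remounted read/write long enough to comment out the offending entry. A minimal sketch (which fstab line to comment depends on what LVM volume sits on the array):

Code:
# make / writable inside the (Repair filesystem) shell
mount -o remount,rw /

# comment out the entry for the filesystem on the RAID/LVM volume
vi /etc/fstab    # prefix that line with '#'

# drop back to read-only and reboot
mount -o remount,ro /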
Our server is a CybertronPC I2XV9080 Imperium Tower. It is equipped with a Supermicro X7DVL-I motherboard and four 750 GB SATA2 RAID-edition hard drives in a RAID 5 array. We tried to install CentOS on the RAID 5 array with device-mapper as the LVM backend. In the BIOS, SATA RAID was enabled and the ICH RAID Code Base option was set to [Intel].
Intel Matrix Storage Manager Option ROM V5.6.4.1002 ESB2 RAID

ID  Name    Level   Strip  Size    Status  Bootable
0   Raid5   Raid 5  64KB   80GB    Normal  Yes
1   Raid_5  Raid 5  64kB   2000GB  Normal  Yes
[code].....
Can I have multiple RAID levels across the same array, or would that lead to problems like the above? Is the root cause of my problem the fact that Intel RAID 5 is not supported on Linux, as suggested by the following link: http:[url]....
I have a home samba server with a 3ware Escalade 8506-8. I have 5 x 500 GB hard drives in a RAID 5 array. Recently, my 8506 died and I need to get a new one. However, I saw a 3ware Escalade 9500S-12 on eBay for about $20 more than a replacement 8506-8.
My question is, if I put my drives on the 9500S, will it recognize my existing RAID array? Or will it want to build a new RAID array and format all of my data? Hope I have asked this question clearly; a little short on sleep this week.
I have a 10 x 2 TB disk array that I'm trying to build into a single software RAID 5. I have tried this twice now. The first time it made it to 58.7%, then the machine locked up and the array would not restart after a reboot. On my second try all was looking good until about 50%, when I noticed that the speed had dropped by half and that ksoftirqd/2 was taking up a lot of CPU (about 90%); md0_resync and md0_raid5 were also taking 60-90%, whereas when the build started they took 7%. When I do a dmesg I see a lot of the message "compute_blocknr: map not correct".
For a little info on the physical setup: this is running on an Atom 510 with 2 GB of memory, and the drives are connected to an Addonics 4-port RAID 5 / JBOD SATA II PCI controller using the sil3124 chipset. I'm using 2 Addonics 5x1 SATA port multipliers connected to the controller to get the 10 drives attached. All drives show up and don't seem to have any issues. I'm running a fully updated (as of 3/20/10) CentOS 5.4.
I will let this continue to run overnight, but I expect it to be locked up by morning if it follows what the last attempt did.
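For monitoring the build, a small sketch: /proc/mdstat shows the resync position and current speed, and the md speed limits can be throttled to see whether the lockup is load-related (the limit value below is just an example):

Code:
# watch the resync progress and current speed
watch -n 5 cat /proc/mdstat

# current kernel-wide resync speed limits (KB/s)
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max

# throttle the rebuild to reduce load on the port multipliers (example value)
echo 20000 > /proc/sys/dev/raid/speed_limit_max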
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8 TB, as per the attached screenshot. Now with the following drives, I expected something more like:
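It sounds like Disk Utility is building a striped set, whose size is limited by the smallest member times the disk count. For a pure concatenation that uses each disk's full capacity, mdadm's linear mode may be closer to the JBOD described; a minimal sketch with placeholder device names:

Code:
# concatenate the disks end-to-end (linear/JBOD) rather than striping them
sudo mdadm --create /dev/md0 --level=linear --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# put a filesystem on the result and mount it
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/backup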
I've just set up a new CentOS 5.3 server with a 3-disk software RAID 5 array. I've set up other software RAID 5 arrays on this same hardware while testing and had no trouble; I only just installed 3 new drives and performed a new install from scratch.
Hardware is: 4800XP X2 64-bit, 2GB RAM, Albatron KI-690G mainboard with Marvel SATA controller (I think) - 4 ports.
SATA port 0 is system drive (OCZ vertex 32GB SSD)
SATA port 1 is Western Digital "green" 1TB 8MB cache SATA 2
SATA port 2 is Western Digital "green" 1TB 8MB cache SATA 2
SATA port 3 is Western Digital "green" 1TB 8MB cache SATA 2
Ports 1-3 are in software RAID md0 (RAID 5)
All of this was configured in the GUI setup, with LVM on top of software RAID 5 mounted on /var. Partition size is ~1.9TB
Trouble is, I get all sorts of "hardware failure" messages at bootup and the MD driver reports it was only able to bring up 2 out of 3 drives in the RAID set... however, the RAID set formatted fine during setup?
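Some quick diagnostics that usually narrow this down, assuming the array is /dev/md0 and the members are the three WD partitions (device names are placeholders):

Code:
# which member is missing, and why
cat /proc/mdstat
mdadm --detail /dev/md0

# check the superblock on each member partition
mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1

# look for controller/drive errors behind the "hardware failure" messages
dmesg | grep -i -E 'ata|sata|error|fail'
smartctl -a /dev/sdd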
So I didn't notice when I set up my CentOS 5.5 server that I left / as RAID 0 on md1. All the rest are RAID 1. Is there a way I can convert the array to RAID 1 without risking data loss? I'm glad I caught this before I set up any other services. I've only set up smb so far...
[root@ftpserver ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1               16G  3.0G   13G  20% /
I've tried to install Fedora 11, both 32- and 64-bit, on my main machine. It could not install, as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am currently running Intel RAID software under W7. It works fine. But I'm wondering, when I attempt to install F11, is my current RAID set-up causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?
Yesterday I created a RAID 5 array /dev/md0 consisting of 5 hard disks, named sda through sde at the time of creation. After that I stored some data on the array without any difficulties, then shut down the computer. Early this morning when starting the computer I got a message that /dev/md0 was not ready to be mounted. So I checked the RAID array and discovered that the enumerator had been messing with the hard disks: disk sda was now sdc, and so on. After I rebooted, the hard disks got their original names again (sda was sda again), and when I mounted the array no problems occurred. So it seems that the order in which the hard disks are enumerated influences the availability of the RAID array. Is there a way to avoid this kind of problem with a RAID array?
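mdadm can assemble by the UUID stored in each member's superblock, so device renaming does not matter; a minimal mdadm.conf sketch (the UUID shown is obviously a placeholder):

Code:
# find the array's UUID from any member's superblock
mdadm --examine /dev/sda | grep UUID

# /etc/mdadm.conf -- assemble by UUID instead of relying on the sda..sde names
DEVICE partitions
ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx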
Consider the following setup: an Ubuntu system installed on a separate SSD for speed, and an Ubuntu software RAID array consisting of X physical HDDs for storage (RAID 6 or RAID 10). The RAID setup is done during system install. If I suffer a total crash of the SSD and lose my system, will I be able to "reconnect" to the RAID array from a new system disk, even though the "mother system" of the software RAID is lost? If yes, are there any particular config or system files I need to back up to be able to rescue the array, or will it just be recognized "out of the box" when reinstalling Ubuntu?
I am using Ubuntu 10.04 x64. I am not trying to install Ubuntu on a RAID 1 drive like all of the guides are for. I have a RAID 1 array that I am using for data storage. In Windows it shows as a single array just fine. In Linux it shows as 2 separate drives. I don't care how they show up, to be honest; I just need data written to one drive to be written to the other automatically as well so my RAID isn't screwed up. Looking through different articles and forums I find a lot of stuff saying that it should show up under /dev/mapper/dxxx or something under /dev/mapper. All that shows up there for me is a device called control, which doesn't seem to do anything.
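If this is motherboard/BIOS "fakeraid", Linux needs dmraid to activate the mapping that Windows gets from the vendor driver. A rough sketch, assuming dmraid supports the controller's metadata format:

Code:
# install and activate the BIOS RAID mapping
sudo apt-get install dmraid
sudo dmraid -ay

# the mirrored set should now appear under /dev/mapper; mount that, not /dev/sdX
ls -l /dev/mapper/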
I just installed Ubuntu 10.10 64-bit and wanted to get access to my nvidia RAID array. This array is working and is NTFS formatted, but it wasn't showing up through normal means in Ubuntu (for example, the NTFS Configuration Tool didn't display it). Here's what the system showed.
Code:
root@hermes:~# ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 59 2010-11-03 22:39 control
lrwxrwxrwx 1 root root      7 2010-11-03 22:42 nvidia_dadijiag -> ../dm-0
[code]....
Is my mirror still in effect, or did I just mount one of the specific drives from the mirror?
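A quick way to tell, sketched below: dmraid can report the set's status, and anything mounted from a raw /dev/sdX device bypasses the mirror, so the mount should reference the /dev/mapper node (or one of its partitions):

Code:
# show the nvidia RAID set and whether it is intact
sudo dmraid -s

# check what is actually mounted; it should reference /dev/mapper/..., not /dev/sdX
mount | grep -E 'mapper|ntfs'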
I'm going a little bit crazy. I can't seem to remove my RAID 1 arrays. Any suggestions? I don't need to save the data; the drives are empty. I'm upgrading to 4 x 2TB drives. Running Lucid Lynx server.
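Assuming these are mdadm arrays (device names below are placeholders), tearing one down is usually: stop the array, then wipe the RAID superblock from each member so nothing reassembles it at boot:

Code:
# stop the array
sudo mdadm --stop /dev/md0

# erase the md superblock on each former member
sudo mdadm --zero-superblock /dev/sda1 /dev/sdb1

# remove any matching ARRAY line from /etc/mdadm/mdadm.conf, then refresh the initramfs
sudo update-initramfs -u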
I installed Debian 5.0.3 (a backport with the .34 kernel) because my server hardware (Dell PowerEdge R210) needs special firmware and drivers. The installation went quite smoothly; I put the system on a RAID 1 array with about 500 GB of space. As I said, the installation went well. However, it doesn't boot! No GRUB, nothing.
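With a RAID 1 root, GRUB has to be written to the MBR of each physical member disk; if the installer only wrote it to the array device, neither disk's boot sector has a loader. A sketch from a rescue environment chrooted into the installed system, with the usual sda/sdb member names assumed:

Code:
# install GRUB to the MBR of both RAID 1 members, then regenerate its menu
grub-install /dev/sda
grub-install /dev/sdb
update-grub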
Just set up Debian 8 (LXDE) a few weeks ago. The RAID 10 array was preexisting.
Was working well. After booting I would need to go to the save as then would need to enter the root password and everything would be good.
Can't access the array.
Used to use the command $ mount /dev/dm-0 /home/myspace/folder under Debian 7.6 to mount the array (it no longer works). blkid lists a /dev/md0, but instead of a UUID it has a PTUUID.
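A PTUUID in blkid output means blkid is seeing a partition table on /dev/md0 rather than a bare filesystem, so the thing to mount may be a partition of the array rather than the array itself. A quick sketch for checking (device names are assumptions):

Code:
# is the array assembled and healthy?
cat /proc/mdstat

# does md0 carry partitions? if so, a /dev/md0p1-style node is what gets mounted
ls -l /dev/md0*
blkid /dev/md0 /dev/md0p1

# mount the partition (or the array itself if it holds the filesystem directly)
mount /dev/md0p1 /home/myspace/folder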
I installed RC1 and never touched the disks that were previously RAIDed, and the attached image is what I see for my RAID array that was previously working fine.
I want to make a RAID 5 array with 4 x 2TB hard drives. One of the drives is full of data, so I will need to start with 3 disks and then, once I copy the data from the 4th onto the array, add the 4th drive. This will be my first experience with RAID. I've spent a few hours searching for info, but most of what I have found is a bit over my head.
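For what it's worth, the usual shape of that procedure with mdadm is sketched below; device names are placeholders, and the reshape step can take many hours:

Code:
# create a 3-disk RAID 5 array, copy the data from the 4th drive onto it, then...
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# ...add the now-empty 4th drive and reshape the array to use it
sudo mdadm --add /dev/md0 /dev/sde
sudo mdadm --grow /dev/md0 --raid-devices=4

# when the reshape finishes, grow the filesystem to the new size (ext filesystem assumed)
sudo resize2fs /dev/md0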
I'm just about to be given a Power Mac G5 (Late 2005) Dual 2.0GHz. I think this was the last G5 produced. I plan on using it as a file server/NAS and will probably run 10.04 LTS (or maybe 8.04 LTS). I would install a SATA RAID controller and run 4 x 1TB drives in a RAID 5 configuration. The only thing I'm unsure about is choosing a compatible RAID controller. I need to find a RAID controller that:
- Is PCIe
- Is compatible with both the Power Mac and Ubuntu PPC
- Does true hardware RAID
- Doesn't cost a fortune!
Am I right in thinking that the card might need to be Open Firmware compatible? If it makes any difference, I plan on running the OS from a separate 5th drive. I've found one on eBay. I asked the seller and he claims it supports true hardware RAID and says the chipset is a Silicon Image SIL3124. It does seem suspiciously cheap though...
After upgrading my Ubuntu install, my RAID array is gone. The drives appear in blkid as "Linux raid member" and both have the same UUID. If I try to mount the drive via fstab, I get a message that the drive is not ready or present. If I try to mount each of the two drives individually, one mounts successfully and the other reports serious errors. Issuing cat /proc/mdstat shows md_d0 as inactive. How can I re-establish my RAID array? I have the data backed up, so if I have to wipe the disks and start over, that's an option.
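Before wiping anything, it may be worth stopping the half-assembled md_d0 and reassembling the array explicitly from its members; a sketch with placeholder device names:

Code:
# stop the inactive, partially assembled device
sudo mdadm --stop /dev/md_d0

# reassemble from the members that blkid reports as "Linux raid member"
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1

# record the result so it assembles automatically from now on
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
sudo update-initramfs -u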
I currently have a nice HTPC setup that has been upgraded from distribution to distribution since 8.xx all the way up to 9.10 now. I just moved to a new place and it feels like the right time to do a fresh install of 10.04 on the HTPC. The problem is that I have a RAID 5 array in the system that has all my pictures, videos, music, etc. The OS is installed on a separate drive that is not part of the RAID array (I have 4 drives in the system, 3 in the array, 1 for the OS). What is the general process I should follow to:
1. a fresh install of 10.04
2. do #1 while at the same time not losing my array (don't think I would anyway).
3. what to do after install to get the array back up and running and mounted.
I am using the 10.04.1 x64 Kubuntu live CD to install Kubuntu on my FakeRAID 0 array. I tell it not to install GRUB, as I know it is still currently broken, and the install goes flawlessly. However, on first boot using my live GRUB CD, unless I tell my computer to point to the CD it will hang (it is told to boot from CD first, so I'm not sure why it does). When I tell it to boot to Linux, it will not boot, saying the kernel is missing files (too much to list, sadly; all of it I do not understand), then offers me a terminal to input "help" into for a list of commands. Windows 7 Pro x64 works just fine. The CD was downloaded via P2P, if it matters.
I need help repairing the MBR on my RAID array. I have three disks, each with three partitions: root (sda1 sdb1 sdc1) 59GB, swap (sda2 sdb2 sdc2) 1.12GB, grub/boot (sda3 sdb3 sdc3) 298MB. I have been able to get this running and it has been working fine for several months. A few days ago, I installed 10.04 to a USB stick but did not disable the hard drives at that point, and so the MBR was overwritten. If I leave the USB stick in, it boots fine from that stick. However, now I can't get the boot from the RAID array to work correctly. I can do the following: load 10.04 from the Live CD, install mdadm, and recreate the root partition using
I can mount and view the files on md0 with no problems. It's not corrupted in any way. When I installed, I followed the directions to make each of the grub drives bootable. However I don't know for sure whether grub was installed on each partition separately or if it was installed on the assembled partition only. I have tried using
Code:
sudo grub-install /dev/sda3
and got warnings to the effect of
Code:
Cannot find a device for /boot/grub
no path or device specified
Auto-detection of a filesystem module failed
specify the module with option '--module' explicitly
I have also been able to get to the grub rescue prompt, but my keyboard (wireless USB) is not recognized there, so I can't type anything in at that point.
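One recovery path that avoids the rescue prompt entirely, sketched below: boot the Live CD, assemble the arrays, chroot into the installed system, and rewrite GRUB to the MBR of every disk so any of the three can boot. The disk names come from the post, but the exact array layout for /boot is an assumption:

Code:
# from the 10.04 Live CD, after assembling the arrays with mdadm
sudo mount /dev/md0 /mnt                  # root array
sudo mount /dev/md1 /mnt/boot             # grub/boot array, if it is separate
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt

# reinstall GRUB to the MBR of each member disk, then rebuild its config
grub-install /dev/sda
grub-install /dev/sdb
grub-install /dev/sdc
update-grub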