Hardware :: System Device Name For A RAID Of SAS Drives?
Jun 29, 2010: For SATA drives in AHCI mode, the device names are /dev/sdX; what about a RAID-0 or RAID-1 array of SAS drives? Are the names the same?
I just installed Debian 5.0.4 successfully. I want to use the PC as a file server with two drives configured as a RAID 1 device. Everything with the RAID device works fine; the only question I have relates to the GRUB 0.97 bootloader. I would like to be able to boot my server even if one of the disks fails or the filesystem containing the OS becomes corrupt, so I configured only the data partitions as a RAID 1 device; that way the second disk should hold a copy of the last stable installation, similar to this guide: [URL]...
[Code]...
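For the boot-resilience part, the usual GRUB 0.97 approach is to install the bootloader to the MBR of both disks so either one can boot alone. A minimal sketch, assuming the disks are /dev/sda and /dev/sdb with /boot on the first partition (each disk is mapped to hd0 in turn, since the surviving disk will be hd0 at boot):
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)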
I am trying to set up an mdadm RAID in a new PC that I am building for home theatre. The machine boots just fine from /dev/sdc running Ubuntu 9.10. However, in GParted /dev/sda and /dev/sdb show up as part of /dev/mapper/sil_ajbicfacbaej. Both /dev/sda and /dev/sdb used to be part of a Silicon Image hardware RAID on a previous machine. I would like to use them as a new mdadm RAID on this new machine; the old hardware card was really quite slow. The drives are now plugged into the motherboard and should be much faster there.
fdisk -l shows this:
~$ sudo fdisk -l
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
[code]....
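Since the disks still carry the old Silicon Image fakeraid metadata, dmraid keeps claiming them. A hedged cleanup sketch (destructive to the old metadata; confirm the device names against the fdisk output first):
sudo dmraid -rE /dev/sda    # erase the sil_* fakeraid metadata block
sudo dmraid -rE /dev/sdb
# then partition the disks and build the new array; the level and partitions here are assumptions:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1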
I have (had) Debian Testing running on a 250GB IDE hard drive, partitioned normally.
I also have 4x 1TB drives in a raid 5 using mdadm, and 2x 500GB drives in a raid 1 also with mdadm.
I put the two arrays in lvm using:
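Presumably something like the following; this is a reconstruction, with the volume-group name 'storage' taken from the paths mentioned below and the md device names assumed:
sudo pvcreate /dev/md0 /dev/md1        # md0 = the RAID 5 array, md1 = the RAID 1 (assumed names)
sudo vgcreate storage /dev/md0 /dev/md1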
I then used "lvcreate" to make storage/backup 300GB, and the rest went to storage/media (approx. 2TB usable). I put an xfs filesystem on both and mounted them.
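That step would have been roughly the following (a reconstruction; the LV names are inferred from the storage/backup and storage/media paths):
sudo lvcreate -L 300G -n backup storage
sudo lvcreate -l 100%FREE -n media storage
sudo mkfs.xfs /dev/storage/backup
sudo mkfs.xfs /dev/storage/media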
All was working fine until the system drive shorted out and died on me this morning. As far as I can tell, all my other drives and everything else are fine. I do a daily rsnapshot of the filesystem, which of course resides on storage/backup (stupid, I know). So I have full backups of everything, but I'll have to put a new hard drive in and reinstall Debian before I can restore everything.
I've reinstalled before and simply reassembled mdadm arrays and remounted them with no problems, but this is the first time I've used LVM, so I'm not sure what I have to do to restore everything. Is it as simple as reinstalling the system and then doing the following?
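Essentially, yes. A hedged sketch of the recovery sequence after the reinstall (mount points are placeholders):
sudo mdadm --assemble --scan       # reassemble the existing md arrays
sudo vgscan                        # detect the 'storage' volume group on them
sudo vgchange -ay storage          # activate its logical volumes
sudo mount /dev/storage/backup /mnt/backup
sudo mount /dev/storage/media /mnt/media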
I have created a RAID on one of my servers. The RAID health is OK, but it shows a warning; what could be the problem? WARNING: 0:0:RAID-1:2 drives:153GB:Optimal Drives:2 (11528 Errors)
I'm breaking into the OS drive side with RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1), and have been testing the fail-over and rebuild process. It works great when physically failing out either drive. My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: Fail one of the disks out of the RAID-1, then image it to a file, saved on an external disk, using the dd command (if memory serves, it would be something like "sudo dd if=/dev/sda of=backupfilename.img") Then, re-add the failed disk back into the array. In the event I needed to roll back to one of those snapshots, I would just use the "dd" command to dump the image back on to an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on-hand, and using something like a tape drive is also not an option.
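For reference, the proposed cycle would look something like this (device names assumed; double-check if= and of= before running dd, and note the image is only consistent if taken while the disk is out of the array):
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1     # drop one mirror half
sudo dd if=/dev/sdb of=/mnt/external/os-backup.img bs=4M    # image it to the external disk
sudo mdadm /dev/md0 --add /dev/sdb1                         # re-add; the mirror rebuilds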
So I set up a RAID 10 system and I was wondering what the difference between the active and spare drives is. If I have 4 active drives, then the two stripes are mirrored, right?
root@wolfden:~# cat /proc/mdstat
Personalities : [raid0] [raid10]
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
[code]....
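For reference: in a 4-disk md RAID 10 with the default layout, all four devices are active (two mirrored pairs striped together, much as suggested above); a spare sits idle until a member fails, at which point md rebuilds onto it automatically. A hedged example of attaching one (hypothetical device name):
sudo mdadm /dev/md1 --add /dev/sde2    # a healthy array takes new members as spares
cat /proc/mdstat                       # spares are listed with an (S) suffix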
I'm having problems with external hard drives. I may be wrong, but I suspect they originated with an upgrade to 10.04 last Christmas. Around that time I also started using Amazon's S3 storage system and, as a consequence, I stopped using my WD 80 GB external drive, previously used to back up my important files.
A week or so ago I decided to start using the WD drive again. I can't remember exactly what I did, but it wasn't happy; it never caused any problems before. When I plug it in, the on-off light at the front keeps flashing on and off, and when I try to remove the drive I get the message: Error unmounting volume: An error occurred while performing an operation on "My Book" (Partition 1 of WD 800BB External): The device is busy
Details: Cannot unmount because file system on device is busy. Assuming the device had died (it's about 5 years old), I bought a 160 GB Samsung S-Series drive; my, but they do look neat! Unfortunately, this doesn't seem to have solved my problem. I plugged the new drive in, and it happily appeared on my desktop. It seemed a good idea at the time, but I then started to format the drive, using the default option of FAT. All went well at first, but then the format process stopped.
My new Samsung drive now seems to be behaving much like the WD device: I can't copy to the drive, and attempts to unmount it generate a response similar to what's happening with the WD drive. Currently, although plugged in, the drive doesn't appear on my desktop, though it does show up under Places. However, when I try to mount the drive, I get the message: Unable to mount SAMSUNG: A job is pending on /dev/sdb1
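Some hedged diagnostics for the "device is busy" / "job is pending" symptom (the /dev/sdb1 name is taken from the error above):
sudo fuser -vm /dev/sdb1    # show processes holding the filesystem open
sudo lsof /dev/sdb1         # alternative view of open files on the device
sudo umount -l /dev/sdb1    # last resort: lazy unmount once nothing is writing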
Here is my system: a Dell PowerEdge 1950 with a PERC 6 controller and a 300 GB RAID 1 (mirrored) system built from two 300 GB disks. I have a few applications and data that have reached around 280 GB. As you know, the PowerEdge 1950 can hold only two disks.
They are not mission critical, hence I wanted to remove the RAID and use the disks as a non-RAID system. By doing that, the applications and data could grow up to 600 GB. I do not want to lose the data and setup, but I am not so clear about the RAID system and its conversion.
It's been a real battle, but I am getting close. I won't go into all the details of the fight that I have had, but I've almost made it to the finish line. Here is the setup: an ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the two 1 TB WD hard drives, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into "Try Ubuntu" mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the RAID components and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, GRUB 2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell GRUB to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but it also shows the two component disks of the array. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL].. To recap: my problem is that after GRUB 2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do but have not found anything. It may be some timing issue, but I am so tired of beating my head against this wall.
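For anyone searching later, the workaround referenced above boils down to running dmraid before the root filesystem is mounted. A sketch following the usual initramfs-tools conventions (the script in the linked blog may differ in detail):
sudo tee /etc/initramfs-tools/scripts/local-top/dmraid <<'EOF'
#!/bin/sh
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
/sbin/dmraid -ay    # activate the fakeraid set before root is mounted
EOF
sudo chmod +x /etc/initramfs-tools/scripts/local-top/dmraid
sudo update-initramfs -u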
Is it possible to migrate an installed Ubuntu system from a software RAID to a hardware RAID on the same machine? How would you go about doing so?
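One hedged outline, assuming the data is copied off to a spare disk first (paths and device names are placeholders):
sudo rsync -aHAX /mnt/oldroot/ /mnt/spare/oldroot/    # copy the system off the software RAID
# build the hardware array in the controller BIOS, boot a live CD, then restore:
sudo rsync -aHAX /mnt/spare/oldroot/ /mnt/newroot/
# update /etc/fstab (and remove stale mdadm.conf entries), then reinstall GRUB on the new device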
I have two drives on my Ubuntu server: a 40 GB IDE 5400 RPM and an 80 GB SATA 7200 RPM.
Is it at all possible to RAID 0 these two drives? I know it's pretty unorthodox, but it's what I've got without having to buy anything.
If so, what are the limits, cons, and/or potential pros of doing a RAID of these two?
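For reference, mdadm does not require equal-sized members for RAID 0: it stripes across the common 40 GB of both disks and then continues on the remainder of the larger one, so the capacity is roughly the sum of the two, but throughput is bounded by the slower drive and losing either disk loses everything. A hedged sketch (device names assumed):
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo mkfs.ext4 /dev/md0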
I have two 1 TB hard drives in a RAID 1 (mirroring) array. I would like to add a third 1 TB drive and create a RAID 5 with the 3 drives for a 2 TB system. I have Ubuntu installed on a separate drive. Is it possible to convert my RAID 1 system to a RAID 5 without losing the data? Is there a better solution?
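With a reasonably recent mdadm and kernel the conversion can be done in place, though a backup first is strongly advised. A hedged sketch (device names and an ext3/ext4 filesystem assumed; older mdadm versions may also need a --backup-file during the reshape):
sudo mdadm /dev/md0 --grow --level=5           # RAID 1 -> 2-disk RAID 5
sudo mdadm /dev/md0 --add /dev/sdc1            # attach the third disk
sudo mdadm /dev/md0 --grow --raid-devices=3    # reshape onto all three
sudo resize2fs /dev/md0                        # then grow the filesystem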
I have a RAID 1 that is mounted and working, but for some reason I can also see the individual drives under Devices in GNOME Shell. Is there a way to hide them from GNOME, or from Linux in general, so only the RAID 1 is visible?
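One hedged possibility is a udev rule telling the disk manager to ignore raw RAID members (the rule file name is arbitrary; UDISKS_IGNORE is honoured by udisks2, while older udisks used UDISKS_PRESENTATION_HIDE):
# /etc/udev/rules.d/99-hide-raid-members.rules
SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", ENV{UDISKS_IGNORE}="1"
sudo udevadm trigger    # apply without rebooting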
I was recently given two hard drives that were used as a RAID (maybe fakeraid) pair in a Windows XP system. My plan was to split them up, install one as a second HDD in my desktop and load 9.10 x64 on it, and use the other for Mythbuntu 9.10. As has been noted elsewhere, the drives aren't recognized by the 9.10 installer, but removing dmraid gets around this, and installation of both Ubuntu and Mythbuntu went fine. On both systems after installation, however, the systems broke during update, giving a "read-only file system" error and no longer booting.
Running fsck from the live CD gives the error:
fsck: fsck.isw_raid_member: not found
fsck: Error 2 while executing fsck.isw_raid_member for /dev/sdb
and running fsck from 9.04 installed on the other hard drive gives an error like:
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
In both cases I set up the drives with the ext4 filesystem. There's probably more that I'm forgetting... it seems likely to me that this problem is due to some lingering issue with the RAID setup they were in. I doubt it's a hardware issue, since I get the same problem with different drives in different boxes.
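The fsck.isw_raid_member message supports that theory: the disks still carry Intel fakeraid signatures. A hedged cleanup is to list the signatures and erase only the stale one (<offset> is a placeholder taken from the listing):
sudo wipefs /dev/sdb               # list signatures and their offsets
sudo wipefs -o <offset> /dev/sdb   # erase just the isw_raid_member signature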
I want to make a RAID 5 array with four 2 TB hard drives. One of the drives is full of data, so I will need to start with 3 disks; once I copy the data from the 4th onto the array, I will then add the 4th drive. This will be my first experience with RAID. I've spent a few hours searching for info, but most of what I have found is a bit over my head.
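A hedged outline of that plan (device names assumed; leave the data disk untouched until the copy is verified):
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo mkfs.ext4 /dev/md0
# ...copy the data from the 4th drive onto /dev/md0 and verify it, then:
sudo mdadm /dev/md0 --add /dev/sde1
sudo mdadm /dev/md0 --grow --raid-devices=4    # reshape onto 4 disks
sudo resize2fs /dev/md0                        # then grow the filesystem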
I have recently installed an Asus M4A77TD Pro system board which supports RAID.
I have 2 x 320 GB SATA drives I would like to set up RAID-1 on. So far I have configured the BIOS for RAID-1 on the drives, but when installing Ubuntu 10.04 from the CD it detects the RAID configuration but fails to format.
When I reset all BIOS settings to standard SATA drives, Ubuntu installs and works as normal, but I just have 2 x drives without any RAID options. I had this working in my previous setup, but that's because I had the OS on a separate drive from the RAID and was able to set it up within Ubuntu.
I'm using 4 hard drives (1 of which is a SATA drive) and I need help installing RAID drivers; I can't get these hard drives to mount at all.
I've got a 10.10 installation which I am using as a media/download server. Currently everything is stored on a 1 TB USB drive. With the costs of disks falling, and the hassle of trying to back 1 TB up to DVD (no, it's not going to happen), I was wondering if there's some Linux/Ubuntu utility which can use multiple disks to provide failover/resilience. Could I just buy another 1 TB drive and have it "shadowing" the main one, so that if one goes, I buy another and then restore from the copy?
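mdadm's RAID 1 is the standard answer to exactly this. A minimal sketch, assuming the two 1 TB disks appear as /dev/sdb and /dev/sdc:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0    # one filesystem, transparently mirrored on both disks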
I have a motherboard with the Intel 945GM + Intel ICH7R chipset.
This board: http://emea.kontron.com/products/boa...6lcdmmitx.html
I have two 320 GB HDDs set up in hardware RAID as shown below, but in GParted they are showing as two separate drives. Why is this?
Raid setup:
GParted
fdisk -l
What am I doing wrong here?
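Likely nothing: ICH7R RAID is firmware (fake) RAID, so Linux sees the raw disks unless dmraid assembles the BIOS-defined set. A hedged check:
sudo apt-get install dmraid
sudo dmraid -ay        # activate the BIOS-defined array
ls /dev/mapper/        # the set should appear as a single isw_* device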
I have a RAID 6 built on 6x 250 GB HDDs with ext4. I will be upgrading the RAID to 4x 2 TB HDDs.
How would one go about this? What commands would need to be run? I'm thinking about replacing the drives one at a time and letting the array rebuild each time, but I know that would take a lot of time (which is fine). I don't have enough SATA ports to set up the new RAID and copy things over.
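A hedged sketch of the swap-one-at-a-time approach, repeated for each disk (device names assumed; note this keeps six members, and reducing the member count to four afterwards is a separate, more delicate reshape):
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# ...physically swap in a 2 TB disk, partition it, then:
sudo mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat                     # wait for the resync to finish before the next swap
# once every member is 2 TB, claim the extra space:
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0              # then grow the ext4 filesystem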
My two 400 GB drives, which were added to a single 800 GB array (per the FastBuild BIOS utility), still show up as two 400 GB drives in fdisk. Why is that?
I know I can simply create a degraded RAID array and copy the data to the other drive like this: mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
But I want the specific disk to keep the raw ext3 filesystem so I can still use it from FreeBSD. When using the command above, the disk becomes a RAID disk and I can't do a mount /dev/sdb1 anymore. A little background info: the drives in question are used as backup drives for a couple of Linux and FreeBSD servers. I am using the ext3 filesystem to make sure I can quickly recover the data, since both FreeBSD and Linux can read it without problems. If someone has a different solution (2 drives in RAID 1 that are readable by FreeBSD and writeable by Linux), I'd like to hear it.
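A hedged idea: use a superblock format that lives at the end of the member (metadata 0.90 or 1.0) so the ext3 filesystem still starts at sector 0 and the raw partition stays mountable read-only from FreeBSD:
sudo mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 missing /dev/sdb1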
I shall start off by saying that I have just jumped from Windows 7 to Ubuntu and am not regretting the decision one bit. I am, however, stuck with a problem. I have spent a few hours googling this and have read some interesting articles (probably way beyond what the problem actually is) but still don't think I have found the answer. I have installed:
Distributor ID: Ubuntu
Description: Ubuntu lucid (development branch)
Release: 10.04
Codename: lucid
I am running the install on an old Dell 8400. My PC has an Intel RAID controller built into the motherboard. I have 1 HDD without RAID (which houses my OS install), and then I have 2x 1 TB drives in a RAID 1 (mirroring) array; these are just NTFS-formatted drives with binary files on them, nothing more. The Intel RAID controller at boot recognizes the array as it always has (irrespective of which OS is installed); however, unlike Windows 7 (where I was able to install the Intel RAID controller driver), Ubuntu does not see the array as a single device. Does anyone know of a resolution (one that doesn't involve formatting and/or some other software RAID solution) to get this working? My searches haven't turned one up.
I'm looking for advice on which drives to add to my server for software RAID 5. I would like to use 2 TB drives for the array. The server currently boots off a RAID 1 array, and I have a couple of other drives mounted until I build a RAID 5 array with new drives. I've read horror stories about the Western Digital WD20EARS and Seagate ST32000542AS. So I'm wondering which large drives are best to use in software RAID?
I got my system up and running with GRUB installed on my primary hard drive. I have now installed 2 additional drives, which I would like to combine into a RAID 1 array. I can only find tutorials on how to do this during the initial install; I cannot find one that tells me how to do it after the install. Is there a way?
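It can be done entirely after the install. A hedged sketch (partition names and mount point assumed):
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf    # persist the array
sudo update-initramfs -u
echo '/dev/md0  /mnt/data  ext4  defaults  0  2' | sudo tee -a /etc/fstab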
I am trying to use 3x 3 TB Western Digital drives in a software RAID 5 array. The trouble seems to be that the array gets created with only about 1.5 TB of capacity rather than the expected 6 TB.
Here are the commands and output:
$ sudo dmraid -f isw -C BackupFull6 --type 5 --disk /dev/sde,/dev/sdf,/dev/sdg --size=5589G
Create a RAID set with ISW metadata format
RAID name: BackupFull6
RAID type: RAID5
RAID size: 5589G (11720982528 blocks)
RAID strip: 64k (128 blocks)
DISKS: /dev/sde, /dev/sdf, /dev/sdg
About to create a RAID set with the above settings. Continue ? [y/n] :y
$ sudo dmraid -s
*** Group superset isw_cdjhcaegij
--> Subset
name: isw_cdjhcaegij_BackupFull6
size : 3131048448
stride : 128
type : raid5_la
status : ok
subsets: 0
devs : 3
spares : 0
So I cannot understand why the size of the created array is only 3131048448 blocks; at 512 bytes per block that is about 1.6 TB, while the first command reported 11720982528 blocks, i.e. the requested 5589 GB.
System is:
Description: Ubuntu 10.04.2 LTS
Release: 10.04
Codename: lucid
I'm renting a dedicated server from a company that claims the server has 2 hard drives in a software RAID 1 array, but I need to make sure that the server really has the 2 HDDs, and to check the size of the 2nd drive. How do I do that? The system is CentOS 5.3.
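Some hedged checks that work on CentOS 5 over SSH, with no downtime needed:
cat /proc/mdstat             # md arrays and their member devices
sudo fdisk -l                # every physical disk with its size
sudo smartctl -i /dev/sdb    # per-disk identity (from the smartmontools package)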
I am currently trying to configure a set of hard drives in a RAID configuration. My system is running Red Hat Enterprise Linux Client release 5.1 as the base OS, and I am booting from CD. I am trying to image a set of drives that have not been imaged before. When the GUI dialog window for disk setup is displayed, it shows a default disk layout including an LVM slice. The disk layout already contains a /boot partition. It is not what I would like, so I edit it to be the right size for my system and make it the primary partition. I also mark it as software RAID. I then add three more partitions for my drive 'A', all of type software RAID and NOT primary partitions.
At this point my drives have the correct number of partitions, except for showing the LVM slice. I select 'RAID' again, followed by 'Clone a drive to create a RAID device ...' and then 'OK'. I then get a dialog to select the source and target; I select drive 'A' as the source and 'B' as the target, followed by 'OK'. An error dialog is received stating that all the partitions are not of type software RAID. The disk partitions are all of type software RAID except the extended LVM slice. I cannot get past this point, and I am following a procedure written some time ago by a person who is no longer available.
I've finally found a couple of useful tutorials on setting up RAID in Linux. However, because this is new ground to me, I have a couple of basic questions which I think the tutorial writers gloss over because of their familiarity with the process. My questions are these:
1. Most tutorials speak about setting up only one partition on clean drives. Can I set up more than one (e.g. / and /home) to be mirrored as two partitions?
2. When starting with two identical clean drives, do I need to set up my partitions identically on both drives or is it only the partitions that I want mirrored to the second drive?
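A hedged sketch answering both questions: yes, each partition you want mirrored becomes its own md device, and the simplest approach is to make the member partitions identical on both disks (device names assumed):
sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb    # clone sda's partition table to sdb
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1    # for /
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2    # for /home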