Fedora :: Can I Use RAID 0 Configuration Because Of Their Reliability
Dec 17, 2010

If we want to use SSD drives on a database server, can we use a RAID 0 configuration because of their reliability?
I am currently trying to configure a set of hard drives in a RAID configuration. My system is running Red Hat Enterprise Linux Client release 5.1 as the base OS, and I am booting from CD. I am trying to image a set of drives that have not been imaged before. When the GUI dialog window for disk setup is displayed, it shows a default disk layout including an LVM slice. The disk layout already contains a /boot partition. It is not what I would like, so I edit it to the size I need for my system and make it the primary partition. I also select it to be a software RAID. I then add three more partitions to drive 'A', all of type software RAID and NOT primary partitions.
At this point my drives have the correct number of partitions, except for showing the LVM slice. I select 'RAID' again, followed by 'Clone a drive to create a RAID device ...', followed by 'OK'. I then get a dialog to select the source and target. I select drive 'A' as the source and 'B' as the target, followed by 'OK'. An error dialog is received stating that all the partitions are not of type software RAID. The disk partitions are all of type software RAID except the extended LVM slice. I cannot get past this point, and I am following a procedure written some time ago by a person who is no longer available.
I just went out and bought parts to build a new computer, and among them was a Gigabyte GA-890FXA-UD5 motherboard ([URL]). The board has 3 (well, 4, but we'll stick to the 3) main SATA interfaces, with 2 slots per interface, allowing 6 SATA drives. In slot_0 I put my Blu-ray drive, and in slot_1 I put the drive that will host the OS and its partitions; those are on the SATA connector pair on the left. On the middle SATA connector pair (slot_2 and slot_3) I have two 2TB drives, and on the SATA connector pair on the right (slot_4 and slot_5) I have two 1.5TB drives.
[Code]....
I've tried to install Fedora 11, both 32- and 64-bit, on my main machine. It could not install, as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am currently running Intel RAID software under W7, and it works fine. But I'm wondering: when I attempt to install F11, is my current RAID set-up causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow, this went terribly wrong and I succeeded in marking two of the drives as faulty, and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
I was able to examine the disks though:
Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
code....
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin whatever small chance I have left to rescue my data, I would like to hear the input of this wise community.
We've started using Debian-based servers more and more at work and are getting the hang of them a little more every day. Right now I'm an ace at setting up partitions, software RAID and LVM volumes etc. through the installer, but if I ever need to do the same thing once the system's up and running, I come unstuck.
Is there any way I can get to partman post-install, or are there any similar tools that do the same thing? Or, failing that, are there any simple guides to doing these things through the various command line tools?
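A minimal sketch of doing the same work post-install with the standard command line tools, assuming a spare disk at /dev/sdb (the device name, volume group name and sizes are only placeholders, not anything from the post):

Code:
# partition the disk, build a degraded RAID1, and layer LVM on top
parted -s /dev/sdb mklabel gpt
parted -s -a optimal /dev/sdb mkpart primary 0% 100%
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing   # second mirror member can be added later
pvcreate /dev/md0
vgcreate vg_data /dev/md0
lvcreate -L 50G -n lv_data vg_data
mkfs.ext4 /dev/vg_data/lv_data

Between them, parted, mdadm and the LVM tools (pvcreate/vgcreate/lvcreate) cover roughly what partman drives during installation.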
Is it possible to use software RAID in this configuration?

160GB: sda1 = sdb1 (mirrored)
600GB: sda2 (unmirrored)

That is, have the two HDDs mirroring each other even though one of the mirrored volumes is smaller than the other, and then use the rest of the larger disk as an unmirrored partition?
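This kind of layout is possible with mdadm, since array members are just block devices (partitions or whole disks). A rough sketch, assuming the matching 160GB partitions already exist and ext4 is acceptable:

Code:
# mirror only the matching 160GB partitions; the remainder of the larger disk stays on its own
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0          # 160GB mirrored volume
mkfs.ext4 /dev/sda2         # 600GB partition with no redundancy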
Just setup with Debian 8 (LXDE) a few weeks ago. Raid10 array was preexisting.
It was working well: after booting I would go to the array, enter the root password, and everything would be good.
Can't access the array.
Under Debian 7.6 I used the command $ mount /dev/dm-0 /home/myspace/folder to mount the array, but it no longer works.
blkid lists a /dev/md0, but instead of a UUID it shows a PTUUID.
[Code] .....
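A few read-only checks that are usually the first step in this situation (the mount point and device names are taken from the post or assumed; a PTUUID from blkid suggests /dev/md0 carries a partition table rather than a bare filesystem):

Code:
cat /proc/mdstat                        # is md0 assembled and active?
mdadm --assemble --scan                 # assemble arrays described by their superblocks
lsblk /dev/md0                          # shows md0p1 etc. if the array is partitioned
mount /dev/md0 /home/myspace/folder     # or mount /dev/md0p1 if a partition shows up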
I have 2x 1.5TB hard disks and I'm going to buy a new 2TB drive soon. First though, I just wanted to check that I could partition off the first 1/4 to 1/3 of the 2TB drive (leaving 1.5TB or more free) and install Debian to that part, then use the remainder of the disk in combination with the 2x 1.5TB drives in RAID 5. I.e., can you mix whole drives with partitions from other drives in RAID 5, or is it best to just stick with complete drives for the RAID array? I only have room for 3 drives in the small mATX case that houses my NAS device, and I want to maximise storage capacity and minimise expense.
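For reference, mdadm treats any block device as a potential member, whole disk or partition alike, and a RAID 5 array's usable capacity is (n-1) times the size of its smallest member. A sketch under the assumption that the Debian install takes /dev/sda1 and the rest of the 2TB disk becomes /dev/sda2 (device names are placeholders):

Code:
# two whole 1.5TB disks plus a ~1.5TB partition from the 2TB disk as the third member
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sda2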
I am rebuilding two microsystems servers and I need some advice to make my dreams come true. I want to set up the servers in a RAID configuration and want to install a GUI Linux application to manage a file server, manage a subnet, and host a Moodle on my subnet. I am planning to use Asus Eee netbooks running Linux as my client computers. I basically need to be able to get my kids on the web and have them use some open source office suite tools. No major crunching. I'll have two Macs for that.
I'm building a NAS based on the Intel SS4200. There are 4 drive bays in the machine for use with SATA disks, two of which I plan on filling now and two of which I plan on filling later. The box also includes an IDE connector to which I will connect an 8GB Disk on Module, onto which I will install Slackware. I wish to have all drives in the box show up as one contiguous volume. What partitioning/LVM/RAID configuration can I use which will allow me to:
1. Add a disk and transparently grow the available space of the volume?
2. Replace a disk with a larger disk and transparently grow the available space of the volume?
3. Lose a disk to hardware failure and replace it with a new one with no data loss?
If I use RAID 5, I'm pretty certain I can get numbers 1 and 3 above, but I'm not sure about number 2. The downside is that I'd have to start with 3 disks in the machine, and I'm unsure if adding a 4th disk whose size is larger than each of the 3 starting disks would lead to wasted space. For instance, if I start with three 1TB drives in RAID 5, and then add a 2TB 4th drive, would my available size go from 2TB to 3TB? Or from 2TB to 3.xTB?
Is it important in a RAID 5 setup to have all disks the same size? With LVM, I can certainly get number 1 above, but what about 2 and 3? I know you can use LVM to present many disks or partitions as one contiguous volume, but if I have two 1TB drives in one volume, and only have 300GB of data, then would the second drive remain empty until I broke the 1TB barrier? In this case, it's wasted space from the get go. I suppose another option would be to start with RAID 1 until I can afford a third disk.
When adding the new disk, could I switch to RAID 5 without data loss? I'm planning on maintaining a full mirror of the NAS on some USB disks as a backup, so if configuration changes to the NAS require wiping the disks and restoring from backup, it's not a total loss. However, it certainly makes me nervous to be in a state where only one copy of the data exists, so I'd rather find a solution where I can add and upgrade disks in the NAS without relying on the backup copies.
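For point 1, the usual sequence after physically adding a disk is to grow the array and then grow whatever sits on top of it. This is only a sketch with assumed device names, not a recommendation for this particular layout:

Code:
mdadm --add /dev/md0 /dev/sdd               # new disk joins as a spare
mdadm --grow /dev/md0 --raid-devices=4      # reshape the RAID 5 across four members
# once the reshape finishes, grow the upper layer:
resize2fs /dev/md0                          # ext3/ext4 directly on the array
# or, with LVM in between:
pvresize /dev/md0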
How can I get a RAID-like virtual disk?
What I need is to mount several directories from other partitions (or file systems) as a new merged file system that can grow or shrink depending on the free space, as if it were a dynamic RAID, so I can work with huge files distributed over the mounted partitions.
Example: /mnt/sda1/dir_raid1 + /home/dir_raid2 + /mnt/sda3/dir_raid3 ---> /mnt/RAID/
mhddfs and unionfs are not the solution I'm looking for (they can't handle huge files).
I'm trying to do some RAID managing with mdadm. I would like to sync my spare disk and then remove it from the array to make a backup out of it with the dd command (the best way I can think of to get a current image of the whole system, as it can't be done using the active RAID as the source, because it is constantly in use and changing). So, I have a RAID1 array with 1 spare and 2 active disks (configuration listed below). Now I would like to force the spare to sync and then remove it from the array, although it is not faulty.
However, mdadm man page states:
"Devices can only be removed from an array if they are not in active use. i.e. that must be spares or failed devices. To remove an active device, it must be marked as faulty first."
So, I'd have to mark a disk as faulty (which it is not) to be able to remove it from the array. There seem to be several people reporting that they can't remove this faulty flag once it has been given to a drive, and mdadm does not provide a direct command for such an operation. Isn't there a way I could remove and add disks whenever I feel like it? One way would be to open the cover and physically remove the disk, but I'm not taking that risk. The system is almost always in use, so there is not much chance for me to power off for a temporary disk removal.
RAID CONFIGURATION:
~# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri Aug 4 17:38:26 2006
Raid Level : raid1
Array Size : 238950720 (227.88 GiB 244.69 GB)
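One workaround that is often suggested for exactly this spare-sync-then-detach scenario on a RAID1 is to temporarily grow the mirror, let the spare sync as a third active member, and then fail it out. Device names and the backup path below are assumptions based on the configuration above:

Code:
mdadm --grow /dev/md0 --raid-devices=3                # promote the spare to an active mirror
cat /proc/mdstat                                      # wait for the resync to complete
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1    # mdadm only removes failed or spare members
mdadm --grow /dev/md0 --raid-devices=2                # shrink back to a two-way mirror
dd if=/dev/sdc1 of=/backup/md0.img bs=4M              # image the detached, now-consistent copy
mdadm /dev/md0 --add /dev/sdc1                        # hand it back to the array as a spare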
I have a question concerning Linux, mdadm, and creating RAID arrays in Debian. I've done a lot of reading and research on RAID both on this board and elsewhere (The Linux Documentation Project's Software-RAID HOWTO is especially good), but I've run across something that no one seems to explain, and I'm not sure why. I'm instructed to create partitions on the drives I wish to add to my array. These partitions inevitably take up the whole disk, and always have their system IDs set to "Linux raid autodetect". What I don't understand is why, after creating these partitions, some guides then go on to create an array (say a RAID5 one) with just the disks themselves as members, while others go on to create the RAID5 array with the previously created partitions as members. E.g.,
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
vs.
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
What's the advantage of using one over the other?
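For what it's worth, mdadm accepts either form; the partition-per-disk variant mainly marks the disk as in use so other tools and operating systems don't treat it as blank, and leaves a visible partition type as documentation. A sketch of preparing one such full-disk RAID partition (the disk name is a placeholder):

Code:
parted -s /dev/sdb mklabel gpt
parted -s -a optimal /dev/sdb mkpart primary 0% 100%
parted -s /dev/sdb set 1 raid on       # flags sdb1 as a Linux RAID member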
If you want, skip straight to the 'QUESTION' at the end of my post and refer to the 'EXPLANATION' later. EXPLANATION: Using Debian 6.01 Squeeze 64-bit. I just put together a brand new 3.3GHz 6-core AMD system. I had a nightmare with my HighPoint 640 RAID controller, apparently because Debian Squeeze now handles RAID through sysfs rather than /proc/scsi. The solution to this, of course, is to recompile the kernel with the appropriate module for /proc/scsi support. So I thought "screw that", yanked out the RAID card, and went with Debian's software RAID. This allowed me to basically complete my mission. The RAID is totally up and running, except for one final step... I can't get the RAID to automount at boot.
My hardware setup;
- Debian is running entirely on a 64GB SSD (sda)
- I have 3x 2TB hard drives used for storage in a RAID 1 array (sdc, sdd, sde)
[Code]....
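The usual Debian recipe for automounting a software array at boot is to record it in mdadm.conf, rebuild the initramfs, and add an fstab entry. A sketch with an assumed mount point and filesystem type (the UUID placeholder must be replaced with the real value from blkid):

Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # ARRAY line so md assembles early in boot
update-initramfs -u                               # pick up the new mdadm.conf
blkid /dev/md0                                    # note the filesystem UUID
echo 'UUID=<uuid-from-blkid>  /mnt/storage  ext4  defaults  0  2' >> /etc/fstab
mount -a                                          # verify the fstab entry without rebooting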
I'm new to Linux in general and am having issues setting up a RAID 1 array for two disks on an HP ProLiant MicroServer, which I would like to be accessible from my Windows PC. I have installed the latest version of Debian successfully on a 250GB disk that came with the server. I have added 2x 2TB disks which I would like to have in a RAID 1 array and to have visible from Windows to store music/videos etc. on. I have managed to partition the two disks as FAT32 (which I think is best) and have managed to configure the array so that it shows as active when I use cat /proc/mdstat. I have been following the steps in this article [URl]... squeeze-p2 and trying to adapt it to my situation.
I am stuck on the step where the file systems are created with the mkfs command. I try mkfs.vfat /dev/md0 and it comes up with the error "mkfs.vfat: command not found". I have tried mkfs -t vfat /dev/md0 and it gives the error "mkfs.vfat: No such file or directory". So my question is: how can I continue with the process of setting up the array? Or maybe I should be asking, is it possible to set up an array with FAT32-formatted disks?
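The immediate error just means the FAT tools are not installed; on Debian they come from the dosfstools package. A sketch of the missing step (many setups instead format the array as ext4 and expose it to Windows over Samba, but that is a separate design choice):

Code:
apt-get install dosfstools     # provides mkfs.vfat / mkfs.fat
mkfs.vfat -F 32 /dev/md0       # create a FAT32 filesystem on the array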
We have 4 HDs on our server. One of them broke last night. I could see a message on the server, and after restarting, the S.M.A.R.T. check in the BIOS was recognizing one HD as bad. After removing the failing HD, the server is now up and running. I do not remember how I configured the HDs. During the installation I had a few problems and I changed what I wanted to do a few times. I am sure I had at least a RAID0 with 2 disks, but I could have put all 4 disks in the RAID with 2 disks as spare drives, or I may have created another volume for the other 2....
dmraid returns: no raid disks
Code:
$ sudo dmraid -ay -vvv -d
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
DEBUG: not isw at 500107860992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 500106779136 .....
no raid disks
WARN: unlocking /var/lock/dmraid/.lock
MountManager seems to report that sda and sdb belong to linux_raid_member.
However there is no mount point.
Questions:
1. How do I find out how the disks were and are configured?
2. How can I find out what was on the disk that died? (Was it a spare drive or one of the 2 in the mirror?)
3. What do I need to do now to be sure that the mirroring is working OK (considering that there is a spare drive)? Do I need to use a command to let Ubuntu mirror the drive onto the new one?
4. What do I need to do when I get a replacement for the broken disk?
5. What utility can easily show me how the disks are configured and, if needed, make changes?
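For question 1, if the mirror is Linux software RAID (the linux_raid_member label reported above suggests it is), these read-only commands usually reconstruct the picture without changing anything on the disks:

Code:
cat /proc/mdstat                    # active md arrays, members, degraded state
mdadm --detail /dev/md0             # RAID level, device count, spares, missing members
mdadm --examine /dev/sda /dev/sdb   # per-disk superblocks: which array each disk belongs to
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
blkid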
I need advice on breaking a software RAID 1 disk array so I can remove one of the drives permanently. I've found a couple of suggestions to make the drive 'forget' that it is a RAID drive by "zeroing out their md superblocks", but I don't know enough about superblocks to know if this is destructive or not. I used this guide to create my RAID-1 and it is working fine. This is my current setup:
Code:
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
149864384 blocks [2/2] [UU]
[code]...
Do I need to boot from a live CD and perform the same commands on an unmounted filesystem?
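A commonly used sequence for permanently detaching one half of a RAID 1 looks like the sketch below; which member you name is the important decision. --zero-superblock erases only the small md metadata header, not the filesystem contents, but it is irreversible as far as array membership goes:

Code:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # detach one mirror half
mdadm --zero-superblock /dev/sdb1                    # stop it being detected as a raid member
mdadm --grow /dev/md0 --raid-devices=1 --force       # tell md0 it is now intentionally single-disk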
I have created a filesystem, /data, using software RAID-1 with two disks:
/dev/sdb - 50 GB
/dev/sdc - 50 GB
Now I have to increase /data to 100GB by adding two more disks:
/dev/sdd - 50GB
/dev/sde - 50GB
Is this possible? If yes, how exactly can I increase the existing RAID-1 array from 50GB to 100GB?
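Worth noting that RAID 1 itself cannot reach 100GB this way: extra members of a mirror only add more copies, not capacity. One common route, sketched here under the assumption that /data can be rebuilt from a backup and that putting LVM on top is acceptable (neither is stated in the post), is a second mirror joined to the first with LVM:

Code:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde
umount /data
pvcreate /dev/md0 /dev/md1                 # WARNING: destroys the current filesystem on md0
vgcreate vg_data /dev/md0 /dev/md1
lvcreate -l 100%FREE -n lv_data vg_data    # ~100GB logical volume spanning both mirrors
mkfs.ext4 /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /data           # then restore the backed-up data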
I'm having issues with a RAID array.
Setup is like this:
Debian Jessie, 2 hard disks, each having 2 partitions: /dev/sda1, /dev/sda2, /dev/sdb1, /dev/sdb2. Partitions were paired during installation, so they form /dev/md0 and /dev/md1. /dev/md0 is the root (/) partition, /dev/md1 is for /home.
At the end of the install process, I chose /dev/sda1 to carry Grub. And I think this is where I screwed things up.
After removing one of the hard drives, there was no boot capability. So, I installed Grub on /dev/sdb, too.
Now it displays the boot menu but cannot find the kernel. This is where I got lost in the process.
Do I need to reinstall the OS or is there a way to fix it? I suppose I have to edit Grub.
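There is usually no need to reinstall; the typical repair is done from a rescue or live environment by assembling the arrays, chrooting in, and reinstalling GRUB on both disks so either one can boot alone. A sketch (device and mount names assumed):

Code:
mdadm --assemble --scan
mount /dev/md0 /mnt                 # the root array
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grub-install /dev/sda               # put GRUB on both disks
grub-install /dev/sdb
update-grub
update-initramfs -u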
I just expanded my RAID 5 array from 3x 2TB to 4x 2TB; mdadm completed the grow successfully and shows an md0 device with 6TB of usable space. Now my problem is that Debian (Lenny) doesn't show the right amount. See below.
######### MDADM DETAILS OF ARRAY ##########
> mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Mon Dec 14 22:30:46 2009
Raid Level : raid5
[Code].....
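Growing the array does not grow whatever sits on top of it; the filesystem (or LVM physical volume) has to be resized separately. A sketch, assuming ext3/ext4 directly on md0, with the LVM alternative shown for completeness (names in angle brackets are placeholders):

Code:
mdadm --detail /dev/md0 | grep 'Array Size'    # confirm md0 itself is ~6TB
resize2fs /dev/md0                             # grow ext3/ext4 to fill the array
# or, if LVM sits between md0 and the filesystem:
pvresize /dev/md0
lvextend -r -l +100%FREE /dev/<vg>/<lv>        # -r also grows the filesystem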
I have my proxy running on Lenny and tried to upgrade to Squeeze. Originally the system was installed on Etch, and upgrading to Lenny was no problem. In the system I have two RAID1 volumes, md0 for / and md1 for /home. For the upgrade I added the new sources and started dist-upgrade. During the installation procedure, when installing udev, I was advised to install the new kernel first and continue the upgrade after booting into that kernel, so I installed the kernel with "apt-get install linux-image-2.6-686". When generating the initramfs there was a message that there are no arrays defined in /etc/mdadm/mdadm.conf. I took a look and there were none; mdadm seems to have been updated before.
I then added the ARRAY lines for the RAID definitions, including the UUIDs, which I got from the output of "mdadm --detail /dev/md0".
What I don't understand: blkid gives the same UUID for the member partitions of each RAID, but a different UUID for /dev/md0 and /dev/md1 than mdadm --detail does. The update of the initramfs for kernel 2.6.32 then gives this result:
update-initramfs -u -k 2.6.32-3-686
update-initramfs: Generating /boot/initrd.img-2.6.32-3-686
W: Possible missing firmware /lib/firmware/e100/d102e_ucode.bin for module e100
W: Possible missing firmware /lib/firmware/e100/d101s_ucode.bin for module e100
[code].....
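The mismatch is expected: mdadm --detail reports the array's md UUID (the member partitions carry the same one, which is why blkid shows them as equal), while blkid on /dev/md0 or /dev/md1 reports the UUID of the filesystem inside the array. mdadm.conf wants the md UUID, and the safest way to get the lines right is to let mdadm write them:

Code:
mdadm --detail --scan                            # prints ARRAY lines with the correct md UUIDs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # append, then remove any duplicate lines by hand
update-initramfs -u -k 2.6.32-3-686              # rebuild the initramfs against the fixed config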
I've got an 8-disk raid-5 setup, and one of the disks failed. I shut the system down, replaced it, and powered the box back on again. Then, I made a catastrophic mistake; I 'failed' and removed the wrong disk (should have been sdj1, and I typed sdk1 by accident). I tried to re-add sdk1 back to the raid array, but it got listed as 'spare'. My raid array is off-line, since I now have 2 disks unavailable.
I know that the data still exists on sdk1, is there any way I can get the raid array to recognise the fact that it's a valid part of the array, and not a spare disk? At least if I can do that, I'll have a degraded but accessible array, and then I can rebuild the array on the properly replaced disk.
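The usual first attempt in this situation, before anything that writes to the members, is to stop the array and force-assemble it from the original devices so mdadm re-reads their superblocks and event counters. The member names below are placeholders for the eight real partitions:

Code:
mdadm --stop /dev/md0
mdadm --examine /dev/sd[d-k]1 | grep -E 'Events|/dev/sd'   # compare event counters across members
mdadm --assemble --force /dev/md0 /dev/sd[d-k]1            # force in the slightly-stale member (sdk1)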
I have three 640GB SATA hard drives that I would like to put into a RAID 5 configuration. I would like to opt for software RAID 5 so it's hardware-independent. I was trying to follow these instructions, but they seem a bit dated.
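The current approach is still just mdadm; a compact sketch, with the partition names and mount point as placeholders:

Code:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
cat /proc/mdstat                                 # watch the initial sync
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # so the array assembles at boot
echo '/dev/md0  /srv/storage  ext4  defaults  0  2' >> /etc/fstab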
Configuration:
Centos 5.4
2 Corsair F80 RAID 0 Intel Storage Matrix
ASUS P6x58D-E
Stripe Size = 128kB
Tried to run CentOS 5.5 in a dual-boot configuration with Windows 7. Windows 7 installed without issues within minutes (amazing performance with the SSDs). Installed CentOS 5.4 and several things wouldn't work right:
- It wouldn't recognize the NTFS partitions, so I decided just to install CentOS on the box. I completed the installation and rebooted, but it wouldn't boot up after the installation. Even in a non-RAID configuration it would not boot off the SSD. Replaced the SSDs with 2 Seagate Barracuda 1TB drives in a RAID 0 configuration and all went well.
I'm trying to install Debian Lenny on my new Dell XPS 8100 desktop with 2x 1TB SATA HDs. (No Windows or any other OS install is present on the system.) The BIOS allows me to change the SATA mode to either "ATA" or "RAID".
- When SATA mode is set to RAID, the installation goes without issues, but when it comes time to load the system, I get that "Stage 1.5 Grub Loading... Error 2" problem. I assume this is due to the BIOS "RAID" configuration. I then switched the SATA mode to "ATA" in the BIOS and now I can see the menu that allows me to boot my Debian install, but that part actually fails too, saying "ALERT /dev/sda1 does not exist".
- When SATA mode was set to ATA, I tried to re-install the system, but this time my drive was not recognized by the installer: "No common CD ROM drive".
I have two similar 1 TB hard disks and I'm planning to configure them using RAID. I want to back up reliably all my personal files so I'm going to use RAID1 for /home.
As the system files aren't irreplaceable, I was thinking to use RAID0 for the root directory.
My question is: will this give me any performance boost? As far as I know, using either RAID0 or RAID1 would double read speeds and thus in both configurations boot-up time (for example) would be equally fast. Is this true in practice? If both RAID0 and RAID1 have equal read speeds, I'm planning to use RAID0 only for /tmp and swap.
Another thing I was wondering: if I'm using RAID0 for / and a disk fails, will there be any difficulty recovering my home folder that was mirrored with RAID1? And if I'm using RAID0 only for /tmp, am I able to recover the whole system without reinstalling Ubuntu?
I set up RHEL with a single hard drive initially, but now I have installed 5 new hard drives of 1TB each. I have set them up as RAID 0 (hardware RAID). The problem is that Linux detects and shows all the hard drives as independent, separate HDDs. I want to use LVM2 as software RAID 0 to present all the hard drive volumes as a single drive.
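A sketch of the LVM2 striped-volume approach described here, with the device names and volume group/logical volume names as placeholders (note that striping, like RAID 0, means losing any one disk loses the whole volume):

Code:
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
vgcreate vg_big /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
lvcreate -i 5 -I 64 -l 100%FREE -n lv_big vg_big    # stripe across all five PVs, 64KB stripe size
mkfs.ext4 /dev/vg_big/lv_big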
I have a new home server I built this weekend with 4x 320GB Seagate Barracuda SATA drives. I am going to load my OS this week; however, I don't have a RAID controller, so I would like to use software RAID via the mdadm package. My question is, since this is a general home server with no specific function other than to hold my data reliably and reasonably fast, how do you recommend I configure my partitions for RAID? What level would be best with my 4-drive configuration, RAID5 or RAID10? Should I use a 3-drive RAID and keep the 4th as a spare? Please let me know what you recommend, as I don't have a lot of expertise in what is practical and what is not when it comes to mdadm RAID.
I have a RAID5 on 10 disks, 750GB each, and it has worked fine with GRUB for a long time on Ubuntu 10.04 LTS. A couple of days ago I added a disk to the RAID, grew it and then resized it.. BUT, I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer!
So the resize process was cancelled in the middle and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it up, finished the resize process, and the RAID seems to be working fine, so I tried to boot normally again. Same problem. Rescue CD, updated GRUB, got several errors: "error: unsupported RAID version: 0.91". I have tried to purge grub, grub-pc and grub-common, removed /boot/grub and installed GRUB again. Same problem.
I have tried to erase the MBR (# dd if=/dev/null of=/dev/sdX bs=446 count=1) on sda (IDE disk, system) and sdb (SATA, new RAID disk). Same problem. Removed and reinstalled Ubuntu 11.04 and am now getting "error: no such device: (hdd id)". Again tried to reinstall GRUB on both sda and sdb, no luck. update-grub is still generating the error about RAID version 0.91, and a normal boot is back to a blinking cursor. When you're resizing a RAID, mdadm changes the metadata version from 0.90 to 0.91 to prevent exactly what happened here from happening. But since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch on [URL], but I can't compile it; various errors about dpkg. So my problem is, I can't get GRUB to work. It just gives me a blinking cursor and "unsupported RAID version: 0.91".