I'm renting a dedicated server from a company that claims the server has two hard drives in a software RAID 1 array, but I need to make sure the server really does have the two drives, and I need to find out the size of the second one. How can I do that? The system is CentOS 5.3.
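From what I've read, I could probably check this myself with something like the following (assuming a standard mdadm setup; I'm not sure of the exact device names on this server):
Code:
cat /proc/mdstat      # should list any software RAID arrays and which disks belong to them
cat /proc/partitions  # lists every block device the kernel sees, with sizes
sudo fdisk -l         # prints each physical disk with its capacity
Would that be enough to confirm the second drive exists, or is there a better way?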
I want to make a RAID 5 array with four 2 TB hard drives. One of the drives is full of data, so I will need to start with 3 disks, and once I've copied the data from the 4th drive onto the array, I will add the 4th drive. This will be my first experience with RAID. I've spent a few hours searching for info, but most of what I have found is a bit over my head.
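My rough plan, pieced together from what I've read (device names are just placeholders for my setup), is something like:
Code:
# create a 3-disk RAID 5 array first
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# ...copy the data from the 4th drive onto the array, then add that drive and grow the array
sudo mdadm --add /dev/md0 /dev/sde1
sudo mdadm --grow /dev/md0 --raid-devices=4
Does that look roughly right, or am I missing something important?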
I'm about to install Ubuntu on two 250-gigabyte hard drives in a RAID 1 array, but I'm confused about how to partition my hard drives. How much space should I give to each partition? How many partitions should I create and where should I mount them? (I should mention that Ubuntu will be the only OS on this array.)
Windows wouldn't recognize my drives as a RAID setup, so I disabled it and switched to IDE; now the Ubuntu 9.10 installer will only recognize my drives as RAID. I have an ASUS M3A32-MVP Deluxe Series motherboard; it has 4 SATA connectors and 2 Marvell-controlled SATA connectors. On the 4 SATA connectors I have my two WD 500 GB hard drives, my DVD burner, and my external USB/eSATA slots. On the Marvell-controlled SATA connector I have a WD 160 GB hard drive. Originally, when I built the computer, I wanted a RAID setup with the two 500 GB drives.
But Windows wouldn't recognize the RAID setup and wouldn't boot properly, so I gave up, removed the RAID, and set all the drives to IDE. Then, when I tried to install Ubuntu 9.04, it would only recognize my two 500 GB drives as RAID. GParted recognizes the drives as both RAID and IDE. Eventually, after a day or two of praying and messing around, the installer recognized both drives as RAID and IDE. A couple of months later, here I am trying to install Ultimate Edition 1.4.
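My guess is that the installers are seeing leftover fakeRAID metadata on the two 500 GB drives. Would something like this (with /dev/sdX standing in for the right disk) confirm and, if necessary, wipe that metadata?
Code:
sudo dmraid -r            # list any fakeRAID metadata found on the disks
sudo dmraid -rE /dev/sdX  # erase the metadata from one disk (destructive, so I'd double-check first)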
I got my system up and running with GRUB installed on my primary hard drive. I have now installed 2 additional drives and would like to combine them into a RAID 1 array. I can only find tutorials on how to do this during the initial install; I cannot find one that tells me how to do it after the install. Is there a way?
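From the pieces I've found, I'm guessing the post-install procedure is roughly this (partition and device names are placeholders, and I'd adjust the config file path for my distro):
Code:
# build the mirror from the two new drives
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# put a filesystem on it and record the array so it assembles at boot
sudo mkfs.ext3 /dev/md0
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm.conf'
Is that the right general approach, or is there more to it?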
I am trying to use three 3 TB Western Digital drives in a RAID 5 software array. The trouble is that the array is created with only 1.5 TB of capacity, rather than the expected 6 TB.
Here are the commands and output:
$ sudo dmraid -f isw -C BackupFull6 --type 5 --disk /dev/sde,/dev/sdf,/dev/sdg --size=5589G
Create a RAID set with ISW metadata format
RAID name:  BackupFull6
RAID type:  RAID5
RAID size:  5589G (11720982528 blocks)
RAID strip: 64k (128 blocks)
DISKS:      /dev/sde, /dev/sdf, /dev/sdg
About to create a RAID set with the above settings. Continue ? [y/n] :y
$ sudo dmraid -s
*** Group superset isw_cdjhcaegij
--> Subset
name   : isw_cdjhcaegij_BackupFull6
size   : 3131048448
stride : 128
type   : raid5_la
status : ok
subsets: 0
devs   : 3
spares : 0
So I cannot understand why the size of the created array is only 3131048448 blocks, or about 1.5 TB. The first command seemed to imply it was going to create an array of 5589 GB.
The system is:
Description: Ubuntu 10.04.2 LTS
Release:     10.04
Codename:    lucid
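If this turns out to be a limit of the ISW metadata format (that's only a guess on my part), would plain mdadm software RAID avoid it? Something like:
Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
cat /proc/mdstat   # to confirm the array size after creation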
In Zentyal I could see the active drives, whether the array was degraded, and whether any syncing was happening while the array was building. How can I check that without Zentyal? Is there a terminal command or an application I can install that will give me the same information?
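I'm assuming it's an mdadm array underneath, in which case I think something like the following would show the same details, but I'd like to confirm:
Code:
cat /proc/mdstat               # active arrays, degraded state, and resync progress
sudo mdadm --detail /dev/md0   # per-array detail: state, failed devices, rebuild percentage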
I am learning software RAID 1 with CentOS 5.5. I created the RAID without any problems, then removed the first drive to check that everything still worked, and it booted fine. I have now installed the old drive back in the system as hdc and need to resync the drives (I reused the old drive, so the partitions are correct). I thought I could use raidhotadd, but it doesn't seem to exist anymore. How do I resync the drives in the array (hda primary, hdc secondary) using mdadm?
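I'm guessing the mdadm equivalent is just an --add, something like this (the partition number is an assumption on my part):
Code:
sudo mdadm /dev/md0 --add /dev/hdc1   # re-add the returned drive; mdadm starts the resync
cat /proc/mdstat                      # watch the resync progress
Is that all there is to it?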
This is the message I get when I try to start it: "mdadm: /dev/md0 assembled from 2 drives - not enough to start the array". Below is the information I've collected. Any help on how I can get the RAID back up and running so I can get the data off of it would be awesome.
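I was thinking of trying a forced assemble, something like this (device names are just examples from my setup), but I don't want to make things worse:
Code:
sudo mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1   # compare event counts and states on each member
sudo mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
Is that safe to try, or should I do something else first?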
I have a home Samba server with a 3ware Escalade 8506-8 and 5 x 500 GB hard drives in a RAID 5 array. Recently my 8506 died and I need to get a new one. However, I saw a 3ware Escalade 9500S-12 on eBay for about $20 more than a replacement 8506-8.
My question is: if I put my drives on the 9500S, will it recognize my existing RAID array, or will it want to build a new RAID array and format all of my data? I hope I have asked this question clearly; I'm a little short on sleep this week.
What is the best way to install Windows and Linux on a two-hard-disk array? With fakeRAID there are no problems in Windows, but Linux installation is almost impossible (I've tried, unsuccessfully). With software RAID, would it be impossible to share files between Windows and Linux? And finally, hardware RAID is possible, but cheap controllers have low performance. Is there any other way (apart from spending a lot of money on an Adaptec controller)?
I'm looking to stock my SuperMicro P8SCi with two 1-2 TB SATA hard disks for running backups and web hosting. There are reviews of certain disks stating that the low-power models get kicked out of the RAID due to their slow response time, and it also appears that there have been quality problems with these newer disks, as if the race to size has lowered their reliability.
Can someone recommend a good brand and specific disks that you've had experience with? I'd rather not need to replace these after putting them in, but I also don't want to pay significantly more for an illusion of quality.
I have one hard disk for my root partition and a disk array on a separate mount point. I rebuilt my disk array, but I didn't delete my original mount points beforehand because I was hoping it would just "pick up". So now when I boot up, the OS tells me that the filesystem check fails because it can't find the array to map to the mount point. I know that I need to edit my /etc/fstab and remove the line that defines the mount point on the disk array, but the filesystem appears to be read-only when I am in repair mode, and I can't force the write with vi.
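I'm assuming I just need to remount the root filesystem read-write before editing, roughly like this:
Code:
mount -o remount,rw /   # make the root filesystem writable while in repair mode
vi /etc/fstab           # then remove or comment out the line for the old array mount point
Is that the right approach from repair mode?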
I am currently trying to configure a set of hard drives as a RAID configuration. My system is running Red Hat Enterprise Linux Client release 5.1 as the base OS, and I am booting from CD. I am trying to image a set of drives that have not been imaged before. When the GUI dialog window for disk setup is displayed, it shows a default disk layout including an LVM slice. The disk layout already has a /boot partition. It is not what I would like, so I edit it to be the right size for my system and make it the primary partition. I also select it to be a software RAID. I then add three more partitions for my drive 'A', all of type software RAID and NOT primary partitions.
At this point my drives have the correct number of partitions except for showing the LVM slice. I select 'RAID' again, followed by 'Clone a drive to create a RAID device ...', followed by 'OK'. I then get a dialog to select the source and target; I select my drive 'A' to be the source and 'B' to be the target, followed by 'OK'. An error dialog is received stating that not all the partitions are of type software RAID. The disk partitions are all type software RAID except the extended LVM slice. I cannot get past this point, and I am following a procedure written some time ago by a person who is no longer available.
I have the newest Ubuntu installed on my machine. It currently has a 160 GB SATA drive. I just bought two shiny new 2 TB drives that I want to RAID together as 4 TB. How do I go about adding these two hard drives even though the install is complete? I want the 4 TB assigned to my /home directory.
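From what I've read, getting 4 TB out of two 2 TB drives means striping (RAID 0, no redundancy). I'm picturing something like this (device names are placeholders for my setup):
Code:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt
# copy the existing /home across, then point /home at /dev/md0 in /etc/fstab
Is that the right way to do it after the install, and is there anything I should watch out for?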
I just upgraded to 10.04 and it went very smoothly. The only problem is that this version now tries to check a couple of external hard drives that are not attached to the system. They were set up some time ago, and the boot will not proceed unless I manually enter "S" to skip. I have removed the folders for these disks that were in /media/..., but that didn't solve the problem.
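I'm guessing the leftover entries are still in /etc/fstab, so I was planning to either comment them out or mark them so a missing disk doesn't block the boot, something like (the mount point and UUID are just examples):
Code:
# either comment out / delete the stale lines, or add nofail so a missing disk doesn't stop boot:
UUID=xxxx-xxxx  /media/external1  ext3  defaults,nofail  0  0
Is that the correct fix, or is something else checking these disks at boot?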
I have three 640 GB SATA hard drives that I would like to put into a RAID 5 configuration. I would like to opt for software RAID 5 so it's hardware-independent. I was trying to follow these instructions, but they seem a bit dated.
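The gist of what I've pieced together (with placeholder device names) is below; am I on the right track with a more current approach?
Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
cat /proc/mdstat                                              # wait for the initial sync to finish
sudo mkfs.ext4 /dev/md0                                       # then create the filesystem
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'   # so it assembles at boot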
I have been battling with FC10 and software RAID for a while now, and hopefully I will have a fully working system soon. Basically, I tried using the F10 live CD to set up software RAID 1 between two hard drives for redundancy (I know it's not hardware RAID, but the budget is tight) with the following table:
[Code]....
I set these up using the RAID button in the partitioning section of the installer, except for swap, which I set up using the New Partition button, creating one swap partition on each hard drive that didn't take part in the RAID. Almost every time I tried this install, it halted due to an error with one of the RAID partitions and exited the installer. I actually managed it once, in about 10-15 attempts, but then I broke it. After getting very frustrated, I decided to build it using just 3 partitions:
[Code]....
I left the rest untouched. This worked fine: after completing the install and setting up GRUB, I rebooted into the installed system. I then installed GParted and cut the drive up further to finish my table on both hard drives. I then used mdadm --create etc. to create my RAID partitions. So I now have
I have created a RAID array on one of my servers. The RAID health is OK, but it shows a warning. What could be the problem? WARNING: 0:0:RAID-1:2 drives:153GB:Optimal Drives:2 (11528 Errors)
I have a NETGEAR ReadyNAS NV+ with four 1 TB drives in a RAID 5 array. This is our primary file storage. It has previously been backed up to a hardware RAID 0 array directly attached to our Windows server, but the capacity of that backup array is no longer sufficient. So the plan was: take a bunch of 200 GB to 320 GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250 GB drives, the array size is 500 GB. When I then add a 160 GB drive, the array size drops to 480 GB. Huh? If I keep adding disks (regardless of order), the final array size comes out at 1.8 TB, as per the attached screenshot. Now with the following drives, I expected something more like:
I'm running a Debian home server with a 3-disk (1 GB each) RAID 5 array using mdadm (the OS is on a separate disk). Now smartmontools has noticed some bad sectors on one of the disks, and I'm not sure what to do next (apart from backing up valuable data). I found some articles on how to fix these sectors, but I'm not sure what the effect on the whole array will be.
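What I was considering, based on what I've read (md0 and sdb are placeholders for my actual array and the affected disk), is roughly:
Code:
sudo smartctl -a /dev/sdb                             # confirm which disk has pending/reallocated sectors
echo check | sudo tee /sys/block/md0/md/sync_action   # scrub the whole array; md should rewrite unreadable sectors from parity
cat /proc/mdstat                                      # watch the check progress
Would a scrub like that be safe for the array, or could it make things worse?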
I have a home server running Openfiler 2.3 x64 with a 4 x 1.5 TB software RAID 5 array (more details on the hardware and OS later). All was working well for two years until, several weeks ago, the array failed with two faulty disks at the same time. Well, those things can happen, especially if one is using desktop-grade disks instead of enterprise-grade ones (way too expensive for a home server). Since it was most likely a false positive, I reassembled the array:
Code:
# mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: forcing event count in /dev/sdb1(0) from 110 upto 122
mdadm: forcing event count in /dev/sdc1(1) from 110 upto 122
[code]....
Right. Once is just a coincidence, but twice in such a short period of time means that something is wrong. I reassembled the array and again all the files were intact. But now was the time to think seriously about backing up my array, so I ordered a 2 TB external disk and in the meantime kept the server off. When I got the external drive, I hooked it up to my Windows desktop, turned on the server and started copying the files. After about 10 minutes, two drives failed again. I reassembled, rebooted and started copying again, but after a few MBs the copy process reported a problem - the files were unavailable. A few retries and the process resumed, but a few MBs later it had to stop again, for the same reason. Several more stops like those and two disks failed again. Looking at the /var/log/messages file, I found a lot of errors like these:
Quote:
Apr 12 22:44:02 NAS kernel: [77047.467686] ata1.00: configured for UDMA/33
Apr 12 22:44:02 NAS kernel: [77047.523714] ata1.01: configured for UDMA/133
Apr 12 22:44:02 NAS kernel: [77047.523727] ata1: EH complete
[code]....
The motherboard is a Gigabyte GA-G31M-ES2L based on Intel's G31 chipset, and the 4 disks are Seagate 7200.11 (with a firmware version that doesn't cause the frequent data corruption).
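Before replacing any hardware, I was planning to check whether the disks themselves report problems or whether this is a cable/controller issue (the drop to UDMA/33 makes me suspect the latter), with something like:
Code:
sudo smartctl -a /dev/sda | grep -i -e reallocated -e pending -e crc   # CRC errors usually point at cabling rather than the platters
Does that sound like a reasonable next step?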
I have a SATA drive that worked fine. Then I installed two more hard drives into my system. When these hard drives are installed, if I try to access the SATA drive in Linux, it will start lightly clicking and then the drive will become unavailable. If I power on the machine without the other two hard drives then it works fine. What could be causing this to happen? I don't think it's heat because the two hard drives are far away from the SATA drive.
I'm not entirely a newbie, but this seems like such a simple question that I'm not sure where else to ask it. I checked through the various HOWTOs and searched already and didn't find a clear answer, and I want to know for sure before we start investing in hardware. Is it possible to create a RAID 1 (mirroring only) array with 3 live drives, rather than with 2 live plus a spare? Our goal is to have 3 drives in a hot-swap bay and be able to pull and replace one drive periodically as a full backup. If I do:
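something like the following (device names are just examples):
Code:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
and then, when pulling a disk for an offsite copy, something like:
Code:
sudo mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1   # drop one member before physically pulling it
sudo mdadm /dev/md0 --add /dev/sdd1                       # when the replacement disk goes in
will all three disks stay as full, usable mirrors of the data?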
I'm currently using Windows Vista 32-bit on a RAID 1 array; I'm using the RAID provided by my motherboard so it's fakeRAID. Anyway, I'd like to do some C development under Linux but I'm not exactly sure how to go about installing it on a software RAID 1 array without messing up Windows. I'm not sure which Linux distro I'm going to install, so I'm hoping that information isn't important. Would I just resize my Windows partition and put Linux on the newly created partition? Do I have to worry about where Linux will put its bootloader or will it manage that on its own? I didn't mean software RAID, I meant fakeRAID.
We ran out of space on our server hard drive, so I installed 2 x 1GB drives, set them up as a software RAID 1 array, copied the contents of /home to it, and mounted it as /home for testing. Everything was OK, so I unmounted it, deleted the contents of the /home folders (don't worry, we're backed up), then remounted the array. Everything was fine until we rebooted. Now I can't access the array at all; during booting the error "mount: special device /dev/md1 does not exist" comes up twice, and trying to mount it manually gives the same issue. The relevant line from fstab reads:
/dev/md1 /home ext3 defaults 0 0
However, webmin shows only md0, the RAID 0 device on which the OS was originally installed. There is no /dev/md1 device file. The mdadm.conf file reads as follows:
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid0 num-devices=2 uuid=76fd4050:fb820568:c9bd3a59:ad3e70b0
So it's not listed; I'm assuming this is significant. Am I right, and whether I am or not, what can I do?
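My guess at a fix (assuming md1 can still be assembled from its superblocks) is something like:
Code:
sudo mdadm --assemble --scan                            # see whether md1 assembles at all
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm.conf'   # append the missing ARRAY line so it comes up at boot
Is that safe to try, or is the missing ARRAY line a symptom of something else?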
I'm breaking into the OS drive side with RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1) and have been testing the fail-over and rebuild process. Works great physically failing out either drive. Great! My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: Fail one of the disks out of the RAID-1, then image it to a file, saved on an external disk, using the dd command (if memory serves, it would be something like "sudo dd if=/dev/sda of=backupfilename.img") Then, re-add the failed disk back into the array. In the event I needed to roll back to one of those snapshots, I would just use the "dd" command to dump the image back on to an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on-hand, and using something like a tape drive is also not an option.
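Concretely, the cycle I was picturing looks something like this (device and file names are just examples from my setup):
Code:
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # drop one half of the mirror
sudo dd if=/dev/sdb of=/mnt/external/os-mirror.img bs=1M  # image the whole disk to the external drive
sudo mdadm /dev/md0 --add /dev/sdb1                       # re-add it and let the mirror rebuild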