I have the latest Ubuntu (version 10) and am trying to set up a RAID 5 array to use as storage. I have three 1 TB drives along with the 160 GB OS drive. Is what I want to do possible, and is there a GUI to perform this, or clear instructions on how to accomplish it? I am a novice when it comes to Linux but am trying to wean myself off of Microsoft.
I've spent all afternoon trying to install Ubuntu Lucid on my two fakeRAID 0 configured HDDs and am unable to set GRUB up. The fake RAID setup is provided by Intel Matrix Storage Manager; it is correctly enabled and the BIOS is also correctly set up -- in fact, I've managed to install Windows 7 with no significant hitch. After struggling with partitioning the drives (I had to follow advice I found in a very helpful guide online [0]), creating the filesystems AND getting Ubuntu's installer to actually do what it is supposed to do, I now cannot seem to set GRUB up. My system, as it stands, is not bootable at all, except via the live CD.
This is how the RAID0 dev is partitioned:
Code:
# fdisk -l /dev/mapper/isw_ecdeiihbfi_Volume0

Disk /dev/mapper/isw_ecdeiihbfi_Volume0: 1000.2 GB, 1000210694144 bytes
255 heads, 63 sectors/track, 121602 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk identifier: 0x6634b2b5
.....
It's been a real battle, but I am getting close. I won't go into all the details of the fight I have had, but I've almost made it to the finish line. Here is the setup: an ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB of RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1 TB WD HDD drives, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into "Try Ubuntu" mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the RAID components and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, grub2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell GRUB to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but it shows the two component disks of the array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL]. To recap: my problem is that after grub2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do but I have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
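For reference, the local-top hook I used that time was essentially the minimal sketch below (the exact script from the blog may differ); it just runs dmraid before the root filesystem is mounted, and needs to be made executable followed by "update-initramfs -u":
Code:
#!/bin/sh
# /etc/initramfs-tools/scripts/local-top/dmraid -- sketch, may differ from the blog's version
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
# activate all fakeraid (dmraid) sets before the root filesystem is mounted
/sbin/dmraid -ay
exit 0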
I am trying to get RAID 5 set up as storage. I have Karmic with MythTV set up on the primary drive and wish to just use the RAID 5 array for storage with a JFS filesystem.
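Something along these lines is what I'm picturing; the device names are just guesses for three spare drives, and it assumes mdadm and jfsutils are installed:
Code:
# create the array, put JFS on it, and mount it (sketch -- /dev/sdb1-/dev/sdd1 are assumed names)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo mkfs.jfs /dev/md0
sudo mkdir -p /storage
sudo mount /dev/md0 /storage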
I've got an ASUS M2A-VM motherboard that has support for RAID 0, 1 and 10, and I was just curious whether anybody has used fakeraid for RAID 0 with Ubuntu. If so, did it work out as planned?
I have 2 drives and wish to use the following partition setup:
sda1  /boot  1 GB   ext4
sda2  /      50 GB  ext4  RAID 0
sdb1  /      50 GB  ext4  RAID 0
Unfortunately only Ubuntu Server has the option to create a RAID during the install. Can somebody point me to a howto on setting something like this up? I'm thinking I will want to install onto sdb2, set up the RAID, and then copy the filesystem over to the RAID.
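Roughly the sequence I have in mind, once the temporary install on sdb2 is booted (partition names follow my layout above; the fstab and GRUB steps are only hinted at):
Code:
# build the RAID 0 from the two 50 GB partitions and copy the running system onto it (sketch)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb1
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt
sudo rsync -aAXx / /mnt    # -x stays on one filesystem, so /proc, /sys, /dev and /boot are skipped
# then point / at /dev/md0 in /mnt/etc/fstab and update GRUB to boot the new root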
I am in a situation where I am stuck with an LVM cleanup process. Although I know a lot about AIX LVM, this is the first time I am working with Linux LVM2. The problem is that I created two RAID arrays on the storage, which appeared as mpath0 and mpath1 devices (multipath) on RHEL. I created logical volumes and volume groups and everything was fine until I decided to clean the storage arrays and ran the following script:
#!/bin/sh
cat /scripts/numbers | while read numbers
do
    lvremove -f /dev/vg$numbers/lv_vg$numbers
    vgremove -f vg$numbers
    pvremove -f /dev/mapper/mpath${numbers}p1
done
Please note that 'numbers' was a file in the same directory, containing the numbers 1 and 2 on separate lines. The script worked well and I was able to delete the definitions properly (however, I now think I missed a parted command to remove the partition definition from the mpath device). When I created three new arrays, I got devices mpath2 to mpath5 on Linux, and then I created vg0 to vg2. By mistake, I ran the above script again for cleanup purposes, and now I get the following error message:
Can't remove physical volume /dev/mapper/mpath2p1 of volume group vg0 without -ff
Now, after thinking it through, I realize that I have messed up (particularly because the mpath devices did not map in sequence to the vg devices; the mapping was mpath2 --- to ---- vg0 and onwards). How can I clean up the LVM definitions now? Should I go for the pvremove -ff flag or investigate further? I am not concerned about the data; I just want to clean up these pv/vg/lv/mpath definitions so that LVM is in a clean state and I can start over with new RAID arrays from storage.
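The direction I'm leaning towards is to inspect first and only then force-remove; something like this, with the destructive step commented out until I'm sure of the device names:
Code:
# non-destructive look at what LVM and device-mapper still know about
pvs -o pv_name,vg_name,pv_uuid
vgs
lvs
dmsetup ls
# only once the right PV is confirmed and its VG is truly gone or unrecoverable:
# pvremove -ff /dev/mapper/mpath2p1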
I am interested in turning my home server into something that I can store backups on. I do photography and therefore have a lot of photos. I use Mac OS X for my photo editing, so it must be accessible from my MacBook. I am new when it comes to network storage servers, so what would be the best solution for me to be able to back up my photos seamlessly? I would like it easy enough that others can back up files without any terminal commands and such. What would you suggest? CIFS? RAID? iSCSI?
I was wondering what the proper way is to set up a hardware-based mirrored RAID. I have two 2 TB drives and an nvidia-based RAID on the motherboard. I used the nvidia RAID manager to set up a mirrored array consisting of those two drives. The total shows as a 1.81 TB array.
I boot into openSUSE 11.3, and in the partitioner I see two drives (/dev/sda and /dev/sdb, each 1.82 TB) listed instead of a single RAID drive. Am I doing something incorrectly that two drives show up instead of the array? Does something need to be enabled?
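One check I'm considering from a terminal is whether dmraid sees the nvidia metadata at all (if it does, the array should appear under /dev/mapper):
Code:
# list the fakeraid sets described by the on-disk metadata, then activate them
sudo dmraid -s
sudo dmraid -ay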
I want to build a 6x SATA RAID 5 system with one of the disks as a spare disk. I think this gives me a chance of 2 of the 6 disks failing without losing data. Am I right? Hardware: Intel ICH10R. First I will create a 3x SATA RAID 5, then I will add the spare disk, and after that I will add the other disks. This is what I think I should do.
Step 1: Create the RAID device
Code:
mdadm --create --verbose /dev/md0 --metadata 1.2 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
I read that "--metadata 1.2" is the best option. Is that true? Then create a filesystem on the RAID device.
Using this method of calculation:
* chunk size = 128 kB (for RAID 5)
* block size = 4 kB (recommended for large files, and most of the time)
* stride = chunk / block = 128 kB / 4 kB = 32 blocks
* stripe-width = stride * ((n disks in raid5) - 1) = 32 * ((5) - 1) = 32 * 4 = 128 blocks
Then:
Code:
mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=128 /dev/md0
Step 2: Add the spare disk
Code:
mdadm --add /dev/md0 /dev/sdd1
Is this enough?
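For the later step of adding the remaining disks, the kind of thing I have in mind is a grow/reshape along these lines (device names assumed; the reshape can take many hours, and older mdadm versions may want a --backup-file for the critical section):
Code:
# add two more members, then reshape from 3 to 5 active devices, leaving one disk as the hot spare
mdadm --add /dev/md0 /dev/sde1 /dev/sdf1
mdadm --grow /dev/md0 --raid-devices=5
# once the reshape finishes, enlarge the filesystem to use the new space
resize2fs /dev/md0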
I set up an HTPC about a month ago, and just expanded my storage by adding two 750 GB drives in addition to my OS drive. I am using Ubuntu 9.10 as my OS and need help setting up a RAID 1 on the two 750 GB drives.
gparted shows the two drives as /dev/sdb and /dev/sdc
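A minimal sketch of what I think is involved with mdadm, assuming each drive first gets a single partition of type "Linux raid autodetect":
Code:
# mirror the two partitions, format, and mount (partition names assumed)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /storage
sudo mount /dev/md0 /storage
# then add entries to /etc/fstab and /etc/mdadm/mdadm.conf so it comes back after a reboot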
I recently bought a new system that has an Intel Matrix Storage Manager "RAID controller" (ICH10R/DO) on it. I'm a bit baffled over what this really is. I see at [URL] that this controller is supported by the Linux dmraid and mdadm commands and has been supported in the 2.6 kernel for quite a while. This looks as though it is some sort of convergence: a brain-dead hardware chipset that requires software installed in the OS to manage it. Kind of reminds me of the wimpy Windows modems of the past. Here is how I deployed it: I set up two Seagate ST31500541AS disks as a mirrored pair in the hardware controller interface (Ctrl-I setup after POST).
I installed Fedora 12 in the usual fashion, though it was confusing considering I expected a single "RAID device" to be seen by Anaconda. I went ahead and set up the two native /dev/sda and /dev/sdb as a mirrored RAID device during installation (mdadm under the covers). Recently, Palimpsest has been insisting that I have a disk problem, with a "Disk has many bad sectors" error on /dev/sdb. When I run "dmraid -s" it tells me that the meta device is OK, and I see no hardware errors in the messages log. I'm not having kernel panics as others seem to have had on RHEL 5.x.
Greetings, fellow Knights of the Penguin Clan! I am having issues with device-mapper seeing a 275 GB RAID 5 LUN from my SAN storage. I'm using an IBM 2145 SAN Volume Controller. I am able to see a 40 GB RAID 10 device, though.
So I didn't notice when I set up my CentOS 5.5 server that I left / as RAID 0 on md1. All the rest are RAID 1. Is there a way I can convert the array to RAID 1 without risking data loss? I'm glad I caught this before I set up any other services. I've only set up Samba (smb) so far...
[root@ftpserver ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1               16G  3.0G   13G  20% /
I have Ubuntu 10.04 and a MegaRAID controller. The only tool I have is the notorious MegaCli. I need to be emailed when a disk has failed in the RAID array. How do I set that up?
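What I'm picturing is a small cron script along these lines; the MegaCli binary path, the "mail" command being available, and the exact "Firmware state" strings are assumptions and may differ on my box:
Code:
#!/bin/sh
# mail an alert if any physical drive reports a state other than Online (sketch)
MEGACLI=/opt/MegaRAID/MegaCli/MegaCli64
BAD=$($MEGACLI -PDList -aALL | grep "Firmware state" | grep -v "Online")
if [ -n "$BAD" ]; then
    echo "$BAD" | mail -s "RAID disk problem on $(hostname)" admin@example.com
fi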
The intention is to have this system dual-boot. When I first put it together, I decided to set up a RAID 5 array spanning 3 SATA drives. I installed Windows 7 first and decided I'd get to Linux later. I left 150 MB or so at the beginning of the array for /boot, and about 200 GB at the end for my Linux install. Now I'm getting to the Linux install. My distro of choice is Fedora 12. I start the setup, and at the point where it's time to partition, the installer tells me that it's unable to find any suitable storage devices.
I Ctrl-Alt-F2 to a console and run fdisk -l. fdisk reports three individual drives which all have partitions already. All have free space. None of it makes sense. So I turned to Google and found some threads which explain that this chip doesn't run a true RAID; rather, it's what's been referred to as fake RAID, meaning it depends on the Windows driver to actually present the array to the OS, and that the best way to get by that on Linux is to break the array and use LVM instead.
That's all well and good, but I lose two things in doing that. First, I lose the resiliency of RAID 5, and second, well, what does that do to my Windows install? I've considered moving all of my data from Windows to other machines and then just starting from scratch, but I'd really much prefer a method of using the chip's fake RAID in Linux. Is there a driver or module which I can install to make this happen?
I just want to ask if anyone knows how to set up SAN storage on Red Hat Enterprise Linux. I am quite familiar with doing this in AIX with IBM DS8000 storage but am not sure how to do it in RHEL. Does RHEL have volume management like AIX does, or do you need to install a third-party Veritas Volume Manager? I know you will ask me to search in Google, but I am writing so that I will be pointed in the right direction.
I have collected a number of computers over the years, and now I would like to put them to good use. I considered UEC, but many of them do not support hardware virtualization, and all I really need is storage. Across all the machines, I estimate that I have 4-5 terabytes of storage, all going to waste because each one has relatively little storage space. Is there any way I could set up a redundant storage solution that utilizes these machines in a networked system?
I have a Fedora NAS (not built by me); it has 1 TB of storage, 2x 500 GB set up as RAID 0. The server runs Fedora 7/8, I believe, which runs flawlessly. Sadly, I am running out of storage space and would like to add more. Can I just put another drive in the box? If so, what do I need to do, and what impact does it have on the existing RAID 0 setup? Ideally I would like to put two additional drives in, but I am not confident how to configure them. Is this just a case of mapping them in Windows 7?
I have set up a RAID 1 array and am trying to test whether it is set up correctly and whether errors are detected, reported and recoverable.
Started up the mdadm monitor with:
Code:
I set the RAID array to a faulty state by doing:
Code:
However, I do not get any problem reports at my e-mail address. When I test mdadm, I get this result:
Code:
When I look in the postfix folder, sure enough, there is no main.cf file there... but there IS a file named 'master.cf'. I am running Ubuntu 9.10 with default components -- I have postfix but no sendmail.
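The direction I'm looking at is to make sure postfix actually has a main.cf and that mdadm knows where to send mail; a sketch of what I believe is involved (the address is obviously a placeholder):
Code:
# regenerate a basic postfix main.cf
sudo dpkg-reconfigure postfix
# in /etc/mdadm/mdadm.conf add a line like:
#   MAILADDR me@example.com
# send a test alert for every array to confirm the mail path works
sudo mdadm --monitor --scan --oneshot --test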
I have a situation where I need to set up some sort of storage solution with RAID 5 redundancy. I was thinking that Linux would be the way to go, but I am not certain which platform would be best.
I was thinking of running two SATA RAID controllers to get me somewhere between 4 and 6 TB in RAID 5. I am very comfortable with Ubuntu now and would love to use it. I have also used FreeNAS in the past but would love to have a full OS on the machine if at all possible.
I'm interested in buying new hardware for my company. The old server (now 10 years old) should be replaced with a new one. Until now, I have been looking at different hardware suppliers, boards and various other places. I found a Tyan board [URL]. The hardware spec is quite interesting and the board would fulfill our requirements.
My question is: will both storage devices be supported by Ubuntu or Debian?
I'm setting up a web server, but I have no experience with RAID. I would like to try this configuration if possible:
2 x 500 GB HDD -- RAID 1
1 x 20 GB HDD -- logs and tmp
The old 20 GB drive I would like to use to store logs and temporary files (mounted on /var/log and /tmp respectively). With this I'm trying to reduce some disk usage on the RAID drives. My idea is that it would be better to write the access/error logs of the web server to a separate drive from the one serving the files, which may increase speed... does that sound crazy?
One problem is that during the installation, if I set up the RAID automatically, it will try to include my 20 GB HDD in the RAID as well... Will it work if I set up the RAID first (removing the 20 GB HDD) and then set the mount points on it after the installation?
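After the install, these are the kind of fstab entries I'm imagining for the 20 GB disk (assuming it comes back as /dev/sdc with two partitions; the device name is a guess):
Code:
# /etc/fstab additions (sketch)
/dev/sdc1   /var/log   ext4   defaults,noatime   0   2
/dev/sdc2   /tmp       ext4   defaults,noatime   0   2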
I've just finished setting up a RAID 1 on my system. Everything seems to be okay, but I have a very slow boot time. It takes about three minutes between the time I select Ubuntu from GRUB and the time I get to the login screen.
I found this really neat program called bootchart which graphically displays your boot process.
This is my first boot (after installing bootchart). I'm not an expert at reading these, but it appears there are two things holding up the boot, cdrom_id and md_0_resync. I tried unplugging my CD drive SATA cable, and this is the new boot image.
It's faster, but it still takes about a minute, which seems pretty slow on this system. The md0 RAID device is my main filesystem. Is it true that it needs to get resynced on each boot?
I'm not sure how to diagnose my CD drive issue. The model is a NEC ND-3550A DVD RW drive. I should also note that there's a quick error message at startup about the CD-ROM. It's too quick for me to read; just one line on a black screen saying "error: cdrom something something".
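To see whether the array is genuinely resyncing at boot (rather than just being checked), I'm planning to look at it after login with something like:
Code:
# per-array status; a running resync shows up as a progress line under the array
cat /proc/mdstat
sudo mdadm --detail /dev/md0 | grep -i -e state -e rebuild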
I'm trying to set up a RAID 5 array of 3x 2 TB drives and noticed that, besides having a faulty drive listed, I keep getting what looks like two separate arrays defined. I've set up the array using the following:
Code:
sudo mdadm --create /dev/md01 --verbose --chunk=64 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sde
So I've defined it as md01, or so I think. However, looking in the Disk Utility the array is listed as md1 (degraded) instead. Sure enough I get:
Code:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sde[3](F) sdc[1] sdb[0]
      3907028992 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
So I tried getting info from mdadm on both md01 and md1:
Code:
user@al9000:~$ sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Sun Jan  9 10:51:21 2011
     Raid Level : raid5
......
Is this normal? I've tried using mdadm to --stop and then --remove both arrays and start from scratch, but I end up in the same place. I'm just getting my feet wet with this, so perhaps I'm missing some fundamentals here. I think the drive fault is a separate issue, which is strange since the Disk Utility says the drive is healthy, and I'm running the self-test now. Perhaps a bad cable is my next check...
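Before tearing it down again, what I plan to compare is what the kernel, mdadm's scan, and the member superblocks each report for the array name and UUID:
Code:
# which name does the kernel use, and what do the superblocks say? (sketch)
cat /proc/mdstat
sudo mdadm --detail --scan
sudo mdadm --examine /dev/sdb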
Before I set up the RAID, but with this exact partitioning, the system booted perfectly. When I installed mdadm and created the RAID 1 mirror on sda6 and sdb1, the init got screwed up, and all I get is a shell in the initramfs, from where I can see that sda is bound to md, and cat /proc/mdstat tells me that I have an inactive sda[4]. I can't mount the root partition (sda2) because it's busy (I suspect dmraid of locking it), which is, I guess, why init cannot be found.
I wonder if my error is setting up a RAID array using a logical partition contained in an extended partition (though I hardly see why that would not work -- but the sda bind and the sda[4] in mdstat seem to tell me that it does not), or whether it's just the initrd that is improperly configured. The other thing that bothers me is that changing the partition type of the RAID partitions (fd to 0 - Empty), to disable RAID autodetection, resulted in the same behavior on boot, which again leads me to suspect a configuration file problem instead of an improper setup. The live CD doesn't seem to recognize the RAID, so I can't inspect the problem any further, but I can inspect the system configuration; I just don't really know where to start.
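The checks I'm thinking of running from the initramfs shell, and then from a chroot off the live CD, are roughly these (a sketch, not something I've confirmed yet):
Code:
# from the initramfs shell: is anything assembled, and can it be assembled by scan?
cat /proc/mdstat
mdadm --assemble --scan
# from a chroot into the installed system (via the live CD):
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
#   update-initramfs -u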
I am just getting into the RAID world with my home server. What I have:
Asus M3A78-CM motherboard (may be wrong, can't remember for sure) with 6 SATA2 connectors
3x 2 TB SATA2 drives
2 GB of DDR2 RAM set in bank A
AMD dual core (I'll know what it is when I get the system booted)
What I am trying to figure out is this: when I build this system, I will put the HDDs into SATA ports 1-3 and in the BIOS I will set up a RAID 5 array. Now, do I just format and partition like normal? Would it be better to have a smaller, better-performing SATA2 drive for the system so I can have the RAID be only for file storage?
From what I have read about this, I need to format each drive into at least two partitions, but I do not know what needs to be done; the guides just vaguely say something about two partitions and then move on (trick of the trade? keep all of us in the dark? LOL). I would like to have a RAID for my storage and a faster disk for the OS and home directories. But if it cannot be done, then that's how it is. So do I put the TB drives in SATA ports 4-6 and my other drive in SATA port 1?
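If I end up skipping the BIOS fakeraid and doing Linux software RAID instead (a different approach, I know), my understanding is that one partition per data drive is enough; a sketch, with the 2 TB drives assumed to show up as sdb-sdd and the OS on its own disk:
Code:
# each 2 TB drive carries a single partition of type fd (Linux raid autodetect), then:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo mkfs.ext4 /dev/md0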
I'm looking to set up a CentOS server with RAID 5. I was wondering what the best way to set it up is, and how, with the ability to add more HDDs to the RAID later on if needed.
I'm setting up a backup server using CentOS 5.3 and an Adaptec 5805 RAID card, and discovered that I can't use a RAID setup that is over 2 TB in size as the boot drive. What I eventually did was set up 2 RAIDs on the same set of 4 drives so that I have a 200 GB 'drive' for booting and a 2.6 TB 'drive' for data. I want to keep the OS on the RAID so I have some protection, instead of having a dedicated stand-alone drive for the OS. This will be a company-wide backup server and I want to minimize the possibility of drive failure for the OS as well as the data.
I was able to install and reboot the system and everything seemed to be working, but after working on it a bit I did a reboot and wound up with a non-booting system. I can boot to rescue mode with the install DVD and mount the original system, and I even tried to reinstall the GRUB setup per instructions I found on the net, but I still get a system that hangs after it asks if I want to boot from the CD. If I take the CD-ROM option out of the boot order in the BIOS, I stop at the same place, minus the boot CD prompt.
I'm guessing it has something to do with one of the RAID drives being over 2 TB, but I'm booting from a 200 GB sized RAID, so I'm really at a loss for what to do next.
Is what I've described the correct way to handle booting with a large RAID, or is there another way to reconfigure the drives as one big 2.8 TB RAID and use something other than GRUB to boot from it?
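One thing I plan to check from the rescue environment is how each virtual drive is labeled, since anything over 2 TB needs a GPT label and the legacy GRUB on CentOS 5 can't boot from GPT without extra work (device names below are just how I expect the two virtual drives to show up):
Code:
# show the partition table type (msdos vs gpt) on each Adaptec virtual drive
parted /dev/sda print
parted /dev/sdb print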