Ubuntu Servers :: Alternatives To Soft And Hardware RAID 1
Sep 17, 2010
I am trying to make a NAS solution for our club, which should hold different data for download. I have been looking at FreeNAS, Openfiler and other solutions, but they don't offer quite what I want. I want data security, but I am trying to avoid software and hardware RAID. Windows Home Server has a function where it automatically copies the data from the 'main' storage disk over to the 'backup' storage drive.
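Something similar can be approximated without RAID by a scheduled one-way copy. A minimal sketch using rsync and cron, assuming hypothetical mount points /mnt/main and /mnt/backup:
Code:
# /etc/cron.d/mirror-main -- nightly one-way copy of the main disk to the
# backup disk; paths are placeholders, adjust to your actual mount points
0 2 * * * root rsync -a --delete /mnt/main/ /mnt/backup/
Unlike RAID, this also survives accidental deletions for up to a day, since the copy only happens at night.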
I would like to build a NAS from a PC (D510MO) running Debian. I have two HDDs (one 3.5" 1TB and one 2.5" 500GB). On the 3.5" HDD I already have two partitions (100MB + 40GB) dedicated to Win7-64. Now I want to install Debian (as a second OS) on this PC and have some kind of software RAID or disk mirror across 500GB of space. I am planning to create a third partition of 500GB on the 3.5" HDD (identical to the 2.5" HDD's size) in order to have a mirrored 500GB space.
Please send me some suggestions on where I should install Debian: on the 500GB 2.5" HDD or the 500GB 3.5" HDD? Will Debian boot from either HDD (3.5" or 2.5") after I create the mirror? What Linux software should I use for mirroring (mdadm)?
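mdadm is indeed the standard tool for this. A minimal sketch, assuming the two 500GB partitions appear as /dev/sda3 and /dev/sdb1 (hypothetical device names):
Code:
# Create a two-way mirror from the two 500GB partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb1
# Put a filesystem on the mirror and record the array for boot-time assembly
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
Note that the mirror holds data, not the OS: Debian itself can live on either disk, but booting from both drives requires installing GRUB to both MBRs.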
I have a database server (IBM x3650 M2) with about 3TB of data on a SAN (Hitachi), with LVM on top of software RAID (RAID1) based on multipath (2 SAN boxes in different buildings). After booting the server, multipath starts, but no md starts the mirror. The same configuration works with SLES 10.
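One hedged guess: md may be scanning the raw SCSI paths before multipathd publishes the mapped devices, and then refusing to assemble. A sketch of an mdadm.conf that restricts assembly to the multipath devices (the UUID is a placeholder):
Code:
# /etc/mdadm/mdadm.conf -- assemble only from multipath-mapped devices
DEVICE /dev/mapper/*
ARRAY /dev/md0 UUID=<your-array-uuid>
# rebuild the initramfs afterwards so the boot-time copy matches:
#   update-initramfs -u
If the array still does not come up, a manual 'mdadm --assemble --scan' after boot at least confirms whether boot ordering is the problem.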
Could any RAID gurus kindly assist me with the following RAID5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
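Before any re-create, a forced assemble is usually the less destructive first move, since it rewrites only event counters rather than whole superblocks. A sketch:
Code:
# Inspect what each member thinks of the array (event counts, device roles)
mdadm --examine /dev/sd[abcd]2
# Try to force-assemble from the freshest members before resorting to
# --create --assume-clean, which overwrites the superblocks
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2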
I have three WD 1.5TB hard drives. Two of them are already in a linear RAID, also called concatenated I think (the same as JBOD). Can I add the third drive to the RAID without losing data? Update: "Using mdadm software raid."
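mdadm can append a device to a linear array in place; only the filesystem then needs growing. A sketch, assuming the array is /dev/md0 and the new drive is /dev/sdc (hypothetical names):
Code:
# Append the third drive to the existing linear (concatenated) array
mdadm --grow /dev/md0 --add /dev/sdc
# Grow the filesystem into the new space (ext3/ext4)
resize2fs /dev/md0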
I am attempting to run Ubuntu 10.04 Server on RAID 1 and am consistently hitting the same issue when trying to boot the system for the first time (after what seems to be a successful install of the OS). I am creating the RAID 1 using the directions found in the Ubuntu Server Guide. The error I receive when trying to boot the system for the first time is...
Code:
mount: mounting /dev/disk/by-uuid/f35415ee-4c14-4eb1-995f-f19fbcd760c7 on /root failed: Invalid argument
mount: mounting /dev on /root/dev failed: No such file or directory
mount: mounting /sys on /root/sys failed: No such file or directory
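This pattern usually means the initramfs never assembled the RAID1, so the by-uuid device the root entry points at does not exist yet. A hedged sketch of things to check from a live CD:
Code:
# From a live CD: does the array assemble, and what UUID does it expose?
mdadm --assemble --scan
blkid /dev/md0
# If that UUID differs from the one in the boot error, fix /etc/fstab in
# the installed system, then from a chroot into it:
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
#   update-initramfs -u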
I've had this server for a while and I've periodically received similar errors, but it's been worse lately. I'm trying to diagnose it, and similar posts have left me with 2, maybe 3, ideas about what could be wrong. Here is the setup:
Ubuntu Server 10.10 Maverick, kernel 2.6.35-30-server
Sans Digital TR8 8-bay RAID enclosure - 2 port multipliers to eSATA outputs
8x 1TB WD Caviar Black
Syba PCI-e card with SiI3124 chipset - 2 external eSATA inputs
I do not use the RocketRAID 622 card that came with the enclosure because I had problems with drivers and configuration, so I went with the SiI chipset. The RAID is configured with mdadm, level 10, running the ext3 filesystem:
What's odd is that the RAID dropped out and completely locked up the computer multiple times, then marked one of the drives as failed. I did a re-add on the drive and it rebuilt without issue. I ran e2fsck -f to clean up the problems it had when it crashed, but as soon as I do heavy reads/writes the errors start showing up again. This is primarily a media server for music and movies, but also for backups and printing. Heavy reads/writes are generally transcoding movies and music, which is what I was doing when it failed.
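With port multipliers behind a SiI3124, link resets that take several drives offline at once are a common culprit, so it may help to separate link faults from drive faults before blaming the array. A hedged diagnostic sketch:
Code:
# Look for SATA link resets / hard resets around the time of the drops
dmesg | grep -iE 'ata[0-9]+|link|reset'
# Errors local to one drive (not the link) show up in its SMART counters
smartctl -a /dev/sdb | grep -iE 'reallocated|pending|crc'
# Confirm array state and which member got kicked
mdadm --detail /dev/md0
A whole-array lockup followed by a single "failed" drive smells more like the shared eSATA link than like one bad disk.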
I have a problem installing Ubuntu on an HP ProLiant ML115. This server has a 1TB mirrored RAID setup (and one 250GB drive). The problem is that Ubuntu apparently doesn't like it: it detects the RAID but fails to partition the hard drives, and the installation can't continue. I tried the guided partitioning using the whole disk (the detected RAID array). Here is what syslog gives me:
May 17 22:32:30 main-menu[504]: INFO: Menu item 'disk-detect' selected
May 17 22:32:30 net/hw-detect.hotplug: Detected hotpluggable network interface eth0
May 17 22:32:30 net/hw-detect.hotplug: Detected hotpluggable network interface lo
May 17 22:32:31 hw-detect: Loading PCMCIA bridge driver module: i82365
May 17 22:32:31 hw-detect: FATAL: Module i82365 not found.
[Code]...
PS: I'm very much a noob regarding this kind of stuff, so detailed instructions please; also feel free to give configuration tips.
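The ML115's onboard RAID is BIOS "fake RAID", which the installer exposes via dmraid, and that mapping is often what trips up partitioning. One hedged option is to drop the fake-RAID metadata and let Linux software RAID do the mirroring instead. A sketch (destructive to the RAID metadata, so back up first):
Code:
# From the installer shell (Ctrl+Alt+F2) or a live CD: list fake-RAID sets
dmraid -r
# Erase the BIOS RAID metadata so the installer sees plain disks, then
# build the mirror with mdadm in the partitioner instead
dmraid -rE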
I'm having a problem with the installation of Ubuntu Server 10.04 64-bit on my IBM xSeries 346 Type 8840. During the installation the system won't recognize any disk, so it asks which driver it should use for the RAID controller. There's a list of options, but nothing seems to work. I've been searching the IBM website for an appropriate driver, but there is no Ubuntu version (there are Red Hat, SUSE, etc.). I was thinking about downloading the correct driver onto a floppy disk to finalize the installation, but apparently there is no 'general' Linux driver to solve the problem here.
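The drivers IBM packages for Red Hat/SUSE are generally the same kernel modules Ubuntu already ships, so identifying the controller may be more useful than hunting for a download. A hedged sketch from the installer shell; the module names are assumptions to be matched against what lspci reports:
Code:
# From the installer shell (Ctrl+Alt+F2): identify the RAID controller
lspci | grep -i raid
# Try loading the matching in-kernel module by hand; for IBM ServeRAID
# hardware it is typically one of these (pick per the lspci output):
modprobe ips      # older Adaptec-based ServeRAID (4/5/6/7k)
modprobe aacraid  # newer Adaptec-based ServeRAID (7t/8i)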
I've recently completed a fresh install of 10.04 on a home file server and upgraded the hard drives in my storage array. My PREVIOUS hardware was:
Old version of Ubuntu (I forget which one exactly, but I know I had missed a few upgrade cycles)
3x 500GB Seagate Barracudas (for the array)
Areca 1220 hardware RAID controller
Intel Core 2 Duo 6600
320GB Seagate for the boot drive
I was running that hardware for about five years or so and it was rock solid. After the upgrade the hardware specs are:
Ubuntu 10.04
Areca 1220 hardware RAID controller
4x 1000GB Samsung (for the array)
Intel Core 2 Duo 6600
320GB Seagate for the boot drive
The fresh install of Ubuntu 10.04 went remarkably well. The drivers for that raid controller are in the kernel, which is great. I was able to access the old array after upgrading Ubuntu. Now I am trying to create a new array with the four 1000 GB drives in a RAID 5. Obviously that gives a maximum storage capacity of 3 TB, greater than the 2 TB threshold that seems to be so important. I've been doing some digging and here is where my questions start:
it appears as though gparted doesn't support partitions greater than 2 TB, correct?
it also doesn't seem as though parted supports ext3 or ext4, is that correct?
If this is the case, how do I create a partition with ext4 that is greater than 2 TB?
I can see the array volume in gparted (which is a relief), but it lists the size as 2.73 TiB, which I find curious because that is over 2 TB, but not the full capacity of the volume. I can also get to the volume in parted. But I see in the parted documentation that using the makepartfs command is discouraged; instead, one should use the mkpart command to create an empty partition and then use external tools like mke2fs to create the filesystem.
How do I proceed from here? What does the community think is the best course of action to create a 3TB partition in ext4? Then I need to add the array to /etc/fstab so it mounts automatically at boot, right?
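On the size question first: 3 TB (decimal) is exactly 2.73 TiB (binary), so gparted is in fact showing the full capacity. The usual recipe for a >2TB volume is a GPT label, an empty partition from parted's mkpart, then mkfs from e2fsprogs, exactly as the parted docs suggest. A sketch, assuming the array volume appears as /dev/sdb (hypothetical name):
Code:
# GPT lifts the 2TB limit of the old MS-DOS label
parted /dev/sdb mklabel gpt
# One aligned partition spanning the volume (no filesystem yet)
parted -a optimal /dev/sdb mkpart primary 1MiB 100%
# Create the filesystem with external tools, per the parted docs
mkfs.ext4 /dev/sdb1
# Mount at boot via /etc/fstab (UUID comes from 'blkid /dev/sdb1'):
#   UUID=<uuid-from-blkid>  /mnt/array  ext4  defaults  0  2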
I'm setting up a web server but I have no experience with RAID. I would like to try this configuration if possible:
2x 500GB HDD - RAID1
1x 20GB HDD (logs and tmp)
I would like to use the old 20GB drive to store logs and temporary files (mounted at /var/log and /tmp respectively). With this I'm trying to reduce some disk usage on the RAID drives. My idea is that it would be better to write the web server's access/error logs to a drive separate from the one serving the files, which may increase speed... sounds crazy?
One problem is that during the installation, if I set up the RAID automatically, it will try to include my 20GB HDD in the RAID as well... Will it work if I set up the RAID first (with the 20GB HDD removed) and then set the mount points on it after the installation?
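That order should work, since the mount points are just fstab entries added later. A sketch of the post-install step, assuming the 20GB disk appears as /dev/sdc with two partitions (hypothetical names):
Code:
# After install: filesystems on the 20GB disk for logs and tmp
mkfs.ext4 /dev/sdc1
mkfs.ext4 /dev/sdc2
# /etc/fstab entries (UUIDs from blkid are safer than device names):
#   /dev/sdc1  /var/log  ext4  defaults         0  2
#   /dev/sdc2  /tmp      ext4  defaults,nosuid  0  2
Separating log writes from data reads is not crazy at all; it is the same reasoning behind putting database logs on their own spindle.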
I'm in the process of setting up my very first Ubuntu server, using 10.04 with 2GB RAM. I am using Ubuntu's software RAID (mdadm). It will be a file server with 4 hard drives (3 in RAID5 and 1 as a spare). All 4 drives have 2 partitions:
Partition 1 - 100MB
Partition 2 - the remaining drive space
I set up the first 3 drives' partition 1s as RAID1, with the 4th drive's partition 1 as a spare. This is where I'm mounting /boot (it's my understanding that /boot cannot be on a RAID5). I set up the first 3 drives' partition 2s as RAID5, with the 4th drive's partition 2 as a spare. This is where I'm mounting /. I believe so far I'm set up correctly to deal with a drive failure, and the system should operate just fine. What I don't know is what to do with swap. I want to retain the ability to operate with a drive down, but I have also read warnings about putting swap on a RAID. How would you set up swap?
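The warnings mostly concern swap striped across plain disks, where a dead drive kills any process with pages on it; swap on md RAID1 is generally considered fine and keeps the box alive through a failure. A sketch, assuming a small third partition is carved on two of the drives for it (partition numbers are assumptions):
Code:
# A small RAID1 just for swap, so the system survives a dead drive
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md2
swapon /dev/md2
# /etc/fstab entry:
#   /dev/md2  none  swap  sw  0  0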
I am trying to build a storage server using software RAID. After writing RAID information to what I thought were the right drives, I realized the configuration was not going to work. Since then I have modified the RAID configuration more than once, plus this is a project that was started a few months back. Between the time delays and the currently written configuration, things are a confusing mess.
What I would like to do is wipe out all the software RAID and start over again. The problem is that I am unable to figure out how to get rid of it. I used DBAN to wipe all four of my 1TB drives, which took a while. But still, the RAID configuration comes up when I try to do a new install. Where is the RAID configuration data stored, if not on the HD? How can I scrub that information off so I can start all over again?
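mdadm stores its configuration in a superblock on each member device (near the start or end of the partition, depending on metadata version), so it does live on the drives; --zero-superblock is the targeted way to scrub it. A sketch, with device names as placeholders:
Code:
# Stop any assembled arrays first
mdadm --stop /dev/md0
# Erase the md superblock from every former member partition
mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
If the configuration still reappears after a full DBAN pass, it may be worth checking whether the motherboard's BIOS fake-RAID is enabled, since the BIOS can recreate its own per-disk metadata.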
I'm using an Intel Mac Pro running Ubuntu Server 10.4 64-bit, and I have it working. Currently I'm just booting Ubuntu via Boot Camp, not EFI. The OS drive is 250GB, which I know is more than enough, but it was what I had free at the time. I added 3x 1TB drives to the computer, but I'm not sure how to create a RAID with them. I've done some searching but still haven't been able to get it done successfully.
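A minimal mdadm sketch for turning the three added drives into one RAID5 volume (about 2TB usable), assuming they appear as /dev/sdb, /dev/sdc and /dev/sdd (hypothetical names):
Code:
# Create a 3-drive RAID5 from the added disks
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0
# Record the array so it assembles at every boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u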
I have an Areca hardware RAID array that I'm trying to format & partition on a fresh Ubuntu 10.04 LTS installation. The OS drive is not on the RAID card, it's entirely separate. The RAID is a 6TB volume so I realize I have to use parted to format it, not fdisk (which I've always relied on).
My problem is that I can't figure out how to get parted to like my settings. It seems like everything I try gives me the warning "Warning: The resulting partition is not properly aligned for best performance." Here's what I'm doing:
Code:
(parted) p
Model: Areca ARC-1280-VOL#00 (scsi)
[code].....
What start/end settings should I use to get a properly aligned partition? How do I know? I have tried a mix and match of 0, 0s, 1, 1s, -0, -0s, -1, -1s, and 100% for my start/end with no success.
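The warning disappears when the partition start is a multiple of the volume's optimal I/O size; the easy recipe is to give the start in MiB instead of sectors and let parted align it. A sketch:
Code:
# A 1MiB start (sector 2048) is divisible by any power-of-two stripe size,
# so it satisfies the alignment check on virtually every RAID volume
parted -a optimal /dev/sdb mklabel gpt
parted -a optimal /dev/sdb mkpart primary ext4 1MiB 100%
Starts like 0 or 1s fail because they land inside the first MiB, which cannot be aligned to the array's stripe.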
I am trying to grow my array to make full use of my drives. I have a RAID6 using mdadm on my home server. The array consisted of 6x 1.5TB drives when it was created. Since then I've been replacing the 1.5TB drives with new 2TB drives, and now I have replaced the last 2 drives.
When I added the first 3 disks I did not use the whole disk for the RAID partition, but rather made it the same size as the 1.5TB partitions. As it turns out, this may have been a bad idea. (But it gave me another 3 partitions of 500GB that I turned into 1 disk using mhddfs.)
Now I'm trying to grow the array. I've been testing in a virtual environment how to do it, but I cannot find another method than this (see the sketch after the list):
1) fail 1 disk
2) re-partition the disk to use the whole disk
3) re-add the drive as a spare
4) start the now-degraded array and let it resync
5) wait quite a while (approx. 5 hours)
6) start again from step 1 for the other disks
7) use mdadm's grow command to enlarge the array
8) use resize2fs to fill the array to max size
Although this is a RAID6, so I keep some redundancy, I still worry about degrading the array so many times and rebuilding the whole thing. I mean, I end up reading the whole array 3 or 4 times over doing this.
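For the record, the per-disk cycle looks roughly like this (member names are placeholders); failing one member at a time leaves the RAID6 single-redundant rather than unprotected, so the repeated resyncs are uncomfortable but never bare:
Code:
# Repeated once per undersized member (sdb1 is a placeholder)
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# ...repartition sdb to full size, then:
mdadm /dev/md0 --add /dev/sdb1
watch cat /proc/mdstat   # wait for the resync to finish
# After all members are full-size, grow the array, then the filesystem
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0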
I have installed Ubuntu Server LTS 10.4 on an HP MediaSmart Server. It has four disks. On the first one I have the original HPFS/NTFS partition (left untouched), a Linux ext4 partition for /, a Linux swap partition, and the rest of this first disk (sda) is part of a RAID5 with the three other disks. I now have this first ext4 partition shown as full. Is there an easy way to give it more space without losing the whole setup? I have 2TB of data on this server and everything else is fine.
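Growing the root partition in place is awkward with a RAID member sitting right behind it, so a gentler first step may be to find what is filling / and relocate it onto the RAID volume. A hedged sketch (the RAID mount point is an assumption):
Code:
# Find the big consumers on the root filesystem only (-x stays on one fs)
du -x --max-depth=1 / | sort -n | tail
# Typical fix: move a bulky directory onto the RAID and bind-mount it back
mv /var/cache/apt /mnt/raid/apt-cache
echo '/mnt/raid/apt-cache  /var/cache/apt  none  bind  0  0' >> /etc/fstab
mount /var/cache/apt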
I've been running my dedicated server on Ubuntu 10.04 for 300 days non-stop so far (touch wood). I'm planning to purchase a dedicated server to host many large sites. The specs are a Sandy Bridge Xeon with 16 bays x 2TB storage using RAID5 (Adaptec RAID 51645). I don't have experience with RAID, but I read that ext4 can only support filesystems up to 16TB, and my plan for the system is to have 32TB of storage. How can I make Ubuntu run this configuration?
Also, can Ubuntu 10.04 recognize the Adaptec card? Going through the Adaptec website, there is no mention of drivers for Ubuntu (although many other distros are available).
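Two hedged notes. First, the 16TB ceiling for ext4 (at 4KB block size) can be sidestepped either by carving the controller's space into two sub-16TB volumes or by using XFS, which handles volumes of this size comfortably. Second, Adaptec Series 5 cards are normally driven by the in-kernel aacraid module, so the absence of an Ubuntu download on Adaptec's site may not matter. A sketch:
Code:
# Option A: one big XFS filesystem instead of ext4
mkfs.xfs /dev/sda1
# Option B: two <16TB ext4 filesystems on two logical drives/partitions
# Check whether the kernel already drives the Adaptec card:
lspci -k | grep -A2 -i adaptec
lsmod | grep aacraid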
I'm looking at building a small NAS setup for my home to hold all my media (DVDs, pictures, music, etc.). I'm not too concerned with backups at this point, as I have all the data elsewhere as it is, but I would like one central spot to access it.
A friend of mine turned me on to Ubuntu for this, but he's never tried it himself, so I'm looking to see if anyone else has done this or if there are issues with it. I've used Linux off and on for several years, but this would be my first Ubuntu install.
I'm looking at either using some older hardware I have sitting around, or something cheap like a small Dell server (T310 or something). I'd get 4x 2TB drives and just use them in a software RAID5 array. It would be accessed through a few different Samba shares, I'm thinking.
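This is a well-worn combination of mdadm plus Samba. A minimal share sketch for read-mostly media, with the share name and path as assumptions:
Code:
# /etc/samba/smb.conf -- hypothetical media share on the RAID5 mount
[media]
   path = /mnt/raid/media
   read only = yes
   guest ok = yes
# a writable share for backups could follow the same pattern with
# 'read only = no' and 'valid users = ...' instead of guest access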
EFI GUID partition support works on both 32-bit and 64-bit platforms. You must include GPT support in the kernel in order to use GPT. If you don't include GPT support in the Linux kernel, then after rebooting the server the filesystem will no longer be mountable, or the GPT table will get corrupted. By default Red Hat Enterprise Linux / CentOS comes with GPT kernel support. However, if you are using Debian or Ubuntu Linux, you need to recompile the kernel. Set CONFIG_EFI_PARTITION to y to compile this feature.
Is this true? Does Ubuntu really have no native GPT support in the server install? I've never compiled a kernel. Is that in itself going to be a mind bender? How much doo doo am I going to get into if I haven't done it a few times? Trust me when I tell you it's no thrill formatting and reformatting an 8TB RAID volume, or the individual disks, if it screws up. I've been on that ride way too long already. I need things to go smoothly. This is not a play-toy server; it will be used in a business.
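Rather than take the quote's word for it, it is quick to check the running kernel's configuration directly; if CONFIG_EFI_PARTITION is already y (as it generally is on stock Ubuntu server kernels), no recompiling is needed. A sketch:
Code:
# Check whether the installed kernel was built with GPT partition support
grep CONFIG_EFI_PARTITION /boot/config-$(uname -r)
# Expected output on a stock Ubuntu kernel:
#   CONFIG_EFI_PARTITION=y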
I was having this issue with my server when I tried upgrading (fresh install) to 10.04 from 9.04. To test it out after going back to 9.04, I installed 10.04 Server in a virtual machine and found the same issues. I was using the AMD64 version for everything. In the virtual machine I chose the OpenSSH and Samba server options in the initial configuration. After the install I ran a dist-upgrade and installed mdadm. I then created an 'fd' partition on 3 virtual disks and created a RAID5 array using the following command:
This is the same command I ran on my physical RAID5 quite a while ago, which has been working fine ever since. After running a --detail --scan >> mdadm.conf (and changing the metadata from 00.90 to 0.90) I rebooted and found that the array was running with only one drive, which was marked as a spare. On the physical server I kept having this issue with one drive that would always be left out when I assembled the array; it would work fine after resyncing until I rebooted. After a reboot, the array would show the remaining 6 drives (of 7) as spares.
I updated mdadm to 3.1.1 using a deb from Debian experimental and the RAID was working fine afterward. But then the boot problems started again. As soon as I added /dev/md0 to the fstab, the system would hang on boot, displaying the following before hanging:
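Two hedged suggestions for the 10.04 symptoms: keep a not-yet-assembled array from blocking boot, and make sure the initramfs carries the corrected mdadm.conf (an initramfs with a stale copy is a classic cause of members showing up as spares). A sketch:
Code:
# /etc/fstab -- nobootwait stops mountall on 10.04 from hanging the boot
# if the array is not up yet:
#   /dev/md0  /srv  ext4  defaults,nobootwait  0  2
# After any change to /etc/mdadm/mdadm.conf, refresh the embedded copy:
update-initramfs -u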
I have recently had a problem with my 10.04 server machine. It will not boot; it seems to take forever on the loading screen (it's normally a headless server, but I connected a monitor when I couldn't SSH in). But that's not why I'm here.
Knowing that I do rsync backups of my machine every night at midnight, I just bit the bullet and formatted my / partition. The reinstall went fine, and I turned off automatic updates (I suspect an update caused the problem). But now I cannot mount my JMicron RAID1, which is where my rsync backup is (doh!).
sudo fdisk -l
Code:
WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
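A JMicron mirror is BIOS fake RAID, so the OS needs dmraid to expose the set as a single device; fdisk poking at the raw member /dev/sdd is expected to look odd. A sketch (the set name under /dev/mapper varies, so it is a placeholder here):
Code:
# Install and activate the fake-RAID mapping for the JMicron set
sudo apt-get install dmraid
sudo dmraid -ay
# The mirror should now appear under /dev/mapper; mount it from there
ls /dev/mapper/
sudo mount /dev/mapper/jmicron_<setname>1 /mnt/backup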
I just restarted my server (Ubuntu 9.04 Server, running on ESXi 4.0), and while copying files onto the server using Samba I got strange problems and the connection was lost. When I rebooted the whole system, ESXi as well as Ubuntu Server, I found problems on my RAID disk.
In the directory where the new files were added I have a lot of files, but many of them do not have any info except their name:
Both mirror disks are still functioning, and I can still add/delete files from the server, from other Linux systems, and from other Windows systems via Samba.
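Entries that list a name but no metadata usually point at filesystem damage rather than a problem with the mirror itself, so a hedged next step is an offline fsck of the array device (array name and mount point are assumptions):
Code:
# Unmount first -- running fsck on a mounted filesystem makes things worse
sudo umount /srv/share
sudo fsck -f /dev/md0
# Anything unrecoverable ends up in lost+found on the array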
So my server has 7 HDs in RAID5. All was working well until one of them died. The HD that died sort of works: it can read about half a file, and it freezes on the benchmark test in Disk Utility. Unfortunately, when I take it out, on boot it says: 'The drive for /media_kbt is not ready or present; press S to skip or M for manual recovery.' I hit S and then go to Disk Utility, but I can't start or add disks to the array.
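From the command line, a degraded RAID5 can usually be started and a replacement added even when Disk Utility refuses. A sketch with placeholder device names:
Code:
# Assemble and force-run the array with 6 of 7 members (degraded but usable)
sudo mdadm --assemble --scan
sudo mdadm --run /dev/md0
# After installing the replacement disk, add it and watch the rebuild
sudo mdadm /dev/md0 --add /dev/sdh1
watch cat /proc/mdstat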
I've installed Ubuntu 10.04 LTS (64-bit) Server Edition on a box here (a few times now), and each time I attempt to boot post-install, I get RAID errors. The box has 4 hard drives in it, and I've disabled the hardware RAID in the BIOS because I couldn't seem to get that to work. During the installation, I opted to handle partitioning manually. One set of hard drives has two partitions: one for swap, and one for /. When all is said and done, I end up with three RAID1 mirrors: one for swap (/dev/md0), one mounted at / (/dev/md1), and one mounted at /srv (/dev/md2).
Based on the reading I've done over the past 48 hours, I think I'm in serious trouble here with my RAID5 array. I got another 1TB drive and added it to my other 3 to increase my space to 3TB... no problem.
While the array was resyncing... it got to about 40%, and I had a power failure. So I'm pretty sure it failed while it was growing the array... not the partition. The next time I booted, mdadm didn't even detect the array. I fiddled around trying to get mdadm to recognize my array, but no luck.
I finally got desperate enough to just create the array again... I knew the settings of my array and had seen some people have success with this method. When creating it, it asked me if I was sure because the disks appeared to belong to an array already, but I said yes. The problem is that when I created it, it created a clean array, and this is what I'm left with:
Code:
/dev/md0:
Version : 00.90
Creation Time : Sun Sep 5 20:01:08 2010
Raid Level : raid5
Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
[Code]....
I tried looking for backup superblock locations using e2fsck and every other tool I could find, but nothing worked. I tried testdisk, which says it found my partition on /dev/md0, so I let it create the partition. Now I have a /dev/md0p1, which won't let me mount it either. What's interesting is that gparted reports /dev/md0p1 as the old partition size (1.82 TB)... the data has to still be there, right?
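One hedged thought before anything else is written to the disks: re-creating over an interrupted reshape only lines the data back up if the drive order, chunk size, and data offset all match the originals, so it may be worth collecting what each member still records. A read-only sketch:
Code:
# Read-only: dump each member's superblock (device order, chunk size, offset)
sudo mdadm --examine /dev/sd[abcd]1
# A mismatched chunk size or device order in a --create attempt produces
# exactly this symptom: a 'clean' array with an unmountable filesystem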
I'm currently experiencing some serious issues with write performance on a RAID1 array. I'm running Ubuntu 10.04 64-bit server with the latest updates. To evaluate the performance I ran the following test: [URL]... (great article btw!). Measuring with dd, write performance is only 8.7 MB/s, though read is great at 74.5 MB/s. The tests were run straight after rebooting, and I have not (YET!) done any kernel tuning or customization; I'm running the default server package of the Ubuntu kernel. Here's the motherboard in the server: [URL]... with a beta BIOS to support drives over 300GB.
[code]...
As you can see from the bo column, there is definitely something stalling. Per the top output, %wa (waiting for I/O) is always around 75%, and as shown above, writes are stalling. The CPU is basically idle all the time. The hard drives are quite new, and smartctl (smartmontools) does not detect any faults.
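An 8-9x read/write asymmetry on RAID1 often traces to disabled drive write caches or misaligned partitions rather than to md itself. A hedged checklist (device names and test path are assumptions):
Code:
# Is each drive's write cache on? (write-caching = 0 would explain this)
sudo hdparm -W /dev/sda /dev/sdb
# Measure the array while bypassing the page cache
sudo dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=512 oflag=direct
# Check partition start sectors (a start of 63 is a bad sign on 4K drives)
sudo fdisk -lu /dev/sda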