Ubuntu Servers :: New Mdadm Version - Worth Upgrading?
Jul 12, 2010
I'm just about to commence a full reinstall of my home media server, planning on using 1x 1TB and 7x 1.5TB drives in RAID 6. I notice the version of mdadm distributed in Ubuntu is 2.6.7.1, but versions exist up to 2.6.9 (excluding all the 3.x ones). Is it worth using a later version? Or is 2.6.7.1 used for a particular reason?
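If it helps anyone weighing the same choice, comparing the packaged version against the running one is quick (a small sketch; assumes a stock Ubuntu install):
Code:
apt-cache policy mdadm   # installed version vs. repository candidate
mdadm --version          # version of the binary actually in use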
View 2 Replies
Oct 10, 2010
I have an ancient computer that's been running Ubuntu since about last year. I started with 8.04.4 but then upgraded to 10.04 via apt-get. I mostly use this machine as a file dump for other computers on the network.
View 2 Replies
May 31, 2011
All, this is a cross-post from the Linux Mint forums. I did this testing on a Mint 11 install, but also tried the exact same thing on an Ubuntu 11.04 release and had the exact same output to deal with. Follow along: I'm having trouble with mdadm - it's not acting like I expect it to, and hopefully someone can enlighten me.
Code:
MintRAID aaron # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
MintRAID aaron # mkfs.ext3 /dev/md0
Added this line to /etc/fstab: [Code]...
I updated my fstab with /dev/md127 instead of /dev/md0 and it seems to be working OK. What in the heck is going on? Why all of the naming discrepancies? Can someone help me understand what's happening? I have used the same steps on other (and older) distros - Ubuntu 9.x or 10.x and CentOS/RHEL 5.5 - and haven't had these troubles. I've always named the first array md0, and it's always stayed md0 through reboots.
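One common explanation (hedged, since the post doesn't show the config files): when an array isn't listed in the /etc/mdadm/mdadm.conf that's baked into the initramfs, it gets assembled under a fallback name such as /dev/md127 instead of the name it was created with. The usual fix on Ubuntu/Mint:
Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # record the array
sudo update-initramfs -u                                         # so early boot agrees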
View 9 Replies
Apr 2, 2010
I'm setting up a DIY NAS system, running Ubuntu Server from a CF card and using 2 SATA drives. I only need RAID1, so that should do. Setting up RAID1 with mdadm is straightforward, and all of my tests of the failure/recovery scenario work fine in VirtualBox. Most of the tutorials on the net talk about using mdadm in conjunction with LVM2. What is the reasoning behind LVM2 over mdadm?
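The usual reasoning in those tutorials is layering, not replacement: mdadm provides the redundancy, and LVM on top of the md device adds flexible resizing and snapshots. A minimal sketch, assuming the mirror already exists as /dev/md0 (volume names are illustrative):
Code:
sudo pvcreate /dev/md0                    # the array becomes an LVM physical volume
sudo vgcreate vg_nas /dev/md0             # volume group on top of the mirror
sudo lvcreate -L 100G -n lv_data vg_nas   # a resizable logical volume
sudo mkfs.ext3 /dev/vg_nas/lv_data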
View 2 Replies
Oct 14, 2010
I'm running Ubuntu 9.10 Server, configured with software mirroring (RAID1). When I checked cron, I found this mdadm entry in the cron.d directory: 57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet. checkarray is supposed to run on the first Sunday of every month. So I just want to know:
1. what does checkarray do exactly?
2. does it make a stress to system?
3. Is there any problem if I get rid of the script from cron?
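For reference, checkarray essentially asks the kernel to run its own consistency check on each array; the same mechanism can be triggered or watched by hand (a sketch, assuming the mirror is /dev/md0):
Code:
echo check | sudo tee /sys/block/md0/md/sync_action   # start a read-only check
cat /proc/mdstat                                      # shows check progress
It does generate disk I/O for the duration, but it runs through the kernel's throttled resync path, so normal workloads keep priority. Removing the cron entry won't break anything; it just means silent bad sectors won't be noticed until a rebuild trips over them.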
View 4 Replies
Mar 30, 2011
I've got a strange problem. I have the following system:
[Code]...
After doing this install everything works as expected. I can reboot, shut down and boot up as much as I want and the system will work. Now, I proceed to do the following (as root, obviously - sudo bash):
[Code]...
When I try to restart the system now, I get to the GRUB boot loader and then it just breaks with the following message. I've identified mdadm as being the culprit here. Any idea why this would happen? Just a subnote: the reason I'm installing mdadm is to create a soft-RAID as follows with the remaining space on each drive:
[Code]...
View 5 Replies
May 3, 2010
Created my own file server/NAS, but hit a problem after a couple of months. I have a server with 4x 1.5TB disks, all connected to SATA ports, and one 40GB ATA133 disk running Ubuntu 9.10 amd64. I've created a RAID5 array using mdadm. It all worked great for a couple of months, but lately the RAID5 array is degraded: disk sdd1 is faulting every few days. I have checked the drive but it is fine. If I re-add the disk and wait for 6 hours my RAID5 array is all fine again, but after a few shutdowns it is degraded again.
my mdadm detail:
Quote:
root@ubuntu: sudo mdadm --detail /dev/md0
/dev/md0:
Version : 00.90
Creation Time : Mon Dec 14 13:00:43 2009
Raid Level : raid5
[Code].....
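When one member keeps dropping like this, checking the drive's error log and the kernel's view before each re-add can show whether it's the disk, the cable, or the port (a sketch; device names follow the post):
Code:
dmesg | grep -i sdd                   # ATA/link errors around the failure
sudo smartctl -t long /dev/sdd        # start a SMART long self-test
sudo smartctl -a /dev/sdd             # review results and error counters
sudo mdadm /dev/md0 --add /dev/sdd1   # re-add once the disk checks out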
View 9 Replies
Jul 9, 2010
I'm setting up Ubuntu 10.04 Server x64 on a Gateway DX4710. I installed on a 500GB SATA drive, using encrypted LVM, added Webmin, and used ufw to configure iptables. All seemed fine. I then set up RAID1 on two 1TB SATA drives. Using Webmin, I created Linux RAID partitions on sdb and sdc. I then ran ...
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext3 /dev/md0
sudo mkdir /data
sudo chmod -R 777 /data
sudo mount /dev/md0 /data
All still seemed fine. I could see /data in the Webmin filesystem list, and had ca. 1.4TB total local disk space. At that point, I decided that I really wanted an encrypted filesystem on /dev/md0. I also needed to tweak the fan setup. And so I shut down, without adding /dev/md0 to fstab. And it was probably still syncing. Now /dev/md0 is semi-missing. That is ...
sudo mdadm -D /dev/md0 => doesn't exist
sudo mdadm -E /dev/sdb1 => part of RAID1 with sdc1
sudo mdadm -E /dev/sdc1 => part of RAID1 with sdb1
What do I do now? Can I recover /dev/md0? Is it just that I didn't add it to fstab? Can I just do that now? Or do I need to delete sdb1 and sdc1, and start over?
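Since both members still carry matching superblocks (the -E output above), the array can usually be reassembled rather than recreated; deleting and starting over shouldn't be necessary. A sketch:
Code:
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
# or let mdadm find it from the superblocks alone:
sudo mdadm --assemble --scan
Once it's back, recording it in /etc/mdadm/mdadm.conf and /etc/fstab keeps it through reboots; fstab alone doesn't assemble arrays, it only mounts them.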
View 5 Replies
Aug 11, 2010
Intending to set up an all-in-one server, I threw in the Ubuntu Server 10.04 (amd64) CD. During the text install, I set up the device topology below, and it worked.
[Code]....
Then I tested my RAID by hot-pulling the sda cable (ouch). It worked fine: the system kept running, and it also managed to reboot from the remaining sdb (which of course showed up as sda, with the first drive absent). Now I am trying to recover this pre-crash state. Adding the first disk back (showing up as sdb), I can add it to md0 and let it synchronize for 2 hours. But... I can't boot anymore with the recovered first disk as sda...
At first, booting got stuck at an initramfs prompt after complaining it couldn't find my sys logical volume. After a lot of trial and error I don't even get any complaints, just a black screen that would let me wait for a boot for weeks... So my system does not boot from my first disk, whether I plug in the second or not; my second disk still boots. My last attempt to get booting working again was:
1. zero sda's first and last gigabyte to kill any stale IDs
2. duplicate sdb's first cylinder to sda to make it bootable
3. reinitialize sdb's partition table using command o in fdisk, for a new disk ID
4. recreate the sda1 partition
5. add sda1 to md0
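One step that's easy to miss after this kind of rebuild: GRUB has to be written to the MBR of each RAID1 member separately, or the recovered disk carries no boot code at all. A hedged sketch for grub-pc on 10.04:
Code:
sudo grub-install /dev/sda   # boot code on the first disk
sudo grub-install /dev/sdb   # and on the second, so either can boot alone
sudo update-grub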
View 1 Replies
Nov 2, 2010
I have Ubuntu Server 10.04 on a server with a 2.8GHz CPU and 1GB of DDR2, with the OS on a 2GB CF card attached to the IDE channel and a software RAID5 of 4 x 750GB drives. On a Samba share backed by these drives I am only getting around 5MB/s, connected via wireless N at 216Mbps, with my router and server both having gigabit ports. Is a RAID5 supposed to be that slow? I was seeing speeds of anywhere from 20-50MB/s reported by other people, and am just wondering what I am doing wrong to be so far below that.
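Before blaming the RAID5, it's worth measuring the array and the network separately (a sketch; the test file path is illustrative):
Code:
dd if=/dev/zero of=/mnt/raid/test.bin bs=1M count=1024 conv=fdatasync   # raw array write speed
sudo hdparm -t /dev/md0                                                 # raw array read speed
If the array tests fast, the bottleneck is the wireless hop: 216Mbps is a link rate, and real wireless-N throughput is often far below it, which lines up with ~5MB/s.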
View 4 Replies
Jul 18, 2011
I have a RAID5 of 10 disks, 750GB each, and it has worked fine with GRUB for a long time on Ubuntu 10.04 LTS. A couple of days ago I added a disk to the RAID, grew it and then resized it. BUT, I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer!
So the resize process was cancelled in the middle and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it up, finished the resize process, and the RAID seems to be working fine, so I tried to boot normally again. Same problem. Rescue CD, updated GRUB, got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc, grub-common, removed /boot/grub and installed GRUB again. Same problem.
I have tried to erase the MBR boot code (# dd if=/dev/zero of=/dev/sdX bs=446 count=1) on sda (IDE disk, system) and sdb (SATA, new RAID disk). Same problem. Removed and reinstalled Ubuntu 11.04 and am now getting error: no such device: (hdd id). Again tried to reinstall GRUB on both sda and sdb, no luck. update-grub still generates the error about RAID version 0.91, and normal boot is back to a blinking cursor. When you're resizing a RAID, mdadm changes the metadata version from 0.90 to 0.91 to guard against exactly the kind of interruption that happened here. But since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch on [URL], but I can't compile it - various errors about dpkg. So my problem is, I can't get GRUB to work. It just gives me a blinking cursor and unsupported RAID version: 0.91.
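If GRUB still complains about 0.91, it may be reading a stale superblock on one member; verifying what every member actually records narrows that down (a sketch - adjust device and partition names to the real array):
Code:
for d in /dev/sd[b-l]1; do echo $d; sudo mdadm --examine $d | grep Version; done
Any member still reporting 0.91 would explain the GRUB error; once all report 0.90, re-running grub-install on the boot disk from the rescue environment is the usual next step.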
View 2 Replies
Oct 6, 2010
Can I use UUIDs to set up a RAID with mdadm?
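For identifying and assembling an existing array, yes - mdadm accepts the array UUID in place of member device names, both on the command line and in mdadm.conf (a sketch; the UUID below is purely illustrative):
Code:
sudo mdadm --assemble /dev/md0 --uuid=f1a2b3c4:d5e6f7a8:b9c0d1e2:f3a4b5c6
# or persistently, in /etc/mdadm/mdadm.conf:
ARRAY /dev/md0 UUID=f1a2b3c4:d5e6f7a8:b9c0d1e2:f3a4b5c6
Creating a brand-new array is a different matter: mdadm normally generates the UUID itself at --create time.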
View 3 Replies
Feb 7, 2011
I have a RAID1 array where mdadm states that one of the disks is "removed." Naturally, I assume one of the drives has failed. The mdadm --detail command tells me that the sda drive has failed. However, further inspection with the mdadm -E /dev/sdb1 command says that the sdb1 disk has been removed. I am a bit confused. Can someone clarify which drive has failed? Am I misreading the command outputs?
Code:
sudo fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
[Code]...
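Part of the confusion is that each member's -E output describes the array from that member's own (possibly stale) point of view, and drive letters can shift between boots. Tying letters to physical drives by serial number removes the ambiguity (a sketch):
Code:
sudo smartctl -i /dev/sda | grep -i serial
sudo smartctl -i /dev/sdb | grep -i serial
sudo mdadm --detail /dev/md0   # the array's authoritative current state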
View 3 Replies
Mar 3, 2011
I'm looking for a way to do a bare-metal backup of our server using a tool such as Ghost or Clonezilla. The limitation is that / is on an mdadm RAID 5. The only relevant info I could find on Clonezilla's site was:
# Software RAID/fake RAID is not supported by default. It can be done manually only.
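The "manual" route generally means booting a live environment, assembling the array there, and imaging the assembled md device rather than the individual member disks; a hedged sketch (partclone.ext4 assumes an ext3/ext4 root, and the backup target path is illustrative):
Code:
sudo mdadm --assemble --scan                                # bring up /dev/md0 from the live CD
sudo partclone.ext4 -c -s /dev/md0 -o /mnt/backup/md0.img   # used-block image of the array
partclone is what Clonezilla uses internally; plain dd works too, at the cost of copying free space.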
View 6 Replies
May 13, 2011
My fileserver initially had 3x 1TB drives in RAID 5, configured with mdadm as /dev/md1. (The system root is a mirrored RAID on /dev/md0.) I went to add a 4th 1TB drive to /dev/md1 and grow the RAID 5 accordingly. I was initially following this guide: [URL] but ran into issues on the 3rd and 4th commands. I've been trying a few things to remedy the issue since, but no luck. The drive seems to have been added to /dev/md1 properly, but I can't get the filesystem to resize to 3TB. I also am not entirely sure how /dev/md1p1 got created, but it appears to be the primary partition on the logical device /dev/md1.
Relevant information:
Code:
fdisk -l /dev/md1
Disk /dev/md1: 3000.6 GB, 3000606523392 bytes
2 heads, 4 sectors/track, 732569952 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
Disk identifier: 0xda4939fa .....
The filesystem originated as ext3; I believe it's showing up as ext2 in some of these results because I disabled the journal during some initial troubleshooting. Not sure what the issue is, but I didn't want to blindly perform operations on the filesystem and risk losing my data.
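For comparison, the usual grow sequence targets the node that actually holds the filesystem; since a partition table exists on the array here, the filesystem likely lives on /dev/md1p1, so the resize has to address that partition - and the partition itself must first be enlarged (e.g. with parted) to span the grown device. A hedged sketch, not the linked guide's exact commands:
Code:
sudo mdadm --grow /dev/md1 --raid-devices=4   # reshape across the 4th drive
sudo e2fsck -f /dev/md1p1                     # check before resizing
sudo resize2fs /dev/md1p1                     # grow the fs into the new space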
View 9 Replies
Aug 4, 2010
I have Ubuntu 10.04 32-bit installed. Is it possible to upgrade to 64-bit Ubuntu without a clean install?
If not, what is the best way to back up configurations and installed packages in order to do a clean install and upgrade to 64-bit?
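There was no supported in-place 32-to-64-bit path in that era, so a clean install it is; a common way to carry the package set and configuration across (a sketch):
Code:
dpkg --get-selections > ~/packages.list   # save the installed-package list
sudo tar czf ~/etc-backup.tar.gz /etc     # save system configuration
# after the 64-bit install:
sudo dpkg --set-selections < ~/packages.list
sudo apt-get dselect-upgrade              # install everything on the list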
View 5 Replies
Sep 1, 2011
I am just a fresh starter with Ubuntu, so I ask for some understanding in answering my question about a problem I have. I had started upgrading Ubuntu from 10.10 to 11.04, and everything went fine until a message appeared saying:
Failed to fetch [URL]...untu1_i386.deb 404 Not Found
Failed to fetch [URL]...untu3_i386.deb 404 Not Found
Failed to fetch [URL]...3.11-1_all.deb 404 Not Found
Failed to fetch [URL]...untu1_i386.deb 404 Not Found
Failed to fetch [URL]..._1.0.3_all.deb 404 Not Found
Failed to fetch [URL]..._1.0.3_all.deb 404 Not Found
Then I had to close the upgrade process because of that message. I tried again, but it came out with the same result.
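Mid-upgrade 404s usually mean the package index went stale while the mirror moved on; refreshing the index and repairing any half-configured packages before retrying is the standard first step (a sketch):
Code:
sudo apt-get update       # refresh the package lists
sudo apt-get -f install   # fix anything left half-installed
sudo do-release-upgrade   # then retry the upgrade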
View 2 Replies
Nov 14, 2010
One of the hard drives in my server failed the other day; backups saved the day and downtime was only a few hours. But when setting up the new drive, I went ahead and migrated to software RAID, in the hope that it may give me less downtime in the future when a drive fails. It all went rather well, but my main root partition won't finish syncing for some reason.
sda was the original drive, with sda4 as /, sda1 as /boot, and sda2 as swap. sdb was the drive that failed and was replaced with the new drive. So I set up sdb with the same partitions as sda, added it to a RAID1 array, copied files from sda, and rebooted with md4 as /, md1 as /boot, and md2 as swap. I then added the sda partitions to the array, and the sync went off without a hitch on md1 and md2. md4 progresses well, but after a few hours /proc/mdstat just shows this:
Code:
root@d668:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[1] sda2[0]
9767424 blocks [2/2] [UU]
[Code]...
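When a resync seems stuck, a couple of quick checks distinguish "slow" from "not running at all" (a sketch; md4 is the array in question here):
Code:
cat /sys/block/md4/md/sync_action          # 'idle' means no resync is actually running
cat /proc/sys/dev/raid/speed_limit_min     # kernel resync throttle floor, in KB/s
watch -n 5 cat /proc/mdstat                # watch progress over time, not one snapshot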
View 3 Replies
Jan 21, 2011
When I start my RAID5, only 2 disks of 3 are active on md0; the 3rd disk is inactive on md_d0. When I do mdadm --examine, the two active disks report 2 active, 2 working, 1 failed. The inactive disk reports 3 active, 3 working, 0 failed.
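This pattern usually means the third member got grabbed into a stale partitioned md device (md_d0) before md0 assembled, so each side sees a different history. Releasing the stray device and re-adding the member is the usual fix (a sketch; substitute the real partition name):
Code:
sudo mdadm --stop /dev/md_d0          # release the stray assembly
sudo mdadm /dev/md0 --add /dev/sdX1   # re-add the freed member
cat /proc/mdstat                      # confirm the rebuild starts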
View 2 Replies
Feb 5, 2011
I am trying to create a new mdadm RAID 5 device /dev/md0 across three disks where such an array previously existed, but whenever I do, it never recovers properly and tells me that I have a faulty spare in my array. More specific details below. I recently installed Ubuntu Server 10.10 on a new box with the intent of using it as a NAS sort of thing. I have 3 HDDs (2TB each) and was hoping to use most of the available disk space as a RAID5 mdadm device (which gives me a bit less than 4TB).
I configured /dev/md0 during OS installation across three partitions on the three disks - /dev/sda5, /dev/sdb5 and /dev/sdc5, which are all identical sizes. The OS, swap partition etc. are all on /dev/sda. Everything worked fine, and I was able to format the device as ext4 and mount it. Good so far.
Then I thought I should simulate a failure before I started keeping important stuff on the RAID array - no point having RAID 5 if it doesn't provide some redundancy that I actually know how to use, right? So I unplugged one of my drives, booted up, and was able to mount the device in a degraded state; the test data I had put on there was still fine. Great. My trouble began when I plugged the third drive back in and rebooted. I re-added the removed drive to /dev/md0 and recovery began; things would look something like this:
Code:
user@guybrush:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc5[3] sdb5[1] sda5[0]
3779096448 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
[Code]...
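For what it's worth, a freshly re-added disk is listed as a spare while recovery runs, and if it errors mid-rebuild mdadm reports a faulty spare. When re-creating over the remains of an old array, wiping the stale superblocks first prevents old metadata from interfering (a sketch - this destroys the array metadata, so only on disks being rebuilt from scratch):
Code:
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sda5 /dev/sdb5 /dev/sdc5
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda5 /dev/sdb5 /dev/sdc5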
View 9 Replies
Apr 5, 2011
I bought a disk from a friend who had used it in a RAID array, using the entire disk for RAID. To put that disk into service, I used dd-rescue to copy my old disk over entirely, and managed to grow and set up the partition table without losing any data. My last step was to create a RAID between my entire old disk, with a single partition, and a partition of the same size on my new disk. I ran into some problems, but managed to somehow fix them, imperfectly, and now this setup is working properly. The problems (and the imperfection) came from an issue I did not suspect: at some point, the original RAID superblock of the new disk, living on /dev/sda, survived dd-rescue, and so it gets scanned by mdadm, which tries, obviously unsuccessfully, to use it.
Partition layout :
Code:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
[code]....
This setup works properly apart from that RAID5 declared on sda, which shows up here and there. Since it uses the same device name as my other, proper RAID setup, I don't know how to deactivate it, because mdadm uses the /dev/mdX name to identify arrays.
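mdadm can address the stale metadata directly on the member device, without going through an /dev/mdX name. One caution before zeroing anything: if a partition ends where the disk ends, a 0.90 whole-disk superblock and the partition's own superblock can occupy the same sectors, so examine both first (a hedged sketch):
Code:
sudo mdadm --examine /dev/sda           # what the stale whole-disk superblock claims
sudo mdadm --examine /dev/sda1          # the live member's metadata, for comparison
sudo mdadm --zero-superblock /dev/sda   # erase only the stale whole-disk record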
View 4 Replies
Jul 18, 2011
Code:
$cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
[code]....
One of my 4 drives died after about 3 years of usage. I replaced it with an identical drive and did an mdadm --add to re-add it to the array. I expected this to take quite a long time, but not more than 1 million minutes to complete!
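The finish estimate in /proc/mdstat is just an extrapolation of the current rebuild speed, so a rebuild crawling at a few hundred KB/s produces absurd ETAs like that. The usual levers are the kernel throttle and the RAID5/6 stripe cache (a sketch; the md device name and values are illustrative):
Code:
echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_min   # raise the rebuild floor (KB/s)
echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size   # larger stripe cache helps RAID5/6
If the rate stays near zero regardless, the new drive or its link is probably erroring; dmesg would show it.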
View 5 Replies
Sep 1, 2011
I've been using Ubuntu on my fileserver for quite a while now, and I've always really had this problem, but I want to finally address it and get it fixed. At seemingly random points (when my fileserver is under stress - typically while I'm writing lots of data to it), my fileserver will crash. It generally completely crashes, not responding to any further file requests or any of my SSH commands, and must be reset hard (typically by flipping the power switch). After such an occasion, I end up with some corrupted files. It seems to corrupt a large array of files (it's not an isolated issue - for example, it corrupts files that were not being accessed anywhere near the time it crashed, including files that had never been accessed during that period of uptime). The files don't get completely smashed, but they're definitely corrupted (artifacts in images, skips in audio and video files, often complete failure of binary files such as virtual hard drives or disc images).
I'm using Ubuntu Server 11.04, but similar issues to this happened for me in 10.04 LTS (in fact, I upgraded to try to solve them). I'm using mdadm to create an 8-drive raid6 array. The drives are 1.5 TB each, mostly Samsung HD154UI, but with a WD drive in there too (sorry, I can't find the model number at the moment). The hard drives themselves appear to be working fine - SMART reports no issues with any of them, mdadm says they're all up, and I have no reason to believe that the drives are at fault here (although I can conduct further tests if necessary). I've posted about this problem before here and here. In these cases, the issues seemed to be with XFS - in fact, I switched from XFS to ext4 on my RAID array because I simply believed XFS to be unstable. Unfortunately, this issue occurs with ext4 as well, so I'm fairly certain it's an mdadm issue. Here is the output of "cat /proc/mdstat", for those interested:
[Code]....
View 9 Replies
Mar 20, 2010
Currently I am using Ubuntu 9.04 (Jaunty). I have downloaded the ISO image of Ubuntu 9.10 and burned it to a CD. How do I upgrade my Ubuntu version to 9.10 without losing existing data?
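If the burned disc is the alternate-install image, CDs of that era carried an offline upgrade script, so the upgrade can run without re-downloading (hedged from the 9.x-era docs; the desktop/live CD cannot upgrade in place):
Code:
# with the CD mounted, typically at /cdrom
sudo sh /cdrom/cdromupgrade
An in-place upgrade preserves existing data either way, though a backup beforehand is still prudent.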
View 7 Replies
Oct 3, 2010
The upgrade instructions say you can use Update Manager to go from 10.04 to 10.10. I am currently running 9.10. Will this still work? Or do I have to go to 10.04 first? Or can I download the 10.10 alternate-install ISO and upgrade directly with that for both my Ubuntu and Ubuntu Studio installations?
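For context, non-LTS releases upgrade one step at a time, so the supported path here is 9.10 to 10.04, then 10.04 to 10.10; each hop can be driven by the same tool (a sketch):
Code:
sudo do-release-upgrade   # offers the next release in sequence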
View 2 Replies
May 25, 2011
I've not really needed the computer for much for a while and have let the updates slip! I've got 9.04 and obviously need a newer version. I read that it's best to upgrade to 9.10 first and then go from there, but my version is so old that all the information assumes Update Manager (just click this button, blah blah), and of course that isn't supported now, so I can't use that method.
I don't really know any way to upgrade other than following the prompts in Update Manager.
PS: I forgot to say my CD burning sometimes messes up - I don't know if it's software or hardware - so I was looking for methods that avoid downloading ISOs to CDs and upgrading from those! But I will have to try that if there is no other way.
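One CD-free route for a release this old (hedged; this follows the community EOL-upgrade recipe) is to repoint apt at the old-releases archive so 9.04's packages resolve again, then upgrade in steps:
Code:
sudo sed -i 's/archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
sudo apt-get update && sudo apt-get upgrade
sudo do-release-upgrade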
View 9 Replies
Aug 8, 2011
I have a Dell Mini 10V running Ubuntu 8.04 LTS. I need to upgrade to the latest version. Which of the latest versions can I upgrade to, and exactly which download should I use (a reference to the actual download and the page it is on)? I have looked at the many different versions and types of downloads and have no idea which to choose.
View 8 Replies
Feb 2, 2010
Something weird happened last night and my RAID5 failed. I am trying to reactivate it and see if my data is dead or not. When I run mdadm -Asv /dev/md0 I get:
Code:
mdadm: looking for devices for /dev/md0
mdadm: cannot open device /dev/dm-1: Device or resource busy
mdadm: /dev/dm-1 has wrong uuid.
mdadm: cannot open device /dev/dm-0: Device or resource busy
mdadm: /dev/dm-0 has wrong uuid.
mdadm: cannot open device /dev/sde2: Device or resource busy
mdadm: /dev/sde2 has wrong uuid.
mdadm: cannot open device /dev/sde1: Device or resource busy
mdadm: /dev/sde1 has wrong uuid.
mdadm: cannot open device /dev/sde: Device or resource busy
mdadm: /dev/sde has wrong uuid.
mdadm: cannot open device /dev/sdd: Device or resource busy
mdadm: /dev/sdd has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
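"Device or resource busy" on every candidate usually means something else already claimed the disks; the dm-0/dm-1 lines hint that device-mapper (dmraid or LVM) grabbed them before mdadm could. A hedged sketch for finding and releasing the holders before reassembling:
Code:
sudo dmsetup ls              # list device-mapper maps holding the disks
cat /proc/mdstat             # any half-assembled md devices?
sudo mdadm --stop /dev/md0   # stop partial assemblies, then retry
sudo mdadm -Asv /dev/md0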
View 4 Replies
Mar 12, 2010
I've recently started having an issue with an mdadm RAID 6 array that been operational for about 2500 hours.
Intermittently during write operations the array stalls, dropping to almost zero write speed for 10-30 seconds. When this occurs, one or both of the 2 drives attached to a 2-port Silicon Image si3132 SATA-II controller "locks up" with its activity light stuck on. This just started occurring within the last week and didn't seem to coincide with any update that I noticed. The array has just recently passed 12.5% full. The size of the write does not seem to make any difference, and it seems completely random: sometimes copying a 5GB dataset results in no slowdown, while other times a torrent downloading to the array at 50KB/s does cause one, and vice versa.
The array consists of 8 WD 1.5TB drives, 6 attached to the ICH9R south bridge, and 2 attached to a si3132 based PCI express card. The array is formatted as a single ext4 partition.
Checking SMART data for all drives shows no errors. Testing read speed with hdparm reports what I would expect (100MB/s for each drive, ~425MB/s for the array).
The only thing I did notice is that udma6 is enabled for all the ICH9R drives, while only udma5 is enabled for the si3132 drives. Write cache is enabled for all the disks. Attempting to set the si3132 drives to udma6 results in an I/O error from hdparm.
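For reference, the checks described above map onto hdparm like this (a sketch; /dev/sdg stands in for one of the si3132-attached drives):
Code:
sudo hdparm -I /dev/sdg | grep -i udma   # supported vs. active UDMA modes
sudo hdparm -W /dev/sdg                  # write-cache state
sudo hdparm -tT /dev/sdg                 # buffered and cached read timings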
The si3132 drives use the sata_sil24 driver. Nothing of interest appears in the kernel log or syslog. During the stalls, top shows very high I/O wait time.
The si3132 controller appears to have the original firmware from 2006 loaded. There are some firmware updates available on the Silicon Image website for this controller, which now offer a separate firmware for RAID operation (some sort of hybrid controller/software thing the controller supports) and a separate firmware for standard IDE use.
Has anyone had similar issues with this controller? Is a firmware update a reasonable course of action? If so, which firmware is best supported by the Linux driver?
I know I'm not using its RAID features, but I've dealt with controllers that needed to be in RAID mode for AHCI to be active and for Linux to work well with them. I'm a bit iffy about just trying it and finding out, as it could knock 2 disks of my array out of action.
View 2 Replies
Mar 12, 2011
I'm trying to find out which one is safer when it comes down to the recovery process in case of a drive failure:
A RAID5 created in mdadm
or
a Stripe RAID created on pure LVM
The RAID is purely for data storage for a Samba server; the OS will reside on its own drive. Ideally, the RAID's physical hard drives should be rebuildable on another machine in case of catastrophic server failure (motherboard problem, or any other random problem, for example). I can't decide which of the 2 software RAID methods is more convenient and safest. I don't care about performance; it'll be a dedicated server for mass storage, mirroring 3 other file servers on fakeRAID (dmraid) - it's simply a redundant backup for the backups.
The important goal here is portability. From what I've read, it appears that LVM might be more portable? But according to some dated (2009) info, mdadm seems to be a bit buggy when it comes to rebuilding the array, yet LVM doesn't appear that safe either. Which one would you pick for ease of rebuilding after catastrophic failures?
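For what it's worth on portability: both stacks store their metadata on the disks themselves, so either can be rediscovered on a new machine (a sketch of each recovery path):
Code:
sudo mdadm --assemble --scan         # mdadm: find and assemble arrays from member superblocks
sudo vgscan && sudo vgchange -ay     # LVM: rescan physical volumes and activate volume groups
One caveat on safety rather than portability: a pure LVM stripe is RAID0 and adds no redundancy, so it doesn't survive a drive failure the way the mdadm RAID5 does.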
View 2 Replies