Server :: Creating Backup Disk Image Of RAID 1 Array (MDADM)?

Oct 27, 2010

We have some servers that run in very harsh environments (a research vessel) that need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc.), but we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAID'ed devices (from a live CD):

Code:
dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img

then reverse the process on the new PC, finally using mdadm --assemble to re-create and rebuild the array.
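
For the restore side, a hedged sketch, assuming the replacement disk also appears as /dev/sda and the image is reachable at the same path (both names carried over from above):

Code:
# write the gzipped image back onto the replacement disk (from a live CD)
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096
# start the array degraded from the restored member, then add the second disk
mdadm --assemble /dev/md0 /dev/sda1 --run
mdadm --manage /dev/md0 --add /dev/sdb1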

View 1 Replies



Software :: Creating New Mdadm Raid 1 Array?

Mar 2, 2011

I have a server that was running a hardware isw RAID on the system (root) disk. This was working just fine until I started getting sector errors on one of the disks. So I shut down the system, removed the failing drive, and installed a new drive (same size). On reboot I went into the Intel RAID setup; it did show the new drive, and I was able to set it to rebuild the RAID. Continuing the reboot, everything came up just fine except the RAID 1 on the system disk. I have tried many times to get the system to rebuild the RAID using dmraid, but to no avail; it would not start a rebuild. In order to get the system back up and make sure that the disk was duplicated, I was able to 'dd' the working disk to the new disk that was installed. At present the system does not show a RAID setup on the system disk (this comprises the entire 1TB disk with two partitions, sda1 as / and sda2 as swap).

Problem: I have decided to forego the Intel RAID and just use mdadm. I have a test system set up to duplicate (not the software, but the disk partitions) the server setup.

Code:
[root@kilchis etc]# fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
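
A hedged sketch of the usual mdadm migration pattern for this situation, assuming the data can be copied off first and that the new disk is /dev/sdb (an assumption; sda1/sda2 come from the description above):

Code:
# build a degraded RAID 1 with one real member and one "missing" slot
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# copy the system onto md0, verify it boots, then add the original disk
mdadm --manage /dev/md0 --add /dev/sda1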

View 12 Replies View Related

General :: MDADM Error: Creating A RAID 1 RAM Disk?

Apr 7, 2011

I am trying to create a Raid 1 ram disk. Below are the commands I used:

[root@abidbodal dev]# mke2fs -m 0 /dev/ram8
[root@abidbodal dev]# mount /dev/ram8 /mnt/rd8
[root@abidbodal dev]# mke2fs -m 0 /dev/ram9

[code]....
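
For reference, a hedged sketch of building a RAID 1 from two ram disks; the filesystem goes on the md device, not on the members (the md9 name and mount point are assumptions):

Code:
mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/ram8 /dev/ram9
mke2fs -m 0 /dev/md9
mount /dev/md9 /mnt/rd8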

View 3 Replies View Related

Server :: Mdadm Acting Oddly With RAID 5 Array?

Dec 21, 2010

I have been having some odd issues over the last day or so while trying to get a RAID 5 array running in software under Kubuntu. I installed three 1TB drives and started up; my sd* order got all messed up (sda was now sdc and so on). This wasn't entirely unexpected, so I fixed up fstab and booted again. I found all three of the drives I installed, set them to raid auto-detect, and used mdadm to create /dev/md0. I then created mdadm.conf by piping the output of mdadm --detail --scan --verbose into /etc/mdadm.conf. At this point, everything was still going swimmingly. I copied over a few hundred GB of data from another failing drive and everything seemed OK. I went to reboot once the copy was done, and everything just went weird. All of the sd* drives went back to the original order. Of course, this meant that the mdadm.conf was wrong. I tried to just change the device list, but that didn't work. I then deleted mdadm.conf and rebooted. The drive list stayed in the original order this time, so I just tried manually starting the array.

By erasing the partition table of the 3rd drive, I've been able to get it to the status of spare, but it says it is busy when I try to add it to the array. A grep through dmesg makes me think that md has a lock on it. I'm not sure where to go with it now. If anyone has any pointers, I would like to hear them.

Device List(original):
/dev/sda => boot drive, /home /
/dev/sdb => 1.5TB media storage, failing

[code]...
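
Since the kernel's sd* ordering can change between boots, a hedged fix is to pin the array by UUID in mdadm.conf rather than by device paths (the config path follows Ubuntu's packaging; substitute the UUID that mdadm --detail reports):

Code:
# /etc/mdadm/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 UUID=<uuid-from-mdadm-detail>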

View 1 Replies View Related

Server :: Mdadm Create - Raid Array Is Not Clean?

Nov 16, 2009

Code:
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1

and I get:

md1: raid array is not clean -- starting background reconstruction

Why is it not clean? Should I be worried? The HD is not new; it has been used before in a RAID array, but it has been repartitioned.

View 2 Replies View Related

Server :: MDADM Raid 5 Array - OS Drive Failure?

Jun 7, 2011

I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04, then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't. When I look at the array using Disk Utility, it says that the array is not running. If I try to start the array, it says:

Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error

mdadm: Not enough devices to start the array.

I have tried mdadm --assemble --scan, and it gives this output:

mdadm: /dev/md0 assembled from 2 drives - not enough to start the array.

I know that there are 4 drives present, as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:

root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90

[code]...
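
A hedged first diagnostic, assuming the four members are /dev/sd[a-d]1 (the real names come from the poster's system): compare what each member's superblock reports before touching anything.

Code:
# print each member's superblock; compare the Events counters and State lines
mdadm --examine /dev/sd[abcd]1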

View 11 Replies View Related

Server :: Raid Array Metadata Info (mdadm)?

Feb 3, 2011

When we assemble a RAID array, from where does it load the configuration information for that array? I thought it referred to the /etc/mdadm.conf file, but on my system the mdadm.conf file doesn't even contain all the information. Still, it is able to successfully assemble the previously created device.

# cat /etc/mdadm.conf
DEVICE /dev/sd[bcdjkl]1
DEVICE /dev/loop[012345]

[code]...
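
For reference: most of the information needed for assembly lives in the md superblock written on each member device, not in mdadm.conf; the config file mainly limits which devices get scanned and names the arrays. A hedged way to inspect what a member stores (the device name is just an example):

Code:
mdadm --examine /dev/sdb1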

View 2 Replies View Related

Server :: Raid 1 - Resync The Drives In The Array Hda Primary And Hdc Secondary Using Mdadm?

Nov 30, 2010

I am learning software RAID 1 with CentOS 5.5. I created the RAID without any problems and removed the first drive to check there were no problems, and it booted. I have installed the old drive back in the system as hdc and need to resync the drives (I used the old drive, so the partitions are correct). I thought I could use raidhotadd, but it does not seem to exist anymore. How do I resync the drives in the array (hda primary and hdc secondary) using mdadm?
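
mdadm's --add subcommand replaced the old raidhotadd; a minimal sketch, assuming the array is /dev/md0 and the re-inserted member partition is /dev/hdc1 (both assumptions):

Code:
mdadm --manage /dev/md0 --add /dev/hdc1
# watch the resync progress
cat /proc/mdstat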

View 1 Replies View Related

Server :: Raid 10 Not Assembling Mdadm Assembled From 2 Drives - Not Enough To Start The Array?

Feb 20, 2011

This is the message I get when I try to start it:

mdadm: /dev/md0 assembled from 2 drives - not enough to start the array

Below is the information I've collected. Any help on how I can get the RAID back up and going so I can get the data off of it would be awesome.

sudo mdadm --examine --scan -v
ARRAY /dev/md0 level=raid10 num-devices=4 UUID=91c36708:a7cbb532:5b51dc92:ba008491
devices=/dev/sdd1,/dev/sdc1,/dev/sdb1,/dev/sda1

[code]...

View 1 Replies View Related

General :: Bad Sectors On Mdadm Raid 5 Array?

Aug 14, 2010

I'm running a Debian home server with a 3-disk (1GB each) RAID 5 array using mdadm (the OS is on a separate disk). Now smartmontools has noticed some bad sectors on one of the disks, and I'm not sure what to do next (except for making a backup of valuable data). I found some articles on how to fix these sectors, but I'm unsure what the result on the whole array would be.
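
One commonly suggested approach, hedged: md can rewrite unreadable sectors from parity during a scrub, which prompts the disk to remap them (assuming the array is /dev/md0 and the other members are healthy):

Code:
# trigger a scrub; read errors are repaired by rewriting from parity
echo check > /sys/block/md0/md/sync_action
# watch progress, then review the mismatch count afterwards
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt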

View 4 Replies View Related

Software :: RAID Mdadm Can't Add Disks To Array?

Sep 10, 2010

I have a 7-drive RAID array on my computer. Recently my SATA PCI card died, and after going through multiple cards to find another one that worked with Linux, I now can't assemble the array. The drives are no longer in the order they were in previously, and mdadm can't seem to reassemble the array. It says there are 2 drives and one spare, even though there were 7 drives and no spares. I know for a fact that none of the drives are corrupted, because one of the non-working RAID cards was still able to mount the array for a short period, but would lose the drives during resyncing (I later found out that the chipset on the card had extremely limited Linux support). I have tried running "mdadm --assemble --scan", and after the array is partially assembled, I add the other drives with "mdadm --add /dev/md0 /dev/sdc1". These both return errors and will not complete on the new RAID card.

Code:
aaron-desktop:~ aaron$ sudo mdadm --assemble /dev/md0
mdadm: /dev/md0 assembled from 2 drives and 1 spare - not enough to start the array.

[code]....
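
A hedged recovery sketch: member order does not matter to mdadm, since each superblock records the device's role, so one option is to stop the half-assembled array and force-assemble with every member listed explicitly (the partition names are placeholders for the seven real ones):

Code:
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[cdefghi]1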

View 4 Replies View Related

CentOS 5 :: Software RAID - Starting Array With Mdadm

Jul 15, 2010

I've been having troubles with software RAID. In particular, the RAID array becomes un-assembleable after reboots. The config is CentOS 5 with 4 SATA disks (one 160GB containing the OS, no RAID, and three 2TB disks configured as a RAID 5 array, no spare drive). These drives were configured in anaconda and all seemed to go well (the drive and its LVM partitions worked and it finished rebuilding overnight). A couple of reboots later, the drives cannot be assembled anymore and the machine won't boot. The error message says:

mdadm: /dev/md0 assembled from 1 drive and 1 spare - not enough to start the array.

Of course there are 3 drives and no spares in the array as configured. Manually starting the array with mdadm --assemble --scan gives the same message, as does assembling the array by specifying the individual parts. /proc/mdstat does recognize the 3 drives, and when I look at the partition tables in fdisk, they show as being software RAID. What could be wrong, or what are the steps to diagnose? I tried configuring the RAID drives manually before going the anaconda route. Also, does anyone know how I can edit the /etc/fstab file to disable them so the machine will at least boot? The (Repair filesystem) shell has the / drive mounted read-only.
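
On the fstab question, a hedged sketch from the repair shell (the mount point and filesystem type are assumptions):

Code:
mount -o remount,rw /
# then comment out the array's line in /etc/fstab, or mark it noauto:
# /dev/md0  /data  ext3  defaults,noauto  0 0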

View 7 Replies View Related

Ubuntu :: Slow Write Speeds On Mdadm RAID 5 Array

Mar 3, 2010

I have a 4-drive RAID 5 array set up using mdadm. The system is stored on a separate physical disk outside of the array. Reading from the array is fast, but writing to it is extremely slow, down to 20MB/s compared to 125MB/s reading. It writes a bit, then pauses, then writes a bit more, then pauses again, and so on. The test I did was to copy a 5GB file from the RAID to a spare non-RAID disk on the system: average speed 126MB/s. Copying it back onto the RAID (into another folder), the speed was 20MB/s. The other thing is a very slow (several KB/s) write speed when copying from an eSATA drive to the RAID.
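
A commonly cited, hedged tuning step for bursty mdadm RAID 5 writes is raising the stripe cache, at the cost of some RAM (the md0 name and the value are assumptions; the default is 256):

Code:
echo 8192 > /sys/block/md0/md/stripe_cache_size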

View 9 Replies View Related

Ubuntu Servers :: Mdadm RAID 6 Array With Si 3132 SATA Controller?

Mar 12, 2010

I've recently started having an issue with an mdadm RAID 6 array that been operational for about 2500 hours.

Intermittently during write operations the array stalls, dropping to almost zero write speed for 10-30 seconds. When this occurs, one or both of the two drives attached to a 2-port Silicon Image si3132 SATA-II controller "locks up" with its activity light stuck on. This just started occurring within the last week and didn't seem to coincide with any update that I noticed. The array has just recently passed 12.5% full. The size of the write does not seem to make any difference, and it seems completely random. Sometimes copying a 5GB dataset results in no slowdown, while other times a torrent downloading to the array at 50kb/sec does cause a slowdown, and vice versa.

The array consists of 8 WD 1.5TB drives, 6 attached to the ICH9R south bridge and 2 attached to an si3132-based PCI Express card. The array is formatted as a single ext4 partition.

Checking SMART data for all drives shows no errors. Testing read speed with hdparm reports what I would expect (100MB/sec for each drive, ~425MB/sec for the array).

The only thing I did notice is that udma6 is enabled for all the ICH9R drives, while only udma5 is enabled for the si3132 drives. Write cache is enabled for all the disks. Attempting to set the si3132 drives to udma6 results in an I/O error from hdparm.

The si3132 drives are using the sata_sil24 driver. Nothing of interest appears in the kernel log or syslog. During the stalls, top shows very high wait time.

The si3132 controller appears to have the original firmware from 2006 loaded. There are some firmware updates available on the Silicon Image website for this controller, which now appear to offer a separate firmware for RAID operation (some sort of hybrid controller/software thing the controller supports) and a separate firmware for standard IDE use.

Has anyone had similar issues with this controller? Is a firmware update a reasonable course of action? If so which firmware is best supported by the linux driver?

I know I'm not using its RAID features, but I've dealt with controllers that needed to be in RAID mode for AHCI to be active and for Linux to work well with them. I'm a bit iffy at the idea of just trying it and finding out, as it could knock 2 disks of my array out of action.

View 2 Replies View Related

Software :: RAID 5 Array Not Assembling All 3 Devices On Boot Using MDADM - One Is Degraded

Aug 31, 2010

I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced in, to make a RAID 5 array using three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are being assembled at boot, but I can add the missing drive without any problems later; it's just that this takes hours to sync. Here is some information:

[Code]....
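
Since the re-add takes hours to sync, a hedged option is a write-intent bitmap, which makes re-adding a briefly missing member near-instant because only dirty regions are resynced (md0 is an assumption):

Code:
mdadm --grow /dev/md0 --bitmap=internal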

View 11 Replies View Related

Ubuntu Servers :: Creation Of RAID-0 Array In Disk Utility Resulting In Smaller Than Expected Array?

Sep 27, 2010

I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue, as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).

The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu), as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order), the final array size comes out at 1.8TB, as per the attached screenshot. Now with the following drives, I expected something more like:

160 + 250 + 250 + 750 + 250 + 200 + 200 + 250 + 320 + 250 + 320 = 3.2TB

Am I missing something or making a false assumption somewhere?
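
A hedged explanation that fits the numbers: Disk Utility's RAID-0 (striped) arrays size to the number of members times the smallest member. Two 250GB drives give 2 x 250 = 500GB; adding the 160GB drive drops that to 3 x 160 = 480GB; and with all eleven drives, 11 x 160GB is roughly 1.8TB, matching the screenshot. A JBOD/linear or LVM spanned volume is what sums the capacities to the expected 3.2TB.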

View 4 Replies View Related

Software :: Mdadm - Raid0 Array Appear With Only One Disk

Dec 1, 2010

When I set up Ubuntu 10.10 I had only one HDD around, so I installed my system with the idea that I would add the 2nd HDD for RAID 1 later on. Last weekend I wanted to add the HDD, but discovered that Ubuntu had created a RAID 0 array. So I went on and tried different things: removing the 1st HDD from the RAID 0 array, creating a RAID 1 with two disks, and so on... I finally could synchronize both disks, but after a reboot the RAID 0 array appeared again with only one disk. Now I know I should have written the mdadm.conf and fstab files... My last tries resulted in a missing superblock. Here is the story:

[Code].....

View 1 Replies View Related

Ubuntu Servers :: Mdadm - Why /dev/sdb1 And /dev/sdi1 Show As Both Ext2fs And Also As Part Of A RAID Array

May 31, 2011

I've been having some problems with my RAID 5 array, and after extensive investigation, I'm fairly sure that my last resort is rebuilding the array. I'd tried --assemble, b/c it's a previously created array, but it didn't seem to like that. So I checked into --create, and it will re-create the array w/out destroying the data if the superblocks are persistent, which they seem to be. However, here's what I get:

[Code]....

My question is: why do /dev/sdb1 and /dev/sdi1 show as both ext2fs and also as part of a RAID array?
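
A hedged explanation: with 0.90 (and 1.0) metadata the md superblock sits at the end of the partition, so a stale ext2 signature left at the start of the device stays visible to filesystem-detection tools alongside the RAID superblock; the two labels are not mutually exclusive.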

View 3 Replies View Related

Software :: Mdadm 2.6.4 - Adding Disk To Running Linear Array

Jan 31, 2010

I am using mdadm 2.6.4 for managing RAIDs on Linux kernel 2.6.18. I have a query: whenever I try to add a new disk to a running linear array (JBOD), I get the message "cannot add new disk to this array".

The exact steps are as follows.
Create a new array:
mdadm -C /dev/md0 -llinear -n2 /dev/sata1 /dev/sata2
It is created, and I am able to see it with the -D command.

Now add a new disk, sata3, as follows:
mdadm --grow /dev/md0 --add /dev/sata3
I get the output:
md: sdb has invalid sb, not importing!
md: md_import_device returned -22
mdadm: cannot add new disk to this array.

So my first doubt is whether mdadm 2.6.4 supports this feature or not; if it does, do I need to change the driver?
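
A hedged sketch of the sequence often suggested when the kernel rejects a new member over a stale superblock (--zero-superblock destroys any old md metadata, so run it only on the disk being added):

Code:
mdadm --zero-superblock /dev/sata3
mdadm --grow /dev/md0 --add /dev/sata3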

View 3 Replies View Related

General :: Backup Daemons Or Mdadm RAID Across Internal And External HDDs?

Aug 15, 2010

I'm building a new desktop computer, on which I plan to install Debian Squeeze. I'll have a 1 TB SATA hard drive in the system. I'm also considering using two 500 GB external USB drives, but I'm debating about how I want to use them. Running them all separately for 2 TB of space could be a nightmare, with three potential points of failure, so I was thinking of using the two external drives as a backup system instead.

I'm considering linking the two external drives in a RAID 0 array, then linking that array and the internal drive in a RAID 1 array. I would use mdadm software RAID for all of this so I could use individual partitions in the arrays, avoid hardware dependency, and have greater software control. So now, is this feasible to do (having a partial RAID 0+1 setup)? Moreover, what kind of performance could I expect from using potentially slow external drives (one of which I know has a very long spin-up time after idle periods) in a mirroring setup with the internal drive? Would I be far better off using a filesystem backup daemon instead?

EDIT: After some more research and brainstorming, I've decided I might just end up using rsync+cron, lsyncd, or DRBD (assuming it can easily make backups locally). I'd probably have to link up the external drives in RAID 0 (or use some filesystem link trickery). But I suppose such a setup would offer greater control, flexibility in disk capacities (the full system isn't so strictly limited to the capacity of the smallest member of the array), and granularity than RAID 0+1 would. I'm still open to thoughts on the mdadm RAID 0+1 solution, but does anyone have any advice on choosing backup software? For some background on my needs, I'll be using this computer as both an everyday desktop and a personal LAMP server (MySQL database files would be included in the backups).
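
A minimal rsync+cron sketch of the kind being considered (the script path, target mount point, and exclude list are all placeholders):

Code:
#!/bin/sh
# e.g. /etc/cron.daily/backup-to-externals (hypothetical): nightly one-way mirror
rsync -a --delete \
      --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/mnt \
      / /mnt/external-pair/backup/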

View 6 Replies View Related

Ubuntu :: Mdadm Raid 5 Single Disk Failure

Feb 2, 2010

Recently, the SMART utility said that one of the drives had failed and another drive was about to fail. I downed the box and hooked them up to my Windows machine to run SeaTools on them (they are all Seagate drives). SeaTools said that the drives were fine, while Ubuntu said they were failing/dead. Yesterday I decided to try to fix one of the drives in the RAID. I turned the server off, took the failed drive out, and restarted. Of course the RAID didn't work because only 2 of the 3 drives were there; however, it had been working w/ only 2 of the 3 drives for a couple of months now (I'm a lazy college student). I turned it back off and back on with the drive there just to see if I could get the RAID up again, but I haven't been able to get it to go. So far I've tried:

Code:

mdadm --assemble /dev/md0 /dev/sd[b,c,d]
mdadm: no recogniseable superblock on /dev/sdb
mdadm: /dev/sdb has no superblock - assembly aborted

[code]....

I'm looking for a way to trick the RAID into working with just 2 drives until I can warranty the Seagate and buy an external 1.5TB drive to use as another backup, and for how to remove the bad drive from the array and replace it with a fresh drive without data loss.
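
The standard hedged swap sequence, assuming the array is /dev/md0 and the bad member is /dev/sdb (names taken from the attempt above):

Code:
mdadm --manage /dev/md0 --fail /dev/sdb
mdadm --manage /dev/md0 --remove /dev/sdb
# after physically replacing the disk:
mdadm --manage /dev/md0 --add /dev/sdb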

View 3 Replies View Related

Ubuntu :: MDADM RAID 5 Disk Failure And Recovery?

Jun 18, 2010

I have a fileserver which is running Ubuntu Server 6.10. I had a RAID5 array consisting of the following disks:

Code:
/dev/sda1
/dev/sdb1
/dev/sdd1
/dev/md0 - the RAID device for the above three disks

The sda1 disk has failed and the array is running on 2 of 3 disks. The other disks in the system are:

/dev/sdc (OS disk)
/dev/sde (new 2tb disk - unused)
/dev/sdf (new 2tb disk - unused)

My plan was to rebuild the array using the two new disks as RAID 1. Would the best way to do this be to create a new RAID 1 device on /dev/md1, then copy all data over from /dev/md0? Also, this may sound stupid, but since all 3 drives in md0 are identical, I'm not sure physically which disk is bad. I tried disconnecting each disk one-by-one and then rebooting, but the system doesn't appear to want to boot without the bad drive connected. I've already failed the disk in the array with mdadm, but I'm unsure of how to remove it properly.
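
A hedged way to tell the disks apart and finish the removal (smartctl comes from smartmontools; device names are examples):

Code:
# map the failed device to a physical serial number, then read the drive labels
smartctl -i /dev/sda | grep -i serial
# remove the already-failed member from the array
mdadm --manage /dev/md0 --remove /dev/sda1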

View 3 Replies View Related

Ubuntu Installation :: Migrate Working Single Disk System To Existing RAID Array Using Disk UUIDs

Aug 1, 2010

I had done a new lucid install to a 1TB RAID 1 array using the alternate CD a few weeks back. I messed up that system trying to get some hardware working that lucid doesn't have drivers for yet, so I gave up on it and reinstalled to a single 80GB disk that I now want to move over to the RAID array.

I moved all of the existing files on the array to a single folder, then copied all of the folders from the 80 GB disk over to the array with permissions and symlinks (minus the contents of /proc and /sys, which I created empty).

These are the commands I used:

Quote:

cp -a -d -R -v -t /media/raid_array /b*
cp -a -d -R -v -t /media/raid_array /d*
cp -a -d -R -v -t /media/raid_array /e*
cp -a -d -R -v -t /media/raid_array /h*

[Code]....

I tried to change fstab to use the 689a... for root, but when I try to boot, it's still trying to open /dev/disk/by-uuid/412d...

So then I booted from the single disk again and chrooted into the array, then ran update-initramfs -u. I got 3 "grep: /proc/modules: No such file or directory" errors, and "cat: /proc/cmdline: No such file or directory", so I created the directory /proc/modules, created an empty file /proc/cmdline, and ran the initramfs update again. Then I tried to shut down, which hung (probably because I was doing all of this from a terminal window in Gnome), so I killed the power after a couple of minutes.

It's still trying to use /dev/disk/by-uuid/412d... to boot.

What am I missing? I assume I just have to change the UUID to mount as root, but I don't know how.
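
A hedged pointer: the root UUID used at boot comes from the kernel command line in GRUB's configuration, not just fstab, so GRUB needs regenerating from a properly prepared chroot. The grep/cat errors above are the usual sign that /proc wasn't bind-mounted into the chroot. A sketch (the mount point comes from the copy commands above; the target disk is an assumption):

Code:
mount --bind /proc /media/raid_array/proc
mount --bind /sys /media/raid_array/sys
mount --bind /dev /media/raid_array/dev
chroot /media/raid_array
update-initramfs -u
update-grub
grub-install /dev/sda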

View 2 Replies View Related

Software :: Rebuild/Repair Array (Raid 1) With Only "mdadm" Command Slack 12.2?

Mar 18, 2010

I wonder how to attach a new SATA hard disk to a software array where there are two disks and one has crashed (this is mirroring mode = RAID 1). The situation is like this: I unplugged the crashed disk, bought a similar one, and plugged it in. What should I do next?
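
A hedged sketch of the usual next steps, assuming /dev/sda is the surviving disk, /dev/sdb is the new one, and an MBR partition table (all assumptions):

Code:
# copy the partition layout from the survivor to the new disk
sfdisk -d /dev/sda | sfdisk /dev/sdb
# add the new partition back into the mirror and watch it rebuild
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat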

View 4 Replies View Related

Debian Installation :: Grub Rescue - Will Not Boot From Mdadm RAID - No Such Disk

Sep 19, 2014

I am running a 14 disk RAID 6 on mdadm behind 2 LSI SAS2008's in JBOD mode (no HW raid) on Debian 7 in BIOS legacy mode.

Grub2 is dropping to a rescue shell complaining that "no such device" exists for "mduuid/b1c40379914e5d18dddb893b4dc5a28f".

Output from mdadm:
Code:
# mdadm -D /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Wed Nov  7 17:06:02 2012
         Raid Level : raid6
         Array Size : 35160446976 (33531.62 GiB 36004.30 GB)
      Used Dev Size : 2930037248 (2794.30 GiB 3000.36 GB)
       Raid Devices : 14

[Code] ....

Output from blkid:
Code:
# blkid
    /dev/md0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
    /dev/md/0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
    /dev/sdd2: UUID="b1c40379-914e-5d18-dddb-893b4dc5a28f" UUID_SUB="09a00673-c9c1-dc15-b792-f0226016a8a6" LABEL="media:0" TYPE="linux_raid_member"

[Code] ....

The UUID for md0 is `2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` so I do not understand why grub insists on looking for `b1c40379914e5d18dddb893b4dc5a28f`.
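
A hedged reading of the blkid output above: 2c61b08d-... is the XFS filesystem's UUID on top of md0, while b1c40379-... is the md array UUID recorded in each member's superblock (it appears on sdd2 as the linux_raid_member UUID). GRUB's mduuid reference is therefore pointing at the right array; the failure is more likely that GRUB cannot see enough members at boot to assemble it.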

Here is the output from `bootinfoscript` 0.61. This contains a lot of detailed information, and I couldn't find anything wrong with any of it: [URL] .....

During the grub rescue an `ls` shows the member disks and also shows `(md/0)` but if I try an `ls (md/0)` I get an unknown disk error. Trying an `ls` on any member device results in unknown filesystem. The filesystem on the md0 is XFS, and I assume the unknown filesystem is normal if its trying to read an individual disk instead of md0.

I have come close to losing my mind over this. I've tried uninstalling and reinstalling grub numerous times, `update-initramfs -u -k all` numerous times, `update-grub` numerous times, and `grub-install` numerous times to all member disks without error, etc.

I even tried manually editing `grub.cfg` to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `(md/0)` and then re-install grub, but the exact same error of no such device mduuid/b1c40379914e5d18dddb893b4dc5a28f still happened.

[URL] ....

One thing I noticed is that it is only showing half the disks. I am not sure if this matters, but one theory is that it is because there are two LSI cards physically in the machine.

This last screenshot was shown after I specifically altered grub.cfg to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `mduuid/2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` and then re-ran grub-install on all member drives. Where it is getting this old b1c* address I have no clue.

I even tried installing a SATA drive on /dev/sda, outside of the array, and installing grub on it and booting from it. Still, same identical error.

View 14 Replies View Related

Fedora :: Live Disk Creation & Disk Image Backup

Aug 6, 2009

I need a little help on live disk creation and disk image backup.

Can I create a live disk using my hard drive installation? If yes, can I then restore Fedora from the live disk to the hard drive? I mean to say: from that live disk, can I install Fedora again on my hard drive?

Second question: if I create a disk image of my hard drive (including the NTFS & FAT32 partitions), can I restore it to a blank drive? If so, will the OS be restored as well?

View 7 Replies View Related

Server :: Mdadm Array Migration Between Hardware Upgrades?

Jul 25, 2011

Long-time lurker, still a Linux noob, but I'm learning. I currently have a home media server set up with the following hardware specs:

MSI P45 motherboard
Intel Core2Quad Q6600
8GB DDR2 RAM
2x 250GB WD HDD in RAID1 via LVM (boot/swap etc)
8x2TB Hitachi HDD in RAID5 via mdadm (media/data)

The server mainly serves files for HTPCs around the house and runs a few VMs with VMware Server. I have recently picked up the following hardware, which I'm thinking about upgrading to:

Gigabyte EX-58-Extreme motherboard
Intel i7 920
12GB DDR3 RAM

My main concern is: will I be able to just swap the drives into the new system and have everything pick up where it left off? More specifically, will mdadm be able to detect the 8x2TB drives attached to the new hardware and re-assemble the array?

My buddy who helped me set this system up isn't sure, so I figured I'd ask here first. The boards do have the same ICH10R southbridge providing 6 of the SATA ports, and 2 more will be run off of the extra onboard controller. I don't have a lot of Linux experience switching out core parts, but in Windows I've had great success moving things between various Intel chipsets and architectures, from P965 -> P35 -> P45 -> H55 -> X58.

View 1 Replies View Related

Debian Configuration :: Use A Whole Disk Or A Partition In RAID Array?

Aug 31, 2010

I have a question concerning Linux, mdadm, and creating RAID arrays in Debian. I've done a lot of reading and research on RAID both on this board and elsewhere (The Linux Documentation Project's Software-RAID HOWTO is especially good), but I've run across something that no one seems to explain, and I'm not sure why. I'm instructed to create partitions on the drives I wish to add to my array. These partitions inevitably take up the whole disk and always have their system IDs set to "Linux raid autodetect". What I don't understand is why, after creating these partitions, some guides then go on to create an array (say a RAID 5 one) with just the disks themselves as members, while others create the RAID 5 array with the previously created partitions as members. E.g.,

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
vs.
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

What's the advantage of using one over the other?

View 3 Replies View Related

Ubuntu Installation :: Raid 0 - Two Hard Disk Array

Jul 8, 2010

What is the best way to install Windows and Linux on a two-hard-disk array? With fakeraid there are no problems in Windows, but Linux installation is almost impossible (I've tried, unsuccessfully...). With software RAID, would it be impossible to share files between Windows and Linux? And finally, hardware RAID is possible, but cheap controllers have low performance. Is there any other way (apart from spending a lot of $$ on an Adaptec controller)?

View 1 Replies View Related

Ubuntu :: Rebuild Md RAID Array After OS Disk Failure?

Dec 19, 2010

I went to set up my Linux box and found that the OS drive had finally died. It was an extremely old WD Raptor drive in a hot box full of drives, so it was really only a matter of time before it just quit on me. Normally this wouldn't be such a big deal; however, I had just recently constructed an md RAID 5 array of three 1TB disks to act as an NFS mount for basically all of my important files. Maybe 2-3 weeks before the failure I had finished moving all of my most important stuff onto that array. Now, I know that the array is intact. All the required data is sitting on those disks. Since only the OS-level disk failed on me, I should be able to get a new disk in there, reinstall Ubuntu, and then rebuild that array. How exactly do I go about doing that with mdadm? Do I create the array from the /dev block devices like when I initially built the array?
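
A hedged sketch of the usual sequence after the OS reinstall: the array is assembled, not re-created, from the superblocks already on the members, and the config is then persisted so it comes up at boot (paths per Ubuntu's packaging):

Code:
mdadm --assemble --scan
# once /proc/mdstat shows the array, persist it:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u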

View 2 Replies View Related






