Debian Configuration :: Use A Whole Disk Or A Partition In RAID Array?

Aug 31, 2010

This concerns Linux, mdadm, and creating RAID arrays in Debian. I've done a lot of reading and research on RAID, both on this board and elsewhere (The Linux Documentation Project's Software-RAID HOWTO is especially good), but I've run across something that no one seems to explain, and I'm not sure why. I'm instructed to create partitions on the drives I wish to add to my array. These partitions inevitably take up the whole disk, and always have their system IDs set to "Linux raid autodetect". What I don't understand is why, after creating these partitions, some guides then go on to create an array (say a RAID5 one) with just the disks themselves as members, while others create the RAID5 array with the previously created partitions as members. E.g.,

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
vs.
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

What's the advantage of using one over the other?
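For reference, the partition-type step the guides assume is a one-liner; a minimal sketch, with /dev/sdX standing in for each member disk:

# create one partition spanning the disk and set its MBR system ID to 0xfd
# ("Linux raid autodetect"); /dev/sdX is a placeholder, repeat per disk
echo ',,fd' | sfdisk /dev/sdX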

View 3 Replies



Debian Configuration :: RAID Array Not Accessible

Aug 29, 2015

Just set up Debian 8 (LXDE) a few weeks ago. The RAID10 array was preexisting.

It was working well. After booting I would go to the array, enter the root password when prompted, and everything would be good.

Can't access the array.

I used to use the command $ mount /dev/dm-0 /home/myspace/folder under Debian 7.6 to mount the array (it no longer works).
blkid lists a /dev/md0, but instead of a UUID it shows a PTUUID.

[Code] .....
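A PTUUID usually means blkid is seeing a partition table on the device rather than a filesystem, so the mountable filesystem may now live one level down; a hedged diagnostic sketch:

blkid /dev/md0                          # PTUUID here = md0 carries a partition table
fdisk -l /dev/md0                       # list the partitions inside the array
mount /dev/md0p1 /home/myspace/folder   # mount the inner partition, not md0 itself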

View 0 Replies View Related

Debian Configuration :: Reorganizing Disks In MD RAID Array

Mar 4, 2010

I'm trying to do some RAID managing with mdadm. I would like to sync my spare disk and then remove it from the array to make a backup out of it with the dd command (the best way I can think of to get a current image of the whole system, as it can't be done using the active RAID as the source, because it is constantly in use and changing). So, I have a RAID1 array with 1 spare and 2 active disks (configuration listed below). Now I would like to force the spare to sync and then remove it from the array, although it is not faulty.

However, mdadm man page states:
"Devices can only be removed from an array if they are not in active use. i.e. that must be spares or failed devices. To remove an active device, it must be marked as faulty first."

So, I'd have to mark a disk as faulty (which it is not) to be able to remove it from the array. There seem to be several people reporting that they can't remove this faulty flag accidentally given to a drive. And mdadm does not provide a direct command for such an operation. Isn't there a way I could remove and add disks whenever I feel like it? One way would be to open the cover and physically remove the disk. I'm not taking that risk, though. The system is almost always in use, so there is not much chance for me to power off for a temporary disk removal.
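For what it's worth, the faulty flag lives in the array's metadata, not on the disk as permanent damage; the usual cycle looks like this (a hedged sketch, device and path names are examples):

mdadm /dev/md0 --fail /dev/sdc1     # mark the member faulty so it can be removed
mdadm /dev/md0 --remove /dev/sdc1
dd if=/dev/sdc1 of=/backup/raid.img bs=1M   # take the backup image
mdadm /dev/md0 --add /dev/sdc1      # re-add; it resyncs and the faulty mark is gone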

RAID CONFIGURATION:
~# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri Aug 4 17:38:26 2006
Raid Level : raid1
Array Size : 238950720 (227.88 GiB 244.69 GB)

View 3 Replies View Related

Debian Configuration :: Setting Up A RAID 1 FAT32 Array?

Jul 2, 2011

I'm new to Linux in general and am having issues setting up a RAID 1 array for two disks on an HP ProLiant MicroServer, which I want to be accessible from my Windows PC. I have installed the latest version of Debian successfully on a 250GB disk that came with the server. I have added two 2TB disks which I would like to have in a RAID 1 array, visible from Windows, to store music/videos etc. on. I have managed to partition the two disks to FAT32 (which I think is best) and have managed to configure the array so that it shows as active when I use cat /proc/mdstat. I have been following the steps in this article [URl]... squeeze-p2 and trying to adapt it to my situation.

I am stuck on the step to create the file systems using the mkfs command. I try mkfs.vfat /dev/md0 and it comes up with the error "mkfs.vfat: command not found". I have tried mkfs -t vfat /dev/md0 and it gives the error "mkfs.vfat: No such file or directory". So my question is: how can I continue with the process of setting up the array? Or maybe I should be asking: is it possible to set up an array with FAT32-formatted disks?
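The command-not-found error is expected on a stock Debian install: mkfs.vfat is shipped in the dosfstools package. A sketch of the missing step:

apt-get install dosfstools
mkfs.vfat -F 32 /dev/md0    # -F 32 forces FAT32 on the array device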

View 2 Replies View Related

Debian Configuration :: Can't Correctly Install Grub On RAID Array

Nov 26, 2015

I'm having issues with a RAID array.

Setup is like this:

Debian Jessie, 2 hard disks, each having 2 partitions: /dev/sda1, /dev/sda2, /dev/sdb1, /dev/sdb2. Partitions were paired during installation, so they form /dev/md0 and /dev/md1. /dev/md0 is the root (/) partition, /dev/md1 is for /home.

At the end of the install process, I chose /dev/sda1 to carry Grub. And I think this is where I screwed things up.

After removing one of the hard drives, there was no boot capability. So, I installed Grub on /dev/sdb, too.
Now it displays the boot menu but cannot find the kernel. This is where I got lost in the process.

Do I need to reinstall the OS or is there a way to fix it? I suppose I have to edit Grub.
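Reinstalling shouldn't be necessary. The usual repair is to install GRUB to the MBR of both whole disks (not to a partition like /dev/sda1) from a rescue environment; a hedged sketch assuming the layout above:

mdadm --assemble --scan    # bring up /dev/md0 and /dev/md1
mount /dev/md0 /mnt        # the root filesystem
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grub-install /dev/sda      # the MBR of the first disk...
grub-install /dev/sdb      # ...and of the second
update-grub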

View 0 Replies View Related

Ubuntu Servers :: Creation Of RAID-0 Array In Disk Utility Resulting In Smaller Than Expected Array?

Sep 27, 2010

I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).

The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8 TB, as per the attached screenshot. Now with the following drives, I expected something more like:

160 + 250 + 250 + 750 + 250 + 200 + 200 + 250 + 320 + 250 + 320 = 3.2TB

Am I missing something or making a false assumption somewhere?
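One plausible explanation, assuming Disk Utility builds a conventional stripe: RAID-0 sizes itself to N times the smallest member, so every larger disk contributes only the smallest disk's capacity. That matches all three observations:

2 x 250 GB  = 500 GB            (two equal disks)
3 x 160 GB  = 480 GB            (the 160 GB disk becomes the limit)
11 x 160 GB = 1760 GB ~ 1.8 TB  (all eleven disks)

A linear array concatenates members instead and would give the full 3.2 TB sum, e.g. mdadm --create /dev/md0 --level=linear --raid-devices=11 followed by the member devices.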

View 4 Replies View Related

General :: Convert Full-disk RAID5 Array To Partition-based Array?

Dec 23, 2010

I have a RAID 5 array, md0, with three full-disk (non-partitioned) members, sdb, sdc, and sdd. My computer will hang during the AHCI BIOS if AHCI is enabled instead of IDE, if these drives are plugged in. I believe it may be because I'm using the whole disk, and the AHCI BIOS expects an MBR to be on the drive (I don't know why it would care).

Is there a way to convert the array to use members sdb1, sdc1 and sdd1, partitioned MBR with 0xFD RAID partitions?
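The usual approach is to convert one member at a time, letting the RAID5 rebuild in between; a hedged sketch (back up first, and note mdadm will refuse the --add if the new partition comes out smaller than the array's used device size, since a partition starts slightly into the disk):

mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
echo ',,fd' | sfdisk /dev/sdb    # MBR with one whole-disk 0xFD partition
mdadm /dev/md0 --add /dev/sdb1   # rebuilds onto the partition
# watch /proc/mdstat; when the resync finishes, repeat for sdc and sdd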

View 1 Replies View Related

Ubuntu Installation :: Migrate Working Single Disk System To Existing RAID Array Using Disk UUIDs

Aug 1, 2010

I had done a new Lucid install to a 1 TB RAID 1 array using the alternate CD a few weeks back. I messed up that system trying to get some hardware working that Lucid doesn't have drivers for yet, so I gave up on it and reinstalled to a single 80 GB disk that I now want to move over to the RAID array.

I moved all of the existing files on the array to a single folder, then copied all of the folders from the 80 GB disk over to the array with permissions and symlinks (minus the contents of /proc and /sys, which I created empty).

These are the commands I used:

Quote:

cp -a -d -R -v -t /media/raid_array /b*
cp -a -d -R -v -t /media/raid_array /d*
cp -a -d -R -v -t /media/raid_array /e*
cp -a -d -R -v -t /media/raid_array /h*

[Code]....

I tried to change fstab to use the 689a... for root, but when I try to boot, it's still trying to open /dev/disk/by-uuid/412d...

So then I booted from the single disk again and chrooted into the array, then ran update-initramfs -u. I got 3 "grep: /proc/modules: No such file or directory" errors, and "cat: /proc/cmdline: No such file or directory"- so I created directory /proc/modules, created an empty file /proc/cmdline, and ran the initramfs update again. Then I tried to shut down, which hung (probably because I was doing all of this from a terminal window in Gnome), so I killed the power after a couple of minutes.

It's still trying to use /dev/disk/by-uuid/412d... to boot.

What am I missing? I assume I just have to change the UUID to mount as root, but I don't know how.
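The stale 412d... UUID most likely survives inside grub.cfg and the initramfs, and the /proc errors happened because the chroot had no real /proc mounted (hand-creating /proc/modules doesn't help). A hedged sketch of the whole pass, with /dev/sdX as a placeholder for the boot disk:

mount /dev/md0 /mnt            # the copied root on the array
mount --bind /dev /mnt/dev     # give the chroot a real /dev, /proc, /sys so
mount --bind /proc /mnt/proc   # update-initramfs and update-grub can work
mount --bind /sys /mnt/sys
chroot /mnt
# edit /etc/fstab to use the array's UUID (the 689a... one from blkid), then:
update-initramfs -u
update-grub
grub-install /dev/sdX          # whichever disk(s) the BIOS boots from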

View 2 Replies View Related

Debian Configuration :: How To Get A Virtual RAID Image Disk

Feb 20, 2016

How can I get a virtual RAID image disk?

What I need is to mount several directories from other partitions (or file systems) as one merged file system that can grow or shrink depending on the free space, as if it were a dynamic RAID, so I can work with huge files distributed over the mounted partitions.

Example: /mnt/sda1/dir_raid1 + /home/dir_raid2 + /mnt/sda3/dir_raid3 ---> /mnt/RAID/

mhddfs and unionfs <---- are not the solution I'm searching for (they can't handle huge files)
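Those two work at the directory level, which is exactly why single files can't span members. Pooling block devices instead, for example with a linear LVM volume, does let one file cross disks; a hedged sketch, assuming the source partitions can be dedicated whole to the pool (LVM cannot pool directories):

pvcreate /dev/sda1 /dev/sda3 /dev/sdb5   # example devices, not the poster's real ones
vgcreate pool /dev/sda1 /dev/sda3 /dev/sdb5
lvcreate -l 100%FREE -n raid pool
mkfs.ext4 /dev/pool/raid
mount /dev/pool/raid /mnt/RAID           # files larger than any one disk now fit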

View 0 Replies View Related

Debian Configuration :: Raid 5 Recovery After Mistakenly Removing Disk?

Aug 12, 2010

I've got an 8-disk raid-5 setup, and one of the disks failed. I shut the system down, replaced it, and powered the box back on again. Then, I made a catastrophic mistake; I 'failed' and removed the wrong disk (should have been sdj1, and I typed sdk1 by accident). I tried to re-add sdk1 back to the raid array, but it got listed as 'spare'. My raid array is off-line, since I now have 2 disks unavailable.

I know that the data still exists on sdk1, is there any way I can get the raid array to recognise the fact that it's a valid part of the array, and not a spare disk? At least if I can do that, I'll have a degraded but accessible array, and then I can rebuild the array on the properly replaced disk.
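Since only sdk1's event counter is behind, a forced assembly of the known-good members (everything except the blank replacement) is the usual rescue; a hedged sketch, with the member list to be double-checked against the real device names:

mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[c-i]1 /dev/sdk1   # the 7 valid members;
                  # --force accepts sdk1's slightly older event count
# if that yields a running degraded array, add the replacement and rebuild:
mdadm /dev/md0 --add /dev/sdj1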

View 7 Replies View Related

Debian :: Reverting RAID 1 - Mount Partition As Standalone Encrypted Disk

Feb 11, 2011

I have 2 identical disks originally configured as a pair for a server. Each of the disks has 2 partitions, /dev/sdb1 and /dev/sdb2. The sdb1 partitions I had configured as a RAID1 mirror. The sdb2 partitions were non-RAID and used as extra misc space. Further, the RAID setup is also encrypted using dm-crypt LUKS. Now I want to redeploy each of the disks for new purposes. One of the disks I want to deploy exactly as before (keeping the partitions and content), however without being part of a RAID array.

I've successfully deployed this disk into a new system, and I am mounting the /dev/sdb1 partition as /dev/md0 because the disk is set to autodetect RAID. Actually I am using cryptsetup and mounting with mapper. Can I get rid of the setting for autodetect on this partition without losing the data, or breaking the encryption? I just want to mount the partition as a standalone encrypted disk. Is it as simple as doing cryptsetup luksOpen /dev/sdb1 and then mounting it with mapper? Or do I need to change the partition in some way? Or do I simply continue to operate it as a 'broken' RAID array?
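If the mirror was built with old 0.90 metadata, the superblock sits at the end of the partition, after the LUKS payload, so it can be erased without touching the encrypted data; a hedged sketch (verify the metadata version before zeroing anything):

mdadm --examine /dev/sdb1           # confirm "Version : 0.90" first
mdadm --zero-superblock /dev/sdb1   # stop it being autodetected as a RAID member
cryptsetup luksOpen /dev/sdb1 olddata
mount /dev/mapper/olddata /mnt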

View 2 Replies View Related

Ubuntu Installation :: Raid 0 - Two Hard Disk Array

Jul 8, 2010

What is the best way to install Windows and Linux on a two-hard-disk array? With fakeraid there are no problems in Windows, but Linux installation is almost impossible (I've tried, unsuccessfully...). With software RAID, would it be impossible to share files between Windows and Linux? And finally, hardware RAID is possible, but cheap controllers have low performance. Is there any other way (apart from spending a lot of $$ on an Adaptec controller)?

View 1 Replies View Related

Ubuntu :: Rebuild Md RAID Array After OS Disk Failure?

Dec 19, 2010

I went to set up my Linux box and found that the OS drive had finally died. It was an extremely old WD Raptor drive in a hot box full of drives, so it was really only a matter of time before it just quit on me. Normally this wouldn't be such a big deal; however, I had just recently constructed an md RAID5 array of 3 1TB disks to act as an NFS mount for basically all of my important files. Maybe 2-3 weeks before the failure I had finished moving all of my most important stuff onto that array. Now, I know that the array is intact. All the required data is sitting on those disks. Since only the OS-level disk failed on me, I should be able to get a new disk in there, reinstall Ubuntu and then rebuild that array. How exactly do I go about doing that with mdadm? Do I create the array from the /dev character devices like when I initially built the array?
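The key point is that the array's metadata lives on the three data disks themselves, so it only ever needs assembling, never re-creating (--create would write fresh superblocks). A hedged sketch after the fresh install:

apt-get install mdadm
mdadm --assemble --scan                          # finds members by their superblocks
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array across reboots
update-initramfs -u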

View 2 Replies View Related

CentOS 5 :: Installing 5.4 With Multiple Raid Levels On A 4 Disk Array?

Nov 17, 2009

Our server is a CybertronPC I2XV9080 Imperium Tower. It is equipped with a Supermicro X7DVL-I motherboard and quad 750 GB SATA2 RAID edition hard drives in a RAID 5 array. We tried to install CentOS on the RAID 5 array with Device-Mapper as the LVM. In the BIOS, SATA RAID was enabled and the ICH RAID code base option was set to [Intel].

Intel Matrix Storage Manager Option ROM V5.6.4.1002 ESB2
RAID
ID Name Level Strip Size Status Bootable
0 Raid5 Raid 5 64KB 80GB Normal Yes
1 Raid_5 Raid 5 64kB 2000GB Normal Yes
[Code] .....

Can I have multiple RAID levels across the same array, or would that lead to problems as above? Is the root cause of my problem the fact that Intel RAID5 is not supported for Linux, based on the following link http:[url]....

View 3 Replies View Related

Server :: Creating Backup Disk Image Of RAID 1 Array (MDADM)?

Oct 27, 2010

We have some servers that run in very harsh environments (research vessel) that need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc), however we would like to be able to break out a new server and re-image it (including RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAID'ed devices (from a live CD):

dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img

then reverse the process on the new PC, finally using mdadm --assemble to re-create and re-build the array.
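The restore half on a replacement box might then look like this (a hedged sketch extending the same example; the partition name for the re-add is an assumption about the layout):

gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096
# boot the restored disk, then rebuild the mirror onto the second drive:
mdadm --assemble --scan
mdadm /dev/md0 --add /dev/sdb1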

View 1 Replies View Related

Debian :: Disk Health Warning - Disk Part Of RAID5 Array

Feb 17, 2016

I received the following error when I got home from work today. If this was a windows environment, my first inclination would be to boot off my dvd and then run a chkdsk on the drive to flag any bad sectors that might exist. But there's a complication for me.

Code:
This message was generated by the smartd daemon running on:
   host name:  LinuxDesktop
   DNS domain: [Empty]

The following warning/error was logged by the smartd daemon:
Device: /dev/sdc [SAT], 1 Currently unreadable (pending) sectors
Device info:
WDC WD5000AAKS-65V0A0, S/N:WD-WCAWF2422464, WWN:5-0014ee-157c5db9a, FW:05.01D05, 500 GB
For details see host's SYSLOG.

You can also use the smartctl utility for further investigation. The original message about this issue was sent at Sun Feb 14 13:43:17 2016 MST. Another message will be sent in 24 hours if the problem persists.

From gnome-disks
Code:
Disk is OK, 418 bad sectors (28° C / 82° F)

I did a bit of reading and it seems that most people suggest using badblocks to first get a list of bad blocks from the drive and save it to a file, then use e2fsck to mark the blocks listed in the badblocks file as bad on the hard drive. My problem here is that this drive is part of a RAID5 array that hosts my OS. I wanted to confirm whether this is still the correct process: boot to my live Debian disk, stop the RAID array if it's active, then run the badblocks + e2fsck commands on the drive in question, and then reboot.
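One caveat: e2fsck works with filesystem block numbers, and the filesystem lives on the md array, not on /dev/sdc, so a badblocks list taken from the raw member wouldn't line up. An alternative that keeps the array online is to let md itself rewrite the pending sectors via a check pass (a hedged sketch; the md device name is an example):

echo check > /sys/block/md0/md/sync_action   # md reads every sector and rewrites
                                             # any it can rebuild from parity
cat /proc/mdstat                             # watch progress
smartctl -A /dev/sdc | grep -i pending       # afterwards, recheck the SMART counter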

View 1 Replies View Related

Debian Installation :: No Boot From RAID Array?

Dec 22, 2010

I installed Debian 5.0.3 (backport with .34 kernel), because my server hardware (Dell PowerEdge R210) needs special firmware and drivers. However, the installation went quite smoothly. I put the system on a RAID 1 array with about 500 GB of space. As I said, the installation went well; however, it doesn't boot! No GRUB, nothing.

View 4 Replies View Related

Debian :: Setting Up A RAID Array With Multiple Partitions

May 23, 2011

I need to set up a RAID 1 array on Squeeze. I have 3 partitions: sda1 is root, sda5 is home, and sda6 is swap. (sda2 is the extended partition containing home and swap. This was a clean installation, so I don't know what happened to sda3 and sda4...)

All the information that I've been able to find recommends doing something like this:

mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1

Do I need to type a separate command for each partition, or is there a better way to do it? Also, should I use the UUID instead of the dev names?
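mdadm builds one md device per mirrored pair, so this layout takes three separate create commands in the style of the one quoted (a hedged sketch assuming two fresh disks sdb/sdc partitioned to match sda; sfdisk -d /dev/sda | sfdisk /dev/sdb clones the table). Plain device names are fine at creation time; afterwards mdadm --detail --scan emits ARRAY lines keyed by each array's UUID for /etc/mdadm/mdadm.conf, and /etc/fstab can use the filesystem UUIDs:

mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1   # for /
mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sdb5 /dev/sdc5   # for /home
mdadm --create --verbose /dev/md2 --level=mirror --raid-devices=2 /dev/sdb6 /dev/sdc6   # for swap
mdadm --detail --scan >> /etc/mdadm/mdadm.conf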

View 4 Replies View Related

Debian Hardware :: RAID As Multiple Disks - Configuring Array?

Dec 2, 2010

Alright, I have this issue on both SystemRescueCD and Debian Squeeze. I have an ASUS P5Q Turbo board that supports hardware RAID. If I configure an array and then start the Linux installer or boot the rescue CD, I get /dev/sda and /dev/sdb instead of an array. What gives? I need to start installing within the hour so I am desperate for an answer!
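Most motherboard "RAID" is firmware fakeraid, which Linux drives with dmraid (or mdadm for Intel metadata) rather than presenting a single combined disk; a hedged check from the rescue CD:

dmraid -r    # list disks carrying fakeraid metadata
dmraid -ay   # activate the sets; they appear under /dev/mapper/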

View 1 Replies View Related

Software :: Possible To Boot From Hard Disk Having RAID Partition?

Mar 28, 2010

I wanted to implement raid5 such that one partition is from my laptop's hard disk and the others are from other hard disks. After making one partition a RAID partition, I rebooted the system. The computer stopped mid-way during booting and brought me to the shell. On typing fsck -p, it told me an unexpected error occurred in the partition which I had made for RAID. Is there some condition that we cannot boot from a disk containing one of the RAID partitions?

View 1 Replies View Related

General :: Mount A Single RAID 1 Disk / Partition As Ext3?

Jul 7, 2011

I need to copy data from a single HD, which used to be part of a Linux RAID 1. I've googled around, but can't find any clue how to mount partitions from this single HD.

Background: The HD comes from a linux based NAS box Synology DS207+. The NAS uses ext3 as filesystem. Both NAS disks are fine, but the other NAS hardware is dead and not worth repairing or replacing.

View 1 Replies View Related

Slackware :: Building NAS - LVM / RAID Configuration For Partition

Jan 6, 2010

I'm building a NAS, based on the Intel SS4200. There are 4 drive bays in the machine for use with SATA disks, two of which I plan on filling now, the other two which I plan on filling later. The box also includes an IDE connector to which I will connect an 8GB Disk on Module onto which I will install Slackware. I wish to have all drives in the box show up as one contiguous volume. What partitioning/LVM/RAID configuration can I use which will allow me to:

1. Add a disk and transparently grow the available space of the volume?
2. Replace a disk with a larger disk and transparently grow the available space of the volume?
3. Lose a disk to hardware failure and replace it with a new one with no data loss?

If I use RAID 5, I'm pretty certain I can get numbers 1 and 3 above, but I'm not sure about number 2. The downside is that I'd have to start with 3 disks in the machine, and I'm unsure if adding a 4th disk whose size is larger than each of the 3 starting disks would lead to wasted space. For instance, if I start with three 1TB drives in RAID 5, and then add a 2TB 4th drive, would my available size go from 2TB to 3TB? Or from 2TB to 3.xTB?

Is it important in a RAID 5 setup to have all disks the same size? With LVM, I can certainly get number 1 above, but what about 2 and 3? I know you can use LVM to present many disks or partitions as one contiguous volume, but if I have two 1TB drives in one volume, and only have 300GB of data, then would the second drive remain empty until I broke the 1TB barrier? In this case, it's wasted space from the get go. I suppose another option would be to start with RAID 1 until I can afford a third disk.

When adding the new disk, could I switch to RAID 5 without data loss? I'm planning on maintaining a full mirror of the NAS on some USB disks as a backup, so if configuration changes to the NAS require wiping the disks and restoring from backup, it's not a total loss. However, it certainly makes me nervous to be in a state where only one copy of the data exists, so I'd rather find a solution where I can add and upgrade disks in the NAS without relying on the backup copies.
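Point 1, at least, is directly supported by md; a hedged sketch of growing a RAID 5 in place (and on the size question: every RAID 5 member contributes only the smallest member's capacity, so a 2TB drive among three 1TB drives takes usable space from 2TB to 3TB, not 3.xTB):

mdadm /dev/md0 --add /dev/sde1             # the new disk joins as a spare
mdadm --grow /dev/md0 --raid-devices=4     # reshape the spare into the array
pvresize /dev/md0                          # then grow LVM on top of it...
resize2fs /dev/md0                         # ...or a plain filesystem directly on md0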

View 1 Replies View Related

Fedora X86/64bit :: Use Fedora / Linux Raid Program To Manage Raid Array?

Jun 24, 2009

I've tried to install Fedora 11, both 32 and 64, on my main machine. It could not install, as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.

I have an Intel board (see signature). I am running Intel RAID software under W7 currently. It works fine. But I'm wondering, when I attempt to install F11, is my current RAID set-up causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?

View 2 Replies View Related

General :: Debian Software RAID 1- Boot From Both Disk

Mar 15, 2011

I newly installed debian squeeze with software raid. The way I did was, as also given in this thread.

- I have 2 HDD with 500 GB each. For each of them, I created 3 partitions (/boot, / and swap)
- I selected the hard drive and created a new partition table
- I created a new partition that was 1GB, specified to use it as a physical volume for RAID, used it for /boot, and enabled the bootable flag.
- Created another partition of 480 GB, again specified to use it as a physical volume for RAID, and used it for /.
- Created another partition and used it for swap.

Then RAID configuration:
Through Configure RAID menu -> create MD device ->
(2 for the number of drives, 0 for spare devices)
Next select the partitions you want to be members of /dev/MD0. I selected /dev/sda1 and /dev/sdb1 (for /boot)
Next select the partitions you want to be members of /dev/MD1. I selected /dev/sda6 and /dev/sdb6 (for /)
And no RAID for swap partitions

'Finish Partitioning and write changes to disk' --> finish the rest of the install like normal. Everything is OK now, except I am not sure how to test my RAID config. When I pull the power on one of the HDDs, it only boots from one disk. I read in some forum that I may have to install GRUB manually on the other. In Debian Squeeze, there is no grub command. I'm not sure how to make my software RAID bootable from both disks. I configured the /boot partitions of both disks to be bootable. Not sure whether that is OK.
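The forum advice is right: the installer typically writes GRUB only to the first disk's MBR, and in Squeeze the old grub shell is gone because GRUB 2 replaced it. The equivalents (a hedged sketch):

grub-install /dev/sda
grub-install /dev/sdb      # the second disk's MBR as well
dpkg-reconfigure grub-pc   # or choose both disks interactively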

View 2 Replies View Related

Debian Configuration :: RAID - LVM - After Install ?

Aug 19, 2010

We've started using Debian based servers more and more in work and are getting the hang of it more and more every day. Right now I'm an ace at setting up partitions, software RAID and LVM volumes etc through the installer, but if I ever need to do the same thing once the system's up and running then I become unstuck.

Is there any way I can get to partman post-install, or any similar tools that do the same thing? Or failing that are there any simple guides to doing these things through the various command line tools?
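partman itself is an installer component, but everything it drives maps onto ordinary command-line tools; a hedged sampler of the post-install equivalents (device names are examples):

parted /dev/sdb                # or fdisk: partitioning
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # software RAID
pvcreate /dev/md0              # LVM: physical volume...
vgcreate vg0 /dev/md0          # ...volume group...
lvcreate -L 50G -n data vg0    # ...logical volume
mkfs.ext4 /dev/vg0/data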

View 4 Replies View Related

Debian Configuration :: Mixing Partitions In RAID 5

Mar 21, 2011

I have 2x 1.5TB hard disks and I'm going to buy a new 2TB drive soon. First though, I just wanted to check that I could partition off the first 1/4 to 1/3 of the 2TB drive (leaving 1.5TB or more free) and install Debian to that part, then use the remainder of the disk in combination with the 2x 1.5TB drives in RAID 5. I.e., can you mix whole drives with partitions from other drives in RAID 5, or is it best to just stick with complete drives for the RAID array? I only have room for 3 drives in the small mATX case that houses my NAS device, and I want to maximise storage capacity and minimise expense.

View 2 Replies View Related

Debian Installation :: Grub Rescue - Will Not Boot From Mdadm RAID - No Such Disk

Sep 19, 2014

I am running a 14 disk RAID 6 on mdadm behind 2 LSI SAS2008's in JBOD mode (no HW raid) on Debian 7 in BIOS legacy mode.

Grub2 is dropping to a rescue shell complaining that "no such device" exists for "mduuid/b1c40379914e5d18dddb893b4dc5a28f".

Output from mdadm:
Code:
    # mdadm -D /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Wed Nov  7 17:06:02 2012
         Raid Level : raid6
         Array Size : 35160446976 (33531.62 GiB 36004.30 GB)
      Used Dev Size : 2930037248 (2794.30 GiB 3000.36 GB)
       Raid Devices : 14

[Code] ....

Output from blkid:
Code:
    # blkid
    /dev/md0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
    /dev/md/0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
    /dev/sdd2: UUID="b1c40379-914e-5d18-dddb-893b4dc5a28f" UUID_SUB="09a00673-c9c1-dc15-b792-f0226016a8a6" LABEL="media:0" TYPE="linux_raid_member"

[Code] ....

The UUID for md0 is `2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` so I do not understand why grub insists on looking for `b1c40379914e5d18dddb893b4dc5a28f`.
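Those may be two different namespaces: 2c61b08d-... is the XFS filesystem's UUID, while grub's mduuid/b1c40379914e5d18dddb893b4dc5a28f is the array's own metadata UUID; note it equals the linux_raid_member UUID shown on sdd2 with the dashes removed. A hedged illustration of where each shows up:

blkid /dev/md0                        # 2c61b08d-... : the filesystem UUID
mdadm --detail /dev/md0 | grep UUID   # b1c40379:914e5d18:... : the array UUID grub wants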

Here is the output from `bootinfoscript` 0.61. This contains a lot of detailed information, and I couldn't find anything wrong with any of it: [URL] .....

During the grub rescue an `ls` shows the member disks and also shows `(md/0)`, but if I try an `ls (md/0)` I get an unknown disk error. Trying an `ls` on any member device results in unknown filesystem. The filesystem on md0 is XFS, and I assume the unknown filesystem is normal if it's trying to read an individual disk instead of md0.

I have come close to losing my mind over this. I've tried uninstalling and reinstalling grub numerous times, `update-initramfs -u -k all` numerous times, `update-grub` numerous times, `grub-install` numerous times to all member disks without error, etc.

I even tried manually editing `grub.cfg` to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `(md/0)` and then re-install grub, but the exact same error of no such device mduuid/b1c40379914e5d18dddb893b4dc5a28f still happened.

[URL] ....

One thing I noticed is that it is only showing half the disks. I am not sure if this matters, but one theory would be that it's because there are two LSI cards physically in the machine.

This last screenshot was shown after I specifically altered grub.cfg to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `mduuid/2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` and then re-ran grub-install on all member drives. Where it is getting this old b1c* address I have no clue.

I even tried installing a SATA drive on /dev/sda, outside of the array, and installing grub on it and booting from it. Still, same identical error.

View 14 Replies View Related

Debian Installation :: 8.0 - Need To Mount Raid Partition

Dec 21, 2015

Have a new Debian install on an ASUS H170M-Plus (I was going to use Ubuntu, but it didn't support the hardware/software combo I needed).

The install is fine, but during the install it didn't see my 1TB RAID1 drive.

After reboot, Debian boots great, and I can mount the RAID drive in the file manager.

I can see it, and in mtab it shows up:

"/dev/md126 /media/user/50666249-947c-4e8f-8f56-556b713a6b6a ext4 rw,nosuid,nodev,relatime,data=ordered 0 0"

How can I permanently add this mount point so it is found at boot-up at /data?
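The usual way is an /etc/fstab entry keyed by the filesystem UUID rather than the unstable /dev/md126 name; a hedged sketch reusing the UUID visible in the mtab line above:

blkid /dev/md126    # confirms UUID="50666249-947c-4e8f-8f56-556b713a6b6a"
mkdir -p /data
# then one line in /etc/fstab:
#   UUID=50666249-947c-4e8f-8f56-556b713a6b6a  /data  ext4  defaults  0  2
mount -a            # test without rebooting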

View 5 Replies View Related

General :: RAID At Root Partition In Debian ?

Feb 11, 2010

The RAID level 1 interested me because of its redundancy in both drives. And I successfully made it in a couple of partitions. But, I always did it after Linux installation. Then, I create both partitions, use 'mdadm' to create raidtab and RAID device (md0, for example) and then I format the RAID device with 'mkfs' and mount it.

Until there, it's all OK.

But my problem is mirroring ALL of the hard disk, including the root partition. To do that, I guess I need to start with no Linux installation, create the RAID (md0, raidtab, etc.) first, and after that install Linux on the RAID device created.

But I'm new in Linux world and I have no idea how to do that.

I use Debian Lenny, so I need a solution that uses only the first DVD of this distribution.
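For what it's worth, the Lenny installer's partitioner can build md devices (including root) before installing, which covers the no-existing-Linux case from the first DVD. An existing system can also be migrated without reinstalling by building the mirror degraded first; a hedged outline with illustrative device names:

# partition the second disk to match, system ID 0xFD, then:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # "missing" holds the live disk's slot
mkfs.ext3 /dev/md0
# copy the running root onto /dev/md0, point fstab and the bootloader at it,
# reboot, then absorb the original partition into the mirror:
mdadm /dev/md0 --add /dev/sda1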

View 10 Replies View Related

Debian Configuration :: RAID - Md0p1 Won't Automount At Boot

Mar 24, 2011

If you want, skip straight to the 'QUESTION' at the end of my post & refer to the 'EXPLANATION' later. EXPLANATION: Using Debian 6.01 Squeeze 64-bit. Just put together a brand new 3.3Ghz 6-core AMD. I had a nightmare with my Highpoint 640 raid controller, apparently because Debian Squeeze now handles raid through sysfs rather than /proc/scsi. The solution to this, of course, is to recompile the kernel with the appropriate module for /proc/scsi support. So I thought "screw that" and I've yanked out the raid card & went with Debians software raid. This allowed me to basically complete my mission. The raid is totally up and running, except for one final step... I can't get the raid to automount at boot.

My hardware setup:
- Debian is running totally on a 64Gb SSD. (sda)
- I have 3x 2Tb hard drives used for storage on a raid 1 array (sdc,sdd,sde)

[Code]....
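In case it's useful, the usual recipe has two halves: an ARRAY line so the array is assembled at boot, and an fstab line for the partition on it; a hedged sketch (the mount point is an example):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array definition
update-initramfs -u
mkdir -p /data
# then one line in /etc/fstab, e.g.:
#   /dev/md0p1  /data  ext4  defaults  0  2
mount -a    # test without rebooting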

View 2 Replies View Related