Debian Configuration :: Can't Correctly Install Grub On RAID Array
Nov 26, 2015
I'm having issues with a RAID array.
Setup is like this:
Debian Jessie, 2 hard disks, each having 2 partitions: /dev/sda1, /dev/sda2, /dev/sdb1, /dev/sdb2. Partitions were paired during installation, so they form /dev/md0 and /dev/md1. /dev/md0 is the root (/) partition, /dev/md1 is for /home.
At the end of the install process, I chose /dev/sda1 to carry GRUB, and I think this is where I screwed things up.
After removing one of the hard drives, the system would no longer boot. So, I installed GRUB on /dev/sdb, too.
Now it displays the boot menu but cannot find the kernel. This is where I got lost in the process.
Do I need to reinstall the OS or is there a way to fix it? I suppose I have to edit Grub.
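For reference, the usual fix for this kind of layout is to install GRUB to the MBR of both disks rather than to a single partition, so either drive can boot on its own. A minimal sketch, assuming you can boot (or chroot) into the installed Debian:
Code:
grub-install /dev/sda    # put GRUB's boot code on the first disk's MBR
grub-install /dev/sdb    # and on the second, so either disk can boot alone
update-grub              # regenerate grub.cfg so it finds the kernel on /dev/md0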
View 0 Replies
Aug 29, 2015
Just set up Debian 8 (LXDE) a few weeks ago. The RAID10 array was preexisting.
Was working well. After booting I would need to go to the save as then would need to enter the root password and everything would be good.
Can't access the array.
I used to use the command $ mount /dev/dm-0 /home/myspace/folder under Debian 7.6 to mount the array (it no longer works).
blkid lists a /dev/md0, but instead of a UUID it shows a PTUUID.
[Code] .....
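For comparison, here is a hedged sketch of how the array would normally be assembled and mounted by name under Jessie, assuming it assembles cleanly and the old mount point still applies:
Code:
mdadm --assemble --scan               # assemble any arrays described by their superblocks
cat /proc/mdstat                      # confirm which md device came up
mount /dev/md0 /home/myspace/folder   # mount it at the old location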
View 0 Replies
View Related
Mar 4, 2010
I'm trying to do some RAID management with mdadm. I would like to sync my spare disk and then remove it from the array to make a backup out of it with the dd command (the best way I can think of to get a current image of the whole system, as it can't be done using the active RAID as the source, because it is constantly in use and changing). So, I have a RAID1 array with 1 spare and 2 active disks (configuration listed below). Now I would like to force the spare to sync and then remove it from the array, although it is not faulty.
However, mdadm man page states:
"Devices can only be removed from an array if they are not in active use. i.e. that must be spares or failed devices. To remove an active device, it must be marked as faulty first."
So, I'd have to mark a disk as faulty (which it is not) to be able to remove it from the array. There seem to be several people reporting that they can't remove a faulty flag accidentally given to a drive, and mdadm does not provide a direct command for such an operation. Isn't there a way I could remove and add disks whenever I feel like it? One way would be to open the cover and physically remove the disk, but I'm not taking that risk. The system is almost always in use, so there is not much chance for me to power it off for a temporary disk removal.
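For what it's worth, mdadm will take a member back after a fail/remove cycle and simply resync it, so the faulty flag doesn't stick once the device is re-added. A rough sketch of that cycle, with the device name and backup path as placeholders:
Code:
mdadm /dev/md0 --fail /dev/sdc1             # mark the member faulty (required before removal)
mdadm /dev/md0 --remove /dev/sdc1           # detach it from the array
dd if=/dev/sdc1 of=/backup/md0.img bs=1M    # image it while it is out of the array
mdadm /dev/md0 --add /dev/sdc1              # re-add it; mdadm resyncs it back in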
RAID CONFIGURATION:
~# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri Aug 4 17:38:26 2006
Raid Level : raid1
Array Size : 238950720 (227.88 GiB 244.69 GB)
View 3 Replies
View Related
Aug 31, 2010
This is concerning Linux, mdadm, and creating RAID arrays in Debian. I've done a lot of reading and research on RAID both on this board and elsewhere (The Linux Documentation Project's Software-RAID HOWTO is especially good), but I've run across something that no one seems to explain, and I'm not sure why. I'm instructed to create partitions on the drives I wish to add to my array. These partitions inevitably take up the whole disk and always have their system IDs set to "Linux raid autodetect". What I don't understand is why, after creating these partitions, some guides then go on to create an array (say a RAID5 one) with just the disks themselves as members, while others create the RAID5 array with the previously created partitions as members. E.g.,
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
vs.
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
What's the advantage of using one over the other?
View 3 Replies
View Related
Jul 2, 2011
I'm new to Linux in general and am having issues setting up a RAID 1 array of two disks on an HP ProLiant MicroServer, which I want to be accessible from my Windows PC. I have installed the latest version of Debian successfully on a 250GB disk that came with the server. I have added two 2TB disks which I would like to have in a RAID 1 array and visible from Windows to store music/videos etc. on. I have managed to partition the two disks to FAT32 (which I think is best) and have managed to configure the array so that it shows as active when I use cat /proc/mdstat. I have been following the steps in this article [URl]... squeeze-p2 and trying to adapt it to my situation.
I am stuck on the step to create the file systems using the mkfs command. I try mkfs.vfat /dev/md0 and it comes up with the error mkfs.vfat: command not found. I have tried mkfs -t vfat /dev/md0 and it gives the error "mkfs.vfat: No such file or directory". So my question is: how can I continue with the process of setting up the array? Or maybe I should be asking, is it possible to set up an array with FAT32 formatted disks?
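On Debian, mkfs.vfat comes from the dosfstools package, which is not installed by default, so "command not found" usually just means the package is missing. A sketch (as root):
Code:
apt-get install dosfstools
mkfs.vfat /dev/md0
That said, if the array will be shared to Windows machines over the network via Samba rather than attached directly, an ext3/ext4 filesystem on md0 is generally a better fit than FAT32.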
View 2 Replies
View Related
Feb 3, 2011
I'm trying to switch to a new RAID5 array but can't get it to boot. My disks:
/dev/sda: new RAID member
/dev/sdb: Windows disk
/dev/sdc: new RAID member
/dev/sdd: old disk, currently using /dev/sdd3 as /
The RAID array is /dev/md0, which is comprised of /dev/sda1 and /dev/sdc1. I have copied the contents of /dev/sdd3 to /dev/md0, and can mount /dev/md0 and chroot into it. I did this:
Code:
sudo mount /dev/md0 /mnt/raid
sudo mount --bind /dev /mnt/raid/dev
sudo mount --bind /proc /mnt/raid/proc
[code]....
This completes with no errors, and /boot/grub/grub.cfg looks correct [EDIT: No it doesn't. It has root='(md/0)' instead of root='(md0)']. For example, here's the first entry:
Code:
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Ubuntu, with Linux 2.6.35-25-generic' --class ubuntu --class gnu-linux --class gnu --class os {
[code]....
However, when I try to boot from /dev/sda, I get:
Code:
error: file not found
grub rescue>
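In case it helps anyone comparing notes, a rough sketch of reinstalling GRUB from inside the chroot so the boot code on the RAID member disks matches the copied system (disk names are assumptions):
Code:
sudo mount --bind /sys /mnt/raid/sys
sudo chroot /mnt/raid
grub-install /dev/sda    # MBR of the first RAID member
grub-install /dev/sdc    # MBR of the second RAID member
update-grub
exit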
View 9 Replies
View Related
Jun 6, 2011
I have an ubuntu 10.04 machine that I use primarily as a file server. I have a RAID5 array built with mdadm from 3 component disks that worked properly until a recent upgrade (I'm not sure exactly what broke it though). The array is /dev/md0 and is set to mount at /var/media on bootup. *Now*, when the system cold boots it hangs partway through the bootup sequence and throws the following error:
"The disk drive for /var/media is not ready yet. Press S to skip ..." Once I "S"kip this manually, I can see that LOWER in the boot sequence mdadm gets called and assembles the drive, and once fully booted into the system I can then simply do a "mount -a" and the array mounts properly. SO... my gut feeling is that some portion of one of the upgrades changed the order in which things are called, and now the "mdadm assemble" is not triggered until AFTER the system tries to mount the drives. My problem is that I don't know the stuff that controls the boot sequence well enough to dig in the right place.
As a workaround I can remove that entry from /etc/fstab, but then (of course) the system won't auto-mount the array. It's better than the boot process completely hanging because at least THIS I can fix remotely, but I'd really like to know:
1) why this broke in an upgrade and is it a known problem?
2) how to get it back to where it auto-assembles and then auto-mounts the array on bootup.
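One workaround I've seen for this ordering problem on 10.04 (mountall running before mdadm has finished assembling) is to make the fstab entry non-blocking and to make sure the array is recorded in mdadm.conf so the initramfs assembles it early. A sketch, with the filesystem type as an assumption:
Code:
# /etc/fstab entry: "nobootwait" tells mountall not to hang the boot on this mount
/dev/md0  /var/media  ext4  defaults,nobootwait  0  2

# record the array so it gets assembled from the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u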
View 9 Replies
View Related
Aug 19, 2010
We've started using Debian based servers more and more in work and are getting the hang of it more and more every day. Right now I'm an ace at setting up partitions, software RAID and LVM volumes etc through the installer, but if I ever need to do the same thing once the system's up and running then I become unstuck.
Is there any way I can get to partman post-install, or any similar tools that do the same thing? Or failing that are there any simple guides to doing these things through the various command line tools?
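As far as I know partman only exists inside the installer, but everything it does maps onto ordinary command-line tools. A rough sketch of the equivalents (device and volume names are just examples):
Code:
parted /dev/sdb mklabel msdos                                             # partitioning (or fdisk/cfdisk)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1    # software RAID
pvcreate /dev/md0                                                         # LVM: physical volume
vgcreate vg0 /dev/md0                                                     #      volume group
lvcreate -L 50G -n data vg0                                               #      logical volume
mkfs.ext4 /dev/vg0/data                                                   # filesystem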
View 4 Replies
View Related
May 28, 2011
I've recently had trouble reinstalling my Ubuntu system as I was getting various unusual errors as described in my old thread here. I thought it was probably something to do with my RAID-0 array which was pre-installed on my laptop from purchase being corrupted or something like that (if it's possible). I decided to simplify things for myself (not understanding RAID arrays much) so I just removed the RAID array and installed Windows and Ubuntu on the now separate hard disks. It worked fine.
I noticed quite a significant performance drop, however, with even Ubuntu boots taking longer than 30 seconds despite my laptop being both high-spec and only a few months old. Windows, as you can imagine, was dreadfully slow. I wasn't entirely convinced that this was entirely due to the loss of the RAID array - as even low-spec laptops with presumably no RAID arrays are supposed to boot Ubuntu in under 30 seconds apparently - but I read that RAID-0 arra
View 8 Replies
View Related
Aug 4, 2010
I'm trying to install Debian Lenny on my new Dell XPS 8100 desktop with 2 x 1TB SATA HDs. (No Windows or any other OS install is present on the system.) The BIOS allows me to change the SATA mode to either "ATA" or "RAID".
- When SATA mode is set to RAID, the installation goes without issues, but when it comes to load into the system, I've got that Stage 1.5 Grub Loading... Error 2 problem. I assume this is due to the Bios "RAID" configuration. I then switched the SATA mode to "ATA" in the Bios and now I can see the menu that allows me to boot my debian install but that part actually fails too saying "ALERT /dev/sda1 does not exist"
- When SATA mode was set to ATA, I tried to re-install the system but this time my drive was not recognized by the installer: "No common CD ROM drive"
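One thing that may be worth checking: with the BIOS in RAID mode the Intel controller writes fakeraid metadata to the disks, and leftover metadata can confuse things either way. A hedged sketch using dmraid from the installer shell or a live CD:
Code:
dmraid -r             # list any BIOS/fakeraid metadata found on the disks
dmraid -rE /dev/sda   # erase that metadata (destructive; only if the BIOS RAID is not wanted)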
View 2 Replies
View Related
Dec 22, 2010
I installed Debian 5.0.3 (backport with the .34 kernel), because my server hardware (Dell PowerEdge R210) needs special firmware and drivers. However, the installation went quite smoothly. I put the system on a RAID 1 array with about 500 GB of space. As I said, the installation went well; however, it doesn't boot! No GRUB, nothing.
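Lenny uses GRUB legacy, and with /boot on RAID1 it often has to be installed to the MBR of both members by hand from a rescue shell. A sketch, with device names assumed:
Code:
grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit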
View 4 Replies
View Related
May 23, 2011
I need to set up a RAID 1 array on Squeeze. I have 3 partitions: sda1 is root, sda5 is home, and sda6 is swap. (sda2 is the extended partition containing home and swap. This was a clean installation, so I don't know what happened to sda3 and sda4...)
All the information that I've been able to find recommends doing something like this:
mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
Do I need to type a separate command for each partition, or is there a better way to do it? Also, should I use the UUID instead of the dev names?
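For what it's worth, it is one mdadm --create per filesystem. Since the data currently lives on sda, a common approach is to build each mirror degraded with the "missing" keyword, copy the data over, and add the original partitions afterwards. A sketch, assuming a second disk /dev/sdb partitioned the same way as /dev/sda:
Code:
mdadm --create /dev/md0 --level=mirror --raid-devices=2 missing /dev/sdb1   # future /
mdadm --create /dev/md1 --level=mirror --raid-devices=2 missing /dev/sdb5   # future /home
mdadm --create /dev/md2 --level=mirror --raid-devices=2 missing /dev/sdb6   # future swap
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the arrays by UUID rather than /dev names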
View 4 Replies
View Related
Sep 27, 2010
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750GB) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8 TB, as per the attached screenshot. Now with the following drives, I expected something more like:
160 + 250 + 250 + 750 + 250 + 200 + 200 + 250 + 320 + 250 + 320 = 3.2TB
Am I missing something or making a false assumption somewhere?
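That size behaviour looks like the tool built a striped/RAID set (capacity limited by the smallest member) rather than a plain concatenation. For a pure JBOD where the sizes simply add up, mdadm's linear mode does that; a sketch with placeholder device names:
Code:
mdadm --create /dev/md0 --level=linear --raid-devices=11 /dev/sd[b-l]   # concatenate the 11 disks
mkfs.ext4 /dev/md0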
View 4 Replies
View Related
Dec 2, 2010
Alright, I have this issue on both SystemRescueCD and Debian Squeeze. I have an ASUS P5Q Turbo board that supports hardware RAID. If I configure an array and then start the Linux installer or boot the rescue CD, I get /dev/sda and /dev/sdb instead of an array. What gives? I need to start installing within the hour so I am desperate for an answer!
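For what it's worth, the P5Q's onboard RAID is BIOS fakeraid, so Linux only sees an array if something assembles it; otherwise the members show up as plain /dev/sda and /dev/sdb. From a shell, dmraid can usually pick it up; a sketch:
Code:
dmraid -r    # list the fakeraid sets the BIOS defined
dmraid -ay   # activate them; they appear under /dev/mapper/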
View 1 Replies
View Related
Mar 16, 2011
I'm trying to reassemble a broken raid 5 array. The array has 3 drives. Here is some output from 'mdadm --examine': hutch:/home/sam # mdadm --examine /dev/sdb1 /dev/sdb1: - [URL]
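In case it's useful, the usual first steps for a broken three-drive RAID5 are to compare the members' superblocks and then try a forced assembly. A sketch, with the member names as assumptions:
Code:
mdadm --examine /dev/sd[bcd]1 | grep -iE 'event|state'            # compare event counts across members
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1   # force assembly from the freshest members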
View 1 Replies
View Related
Mar 27, 2010
I'm running Karmic Server with GRUB2 on a Dell XPS 420. Everything was running fine until I changed 2 BIOS settings in an attempt to make my Virtual Box guests run faster. I turned on SpeedStep and Virtualization, rebooted, and I was slapped in the face with a grub error 15. I can't, in my wildest dreams, imagine how these two settings could cause a problem for GRUB, but they have. To make matters worse, I've set my server up to use Luks encrypted LVMs on soft-RAID. From what I can gather, it seems my only hope is to reinstall GRUB. So, I've tried to follow the Live CD instructions outlined in the following article (adding the necessary steps to mount my RAID volumes and LVMs). [URL]
If I try mounting the root lvm as 'dev/vg-root' on /mnt and the boot partition as 'dev/md0' on /mnt/boot, when I try to run the command $sudo grub-install --root-directory=/mnt/ /dev/md0, I get these errors: grub-setup: warn: Attempting to install GRUB to a partition instead of the MBR. This is a BAD idea. grub-setup: error: Embedding is not possible, but this is required when the root device is on a RAID array or LVM volume.
Somewhere in my troubleshooting, I also tried mounting the root lvm as 'dev/mapper/vg-root'. This results in the grub-install error: $sudo grub-install --root-directory=/mnt/ /dev/md0 Invalid device 'dev/md0'
Obviously, neither case fixes the problem. I've been searching and troubleshooting for several hours this evening, and I must have my system operational by Monday morning. That means if I don't have a solution by pretty early tomorrow morning... I'm screwed. A full rebuild will be my only option.
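For the record, the first error is GRUB refusing to embed when pointed at a partition or md device on top of RAID/LVM; pointing grub-install at the underlying whole disks usually avoids it. A sketch under the same mounts, with the disk names as assumptions:
Code:
sudo mount /dev/mapper/vg-root /mnt
sudo mount /dev/md0 /mnt/boot
sudo grub-install --root-directory=/mnt /dev/sda   # install to the MBR of each disk carrying the RAID
sudo grub-install --root-directory=/mnt /dev/sdb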
View 4 Replies
View Related
Jun 11, 2011
I am currently running all my applications off an HD, as I was unable to install the GRUB bootloader on my OCZ PCI Express card (GRUB won't install on the PCI Express card as it is a RAID0 array). I would like to use the HD for backup only and run everything off the OCZ card - with the exception of booting (which is unfortunate, but I didn't manage to make the PCI Express card boot). How is it possible to tell SUSE during the installation to create /boot on the HD and the rest on the PCI Express card, and also to allocate the remainder of the HD as empty storage area?
View 1 Replies
View Related
Apr 4, 2010
I'm about to install Ubuntu on two 250-gigabyte hard drives in a RAID 1 array, but I'm confused about how to partition my hard drives. How much space should I give to each partition? How many partitions should I create and where should I mount them? (I should mention that Ubuntu will be the only OS on this array.)
View 3 Replies
View Related
Jun 24, 2009
I've tried to install Fedora 11, both 32 and 64 bit, on my main machine. It could not install as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am running Intel RAID software under W7 currently. It works fine. But I'm wondering, when I attempt to install F11, is my current RAID setup causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?
View 2 Replies
View Related
Mar 21, 2011
I have 2x 1.5TB hard disks and I'm going to buy a new 2TB drive soon. First though, I just wanted to check that I could partition off the first 1/4 to 1/3 of the 2TB drive (leaving 1.5TB or more free) and install Debian to that part, then use the remainder of the disk in combination with the 2x 1.5TB drives in RAID 5. I.e. can you mix whole drives with partitions from other drives in RAID 5, and/or is it best to just stick with complete drives for the RAID array? I only have room for 3 drives in the small mATX case that houses my NAS device and I want to maximise storage capacity and minimise expense.
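For what it's worth, mdadm doesn't care whether a member is a whole disk or a partition, so mixing the two 1.5TB drives with a partition on the 2TB drive works. A sketch, with device names assumed:
Code:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd2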
View 2 Replies
View Related
Oct 3, 2010
I was trying to get the Windows one working again. Here's what fdisk -l reads:
[Code]...
I'll change these or adjust the GRUB configuration, if anyone knows which settings would work.
View 1 Replies
View Related
Feb 20, 2016
How can I get a RAID-like virtual disk image?
What I need is to mount several directories from other partitions (or file systems) as a new merged file system that can grow or shrink depending on the free space, as if it were a dynamic RAID, so I can work with huge files distributed over the mounted partitions.
Example: /mnt/sda1/dir_raid1 + /home/dir_raid2 + /mnt/sda3/dir_raid3 ---> /mnt/RAID/
mhddfs and unionfs <---- are not the solution I'm searching for (they can't handle huge files)
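One hedged alternative, if dedicating whole partitions (rather than existing directories) is acceptable: LVM concatenation gives a single filesystem that spans the devices and can be grown later, and large files are no problem. A sketch with example device names; note it reformats the partitions it uses:
Code:
pvcreate /dev/sda1 /dev/sda3 /dev/sdb1
vgcreate mergevg /dev/sda1 /dev/sda3 /dev/sdb1
lvcreate -l 100%FREE -n merged mergevg
mkfs.ext4 /dev/mergevg/merged
mount /dev/mergevg/merged /mnt/RAID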
View 0 Replies
View Related
Mar 24, 2011
If you want, skip straight to the 'QUESTION' at the end of my post & refer to the 'EXPLANATION' later. EXPLANATION: Using Debian 6.01 Squeeze 64-bit. Just put together a brand new 3.3Ghz 6-core AMD. I had a nightmare with my Highpoint 640 raid controller, apparently because Debian Squeeze now handles raid through sysfs rather than /proc/scsi. The solution to this, of course, is to recompile the kernel with the appropriate module for /proc/scsi support. So I thought "screw that" and I've yanked out the raid card & went with Debians software raid. This allowed me to basically complete my mission. The raid is totally up and running, except for one final step... I can't get the raid to automount at boot.
My hardware setup;
- Debian is running totally on a 64Gb SSD. (sda)
- I have 3x 2Tb hard drives used for storage on a raid 1 array (sdc,sdd,sde)
[Code]....
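A sketch of the usual recipe for getting an md array assembled and mounted at boot on Squeeze (the mount point and filesystem type here are assumptions):
Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the array so it assembles at boot
update-initramfs -u
echo '/dev/md0  /srv/storage  ext4  defaults  0  2' >> /etc/fstab
mount -a                                         # test the fstab entry without rebooting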
View 2 Replies
View Related
Mar 13, 2010
I just expanded my RAID 5 array from 3*2TB to 4*2TB; mdadm made the grow successfully and shows an md0 device with 6TB of usable space. Now my problem is that Debian (Lenny) doesn't show the right amount. See below.
######### MDADM DETAILS OF ARRAY ##########
> mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Mon Dec 14 22:30:46 2009
Raid Level : raid5
[Code].....
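For reference, growing the md device doesn't grow the filesystem sitting on it, so df keeps reporting the old size until the filesystem is resized as well. A sketch, assuming ext3/ext4 sits directly on /dev/md0 (XFS would use xfs_growfs on the mount point instead):
Code:
resize2fs /dev/md0   # online resize for ext3/ext4
df -h                # should now show roughly 6TB usable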
View 3 Replies
View Related
Mar 29, 2010
I have my proxy running on Lenny and tried to upgrade to Squeeze. Originally the system was installed on Etch and upgrading to Lenny was no problem. In the system I have two RAID1 volumes, md0 for / and md1 for /home. For the upgrade I added the sources to my apt configuration and started dist-upgrade. During the installation procedure, when installing udev, I was advised to install the new kernel first and continue the upgrade after booting into it, so I installed the kernel with "apt-get install linux-image-2.6-686". When generating the initramfs there was a message that there are no arrays defined in /etc/mdadm/mdadm.conf. I took a look and there were none; mdadm seems to have been updated before.
I then added the lines for the RAID definitions and filled in the UUIDs, which I got from the output of "mdadm --detail /dev/md0".
What I don't understand: blkid gives the same UUIDs for the first partitions of the RAID, but a different UUID for /dev/md0 and /dev/md1 than mdadm --detail does. The update of the initramfs for kernel 2.6.32 then gives this result:
update-initramfs -u -k 2.6.32-3-686
update-initramfs: Generating /boot/initrd.img-2.6.32-3-686
W: Possible missing firmware /lib/firmware/e100/d102e_ucode.bin for module e100
W: Possible missing firmware /lib/firmware/e100/d101s_ucode.bin for module e100
[code].....
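On the UUID question: an ARRAY line in mdadm.conf wants the array UUID that "mdadm --detail" prints (the same value blkid reports on the member partitions as linux_raid_member), not the filesystem UUID that blkid shows for /dev/md0 or /dev/md1. A sketch with placeholder UUIDs, and the metadata version is an assumption:
Code:
# /etc/mdadm/mdadm.conf
ARRAY /dev/md0 metadata=0.90 UUID=a1b2c3d4:e5f60718:293a4b5c:6d7e8f90
ARRAY /dev/md1 metadata=0.90 UUID=0f1e2d3c:4b5a6978:8796a5b4:c3d2e1f0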
View 12 Replies
View Related
Aug 12, 2010
I've got an 8-disk raid-5 setup, and one of the disks failed. I shut the system down, replaced it, and powered the box back on again. Then, I made a catastrophic mistake; I 'failed' and removed the wrong disk (should have been sdj1, and I typed sdk1 by accident). I tried to re-add sdk1 back to the raid array, but it got listed as 'spare'. My raid array is off-line, since I now have 2 disks unavailable.
I know that the data still exists on sdk1, is there any way I can get the raid array to recognise the fact that it's a valid part of the array, and not a spare disk? At least if I can do that, I'll have a degraded but accessible array, and then I can rebuild the array on the properly replaced disk.
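For reference, the usual way out of this situation is a forced assembly, which will generally accept the mistakenly removed member back along with its data. A sketch; the device names below are guesses and need adjusting to the real member list:
Code:
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[c-i]1 /dev/sdk1   # every surviving original member, including sdk1
mdadm /dev/md0 --add /dev/sdj1                              # then add the replacement disk and let it rebuild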
View 7 Replies
View Related
Sep 19, 2014
I am running a 14 disk RAID 6 on mdadm behind 2 LSI SAS2008's in JBOD mode (no HW raid) on Debian 7 in BIOS legacy mode.
Grub2 is dropping to a rescue shell complaining that "no such device" exists for "mduuid/b1c40379914e5d18dddb893b4dc5a28f".
Output from mdadm:
Code:
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Nov 7 17:06:02 2012
     Raid Level : raid6
     Array Size : 35160446976 (33531.62 GiB 36004.30 GB)
  Used Dev Size : 2930037248 (2794.30 GiB 3000.36 GB)
   Raid Devices : 14
[Code] ....
Output from blkid:
Code:
# blkid
/dev/md0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
/dev/md/0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
/dev/sdd2: UUID="b1c40379-914e-5d18-dddb-893b4dc5a28f" UUID_SUB="09a00673-c9c1-dc15-b792-f0226016a8a6" LABEL="media:0" TYPE="linux_raid_member"
[Code] ....
The UUID for md0 is `2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` so I do not understand why grub insists on looking for `b1c40379914e5d18dddb893b4dc5a28f`.
**Here is the output from `bootinfoscript` 0.61. This contains a lot of detailed information, and I couldn't find anything wrong with any of it: [URL] .....
During the grub rescue an `ls` shows the member disks and also shows `(md/0)`, but if I try an `ls (md/0)` I get an unknown disk error. Trying an `ls` on any member device results in unknown filesystem. The filesystem on md0 is XFS, and I assume the unknown filesystem is normal if it's trying to read an individual disk instead of md0.
I have come close to losing my mind over this, I've tried uninstalling and reinstalling grub numerous times, `update-initramfs -u -k all` numerous times, `update-grub` numerous times, `grub-install` numerous times to all member disks without error, etc.
I even tried manually editing `grub.cfg` to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `(md/0)` and then re-install grub, but the exact same error of no such device mduuid/b1c40379914e5d18dddb893b4dc5a28f still happened.
[URL] ....
One thing I noticed is it is only showing half the disks. I am not sure if this matters or is important or not, but one theory would be because there are two LSI cards physically in the machine.
This last screenshot was shown after I specifically altered grub.cfg to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `mduuid/2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` and then re-ran grub-install on all member drives. Where it is getting this old b1c* address I have no clue.
I even tried installing a SATA drive on /dev/sda, outside of the array, and installing grub on it and booting from it. Still, same identical error.
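In case it's useful to anyone comparing notes: the mduuid/b1c40379... string is the md array UUID from the RAID superblock (the same value blkid shows on sdd2 as linux_raid_member), while 2c61b08d... is the XFS filesystem UUID, so GRUB searching for the former is expected. One more thing sometimes worth trying from the installed system is forcing the raid and filesystem modules into the core image; a hedged sketch, and the module list is an assumption, repeated for each member disk:
Code:
grub-install --modules="mdraid1x xfs part_gpt" /dev/sda
update-grub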
View 14 Replies
View Related
Oct 19, 2010
Consider the following setup: an Ubuntu system installed on a separate SSD for speed, and an Ubuntu software RAID array consisting of some number of physical HDDs for storage (RAID6 or RAID10). The RAID setup is done during system install. If I suffer a total crash of the SSD and lose my system, will I be able, using a new system disk, to "reconnect" to the RAID array even if the "mother system" of the software RAID is lost? If yes, are there any particular config or system files I need to back up to be able to rescue the array, or will it just be recognized "out of the box" when reinstalling Ubuntu?
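For what it's worth, the RAID metadata lives in superblocks on the member disks themselves, so losing the system SSD doesn't touch the array. After reinstalling, something along these lines should bring it back (a sketch):
Code:
mdadm --assemble --scan                          # find and assemble arrays from their superblocks
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record them for future boots
update-initramfs -u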
View 4 Replies
View Related
Jul 18, 2011
I have a RAID5 on 10 disks, 750GB each, and it has worked fine with GRUB for a long time under Ubuntu 10.04 LTS. A couple of days ago I added a disk to the RAID, grew it and then resized it. BUT, I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer!
So the resize process was cancelled in the middle and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it up, finished the resize process, and the RAID seems to be working fine, so I tried to boot normally again. Same problem. Rescue CD, updated grub, got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc, grub-common, removed /boot/grub and installed grub again. Same problem.
I have tried to erase the MBR (# dd if=/dev/null of=/dev/sdX bs=446 count=1) on sda (IDE disk, system) and sdb (SATA, new RAID disk). Same problem. Removed and reinstalled Ubuntu 11.04 and am now getting error: no such device: (hdd id). Again tried to reinstall grub on both sda and sdb, no luck. update-grub is still generating the error about RAID version 0.91 and it's back to a blinking cursor on a normal boot. When you're resizing a RAID, mdadm changes the metadata version from 0.90 to 0.91 to guard against exactly the kind of interruption that happened here. But since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch on [URL], but I can't compile it; I get various errors about dpkg. So my problem is, I can't get grub to work. It just gives me a blinking cursor and unsupported RAID version: 0.91.
View 2 Replies
View Related
Dec 4, 2010
I had a RAID1 array with two disks. I wanted to remove one of the disks and replace it with a new one. I first expanded the array from 2 to 3 components and let the RAID rebuild completely. When this was done I used the GNOME Disk Utility (palimpsest) to remove the component on the disk I wanted to get rid of. It looks like palimpsest messed up somewhere, because the array is now displaying as degraded. How can I get rid of the "removed" entry (it was /dev/sdc1) in the RAID array (and hopefully have it come back to a running state)?
#/sbin/mdadm --detail
Raid Level : raid1
Array Size : 732570841 (698.63 GiB 750.15 GB)
[code].....
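If anyone hits the same thing: a stale "removed" slot can usually be cleared by telling mdadm to drop detached/failed entries and then shrinking the array back to two devices. A sketch (exact behaviour depends a bit on the mdadm version):
Code:
mdadm /dev/md0 --remove detached           # drop the stale slot left by the removed component
mdadm --grow /dev/md0 --raid-devices=2     # shrink back from 3 devices to a clean 2-way mirror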
View 1 Replies
View Related