Server :: RAID 1 Partition Devices Disappear After Upgrade
Mar 3, 2011
I found a workaround of sorts. It looks like this is related to a 9.04 bug [URL] and the loopback workaround brings back the array. It is not clear how I will handle this long term.
Note: before using this technique, I used gparted to tag the partitions as "raid". They disappeared again on reboot, so I had to do it again. I am not sure how this is going to work out long-term.
Note: I suspect some of this is related to the embedded "HOMEHOST" that is written into the RAID metadata on the partitions. The server was misnamed when first built and the name was changed later (cerebus -> cerberus), and the old name has surfaced in the name of a phantom device reported by gparted - /dev/mapper/jmicron_cerebus_root
I have a mythbuntu 9.10 system that I have upgraded from 8.10 to 9.04 to 9.10 in the last 2 days. I am on my way to 10.x, but need to make sure it works after every step.
The basic problem is that, in its current incarnation, the system is not recognizing the underlying partitions for one of the RAID devices, and is therefore not happy.
As an 8.10 system I had 2 RAID devices:
/dev/md16 -> /dev/sda5 and /dev/sdb5
/dev/md21 -> /dev/sdc1 and /dev/sdd1
/etc/fstab looked like this (in part):
/dev/md16 /var/lib xfs defaults 0 2
[Code]....
I don't *really* want to repartition the drive as there is a small amount of data loss between recent backups and what is on the drive, plus it would take me 2 days to move the data back.
View 2 Replies
Dec 7, 2010
I'm working on a server and noticed that the RAID5 setup is showing 4 Raid Devices but only 3 Total Devices. It's on a fully updated CentOS 5 system that only has three SATA drives, as it cannot hold any more. I've done some research but am unable to remove the fourth device, which is listed as removed. The full output of `mdadm -D /dev/md2` can be seen below. I've never run into this situation before. Does anyone have any pointers on how I can reduce the Raid Devices count from 4 to 3? I have tried
mdadm /dev/md2 -r failed
mdadm /dev/md2 -r detached
but neither works, and since there is no block device listed I'm not quite sure how to get things back in sync so that the array only expects the three drives.
/dev/md2:
Version : 0.90
Creation Time : Tue May 25 11:07:04 2010
Raid Level : raid5
[code]....
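A hedged sketch of one possible direction (I am not certain this is even available on CentOS 5's stock kernel, and going from 4 to 3 RAID devices shrinks the array, so the filesystem would have to be shrunk to fit first):
Code:
# Check the current state - something like [4/3] / [UUU_] would confirm degraded
cat /proc/mdstat
# On a newer mdadm/kernel the device count can be reduced, but only after
# shrinking the filesystem and then the array size to what 3 disks can hold:
#   mdadm --grow /dev/md2 --array-size=<size that fits on 3 disks>
#   mdadm --grow /dev/md2 --raid-devices=3 --backup-file=/root/md2.bak
# CentOS 5's 2.6.18 kernel predates raid5 shrink support, so this may simply
# not work without a newer kernel, in which case backup and re-create is left.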
View 8 Replies
View Related
Mar 24, 2011
It's been a while since I configured a RAID array, and I have been making some changes to my main workstation/server.
fdisk does not like md devices on my machine... always says it has an invalid partition table. While this is said to be normal all over the net, I don't feel warm and fuzzy about that fact. What is best practice these days, to create a non-partitionable md device or a partitionable mdp device?
If I create a partitionable md device, I would imagine it would look good in fdisk. However, I am concerned about growing the array afterward. I would then have to grow the array, redefine the partition, and then grow the file system. The PITA factor goes up. Has anyone worked with both? Pro/Cons? My array was created with:
mdadm --create --verbose /dev/md0 --level=5 --force --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
FYI: I have backups. I understand RAID 1 may be a better choice of raid level.
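For what it's worth, here is a rough sketch of the grow path on a plain (non-partitioned) md device, with made-up device names and assuming ext3/ext4 or XFS on top - not a definitive procedure:
Code:
# Add the new member, then reshape from 3 to 4 raid devices
mdadm --add /dev/md0 /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-grow.bak
# After the reshape finishes, grow the filesystem to fill the array
resize2fs /dev/md0          # ext3/ext4
# xfs_growfs /mount/point   # if the array holds XFS instead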
View 3 Replies
View Related
May 27, 2011
I installed Debian stable and I see these errors in the xsession error file
/etc/gdm3/Xsession: Beginning session setup...
GNOME_KEYRING_CONTROL=/tmp/keyring-j0E6Br
SSH_AUTH_SOCK=/tmp/keyring-j0E6Br/ssh
GNOME_KEYRING_CONTROL=/tmp/keyring-j0E6Br
[code]....
View 4 Replies
View Related
Aug 26, 2009
I am trying to partition my server before installing CentOS and cPanel. I have hardware RAID 1 and RAID 10. On RAID 1 I would have:
/
/boot
On RAID 10 I would have
/tmp
/usr
/var
/var/log
2 GB swap
/home
Is this a correct partitioning setup? What should I have on RAID 10?
View 1 Replies
View Related
Sep 1, 2011
Quote:
EFI GUID Partition support works on both 32bit and 64bit platforms. You must include GPT support in kernel in order to use GPT. If you don't include GPT support in Linux kernel, after rebooting the server, the file system will no longer be mountable or the GPT table will get corrupted. By default Redhat Enterprise Linux / CentOS comes with GPT kernel support. However, if you are using Debian or Ubuntu Linux, you need to recompile the kernel. Set CONFIG_EFI_PARTITION to y to compile this feature.
Is this true? Ubuntu has no GPT support native to the server install? I've never compiled a kernel. Is that of itself going to be a mind bender? How much doo doo am I going to get into if I haven't done it a few times? Trust me when I tell you it's no thrill formatting and reformatting an 8 TB RAID drive or the individual disks if it screws up. I've been on that ride way too long already. I need things to go smoothly. This is not a play toy server, but will be used in a business.
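For what it's worth, the stock Ubuntu server kernels I have looked at do ship with GPT support enabled, so no recompile should be needed - and it can be checked directly on the running system rather than taken on faith:
Code:
# Check whether the running kernel was built with GPT support
grep CONFIG_EFI_PARTITION /boot/config-$(uname -r)
# Expected output on a stock Ubuntu server kernel (as far as I know):
# CONFIG_EFI_PARTITION=y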
View 2 Replies
View Related
Jul 13, 2010
I am trying to install Ubuntu Server 10.04 on a home server I am building. I have three 1 TB drives set up in RAID 5 via my mobo (ASUS M2NPV-VM). The installation detects that I have a RAID array, but when it gets to partitioning, all it shows me is the USB stick I am installing from.
View 1 Replies
View Related
Nov 3, 2010
I'm trying to resize an NTFS partition on an IBM MT7977 Server. It has a Adaptec AIC-9580W RAID controller. I was thinking about doing it with a gparted LiveCD/LiveUSB, but then I realised that they won't have drivers for the RAID controller. A quick google for "9580W Linux" doesn't return anything promising.
View 3 Replies
View Related
Jun 7, 2011
I recently upgraded a server from Fedora 6 to Fedora 14. In addition to the main hard drive where the OS is installed, I have three 1 TB hard drives configured for RAID5 (via software). After the upgrade, I noticed one of the hard drives had been removed from the raid array. I tried to add it back with mdadm --add, but it just put it in as a spare. I figured I'd get back to it later. Then, when performing a reboot, the system could not mount the raid array at all. I removed it from the fstab so I could boot the system, and now I'm trying to get the raid array back up.
I ran the following:
mdadm --create /dev/md0 --assume-clean --level=5 --chunk=64 --raid-devices=3 missing /dev/sdc1 /dev/sdd1
I know my chunk size is 64k, and "missing" is for the drive that got kicked out of the array (/dev/sdb1). That seemed to work, and mdadm reports that the array is running "clean, degraded" with the missing drive. However, I can't mount the raid array. When I try:
mount -t ext3 /dev/md0 /mnt/foo
I get:
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
[code]....
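One thing worth checking - and this is just a guess: the array was originally built under Fedora 6, which defaulted to 0.90 metadata, while a re-create under Fedora 14 likely used 1.x metadata with a different data offset, which would leave the filesystem unfindable even though the array itself looks clean. A read-only way to check, plus the re-create I would only attempt once the mismatch is confirmed:
Code:
# What metadata version is the re-created array using?
mdadm --detail /dev/md0 | grep -i version
# What do the members themselves claim?
mdadm --examine /dev/sdc1
# Read-only filesystem check - does ext3 find a superblock at all?
fsck.ext3 -n /dev/md0
# If the metadata versions differ, re-creating with the original parameters
# (still --assume-clean, nothing is resynced) may bring the filesystem back:
#   mdadm --stop /dev/md0
#   mdadm --create /dev/md0 --assume-clean --metadata=0.90 --level=5 \
#         --chunk=64 --raid-devices=3 missing /dev/sdc1 /dev/sdd1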
View 1 Replies
View Related
Mar 28, 2010
I want to upgrade from another distro to Ubuntu Server for a few reasons. The only problem is I have a lot of data that needs to survive. Here is how my computer is set up. I have 5 drives in the computer:
A - 10 GB drive for OS and swap only, no data
B, C, D, E - 4x 500 GB drives in an LVM volume group. They make up one large XFS volume which holds about 1.2 TB of data. There is nothing fancy on it: no encryption and no software RAID. Of course the little 10 GB drive can be reformatted, no problem, but the LVM volume needs to be migrated over intact.
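The LVM metadata lives on the four PVs themselves, so as long as the installer is told to leave those drives alone and only reformat the 10 GB disk, the volume group should be reactivatable afterwards. A hedged sketch (the volume group and LV names here are just examples):
Code:
# After installing Ubuntu Server onto the small disk only:
sudo apt-get install lvm2 xfsprogs
sudo vgscan               # should find the existing volume group
sudo vgchange -ay         # activate it
sudo mkdir -p /srv/data
sudo mount -t xfs /dev/mapper/bigvg-bigvol /srv/data   # hypothetical VG/LV names
# Once it mounts cleanly, add a matching line to /etc/fstab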
View 5 Replies
View Related
Mar 30, 2010
I have created a software RAID 5 configuration on the second hard drive. It works fine, and I have edited the fstab file so it mounts automatically at boot, but when I reboot the computer the RAID doesn't come up; I have to re-create the array by typing the "mdadm --create" command again and mount it again manually. Is there anywhere I can set this up once without retyping the commands after every reboot? I am also using Red Hat 5.
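In my experience the usual cause is a missing ARRAY line in /etc/mdadm.conf, so the array is never assembled at boot - and re-running --create each time also rewrites the superblocks, which is risky. A hedged sketch for RHEL/CentOS 5:
Code:
# Assemble (not create) the existing array once
mdadm --assemble --scan
# Record it so it is assembled automatically at boot
mdadm --detail --scan >> /etc/mdadm.conf
# fstab can then mount the md device as usual on the next reboot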
View 1 Replies
View Related
Apr 13, 2010
I installed Fedora 12 and performed the normal updates. Now I can't reboot and get the following console error message.
ERROR: via: wrong # of devices in RAID set "via_cbcff jdief" [1/2] on /dev/sda
ERROR: removing inconsistent RAID set "via_cbcff jdief"
ERROR: no RAID set found
No root device found
Boot has failed, sleeping forever.
View 14 Replies
View Related
May 23, 2010
I've got a Gentoo box that I'm interested in switching over to an Ubuntu box.
I currently have the partitions laid out using a mixture of RAID (mdadm) and LVM2, as specified in this document [1].
Ideally I'd like to wipe everything except the /home partition, as that is where the data I'd like to keep lives.
Is it possible to reuse the current setup, or do I need to start over? vgdisplay, vgchange -a y, etc. don't yield any results from the Ubuntu LiveCD, and I'm wary of running any commands that might wipe my data.
[1] [url]
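One hedged guess about why vgdisplay comes back empty: the Ubuntu desktop LiveCD does not ship mdadm, so the md arrays (and therefore the LVM PVs sitting on top of them) never appear in the live session. What I would try from the LiveCD, nothing destructive:
Code:
sudo apt-get install mdadm lvm2   # in the live session
sudo mdadm --assemble --scan      # bring up the existing md arrays
sudo vgscan                       # LVM should now see its PVs
sudo vgchange -ay
sudo lvs                          # list logical volumes, including /home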
View 1 Replies
View Related
Aug 31, 2010
I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced with, to make a raid5 array using three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are being assembled at boot, but I can add the missing drive back without any problems later; it's just that it then takes hours to sync. Here is some information:
[Code]....
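A hedged guess: the mdadm.conf baked into the initramfs does not match the current array, so only part of it is assembled early in boot. A sketch of what I would check (config and initramfs paths vary by distro, and the md device name here is an assumption):
Code:
# Compare reality against the config file
mdadm --detail --scan                      # compare with /etc/mdadm.conf or /etc/mdadm/mdadm.conf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # Debian/Ubuntu location
# Rebuild the initramfs so early boot sees the updated config
update-initramfs -u                        # Debian/Ubuntu
# dracut -f                                # Fedora-style systems instead
# A write-intent bitmap makes future re-adds near-instant instead of hours-long
mdadm --grow /dev/md0 --bitmap=internal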
View 11 Replies
View Related
Apr 17, 2010
The intention is to have this system dual-boot. When I first put it together, I decided to set up a raid5 array spanning 3 SATA drives. I installed Windows 7 first and decided I'd get to Linux later. I left 150 MB or so at the beginning of the array for /boot, and about 200 GB at the end for my Linux install. Now I'm getting to the Linux install. My distro of choice is Fedora 12. I start the setup, and at the point where it's time to partition, the installer tells me that it is unable to find any suitable storage devices.
I Ctrl-Alt-F2 to a console and run fdisk -l. Fdisk reports three individual drives which all have partitions already. All have free space. None of it makes sense. So I turned to google and found some threads which explain that this chip doesn't run a true RAID; rather, it's what's been referred to as fake raid, meaning it depends on the Windows driver in order to actually present the array to the OS, and the best way to get by that on Linux is to break the array and use LVM instead.
That's all well and good, but I lose two things in doing that. First I lose the resiliency of raid 5, and second, well, what does that do to my Windows install? I've considered moving all of my data from Windows to other machines and then just starting from scratch, but I'd really much prefer a method of using the chip's fake raid in Linux. Is there a driver or module which I can install to make this happen?
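As far as I know, the piece being asked about is dmraid (device-mapper RAID), which reads the BIOS fake-raid metadata and exposes the array to Linux without the Windows driver; newer installers can also handle some of these formats via mdadm. A hedged sketch of how to see whether it recognises this particular chipset, from a live or rescue shell:
Code:
# List any BIOS/fake-raid metadata found on the disks
dmraid -r
# Activate the set(s); the array then appears under /dev/mapper/
dmraid -ay
ls /dev/mapper/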
View 3 Replies
View Related
Mar 22, 2011
How long does a hardware RAID card (RAID 1, 2 drives) take to mirror a 1 TB drive (500 GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
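For a rough back-of-the-envelope figure (assuming the controller copies the whole 1 TB regardless of how much is used, at a sustained 80-100 MB/s): 1,000,000 MB / 100 MB/s = 10,000 seconds, i.e. roughly 3 hours; at 80 MB/s it is closer to 3.5 hours. Controllers that throttle the rebuild to keep the array responsive can take several times longer, so anywhere from about 3 to 12+ hours would not be surprising.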
View 1 Replies
View Related
Aug 3, 2010
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The raid box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external raid box is recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the raid box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. That succeeded and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising - one logical RAID0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a raid box with RAID configured on the box, running it all under CentOS, and what were the results?
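A few read-only checks I would run to see where the chain breaks - whether the kernel driver is bound to the controller at all, and whether a logical drive is actually exported to the OS (these controllers normally only present configured logical drives to the host, not raw attached boxes):
Code:
# Is the megaraid_sas driver loaded and talking to the card?
lsmod | grep megaraid
dmesg | grep -i -e megaraid -e lsi
# What SCSI devices does the kernel actually see?
cat /proc/scsi/scsi
# Re-scan the SCSI host in case the logical drive appeared after boot
# (host0 is only an example - check /sys/class/scsi_host/ for the right one)
echo "- - -" > /sys/class/scsi_host/host0/scan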
View 3 Replies
View Related
Oct 16, 2010
I'm setting up a raid 5 across several hard disks with a layer of LVM on top for good measure. I know recent kernels support growing software raid, but since CentOS runs 2.6.18, I wanted to make sure it'll work. Does the CentOS kernel support growing raid devices?
View 1 Replies
View Related
Jul 2, 2011
While adding a kernel parameter to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nodmraid" (not 100% sure it should go there - grub2 is new to me, but that is another story), I noticed the following error when running update-grub.
ERROR: ddf1: wrong # of devices in RAID set "ddf1_Series1" [1/2] on /dev/sdb No volume groups found
Now I haven't a clue where it is getting ddf1_Series1 from. sdf1 is part of a RAID1 group that has mdadm RAID1 > luks > LVM:
md6 : active raid1 sdi1[1] sdf1[0]
1465135936 blocks [2/2] [UU]
bitmap: 0/175 pages [0KB], 4096KB chunk
Errors bug me. As I am new to grub2, I was wondering if anyone has an insight into the error / where to investigate. I am still reading the grub2 / grub manuals.
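My reading - and it is only a guess - is that /dev/sdb still carries leftover DDF fake-raid metadata from a previous controller or install, and grub's dmraid probing trips over it; the mdadm/LUKS/LVM stack does not use it at all. A sketch of how to confirm and, only if it really is stale, remove it:
Code:
# Show any fake-raid metadata dmraid can find
dmraid -r
# If the ddf1 record on /dev/sdb is confirmed stale, it can be erased.
# This destroys that metadata, so double-check the device first:
dmraid -rE /dev/sdb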
View 3 Replies
View Related
Mar 27, 2010
I'm running Karmic Server with GRUB2 on a Dell XPS 420. Everything was running fine until I changed 2 BIOS settings in an attempt to make my Virtual Box guests run faster. I turned on SpeedStep and Virtualization, rebooted, and I was slapped in the face with a grub error 15. I can't, in my wildest dreams, imagine how these two settings could cause a problem for GRUB, but they have. To make matters worse, I've set my server up to use Luks encrypted LVMs on soft-RAID. From what I can gather, it seems my only hope is to reinstall GRUB. So, I've tried to follow the Live CD instructions outlined in the following article (adding the necessary steps to mount my RAID volumes and LVMs). [URL]
If I try mounting the root lvm as 'dev/vg-root' on /mnt and the boot partition as 'dev/md0' on /mnt/boot, when I try to run the command $sudo grub-install --root-directory=/mnt/ /dev/md0, I get errors: grub-setup: warn: Attempting to install GRUB to a partition instead of the MBR. This is a BAD idea. grub-setup: error: Embedding is not possible, but this is required when the root device is on a RAID array or LVM volume.
Somewhere in my troubleshooting, I also tried mounting the root lvm as 'dev/mapper/vg-root'. This results in the grub-install error: $sudo grub-install --root-directory=/mnt/ /dev/md0 Invalid device 'dev/md0'
Obviously, neither case fixes the problem. I've been searching and troubleshooting for several hours this evening, and I must have my system operational by Monday morning. That means if I don't have a solution by pretty early tomorrow morning...I'm screwed. A full rebuild will by my only option.
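For what it's worth, the error itself is a clue: grub2 cannot embed its core image when told to install to /dev/md0 (a RAID member partition); with root on RAID/LVM it wants to go in the MBR of each underlying disk instead. A hedged sketch of the chroot route from the live CD - device names are partly guesses, and the LUKS/LVM root is assumed to be unlocked and mounted at /mnt with /dev/md0 on /mnt/boot:
Code:
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt
# Inside the chroot, install grub to the MBR of each RAID member disk
grub-install /dev/sda
grub-install /dev/sdb    # second member, if the mirror spans two disks
update-grub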
View 4 Replies
View Related
Jan 2, 2011
My USB devices are being detected when I run the 'lsusb' command. However, there is no driver bound to the device. Also, the /dev/sdX device is created, so there is a mount point.
View 1 Replies
View Related
May 11, 2015
There seems to be no documentation on how to automount partitions and USB devices under systemd in Jessie. (Overall, systemd entirely lacks any useful documentation or GUI configuration tools -- all very cryptic and hidden.)
I created custom files to enable automounting. I put them in /etc/systemd/system -- this may not be the right place, but it works.
Kernel note:
This does not work under the old Wheezy kernel linux-image-3.2.0-4.
To automount my Windows partition so I can access its files, I created:
/etc/systemd/system/media-windows.mount
The name of the file must match the mount point -- in this case, /media/windows
My file notes the device and file type, plus an fmask option so all the Windows files don't seem to be executable:
[Unit]
Description = windows mount to /media/windows
[Mount]
What=/dev/sda1
Where=/media/windows
Type=ntfs-3g
Options=fmask=111
[Install]
WantedBy=multi-user.target
The file ownership must be root.root. Apparently it doesn't need to be executable.
After creating, enable with:
sudo systemctl enable media-windows.mount
and it will mount on the next boot.
I read elsewhere that before running the enable command you should run a start command:
sudo systemctl start media-windows.mount
but that didn't work for me.
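One possible explanation for the start command failing - just a guess: systemd has to be told to re-read unit files created by hand before it can start them. What works for me is:
Code:
sudo systemctl daemon-reload
sudo systemctl start media-windows.mount
systemctl status media-windows.mount   # check it actually mounted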
View 2 Replies
View Related
Jul 21, 2011
I am looking to convert a RAID 1 server I have to RAID 10. It is using software raid; currently I have 3 drives in RAID 1. Is it possible to boot into CentOS rescue, stop the RAID 1 array, then create the RAID 10 with 4 drives, 3 of which still have the RAID 1 metadata? Will mdadm be able to figure it out and resync properly, keeping my data? Or is there a better way to do it?
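As far as I know mdadm cannot reshape RAID 1 into RAID 10 in place on CentOS-era kernels, and it will not preserve data if the RAID 10 is simply created over the old members. The route I have seen used is to build the RAID 10 degraded from the new disk plus one disk pulled out of the mirror, copy the data, then add the remaining disks. A very rough sketch - device names are hypothetical and this is the kind of operation to rehearse against backups:
Code:
# Pull one member out of the 3-way mirror (the mirror still has 2 copies)
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
# Build a 4-device RAID 10 with two slots deliberately missing
# (near-2 layout pairs slots 0/1 and 2/3, so one member per pair remains)
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
      /dev/sdc1 missing /dev/sdd1 missing
# mkfs, mount, copy the data from md0 to md1, fix fstab/bootloader,
# then retire md0 and add its disks to fill the missing slots:
#   mdadm /dev/md1 --add /dev/sda1
#   mdadm /dev/md1 --add /dev/sdb1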
View 2 Replies
View Related
Mar 24, 2011
I have a box that doesn't currently have a RAID controller or software raid running. I would like to make it a RAID 1. Since IDE RAID controllers are hardly around any more, I have another HD that is the exact model as the drive currently in the box running CentOS. Can I somehow add the second drive and get the box to mirror from here on out? The box gets really hot and I want to be ready for a HD failure.
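The usual trick, as I understand it, is to build a one-legged RAID 1 on the new disk, copy the system onto it, boot from the degraded array, and only then add the original disk - at which point its contents are overwritten by the sync. For a running root filesystem this also means updating fstab, mdadm.conf and the bootloader, so treat this as a hedged outline (device names are guesses) rather than a recipe:
Code:
# Partition the new disk (sdb) to match the old one, then:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
# Copy the live system across (one of several ways to do it)
rsync -ax / /mnt/
# After updating /mnt/etc/fstab, grub and /etc/mdadm.conf and booting
# from the degraded array, add the original disk to complete the mirror:
#   mdadm /dev/md0 --add /dev/sda1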
View 3 Replies
View Related
Feb 1, 2011
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow, this went terribly wrong and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.Degraded and can't create RAID ,auto stop RAID [md1]
I was able to examine the disks though:
Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
code....
Code:
mdadm --create --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
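Before any --create --assume-clean (which rewrites the superblocks and has to get the device order, chunk size and metadata version exactly right), it is usually worth trying a forced assemble and comparing the event counters - the two "spare" members may still be usable as they are. A hedged sketch:
Code:
# Compare event counts, update times and raid roles across all members
mdadm --examine /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 | \
    egrep 'Event|Update Time|this|/dev/sd'
# Try to force-assemble from the members whose event counts are closest
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
# Only if that fails would I consider --create --assume-clean, and then
# only with the original device order and chunk size taken from --examine.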
View 4 Replies
View Related
Dec 2, 2010
I'm no expert on DHCP. My problem is that I have a CentOS 5.5 server on which I want to install a DHCP server. I have two NICs, where eth0 has access to the internet and eth1 should serve DHCP.
I have installed dhcpd, and this is what my dhcpd.conf file looks like.
Code:
ifconfig looks like this
Code:
When I start dhcpd on eth1 I get no error messages, but when I connect any devices to eth1 they don't get an IP. I can't find anything in any logs about devices trying to get an IP address. I don't have any firewall rules in iptables.
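Without seeing the actual dhcpd.conf it is hard to say, but two things commonly bite here: dhcpd needs a subnet declaration that matches eth1's address, and on CentOS it has to be told to listen on eth1. A minimal hedged example, assuming eth1 is 192.168.10.1/24 (adjust to the real addressing):
Code:
# /etc/dhcpd.conf (CentOS 5 path)
ddns-update-style none;
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.100 192.168.10.200;
    option routers 192.168.10.1;
    option domain-name-servers 192.168.10.1;
}

# /etc/sysconfig/dhcpd - bind dhcpd to the internal NIC only
DHCPDARGS=eth1
Then "service dhcpd restart" and watch /var/log/messages for DHCPDISCOVER/DHCPOFFER lines when a client connects.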
View 4 Replies
View Related
Feb 11, 2010
I have two hard drives inside an Ubuntu 9.10 machine, each with a data partition on it. What I want is that as I save files to one data partition (or delete them), the files are copied/deleted automatically on the other data partition. I do not want to start using RAID or end-of-day incremental backups.
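Not instantaneous, but the low-tech approach I would reach for is an rsync mirror run from cron (or triggered by incron/inotify if it really has to be immediate). A hedged sketch with made-up mount points - --delete propagates removals too, so test with --dry-run first:
Code:
# /etc/cron.d/mirror-data  (runs every 15 minutes as root)
*/15 * * * * root rsync -a --delete /media/data1/ /media/data2/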
View 5 Replies
View Related
Sep 6, 2010
How do I recreate an LVM raid 1 partition without destroying the data on the discs? I have a 650 GB data partition which is a raid 1 array with ext3. Two days ago, the system (Ubuntu 9.04) started to refuse to write to it, claiming no space left on device - even though there is ca. 102 GB free and the disk is only 85% full (according to df)! Interestingly, removing a couple of GB did not help; after a reboot the disk was again full.
I did the "tune2fs -m 0" trick and then forced a file check on the next reboot with "sudo touch /forcefsck"... and the result is that the raid partition is gone. I have the two physical drives /dev/sd*, unmounted, but /dev/md1 is no longer there. What can I do to re-create it without losing the data? I realized that I ran tune2fs on the physical partitions /dev/sd* - was I supposed to run it on /dev/md1?
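A few hedged things to try before anything destructive - tune2fs -m 0 only changes the reserved-block percentage, so the data is most likely still there and the array may just need reassembling (the member names below are guesses):
Code:
# Do the member partitions still carry md superblocks?
sudo mdadm --examine /dev/sda1 /dev/sdb1
# If so, reassemble the mirror without touching the data
sudo mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1
# or simply:
sudo mdadm --assemble --scan
# Read-only check before mounting
sudo fsck.ext3 -n /dev/md1
# Also: "no space left" with free blocks often means the inodes ran out
df -i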
View 1 Replies
View Related
Mar 1, 2010
I have just installed a new version of fedora 12 64bit and have setup the following drive configuration -
Code: [root@fedora ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdc1[0] sdd1[1]
312568576 blocks [2/2] [UU]
[Code]....
but for some reason I am getting dm-1 and dm-3.
How can I change the array so it contains the normal /dev/sda and /dev/sdb disks and partitions?
Why is the system telling me my drives are dm devices? I have no bios raid features on the motherboard.
How do I use md-1 and md-3 partitions?
How do I go back to the traditional /dev/sda and /dev/sdb devices?
By the way the system tells me that /dev/sda1 and /dev/sdb1 do not exist.
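A hedged guess: the installer found leftover BIOS-RAID (dmraid) metadata on sda/sdb and mapped them through device-mapper, which is why they show up as dm-N and why the plain /dev/sda1 and /dev/sdb1 nodes seem to be missing, even though the motherboard has no RAID enabled. Some ways to confirm:
Code:
# What device-mapper mappings exist, and what do they sit on top of?
dmsetup ls --tree
# Does dmraid think there is fake-raid metadata on the disks?
dmraid -r
# If the metadata is confirmed stale, booting with the "nodmraid" kernel
# option (or erasing it with: dmraid -rE <disk>) should bring back plain
# /dev/sda1 and /dev/sdb1 for mdadm to use.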
View 1 Replies
View Related