Ubuntu Installation :: Migrate Working Single Disk System To Existing RAID Array Using Disk UUIDs
Aug 1, 2010
I had done a new lucid install to a 1 TB RAID 1 array using the alternate CD a few weeks back. I messed up that system trying to get some hardware working that lucid doesn't have drivers for yet, so I gave up on it and reinstalled to a single 80 GB disk that I now want to move over to the RAID array.
I moved all of the existing files on the array to a single folder, then copied all of the folders from the 80 GB disk over to the array with permissions and symlinks (minus the contents of /proc and /sys, which I created empty).
These are the commands I used:
Quote:
cp -a -d -R -v -t /media/raid_array /b*
cp -a -d -R -v -t /media/raid_array /d*
cp -a -d -R -v -t /media/raid_array /e*
cp -a -d -R -v -t /media/raid_array /h*
[Code]....
I tried to change fstab to use the 689a... UUID for root, but when I try to boot, it's still trying to open /dev/disk/by-uuid/412d...
So then I booted from the single disk again and chrooted into the array, then ran update-initramfs -u. I got three "grep: /proc/modules: No such file or directory" errors and a "cat: /proc/cmdline: No such file or directory" error, so I created a directory /proc/modules, created an empty file /proc/cmdline, and ran the initramfs update again. Then I tried to shut down, which hung (probably because I was doing all of this from a terminal window in GNOME), so I killed the power after a couple of minutes.
It's still trying to use /dev/disk/by-uuid/412d... to boot.
What am I missing? I assume I just have to change the UUID to mount as root, but I don't know how.
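For anyone following along, here is a minimal sketch of what I understand the proper sequence to be: bind-mount /proc, /sys, and /dev into the chroot (instead of creating fake entries), fix the UUID in the array's /etc/fstab, then regenerate the initramfs and the grub config so the kernel command line stops pointing at the 412d... UUID. The mount point and md device name below are assumptions:
Code:
sudo mount --bind /dev /media/raid_array/dev
sudo mount --bind /proc /media/raid_array/proc
sudo mount --bind /sys /media/raid_array/sys
sudo chroot /media/raid_array
blkid /dev/md0        # confirm the 689a... UUID to put in /etc/fstab
update-initramfs -u   # now finds a real /proc/modules and /proc/cmdline
update-grub           # rewrites root=UUID=... on the kernel command line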
View 2 Replies
Oct 16, 2009
my Fedora 11 system is not starting any longer. It stops with the message:
Code:
VFS: Can't find ext4 filesystem on dev dm-0
The system had been telling me for a while that a lot of the sectors on one disk of the (software) RAID compound had already failed. So I tried to disconnect each of the disks and start from them separately. Unfortunately this is not working (one of them does not work at all; the other gets exactly as far as booting with both). When I tried to recover the system with the Fedora DVD, it said no distribution was found. I am quite new and do not know much about Linux systems, so I do not know what further information you could need. Maybe it is important that both disks are encrypted (the system gets far enough that I can type in the password).
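For anyone in the same spot, a hedged sketch of what I would try from a live/rescue environment, assuming the typical Fedora layout of RAID, then LUKS, then LVM; the device and volume names are assumptions (lvscan shows the real ones):
Code:
mdadm --assemble --scan                    # try to assemble the degraded RAID
cryptsetup luksOpen /dev/md0 cryptroot     # unlock the encrypted container
vgchange -ay                               # activate the LVM volume group
lvscan                                     # list the actual LV names
fsck.ext4 -n /dev/VolGroup/lv_root         # read-only check of the root LV first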
View 2 Replies
View Related
Jul 8, 2010
What is the best way to install Windows and Linux on a two-hard-disk array? With fakeraid there are no problems in Windows, but Linux installation is almost impossible (I've tried, unsuccessfully). With software RAID, would it be impossible to share files between Windows and Linux? And finally, hardware RAID is possible, but cheap controllers have low performance. Is there any other way (apart from spending a lot of $$ on an Adaptec controller)?
View 1 Replies
View Related
Sep 27, 2010
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8 TB, as per the attached screenshot. Now with the following drives, I expected something more like:
160 + 250 + 250 + 750 + 250 + 200 + 200 + 250 + 320 + 250 + 320 = 3.2TB
Am I missing something or making a false assumption somewhere?
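One plausible explanation, offered tentatively: Disk Utility may be creating a striped, RAID-style set, which truncates every member to the smallest disk, and 11 x 160GB is about 1.8TB, matching what you see. An md "linear" (JBOD) array sums member sizes instead. A minimal sketch, assuming the eleven disks appear as /dev/sdb through /dev/sdl:
Code:
sudo mdadm --create /dev/md0 --level=linear --raid-devices=11 /dev/sd[b-l]
sudo mkfs.ext4 /dev/md0      # one filesystem spanning the summed capacity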
View 4 Replies
View Related
Mar 12, 2011
I've read many of the postings on ICH10R and grub but none seem to give me the info I need. Here's the situation: I've got an existing server on which I was running my RAID1 pair boot/root drive on an LSI based RAID chip; however there are system design issues I won't bore you with that mean I need to shift this RAID pair to the fakeraid (which happens to most reliably come up sda, etc). So far I've been able to configure the fakeraid pair as 'Adaptec' and build the RAID1 mirror with new drives; it shows up just fine in the BIOS where I want it.
Using a pre-prepared 'rescue' disk with lots of space, I dd'd the partitions from the old RAID device; then I rewired things, rebooted, fired up dmraid -ay and got the /dev/mapper/ddf1_SYS device. Using cfdisk, I set up three extended partitions to match the ones on the old RAID; mounted them; loopback-mounted the images of the old partitions; then used rsync -aHAX to dup the system and home to the new RAID1 partitions. I then edited /etc/fstab to change the UUIDs, and likewise grub/menu.lst (this is an older system that does not have the horror that is grub2 installed). I've taken a look at the existing initrd and believe it is all set up to deal with dmraid at boot. So that leaves only the grub install. Paranoid that I am, I tried to deal with this:
dmraid -ay
mount /dev/mapper/ddf1_SYS5 /newsys
cd /newsys
[code]....
and I get messages about 'does not have any corresponding BIOS drive'. I tried editing grub/device.map, tried --recheck, and anything else I could think of, to no avail. I have not tried dd'ing an MBR to sector 0 yet, as I am not really sure whether that would kill info set up by the fakeraid in the BIOS. I might also add that the two constituent drives show up as /dev/sda and /dev/sdb, and trying to use either of those directly results in the same error messages from grub. Obviously this sort of thing is in the category of 'kids, don't try this at home', but I have more than once manually put a Unix disk together one file at a time, so much of the magic is not new to me.
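For the record, the trick that usually gets grub legacy past 'does not have any corresponding BIOS drive' is to map the dmraid device explicitly in device.map and run the grub shell against that map. A hedged sketch; the paths and partition number are assumptions based on the description above (ddf1_SYS5 would be (hd0,4) in grub legacy counting):
Code:
echo "(hd0) /dev/mapper/ddf1_SYS" > /newsys/boot/grub/device.map
grub --device-map=/newsys/boot/grub/device.map
grub> root (hd0,4)     # partition holding /boot
grub> setup (hd0)      # install to the MBR of the mapped device
grub> quit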
View 2 Replies
View Related
Dec 19, 2010
I went to set up my Linux box and found that the OS drive had finally died. It was an extremely old WD Raptor drive in a hot box full of drives, so it was really only a matter of time before it just quit on me. Normally this wouldn't be such a big deal; however, I had just recently constructed an md RAID5 array of 3 1TB disks to act as an NFS mount for basically all of my important files. Maybe 2-3 weeks before the failure I had finished moving all of my most important stuff onto that array. Now, I know that the array is intact; all the required data is sitting on those disks. Since only the OS disk failed on me, I should be able to get a new disk in there, reinstall Ubuntu, and then rebuild that array. How exactly do I go about doing that with mdadm? Do I create the array from the /dev block devices like when I initially built the array?
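A hedged sketch of the usual procedure: you re-assemble rather than re-create (running --create against disks that hold data is the classic way to lose it). Device names below are assumptions:
Code:
sudo apt-get install mdadm
sudo mdadm --assemble --scan                                   # finds members by their superblocks
# or explicitly, if scanning misses them:
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u                                       # so the array assembles at boot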
View 2 Replies
View Related
Feb 2, 2010
Recently, the SMART utility said that one of the drives had failed and another drive was about to fail. I downed the box and hooked them up to my Windows machine to run SeaTools on them (they are all Seagate drives). SeaTools said that the drives were fine, while Ubuntu said they were failing/dead. Yesterday I decided to try to fix one of the drives in the RAID. I turned the server off, took the failed drive out, and restarted. Of course the RAID didn't work because only 2 of the 3 drives were there; however, it had been working with only 2 of the 3 drives for a couple of months now (I'm a lazy college student). I turned it back off and back on with the drive there just to see if I could get the RAID up again, but I haven't been able to get it to go. So far I've tried:
Code:
mdadm --assemble /dev/md0 /dev/sd[b,c,d]
mdadm: no recogniseable superblock on /dev/sdb
mdadm: /dev/sdb has no superblock - assembly aborted
[code]....
I'm looking for a way to coax the RAID into working with just 2 drives until I can warranty the Seagate and buy an external 1.5 TB drive to use as another backup; in other words, how to remove the bad drive from the array and replace it with a fresh drive, without data loss.
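A hedged sketch of what usually works when one member's superblock is gone: force-assemble the two good members, run degraded, and add the replacement later. Device names follow the output above but are still assumptions:
Code:
sudo mdadm --assemble --force --run /dev/md0 /dev/sdc /dev/sdd   # degraded, 2 of 3
cat /proc/mdstat                                                 # verify it came up
# later, once the warranty replacement is installed:
sudo mdadm --manage /dev/md0 --add /dev/sdb                      # triggers the rebuild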
View 3 Replies
View Related
Aug 31, 2010
concerning Linux, mdadm, and creating RAID arrays in Debian. I've done a lot of reading and research on RAID, both on this board and elsewhere (The Linux Documentation Project's Software-RAID HOWTO is especially good), but I've run across something that no one seems to explain, and I'm not sure why. I'm instructed to create partitions on the drives I wish to add to my array. These partitions inevitably take up the whole disk, and always have their system IDs set to "Linux raid autodetect". What I don't understand is why, after creating these partitions, some guides then go on to create an array (say a RAID5 one) with just the disks themselves as members, while others create the RAID5 array with the previously created partitions as members. E.g.,
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
vs.
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
What's the advantage of using one over the other?
View 3 Replies
View Related
Nov 17, 2009
Our server is a CybertronPC I2XV9080 Imperium Tower. It is equipped with a Supermicro X7DVL-I motherboard and quad 750 GB SATA2 RAID-edition hard drives in a RAID 5 array. We tried to install CentOS on the RAID 5 array with Device-Mapper as the LVM. In the BIOS, SATA RAID was enabled and the ICH RAID code base option was set to [Intel].
Intel Matrix Storage Manager Option ROM V5.6.4.1002 ESB2
RAID
ID  Name    Level   Strip  Size    Status  Bootable
0   Raid5   Raid 5  64KB   80GB    Normal  Yes
1   Raid_5  Raid 5  64kB   2000GB  Normal  Yes
[code].....
Can I have multiple RAID levels across the same set of disks, or would that lead to problems like the above? Is the root cause of my problem the fact that Intel RAID 5 is not supported on Linux, as suggested by the following link: http:[url]....
View 3 Replies
View Related
Mar 25, 2010
Can RAID be implemented on a single hard disk? If yes, please give a link for it.
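It can, in the narrow sense that mdadm will happily mirror two partitions on the same disk. That is useful for experimenting, though it obviously gives no protection against the disk itself dying. A minimal sketch; the partition names are assumptions:
Code:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sda3
cat /proc/mdstat     # watch the two same-disk partitions sync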
View 2 Replies
View Related
Oct 27, 2009
If you want a full rundown as to WHY I want to do this, read here: webhostingtalk.com/showthread.php?t=899909 Basically, my ISP could not get my server running stably on a simple RAID 1 (or RAID 5), so what it came down to was having them install my system on a single disk. I don't exactly like this, the main reason being that if the system (or HDD) crashes, I'll end up with another several hours of downtime.
View 1 Replies
View Related
Aug 1, 2011
I'm running 10.04 x86 server with a really simple installation on a single 250GB boot disk. I then have a RAID5 array as /dev/md0 (set up using mdadm with 4x 2TB disks). All is working well. My mdadm.conf file looks like this:
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
[code]....
If I was to lose the boot disk and needed to remount the RAID array on a fresh installation, what steps would I need to go through? My assumption is that the superblocks on the RAID disks will be used and I don't need to keep any additional information. Is this right?
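That assumption is essentially right, as far as I understand it: each member carries an md superblock that mdadm can find on its own. A hedged way to verify and then recover, with device names as assumptions:
Code:
sudo mdadm --examine /dev/sd[bcde]    # prints the superblock stored on each member
sudo mdadm --assemble --scan          # on the fresh install, rebuilds md0 from them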
View 6 Replies
View Related
Oct 27, 2010
We have some servers that run in very harsh environments (research vessel) that need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc), however we would like to be able to break out a new server and re-image it (including RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to MDADM) is to use DD to take a complete gzipped copy of one of the RAID'ed devices (from a live CD):
Code:
dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img
then reverse the process on the new PC, finally using
Code:
mdadm --assemble
to re-create and re-build the array.
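For completeness, a hedged sketch of the restore side; the paths mirror the capture command above and are otherwise assumptions:
Code:
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096   # write the image to the new disk
mdadm --assemble --scan                                           # then re-assemble the mirror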
View 1 Replies
View Related
Jul 7, 2011
I need to copy data from a single HD which used to be part of a Linux RAID 1. I've googled around, but can't find any clue as to how to mount partitions from this single HD.
Background: The HD comes from a linux based NAS box Synology DS207+. The NAS uses ext3 as filesystem. Both NAS disks are fine, but the other NAS hardware is dead and not worth repairing or replacing.
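A hedged sketch of the two usual approaches; the device and partition number below are assumptions (Synology units often keep the data volume on the third partition):
Code:
# try assembling the lone member as a degraded RAID1, read-only
sudo mdadm --assemble --run /dev/md0 /dev/sdb3
sudo mount -o ro /dev/md0 /mnt
# or, if the NAS used an old 0.90 superblock (stored at the end of the
# partition), the member may mount directly:
sudo mount -o ro /dev/sdb3 /mnt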
View 1 Replies
View Related
Feb 17, 2016
I received the following error when I got home from work today. If this were a Windows environment, my first inclination would be to boot off my DVD and then run chkdsk on the drive to flag any bad sectors that might exist. But there's a complication for me.
Code: Select allThis message was generated by the smartd daemon running on:
host name: LinuxDesktop
DNS domain: [Empty]
The following warning/error was logged by the smartd daemon:
Device: /dev/sdc [SAT], 1 Currently unreadable (pending) sectors
Device info:
WDC WD5000AAKS-65V0A0, S/N:WD-WCAWF2422464, WWN:5-0014ee-157c5db9a, FW:05.01D05, 500 GB
For details see host's SYSLOG.
You can also use the smartctl utility for further investigation. The original message about this issue was sent at Sun Feb 14 13:43:17 2016 MST. Another message will be sent in 24 hours if the problem persists.
From gnome-disks
Code: Select allDisk is OK, 418 bad sectors (28° C / 82° F)
I did a bit of reading and it seems that most people suggest using badblocks to first get a list of bad blocks from the drive and save it to a file, then use e2fsck to mark the blocks listed in the badblocks file as bad on the hard drive. My problem here is that this drive is part of a RAID5 array that hosts my OS, so I wanted to confirm whether this is still the correct process: boot to my live Debian disk, stop the RAID array if it's active, then run the badblocks + e2fsck commands on the drive in question and reboot.
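One caveat, offered tentatively: e2fsck operates on a filesystem, and on a RAID5 member the filesystem lives on the md device, not on /dev/sdc itself, so the badblocks + e2fsck recipe doesn't map cleanly onto an array member. A common alternative for md arrays is a repair scrub, which rewrites unreadable sectors from parity. A sketch, assuming the array is /dev/md0:
Code:
echo repair > /sys/block/md0/md/sync_action   # as root; rewrites bad sectors from parity
cat /proc/mdstat                              # watch the scrub progress
cat /sys/block/md0/md/mismatch_cnt            # mismatches found during the pass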
View 1 Replies
View Related
Apr 9, 2010
This is the third 9.10 install to do this on two different laptops, so I'm wondering what's up...
In both cases, the goal was to leave a large chunk of unpartitioned disk after the Ubuntu partitions, for a second OS install or a filesystem Ubuntu cannot create like NTFS.
When I install with manual partitions, the system can't boot and asks for me to insert a system disk and press any key. When I reinstall telling Ubuntu to "use the entire disk" it then works.
First laptop, first try:
Remainder of the 500GB disk is free space.
Fails to boot, "insert system disk".
First laptop, second try without the /boot partition:
Remainder of the 500GB disk is free space.
Fails to boot, "insert system disk".
"use entire disk" works perfectly.
Second laptop, first try:
Same thing, non-system disk or disk error, insert system disk.
Second try "use entire disk" is currently in progress but I expect the same to happen.
View 3 Replies
View Related
Jun 28, 2010
I have an SiI hardware SATA RAID card, with two 500GB disks in mirrored RAID configuration. When I first plugged them in and set it up, things seemed to work OK, but on boot the RAID controller told me that the RAID needed rebuilding and that it would happen automatically after POST. So I didn't worry about it; the drive mounted fine, and it's been that way for years. I just went in and manually rebuilt the RAID online in the controller's BIOS, and now when I boot into Ubuntu, both disks show up in fdisk but neither shows up in /dev/disk/by-uuid. Am I missing something?
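Worth noting: the SiI cards are fakeraid, so Ubuntu sees the mirror through dmraid; if the set isn't activated, only the raw disks appear. A hedged check, where the mapper name is an assumption (dmraid -s prints the real one):
Code:
sudo dmraid -s                   # does it still see the mirror set?
sudo dmraid -ay                  # activate it
ls -l /dev/mapper/               # the sil_* device should appear here
sudo blkid /dev/mapper/sil_*     # this UUID is what /dev/disk/by-uuid links to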
View 9 Replies
View Related
Jun 29, 2011
Is it possible to migrate an installed Ubuntu system from a software RAID to a hardware RAID on the same machine? How would you go about doing so?
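A hedged outline of one approach: build the hardware array, copy the live system across with rsync, then fix fstab and the bootloader. Device names and mount points are assumptions:
Code:
sudo mkfs.ext4 /dev/sdb1             # filesystem on the new hardware-RAID volume
sudo mkdir -p /mnt/newroot && sudo mount /dev/sdb1 /mnt/newroot
sudo rsync -aHAXx / /mnt/newroot/    # -x keeps rsync on the root filesystem
# then edit /mnt/newroot/etc/fstab to the new UUID and reinstall grub on the new volume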
View 1 Replies
View Related
Jan 22, 2011
After installing Debian using the squeeze net-installer on an HP EliteBook 6930p, I get the following error: "non-system disk or disk error".
It happens right after the boot process, just when it should load grub. Grub is installed in the MBR. Windows 7 is installed as well and is not an option to remove (it should not be the problem, though).
/ is set with the bootable flag.
The installation went without any issues, and I have actually tried installing twice with the exact same result.
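A hedged recovery sketch: boot the installer in rescue mode, chroot into the installed system, and reinstall grub to the MBR. The root partition name is an assumption:
Code:
mount /dev/sda3 /mnt                                  # the Debian root partition
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt grub-install /dev/sda                     # MBR of the boot disk, not a partition
chroot /mnt update-grub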
View 6 Replies
View Related
Feb 27, 2011
My system decided to crash on me, hard. It was humming along happily for about 2 months and now doesn't boot. If I boot from the hard disk, I get grub; launching the first kernel choice hangs. I thought maybe the install was corrupt, so I booted from a USB install disk. The USB HDD didn't boot: something about an error trying to access /dev/sda. Unplugging the internal disk and plugging in the USB install disk does result in the system booting. Plugging the internal disk into the running, USB-booted system does not result in the system detecting the disk.
How do I know if the disk is physically broken? This seems unlikely since it does manage to launch grub consistently, or is it still possible? How can I try to mount whatever is left? The USB install disk doesn't even list it under /dev/sd*. Any pointers on how to reformat the drive if it's not being mounted?
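A hedged way to interrogate the hardware from the USB-booted system, assuming the disk enumerates at all; the device name is an assumption:
Code:
dmesg | grep -i -e ata -e sda    # kernel errors while probing the disk
sudo smartctl -H /dev/sda        # overall SMART health verdict
sudo smartctl -a /dev/sda        # full attributes: reallocated/pending sectors etc.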
View 1 Replies
View Related
Sep 21, 2010
I have an IBM x306 server with an onboard RAID1 controller. One of the two disks is dead and I need to replace it. I'm using "ServeRAID Manager" to reconfigure the array. I shut down the server and replaced the dead disk with a new one, then started the server without doing anything manually, thinking that it would auto-sync, but it just didn't! I then restarted the server and logged in with ServeRAID Manager. Now I want to reconfigure the array so that the new disk will sync, but the problem is that in the array's properties the "Synchronize" option is greyed out; I just can't click on it! On the other hand, on the newly inserted disk's properties I was able to click "rebuild", but after that things just stopped there; it's not synchronizing. How do I simply replace this new disk and have things working?
View 8 Replies
View Related
Mar 7, 2011
I am trying to migrate my existing system, which has one IDE disk and all tools already installed, to software RAID1 with a second IDE disk, without losing information and without having to install everything again. I tried to do this using information from the forums, but I always get a kernel panic at the end of boot. What I did:
The system is going down for system halt NOW!
login as: root
root's password:
/usr/bin/xauth: creating new authority file /root/.Xauthority
[code]....
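For comparison, the standard degraded-mirror migration looks roughly like this. A hedged sketch; the partition names are assumptions for an IDE system:
Code:
# create the mirror with the new disk only, the old disk marked missing
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdb1
mkfs.ext3 /dev/md0
# copy the system over, point fstab and the bootloader at /dev/md0, reboot
# into the array, then pull the old disk into the mirror:
mdadm --manage /dev/md0 --add /dev/hda1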
View 8 Replies
View Related
Jan 8, 2011
I have reinstalled XP and consequently messed up grub and lost Ubuntu. I am trying to do a fresh install, but the installer insists on trying to overwrite the whole disk. I downloaded the alternate install ISO, as this has got me past this problem before, but it also wanted to overwrite the whole disk. It recognises the SATA RAID array as NTFS (this is my main data disk), but it doesn't recognise the existing partitions on my main disk:
18G windows
18G Old Ubuntu
113G NTFS data disk
View 8 Replies
View Related
Sep 23, 2011
I have a home samba server with a 3ware Escalade 8506-8. I have 5 x 500 gig hard drives in a RAID 5 array. Recently, my 8506 died and I need to get a new one. However, I saw a 3ware Escalade 9500S-12 on eBay for about $20 more than a replacement 8506-8.
My question is: if I put my drives on the 9500S, will it recognize my existing RAID array, or will it want to build a new RAID array and format all of my data? Hope I have asked this question clearly; I'm a little short on sleep this week.
View 3 Replies
View Related
Dec 23, 2010
I have a RAID 5 array, md0, with three full-disk (non-partitioned) members, sdb, sdc, and sdd. My computer will hang during the AHCI BIOS if AHCI is enabled instead of IDE, if these drives are plugged in. I believe it may be because I'm using the whole disk, and the AHCI BIOS expects an MBR to be on the drive (I don't know why it would care).
Is there a way to convert the array to use members sdb1, sdc1 and sdd1, partitioned MBR with 0xFD RAID partitions?
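A hedged sketch of doing the conversion one member at a time, leaning on RAID5's single-disk redundancy. Big caveat, stated up front: a partition is slightly smaller than the raw disk, so this only works if the array's component size already fits inside sdX1; check with mdadm --detail first. Device names follow the post:
Code:
mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb   # drop one member
echo ',,fd' | sfdisk /dev/sdb                      # one whole-disk 0xFD partition plus an MBR
mdadm /dev/md0 --add /dev/sdb1                     # re-add as the partition
watch cat /proc/mdstat                             # wait for the rebuild, then repeat for sdc, sdd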
View 1 Replies
View Related
Oct 11, 2010
I've googled my problem, but I'm not sure I can find an answer in layman's terms. So here's my noob, simple question; please answer it in semi-noob-friendly terms. I've been trying to install Ubuntu for a while on my desktop PC. I gave it another go with 10.10, but I always have the same problem:
I've got two RAID sets connected to an ICH10R chip and they work fine in Windows (2 Samsung 1 TB + 2 Raptors 75GB). Upon installation, dmraid only sets up the first RAID set (the Samsung array) but not the second one (the clean Raptors, intended for Ubuntu). I don't have any other installation option; all my SATA connectors are unavailable. So, is there a manual install solution? Can I force dmraid to mount the second RAID set and not the first one? I think I read somewhere that this was a dmraid bug, but I can't find it anymore.
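A hedged thing to try from the live CD before installing: dmraid can activate a single named set rather than everything it finds. The set name below is a placeholder; -s prints the real ones:
Code:
sudo dmraid -s                          # list both sets and their names
sudo dmraid -ay isw_xxxxxxxx_Raptors    # activate only the named set (name from -s output)
ls /dev/mapper/                         # the activated set appears here for the installer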
View 8 Replies
View Related
Sep 8, 2015
I'm trying to upgrade my Win8/Wheezy 64-bit machine to Jessie 8.1 by installing from the amd64 netinstall iso image on a USB flash drive. I had done the previous, Wheezy, install on a disk partition that was a whole-partition LUKS/LVM setup, with separate logical partitions for swap, root, and home.
Before doing the upgrade, I booted to the BIOS to ensure that my UEFI system had the correct, CSM and Legacy modes enabled in it, so that installer would boot using the non-efi BIOS mode.
Step one of the upgrade was to boot the netinstall and enter the rescue mode so that I could manually do the cryptsetup/LVM business. When I returned to the installer, I mounted the now-recognized logical partitions normally, choosing to format only the swap and / partitions.
During the entire process, I had to go into rescue mode one more time to manually mount the unencrypted /boot partition, along with my /home partition. I copied a backup of my old /etc/crypttab from the latter, and after returning to the installer, finished the install. That finish included installing grub on my hard drive's main boot partition.
Everything seemed to finish with no problems. However, when I try to boot the Debian bootloader, I get tossed to grub rescue with the message that '/grub/x86_64-efi/normal.mod' doesn't exist. At this point I returned to the installer, mounted the /boot partition, and saw that grub-install hadn't created an x86_64-efi directory at all. Instead, it had created an i386 directory; the exact name escapes me at the moment.
I *think* my install was clean other than the last bit, which was related to installing the bootloader. How do I reinstall the bootloader in such a way as to make all of this work?
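A guess at what happened, plus a hedged fix: the installer ran in BIOS/CSM mode and installed the BIOS flavour of grub (hence the i386-pc directory), but the firmware is still starting the disk's old UEFI boot entry, which looks for the x86_64-efi modules. One way out is to boot the installer's rescue mode in UEFI mode (so the NVRAM entry can be written), chroot in, and install the EFI flavour properly. Partition and LV names below are assumptions:
Code:
mount /dev/mapper/vg-root /mnt            # root LV, after cryptsetup/LVM as before
mount /dev/sda2 /mnt/boot                 # the unencrypted /boot
mount /dev/sda1 /mnt/boot/efi             # the EFI system partition
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt apt-get install grub-efi-amd64
chroot /mnt grub-install --target=x86_64-efi /dev/sda
chroot /mnt update-grub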
View 2 Replies
View Related
Mar 10, 2010
I want to install Ubuntu x86_64 or x86 on my computer.
I have used the Desktop and Server editions on other machines and installed them successfully, but I could not install Ubuntu on this computer.
My hardware is:
AMD Phenom II X4
Gigabyte GA-MA790GP-DS4h [SB750 - AMD AHCI Compatible RAID Controller]
2 x 250GB Seagate ST3250410AS @ Raid0
I installed Windows successfully and created a 50GB partition for Ubuntu.
When I try to install Ubuntu, the disks are not detected in the partition-managing screen.
How can I install Ubuntu?
View 5 Replies
View Related
Mar 24, 2010
My system is installed on my main hard disk. If I buy a new hard disk, is it possible to set up software RAID using the old and new disks without reinstalling Ubuntu?
View 2 Replies
View Related
Jan 5, 2011
I am using Ubuntu 10.10. I have a system set up on a 1TB HD. I also have another 1TB HD which I'd like to use to mirror the first drive, so that if the primary HD fails I can boot and operate from the mirrored drive. I've read that this is possible using RAID; however, I am confused about whether it can be set up on a HD that already has an Ubuntu system on it. Also, from what I can make out, the motherboard does not have a RAID option.
View 2 Replies
View Related