I've got 3 extra disks on openSUSE 11.2, all the same size. I've created a partition on each of them as type 0xFD. If I then try to add a RAID in YaST I get "There are not enough suitable unused devices to create a RAID."
I have a Dell Studio XPS running openSUSE 11.2 with dual mirrored disks (using Dell's SATA controller). Does anyone know how I can set up automatic monitoring of the disks so that I will be informed if either fails? I think smartd might be what I need here. Is that correct? I added:
/dev/sda -a -d sat -m <my email>
/dev/sdb -a -d sat -m <my email>
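(For reference, a fuller /etc/smartd.conf sketch along these lines might look as follows; the email address is a placeholder, and -M test is only there so smartd sends one test mail at startup, confirming that mail delivery actually works:)

```
# Monitor both mirrored disks: run all SMART checks (-a), force SATA
# passthrough (-d sat), and mail warnings to the given address.
# -M test sends a single test message each time smartd starts.
/dev/sda -a -d sat -m admin@example.com -M test
/dev/sdb -a -d sat -m admin@example.com -M test
```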
smartd is running, but how do I know that it will report what I need? I also have a client with a Dell PowerEdge SC440 with SAS 5/iR, also running openSUSE 11.2. They also require automatic monitoring. There doesn't seem to be a SAS directive for smartd. I notice that the newer release says it does support SAS disks, so I upgraded to 5.39. On restart (with DEVICESCAN as the directive) I get the following in /var/log/messages for my SAS RAID disk.
Sep 18 10:47:26 harmony-server smartd: Device: /dev/sdb, Bad IEC (SMART) mode page, err=4, skip device

I ran: smartctl -a /dev/sdb and got the result:

smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (openSUSE RPM)
Copyright (C) 2002-10 by Bruce Allen, smartmontools
Device: Dell VIRTUAL DISK Version: 1028
Device type: disk
Local Time is: Sat Sep 18 11:32:08 2010 JST
Device does not support SMART
Error Counter logging not supported
Device does not support Self Test logging
Is there some other tool/package that does support DELL virtual disks?
I am trying to create a new mdadm RAID 5 device /dev/md0 across three disks where such an array previously existed, but whenever I do, it never recovers properly and tells me that I have a faulty spare in the array. More specific details below. I recently installed Ubuntu Server 10.10 on a new box with the intent of using it as a NAS of sorts. I have 3 HDDs (2 TB each) and was hoping to use most of the available disk space as a RAID 5 mdadm device (which gives me a bit less than 4 TB).
I configured /dev/md0 during OS installation across three partitions on the three disks - /dev/sda5, /dev/sdb5 and /dev/sdc5, which are all identical sizes. The OS, swap partition etc. are all on /dev/sda. Everything worked fine, and I was able to format the device as ext4 and mount it. Good so far.
Then I thought I should simulate a failure before I started keeping important stuff on the RAID array - no point having RAID 5 if it doesn't provide some redundancy that I actually know how to use, right? So I unplugged one of my drives, booted up, and was able to mount the device in a degraded state; the test data I had put on there was still fine. Great. My trouble began when I plugged the drive back in and rebooted. I re-added the removed drive to /dev/md0 and recovery began; things would look something like this:
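(As an aside, recovery progress like this can be watched by parsing /proc/mdstat. The sketch below runs against a canned sample of that file - the device names and numbers are made up for illustration - rather than a live system:)

```shell
# Sketch: extract the recovery percentage from /proc/mdstat output.
# A canned sample is used here; on a real system replace mdstat_sample
# with: cat /proc/mdstat
mdstat_sample() {
  printf '%s\n' \
    'md0 : active raid5 sdc5[3] sdb5[1] sda5[0]' \
    '      3906764800 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]' \
    '      [=====>...............]  recovery = 28.4% (555/1953) finish=123.4min'
}
mdstat_sample | grep -o 'recovery = [0-9.]*%'   # prints: recovery = 28.4%
```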
I just bought two 320 GB SATA drives and would like to install F11 with software RAID 1 on them. I read an article which explains how to install RAID 1, but it used 3 disks: one for the OS and two clones. Do I really need a third disk for a RAID 1 configuration? If 2 disks are enough, should I select "Clone a drive to create a RAID device" during the F11 installation as explained here?
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 disks. One of the disks was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
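(Before anything destructive: a forced assemble only bumps event counts rather than rewriting the superblock layout, so it is the far less risky first attempt; --create --assume-clean is a last resort. The device names below are copied from the question, not verified, and the sketch deliberately only prints the candidate commands instead of executing them:)

```shell
# Dry run: print, rather than execute, a candidate recovery sequence.
# Run the printed commands only after imaging the disks.
recovery_plan() {
  printf '%s\n' \
    'mdadm --stop /dev/md1' \
    'mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2'
}
recovery_plan
```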
In YaST/System/Partitioner I am trying to create a RAID using my second HDD, which is named "sdb".
My original "sda" HDD is partitioned as follows:
/dev/sda, disk
/dev/sda1, Linux swap, swap, swap
/dev/sda2, Linux native, ext3, /
/dev/sda3, Linux native, ext3, /home
When trying to partition the second "sdb" HDD, I go to "/dev/sdb" under "Hard Disks", then create a partition on "/dev/sdb": new partition / maximum size, press Next, do not format partition / 0xFD Linux RAID / do not mount partition. After that, a "swap" partition covering the whole disk gets created. However, when trying "raid" in System/Partitioner/Add RAID I end up with the error "There are not enough suitable unused devices to create a RAID".
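(One way to double-check from the command line that the partition really got type fd, which YaST's RAID step requires, is to inspect fdisk output. The sketch below parses a canned sample; on a real system the sample would be replaced by `fdisk -l /dev/sdb`, and the device name is just the one from the question:)

```shell
# Sketch: verify a partition's Id column from fdisk -l output.
# "fd" is Linux raid autodetect. Canned sample for illustration only.
fdisk_sample() {
  printf '%s\n' \
    '   Device Boot      Start         End      Blocks   Id  System' \
    '/dev/sdb1               1      121601   976760001   fd  Linux raid autodetect'
}
fdisk_sample | awk '$1 == "/dev/sdb1" {print $5}'   # prints: fd
```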
I have a small installation problem with openSUSE 11.2. I have read the relevant stuff in the forums for new users. I have downloaded openSUSE-11.2-DVD-i586.iso and verified the file using MD5. I am currently running Windows XP (SP1) on a Fujitsu Siemens V1000 laptop and I am trying to install Linux from my hard drive and not from a DVD drive.
The path to the downloaded file on my Windows partition is: e:/OpenSuse/openSUSE-11.2-DVD-i586.iso. I initialised the installation and selected sda6 as the partition where the source file is located, and when prompted for the directory, I typed " /OpenSuse/openSUSE-11.2DVD-i586.iso ". The installation then proceeded through to "Initialising Package Manager", upon which I get the following message:
" Unable to create repository from url' iso:/? iso=openSUSE-11.2-DVD-i586.iso&url=hd:/OpenSuse?device=/dev/sda6"
DETAILS: Failed to mount iso:///?iso=openSUSE-11.2-DVD-i586.iso & url=hd:/OpenSuse?device=dev/sda6 on: Unable to find iso filename on source media.
History: - File:/openSUSE-11.2-DVD-i586.iso not found on medium 'hd:///OpenSuse?device=dev/sda6'
There is an option to try again, at which point an additional dialog box appears and I have the opportunity to change things. Other than that, the only option is to cancel and terminate the installation. I've tried the process around 3 times, but since I'm doing the same thing each time, it's not surprising that I'm getting the same result.
I want to install SLES 11 from my USB key but I couldn't make it bootable. My server only has a CD-ROM drive, not a DVD drive... I read the following document: SuSE install from USB drive - openSUSE, but it doesn't work! I can mount/umount the USB stick and Linux can see it:
mount /dev/sda /mnt/usb
but fsck.vfat fails:
fsck.vfat /dev/sda1
open /dev/sda1: No such file or directory
I am trying to add a few additional repositories to SLES 11 64-bit but I can't seem to get them added. I am trying to add Index of /repositories/server:/database but I get the error "Unable to create repository from [URL]... Try again?" So I browsed that repository and then tried to add Index of /repositories/server:/database/SLE_11, but still got the error message with this new URL listed instead. I tried just the 11.2 repository and it worked - Index of /distribution/11.2/repo/oss/suse.
How or what do I need to do in order to get Index of /repositories/server:/database added to my repository?
I'm new here, an Ubuntu user who would like to try openSUSE for a while. That is, if I'm able to launch the thing! I'd like to create a live USB stick to test it and install it if I like it, but it doesn't seem to work.
I tried the website method, using "Win32DiskImager.exe", but the program doesn't work for me (WinXP): it looks like it's writing, but when the "Done" message appears I'm unable to access the USB key; Windows says it's not formatted. That doesn't look right... I tried with LinuxLive USB Creator, but the boot process fails, and Universal-USB-Installer doesn't offer an openSUSE option.
Is there another way to install the distribution on a USB stick? I could still try through Ubuntu, but that would be quite surrealistic.
I am attempting to use SUSE Studio "preload" and "live" .iso images and the instructions at SDB:Live USB stick - openSUSE to create a bootable USB drive. dd_rescue ends after the expected amount of time - without reporting errors - yet "fdisk -l" reports that the device does not contain a valid partition table. The USB key does not boot and appears to have no data.
For what it's worth, my build host is Gentoo on 2.6.39, and I have the latest dd-rescue available. The USB disk is a PNY Attache 4GB and works fine with any ISOs supported by unetbootin. The only other bare metal I have here is a netbook running Mepis, so I can't make use of solutions that start with "Boot your existing openSUSE workstation".
I have 10.04.1 on my server with a 250 GB SATA drive. I have all my files on this hard drive. I'm running out of space, so I have another 250 GB SATA drive I need to install. I want to create RAID 0 so I can expand my server's hard drive space. I don't want to lose the data on the original hard drive or reinstall to create the RAID. Is there a way to achieve this with mdadm without altering the first hard drive's data?
I have built a couple of RAIDs, but I'm uncertain how I should format the partitions of the RAID. Should I format partitions on each disk and then add them to a RAID, or should I create a RAID on unformatted disks and then format the RAID as a partition? Does it matter, and are there performance/reliability issues? I'm creating a RAID 5 using 3 SATA disks on RHEL for a user data area.
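(The usual order with mdadm is the second option: build the array from raw, unformatted partitions, then put the filesystem on the md device itself. Purely as an illustration - the partition names and the choice of ext3 are placeholders - the sequence could be printed as a dry run:)

```shell
# Dry run: print the conventional order of operations for a 3-disk RAID 5.
# The member partitions stay unformatted; only /dev/md0 gets a filesystem.
raid_plan() {
  printf '%s\n' \
    'mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1' \
    'mkfs.ext3 /dev/md0' \
    'mount /dev/md0 /data'
}
raid_plan
```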
I want to create a file server with Ubuntu and have two additional hard drives in a RAID 1 setup. Current Hardware: I purchased a RAID controller from [URL]... (Rosewill RC-201). I took an old machine with a 750 GB hard drive (installed Ubuntu on this drive). I installed the Rosewill RAID card in a PCI slot, connected two 1 TB hard drives to the Rosewill RAID card, went into the RAID BIOS and configured it for RAID 1.
My Problem: When I boot into Ubuntu and go to the hard drive utility (I think that's what it's called), I see the RAID controller present with two hard drives configured separately. I formatted and tried various partition combinations, and at the end of the day I still see two separate hard drives. Just for giggles, I also tried RAID 0 to see if it would combine the drives.
I have implemented LVM to expand the /home partition. I would like to add 2 more disks to the system and use raid 5 for those two disks plus the disk used for /home. Is this possible? If so, do I use type fd for the two new disks and use type 8e for the existing LVM /home disk? Or do I use type fd for all of the raid disks?
I use Slackware 13.1 and I want to create a RAID level 5 with 3 disks. Should I use the entire device or a partition? What are the advantages and disadvantages of each? If I use the entire device, should I create any partition on it or leave all the space free?
I have a 7-drive RAID array on my computer. Recently, my SATA PCI card died, and after going through multiple cards to find another one that worked with Linux, I now can't assemble the array. The drives are no longer in the order they were in previously, and mdadm can't seem to reassemble the array. It says there are 2 drives and one spare, even though there were 7 drives and no spares. I know for a fact that none of the drives are corrupted, because one of the non-working RAID cards was still able to mount the array for a short period, but would lose the drives during resyncing (I later found out that the chipset on the card had extremely limited Linux support). I have tried running "mdadm --assemble --scan", and after the array is partially assembled, I add the other drives with "mdadm --add /dev/md0 /dev/sdc1". Both return errors and will not complete on the new RAID card.
Code:
aaron-desktop:~ aaron$ sudo mdadm --assemble /dev/md0
mdadm: /dev/md0 assembled from 2 drives and 1 spare - not enough to start the array.
I'm trying to do some RAID managing with mdadm. I would like to sync my spare disk and then remove it from the array to make a backup out of it with the dd command (the best way I can think of to get a current image of the whole system, as it can't be done using the active RAID as the source, because it is constantly in use and changing). So, I have a RAID 1 array with 1 spare and 2 active disks (configuration listed below). Now I would like to force the spare to sync and then remove it from the array, although it is not faulty.
However, mdadm man page states: "Devices can only be removed from an array if they are not in active use. i.e. that must be spares or failed devices. To remove an active device, it must be marked as faulty first."
So, I'd have to mark a disk as faulty (which it is not) to be able to remove it from the array. There seem to be several people reporting that they can't remove a faulty flag accidentally given to a drive, and mdadm does not provide a direct command for such an operation. Isn't there a way I could remove and add disks whenever I feel like it? One way would be to open the cover and physically remove the disk. I'm not taking that risk, though. The system is almost always in use, so there is not much chance for me to power off for a temporary disk removal.
RAID CONFIGURATION:
~# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Fri Aug 4 17:38:26 2006
     Raid Level : raid1
     Array Size : 238950720 (227.88 GiB 244.69 GB)
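(For what it's worth, the faulty flag lives only in the member's superblock and is cleared when the disk is added back and resynced, so the commonly cited sequence for this is fail, remove, image, re-add. The sketch below - with /dev/sdc1 and the backup path as placeholders - only prints the candidate commands rather than running them:)

```shell
# Dry run: detach a RAID 1 member for imaging, then return it to the array.
# The --fail mark is temporary; re-adding triggers a resync that clears it.
backup_plan() {
  printf '%s\n' \
    'mdadm /dev/md0 --fail /dev/sdc1' \
    'mdadm /dev/md0 --remove /dev/sdc1' \
    'dd if=/dev/sdc1 of=/backup/md0.img bs=1M' \
    'mdadm /dev/md0 --add /dev/sdc1'
}
backup_plan
```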
I wanted to merge my 1 TB disks into a RAID 5 array. Four of them in RAID 5 exceed the 2 TB limit of the msdos partition tables that GRUB 2 can boot from, so I decided to rebuild the system from scratch on GPT partitions. But it seems GRUB 2 won't boot from a GPT partition, because it drops to grub rescue and I can't really do anything from there.
I just had a whole 2 TB software RAID 5 blow up on me. I rebooted my server, which I hardly ever do, and lo and behold I lose one of my RAID 5 sets. It seems like two of the disks are not showing up properly. What I mean by that is the OS picks up the disks, but it doesn't see the partitions.
I ran smartctl -l on all the drives in question and they're all in good working order.
Is there some sort of repair tool I can use to scan the busted drives (since they're available) to fix any errors that might be present?
Here is what the "good" drive looks like when I use sfdisk:
sudo sfdisk -l /dev/sda
Disk /dev/sda: 121601 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1          0+ 121600  121601- 976760001   83  Linux
/dev/sda2          0       -       0          0    0  Empty
I would like to mirror some of my partitions (i.e. /home) with a software RAID. My primary hard disk is a Western Digital WD20EARS with Advanced Format (4 kB sectors). The secondary disk is a Samsung HD103UJ with old-style 512 B sectors. Is it possible to set up a RAID 1 containing a partition with Advanced Format and a partition with the old sector style, or do both partitions have to have the same sector style?
I'm looking to stock my SuperMicro P8SCi with two 1-2 TB SATA hard disks for running backups and web hosting. There are reviews of certain disks stating that the low-power disks will get kicked out of the RAID due to their slow response time, and it also appears that there have been quality problems with these newer disks, as if the race to size has lowered their reliability.
Can someone recommend a good brand and specific disks that you've had experience with? I'd rather not need to replace these after putting them in, but I also don't want to pay significantly more for an illusion of quality.
I'm trying to determine the speed of my RAID hot-swappable disks. I need to determine whether each disk is 10,000 rpm or 15,000 rpm. I know that each disk is 72 GB in size. I have tried to find this information in /proc/diskinfo and using dmesg, but no luck.
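(Newer smartmontools releases report a "Rotation Rate" field in `smartctl -i` output for many SCSI/SAS disks; whether it appears depends on the drive and controller. The sketch below parses a canned sample of that output - the vendor, model and rate shown are made up - where a live system would use `smartctl -i /dev/sda` instead:)

```shell
# Sketch: pull the spindle speed out of smartctl -i output.
# Canned sample for illustration; field availability varies by drive.
smartctl_sample() {
  printf '%s\n' \
    'Vendor:               SEAGATE' \
    'Product:              ST973402SS' \
    'Rotation Rate: 10000 rpm'
}
smartctl_sample | awk -F': *' '/Rotation Rate/ {print $2}'   # prints: 10000 rpm
```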
I have a home server running Openfiler 2.3 x64 with a 4x1.5 TB software RAID 5 array (more details on the hardware and OS later). All was working well for two years until several weeks ago, when the array failed with two faulty disks at the same time. Well, these things can happen, especially if one is using desktop-grade disks instead of enterprise-grade ones (way too expensive for a home server). Since it was most likely a false positive, I reassembled the array:
# mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: forcing event count in /dev/sdb1(0) from 110 upto 122
mdadm: forcing event count in /dev/sdc1(1) from 110 upto 122
Right. Once is just a coincidence, but twice in such a short period of time means that something is wrong. I reassembled the array and, again, all the files were intact. But now was the time to think seriously about backing up my array, so I ordered a 2 TB external disk and in the meantime kept the server off. When I got the external drive, I hooked it up to my Windows desktop, turned on the server and started copying the files. After about 10 minutes, two drives failed again. I reassembled, rebooted and started copying again, but after a few MBs the copy process reported a problem - the files were unavailable. A few retries and the process resumed, but a few MBs later it had to stop again for the same reason. Several more stops like those and two disks failed again. Looking at the /var/log/messages file, I found a lot of errors like these:
Apr 12 22:44:02 NAS kernel: [77047.467686] ata1.00: configured for UDMA/33
Apr 12 22:44:02 NAS kernel: [77047.523714] ata1.01: configured for UDMA/133
Apr 12 22:44:02 NAS kernel: [77047.523727] ata1: EH complete
The motherboard is a Gigabyte GA-G31M-ES2L based on Intel's G31 chipset, and the 4 disks are Seagate 7200.11 (with a firmware version that doesn't cause frequent data corruption).
Basically, I installed Debian Lenny, creating two RAID 1 devices on two 1 TB disks during installation: /dev/md0 for swap and /dev/md1 for "/". I did not pay much attention, but it seemed to work fine at the start - both RAID devices were up early during boot, I think. After that I upgraded the system to testing, which involved at least upgrading GRUB to 1.97 and compiling & installing a new 2.6.34 kernel (udev refused to upgrade with the old kernel). The last part was a bit messy, but in the end I have it working.
Let me describe my HDD setup: when I do "sudo fdisk -l" it shows RAID partitions sda1 and sda2 on sda, and sdb1 and sdb2 on sdb - these are my two 1 TB drives - plus sdc1, sdc2 and sdc5 on my third, 160 GB drive, which I actually boot from (I mean GRUB is installed there, and it's chosen as the boot device in the BIOS). The problem is that the RAID starts degraded every time (it starts with 1 out of 2 devices). When doing "cat /proc/mdstat" I get "U_" statuses, and the 2nd device is "removed" on both md devices.
I can successfully run partx -a sdb, which gives me sdb1 and sdb2, and then I re-add those to the RAID devices using "sudo mdadm --add /dev/md0 /dev/sdb1". After I re-add the devices it syncs the disks, and after about 3 hours I see a fine status in mdstat. However, when I reboot, it again starts with a degraded array. I get the feeling that after I re-add the disks and sync the array I need to update some configuration somewhere. I tried "sudo mdadm --examine --scan", but its output is no different from my current /etc/mdadm/mdadm.conf, even after I re-add the disks and sync.
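(When an array comes up degraded at every boot even though mdadm.conf looks right, a common cause on Debian-family systems is a stale copy of that config inside the initramfs, since the early-boot environment assembles the arrays before the root filesystem is mounted. A sketch of the usual fix - printed as a dry run rather than executed:)

```shell
# Dry run: refresh the array list and rebuild the initramfs so the
# early-boot environment assembles both members.
persist_plan() {
  printf '%s\n' \
    'mdadm --detail --scan >> /etc/mdadm/mdadm.conf' \
    'update-initramfs -u'
}
persist_plan
```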
I have been running a server with an increasingly large md array and have always been plagued with intermittent disk faults. For a long time, I attributed those to either temperature or power glitches. I had just embarked on a quest to lower the case and drive temperatures: they were running between 43 and 47 C, sometimes peaking at 52 C, so I added more case fan power and made sure the drive cage was in the airflow (it has its own fan, too). Also, I upgraded my power supply and made very sure that all the connectors are good. The array currently is a RAID 6 with 5 Seagate 1.5 TB drives.
When everything seemed to be working fine, I looked at my SMART logs and found that two of my drives (both well over 14000 operating hours) were showing uncorrectable bad blocks. Since it's RAID 6, I figured I couldn't do much harm, so I ran a badblocks test on one, zeroed the blocks that were reported bad (figuring the drive's defect management would remap them to a good part of the disk) and zeroed the superblock. I then added it back to the pack and the resync started. At around 50%, a second drive decided to go, and shortly thereafter a third. Now, with two out of five drives gone, RAID 6 will fail. Fine. At least no data will be written to it anymore; however, now I cannot reassemble the array anymore.
Whenever I try, I get this:
Code:
mdadm --assemble --scan
mdadm: /dev/md1 assembled from 2 drives and 2 spares - not enough to start the array
Which is not fine. I'm sure that three devices are fine (normally, a failed device would just rejoin the array, skipping most of the resync by way of the bitmap) so I should be able to reassemble the array with the two good ones and the one that failed last, then add the one that failed during the resync and finally re-add the original offender. However, I have no idea how to get them out of the "(S)" state.
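(A frequently suggested way to get members out of the "(S)" state is to stop the array and force-assemble it while naming only the members believed good, so mdadm stops deferring to the stale spare superblocks. The device names below are placeholders, and the sketch only prints the candidate commands:)

```shell
# Dry run: stop the half-assembled array, then force-assemble with the
# two good members plus the one that failed last.
reassemble_plan() {
  printf '%s\n' \
    'mdadm --stop /dev/md1' \
    'mdadm --assemble --force /dev/md1 /dev/sda1 /dev/sdb1 /dev/sdc1'
}
reassemble_plan
```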
I am planning in the near future (about a month or two) to build a new computer. There will be at least two physical disks - one for the OS and another for data or backup. One disk will be a small (60-80 GB) SSD, the other an HDD of 1 TB or bigger.
I have a few questions:
Does Linux (openSUSE in my case) work with SSDs? Is it possible to install Linux on two disks - in my case with the swap, /boot and root (/) partitions on the SSD and the /home partition on the larger HDD?