Server :: Get Summary Overview Of All The Md-devices At Once With Mdadm
May 13, 2010
I'm performing some experiments on a box with 5 x 36.4 GB disks. Right now I have partitioned all the drives into chunks of 1 GB and put them together three by three in RAID5. (Yeah, there are a lot of md-devices.) 'mdadm --query' and 'mdadm --detail' give summary/detailed information about the mds, on condition that I specify the device.
My question is whether there is a way to get a summary overview of all the md-devices at once. I've gone through the manual, but haven't found anything that resembles a useful command.
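For a quick overview, two commands should cover this (a minimal sketch, assuming only a standard mdadm install):

Code:
# one-line status of every active array, straight from the kernel
cat /proc/mdstat
# or a scan that prints one ARRAY summary line per md-device
mdadm --detail --scan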
I have a problem with my mdadm RAID. I wanted to know if anyone has any experience with shrinking RAID5 arrays. I was growing the array from 5 to 6 devices; however, the grow got interrupted and it has recovered to 5 drives. The 6th drive is toast and I am unable to re-add it to the system. I would like to remove the device listed as "removed". I have tried 'mdadm /dev/md0 --remove detached' and 'mdadm /dev/md0 --remove failed' with no success. I am running Ubuntu with kernel 2.6.28-11, and mdadm is v3.1.1.
Here is the output of "mdadm -D /dev/md0":

/dev/md0:
        Version : 0.90
  Creation Time : Wed Jan 12 00:46:41 2009
     Raid Level : raid5
     Array Size : 4883812480 (4657.57 GiB 5001.02 GB)
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
   Raid Devices : 6
  Total Devices : 5
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Mon Feb 15 20:25:07 2010
          State : active, degraded
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 74fa5199:84b88e81:4ae0fbae:92643084
         Events : 0.1331010

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8        0        3      active sync   /dev/sda
       4       8       64        4      active sync   /dev/sde
       5       0        0        5      removed
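Since the array now runs degraded with 6 expected members, one hedged way back to a clean 5-disk shape (an untested sketch for this exact state; the filesystem must first be shrunk below the new size) is:

Code:
# step 1: shrink the usable size to what a 5-disk RAID5 holds (4 data disks x Used Dev Size, in KiB)
mdadm --grow /dev/md0 --array-size=3907049984
# step 2: reshape down to 5 members; a backup file is mandatory when shrinking
mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-shrink.bak

Once the reshape finishes, the "removed" slot 5 should disappear from --detail on its own.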
I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced with, to make a RAID5 array using three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are being assembled at boot, but I can add the missing drive without any problems later; it's just that this takes hours to sync. Here is some information:
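A common cause (an assumption here, since the configs aren't shown) is that the array isn't recorded where boot-time assembly looks. A minimal sketch for Debian/Ubuntu-style systems:

Code:
# record the array so early boot can assemble it
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# rebuild the initramfs so the new ARRAY line is seen at boot
update-initramfs -u

Also note that a re-added drive only needs hours to sync when md has no write-intent bitmap; 'mdadm --grow /dev/md0 --bitmap=internal' makes later re-adds nearly instant.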
I have a squid proxy server (which I am very new to) which all traffic from my office goes through. The proxy itself is working fine, but I cannot get logwatch to email me a daily summary. logrotate seems to be throwing an error:
# logrotate /etc/logrotate.conf
error: squid:1 duplicate log entry for /var/log/squid/access.log
My /etc/logrotate.d/squid file is below... My access logs are in /logs/squid, not in /var/log/squid.
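That error usually means the same log path is matched by more than one logrotate stanza (for example once in /etc/logrotate.d/squid and once in a stray package-default file). A hypothetical corrected /etc/logrotate.d/squid, assuming the logs really live in /logs/squid:

Code:
/logs/squid/access.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    postrotate
        /usr/sbin/squid -k rotate
    endscript
}

Any other stanza still pointing at /var/log/squid/access.log would need to go, so each file is rotated by exactly one entry.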
Assume that I've got a folder called "Stuffs". In Fedora 15 you don't have a desktop folder; I mean, you can't use it like we were used to. Even if you activate the option that allows you to use the "desktop", it is very uncomfortable because you can't minimize windows quickly. So my question is: is there a way to put a "link" in the overview, so I can go into the overview, type "Stuffs" in the search bar, and get my "Stuffs" folder to open?
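One common workaround (a sketch, assuming the folder is ~/Stuffs and Nautilus is the file manager) is a small .desktop launcher, which GNOME Shell's overview search finds by name:

Code:
# save as ~/.local/share/applications/stuffs.desktop
[Desktop Entry]
Type=Application
Name=Stuffs
Exec=nautilus /home/YOURUSER/Stuffs
Icon=folder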
How to wrap the text in the overview for the icons: I tell you, I love open source. Straight from some of the guys who wrote GNOME Shell: yes, the reason it was left out by default is that it has some quirks. Yes, it wraps text; however, if you have long words, like this_file_has_no_spaces, then it will overflow into the icon on either side. But most application names have spaces. The obvious place you will see this happening is in recent items, where it displays system files etc. The other issue is that when you hover over/highlight an icon, it will only highlight the first line of text. That's pretty much it, and with a few lines of code you can increase the font size of applications and not have to worry about names getting cut off.
120GB SATA HDD - primary OS drive
3 x 1.0TB SATA HDD - RAID 5 array
This is on a C2D MSI P35 Platinum board. Anyway, I did a fresh install of F12 on the 120GB drive, which I had problems with: Anaconda refused to see the drive. Fedora Live could see it fine, and it was listed as an 'nvidia_raid_member' (no idea why), but I completely erased the disk under the Live CD and proceeded to install F12.
Once F12 was installed, I loaded up mdadm to re-activate my RAID 5 array using 'sudo mdadm --assemble --uuid=(the uuid)', and it started with only 2 of the 3 drives. My /dev/sdb drive did not activate into the array, due to what mdadm said was a mismatched UUID. OK, so I erased /dev/sdb, intending to rebuild the array. I erased /dev/sdb and then attempted 'sudo mdadm --add /dev/md0 /dev/sdb', and I get this error: "mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container". I can find NO information on this error message.
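That message normally comes from arrays using external (container) metadata, e.g. Intel/IMSM fakeraid, where the member array /dev/md0 lives inside a parent container device. A hedged sketch of the usual fix (the container name md127 is an assumption; 'cat /proc/mdstat' shows the real one):

Code:
# look for a line like "md127 : inactive ... (imsm)" naming the container
cat /proc/mdstat
# add the new disk to the container, not to the member array
sudo mdadm --add /dev/md127 /dev/sdb

This would also fit the earlier 'nvidia_raid_member' sighting: the disks may carry leftover BIOS-RAID metadata that makes mdadm treat the set as a container.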
I don't believe the hard drives are connected in the exact same order they were in before; I disconnected everything in the system and blew it out (it was pretty dusty).
I can't seem to get my RAID 5 (consisting of 8 x 1TB hard drives) assembled for some reason; I have no idea why and can't find any solutions online. I'll go ahead and show what my problem is.
Here are all my hard drives:
Code:
server:~$ sudo fdisk -l

Disk /dev/sda: 10.2 GB, 10242892800 bytes
255 heads, 63 sectors/track, 1245 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0004f041
So as you can see, the array for those last four looks fine; however, for the first four it marks the last four drives as faulty for some reason. I am kind of clueless about what to do from this point on, honestly; I have data on this array that I'd really like to save.
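When half the members think the other half are faulty, the standard (risky, hence hedged) approach is to compare event counts and then force assembly from the freshest superblocks. The device names below are assumptions:

Code:
# compare event counts and states across all eight members
for d in /dev/sd[b-i]1; do sudo mdadm --examine "$d" | egrep 'Events|State'; done
# if the counts are close, stop any half-assembled array and force a clean assemble
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force /dev/md0 /dev/sd[b-i]1

Not writing to the array until it assembles cleanly keeps this reversible.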
I have a machine that supports 4 SATA drives, with identical drives in each slot. Could someone please tell me how I can create a RAID5 array on Linux and also have the 4th drive (/dev/sdd) as a hot spare for any of the three drives in the array/volume? I did the following:
/device = type @ partition size
/dev/sda1 = fd @ 100 MB (bootable, for /boot)
/dev/sda2 = fd @ 1 GB (going to be used for swap)
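For the array itself, a minimal sketch (the data partition names sda3/sdb3/sdc3/sdd3 are assumptions based on the layout above):

Code:
# three active RAID5 members plus one hot spare that md pulls in automatically on failure
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
    /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3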
I've got 2 servers (xen1 and xen2 are their hostnames) with the configuration below. Each server has 4 SATA disks, 1 TB each.
16 GB DDR3, Debian Squeeze x64 installed:

root@xen2:~# uname -a
Linux xen2 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux
Storage configuration: the first 256 MB + 32 GB of 2 of the 4 disks are used as RAID1 devices for /boot and swap respectively. The rest of the space, 970 GB on all 4 SATA disks, is used as RAID10. LVM2 sits on top of that RAID10. The volume group is named xenlvm (the servers are expected to work as Xen 4.0.1 hosts, but this story is not about Xen troubles). /, /var and /home are located on logical volumes of small size (I just found out I got mixed up with LV names and partitions, but that's not the problem, I think):
My only goal is to have a RAID5 that auto-assembles and auto-mounts. Hardware: 4 x 2TB SATA (raid disks), 1 x 500GB IDE (OS disk), 1 x DVD IDE, all plugged directly into the motherboard (nForce 750i SLI).
Starting partitions on the raid disks: GPT, ext4. The problem occurs when I restart the machine after building the array for the first time. I was able to see it assemble, I was able to partition it, I even mounted it once. This is the second time I've built it, so I have watched everything that happened. I don't know if this has anything to do with my problem, but when I created the raid my drive designations were: sda - 500GB (OS), sd[bcde] - 2TB (raid). When I restarted: sd[abcd] - 2TB (raid), sde - 500GB (OS).
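Because the sd* names move around, both the assembly and the mount should key off UUIDs rather than device names. A sketch (device node and mount point are assumptions):

Code:
# pin the array by UUID so assembly survives sd* reshuffles
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# mount the filesystem by its own UUID rather than /dev/sdX naming
echo "UUID=$(sudo blkid -s UUID -o value /dev/md0p1) /mnt/raid ext4 defaults 0 2" | sudo tee -a /etc/fstab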
My home-backup server, with 8 x 2TB disks, won't boot anymore. Two disks failed at the same time and I rebuilt the RAID 6 array without any problem, but now I can't boot the OS. I'm using Ubuntu Server 10.10. I've taken screenshots of the displays so I don't have to copy everything here. The problem at boot:
And the GRUB config: it's not a production server, but I would like to have it online. I've tried for the last 2 days (just a couple of hours a day) but without success. I was advised to do "mount -o remount,rw /" and then edit /etc/fstab, but I get a "file doesn't exist" error.
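If the rescue shell can't even see /etc/fstab, a common fallback (a sketch; device names assumed) is to repair from a live CD:

Code:
# from a live CD: assemble the array, mount the root filesystem, and chroot in
sudo mdadm --assemble --scan
sudo mount /dev/md0 /mnt
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt
# now /etc/fstab is editable and 'update-grub' can rewrite the boot config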
I have been having some odd issues over the last day or so while trying to get a RAID 5 array running in software under Kubuntu. I installed 3 x 1TB drives and started up; my sd* order got all messed up (sda was now sdc and so on). This wasn't entirely unexpected, so I fixed up fstab and booted again. I found all three of the drives I installed, set them to raid auto-detect and used mdadm to create /dev/md0. I then created mdadm.conf by piping the output of mdadm --detail --scan --verbose into /etc/mdadm.conf.

At this point, everything was still going swimmingly. I copied over a few hundred GB of data from another failing drive and everything seemed OK. I went to reboot once the copy was done and everything just went weird. All of the sd* drives went back to the original order. Of course, this meant that the mdadm.conf was wrong. I tried to just change the device list, but that didn't work. I then deleted mdadm.conf and rebooted. The drive list stayed in the original order this time, so I just tried manually starting the array.
By erasing the partition table of the 3rd drive, I've been able to get it to the status of spare, but it says it is busy when I try to add it to the array. A grep through dmesg makes me think that md has a lock on it. I'm not sure where to go with it now. If anyone has any pointers, I would like to hear them.
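A "busy" disk at add time is often held by a stale, half-assembled array that grabbed it at boot. A hedged sketch (the stale name md_d0 and the disk name are guesses; /proc/mdstat shows the real ones):

Code:
# check whether the disk is claimed by another md device
cat /proc/mdstat
# stop the stale array, then retry the add
sudo mdadm --stop /dev/md_d0
sudo mdadm --add /dev/md0 /dev/sdc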
My main concern is: will I be able to just swap the drives into the new system and have everything pick up where it left off? More specifically, will mdadm be able to detect the 8 x 2TB drives attached to the new hardware and re-assemble the array?
My buddy who helped me set this system up isn't sure, so I figured I'd ask here first. The boards do have the same ICH10R southbridge providing 6 of the SATA ports, and 2 more will be run off of the extra onboard controller. I don't have a lot of Linux experience switching out core parts, but in Windows I've had great success moving things between various Intel chipsets and architectures, from P965 -> P35 -> P45 -> H55 -> X58.
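Since mdadm stores its metadata in superblocks on the disks themselves, the array travels with the drives regardless of controller. A quick check after the swap (a sketch; no array needs to be running for it):

Code:
# prints an ARRAY line per discovered array if the superblocks survived the move
sudo mdadm --examine --scan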
I have a Linksys NSLU2 with four USB hard drives attached to it. One is for the OS; the other three are set up as a RAID5 array. Yes, I know the raid will be slow, but the server is only for storage and will realistically get accessed once or twice a week at the most. I want the drives to spin down, but mdadm is doing something in the background to access them. An lsof on the raid device returns nothing at all. The drives are blinking non-stop and never spin down until I stop the raid. Then they all spin down nicely after the appropriate time.
They are Western Digital My Book Essentials and will spin down by themselves if there is no access. What can I shut down in mdadm to get it to stop continually accessing the drives? Is it the sync mechanism in the software raid that is doing this? I tried setting the monitor to --scan -1 to get it to check the device just once, but to no avail. I even went back and formatted the raid with ext2, thinking maybe the journaling had something to do with it. There are no files on the raid device; it's empty.
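One way to find out exactly what is touching the disks (a diagnostic sketch; block_dump exists on kernels of that era, though it is deprecated on modern ones):

Code:
# log every block-device access into the kernel log for a minute
echo 1 > /proc/sys/vm/block_dump
sleep 60
dmesg | tail -n 30
echo 0 > /proc/sys/vm/block_dump

The entries name the process and device behind each access, which separates md's own superblock updates from filesystem or monitoring activity.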
I run:

mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1

and I get:

md1: raid array is not clean -- starting background reconstruction

Why is it not clean? Should I be worried? The HD is not new; it has been used before in a raid array, but it has been repartitioned.
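A hedged note: an array created with one member listed as "missing" starts degraded, and md flags a degraded, never-synced array as "not clean"; with only one mirror half present there is nothing to reconstruct onto, so the message should be harmless. If the initial sync itself is the concern, mdadm does offer a create-time flag:

Code:
# --assume-clean skips the initial sync; only sensible while the disks hold no data you care about
mdadm --create /dev/md1 --level=1 --raid-devices=2 --assume-clean missing /dev/sdb1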
Server with 4 disk partitions in a RAID 5 array using md. Yesterday the array failed, with two devices showing as faulty. After rebooting from rescue, I was able to force the assembly and start the array, and everything looks to be okay as far as the data goes, but when I run:

Code:
mdadm --examine /dev/sda3
I get (truncating to the interesting bits):

Code:
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 1448195216 (690.55 GiB 741.48 GB)
     Array Size : 4344585216 (2071.66 GiB 2224.43 GB)
  Used Dev Size : 1448195072 (690.55 GiB 741.48 GB)
     Array Slot : 0 (0, 1, 2, failed, 3)
    Array State : Uuuu 1 failed
And "mdadm --detail /dev/md1" yields:

Code:
     Array Size : 2172292608 (2071.66 GiB 2224.43 GB)
  Used Dev Size : 1448195072 (1381.11 GiB 1482.95 GB)
   Raid Devices : 4
  Total Devices : 4
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
Additionally, the 'slot' for devices a-d lines up like this: a - 0, b - 1, c - 2, d - 4 (!). The first number in 'Array Size' from examine is twice as big as it should be, based on the output from detail and comparing to a twinned server. And why do the 'Array State' and 'Array Slot' from examine indicate there's a 5th device that's not indicated anywhere else?
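A hedged observation on the factor of two: with version-1.x superblocks, --examine reports sizes in 512-byte sectors while --detail reports KiB, so 4344585216 sectors x 512 bytes = 2172292608 KiB; the two numbers describe the same array (note the matching "2071.66 GiB" in both outputs). The phantom 5th entry is consistent with the superblock still carrying a record of the previously failed slot after the forced assembly.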
I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04, then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't: when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says:

Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
mdadm: Not enough devices to start the array.

I have tried 'mdadm --assemble --scan' and it gives this output:

mdadm: /dev/md0 assembled from 2 drives - not enough to start the array.

I know that there are 4 drives present, as they are all showing, but it is only using 2 of them. I also ran 'mdadm --detail /dev/md0', which gave:
root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
I am currently having problems with my RAID partition. First, two disks were having trouble (sde, sdf). Through smartctl I noticed there were some bad blocks, so first I set them to fail and re-added them, so that the RAID array would overwrite these. Since that didn't work, I went ahead and replaced the disks. The recovery process was slow and I left things running overnight. This morning I found out that another disk (sdb) has failed. Strangely enough, the array has not become inactive.
Does anyone have any recommendations on the steps to take with regard to recovery/fixing the problem? The disk is basically full, so I haven't written anything to it in the interim.
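Before any further rebuild attempts, a cautious first step (hedged; paths and devices are assumptions) is to image the suspect disks so every experiment is repeatable:

Code:
# ddrescue copies what is readable first (-n skips the slow scraping pass)
# and records progress in a map file so it can resume
sudo ddrescue -n /dev/sdb /mnt/spare/sdb.img /mnt/spare/sdb.map

With images in hand, a forced assemble can then be tried against the freshest set of members without risking the originals.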
When we assemble a RAID array, where does it load the configuration information for that array from? I thought it referred to the /etc/mdadm.conf file, but on my system the mdadm.conf file doesn't even contain all the information. Still, it is able to successfully assemble the previously created device.
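That works because the authoritative description of an array lives in a superblock written on each member device; mdadm.conf mostly narrows down which devices to scan and what names to use. You can print what assembly actually reads (the device name is an example):

Code:
# dump the on-disk superblock that --assemble relies on
sudo mdadm --examine /dev/sdb1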
Seems like I killed my softraid. First, my initial setup: 2 x 1.5TB SATA disks -> RAID1 with mdadm -> LVM -> multiple LVs as LUKS partitions. This worked for a while, even though I made a bad mistake: I created the raid out of the whole disks (instead of creating partitions on them). I didn't notice because it worked...

Then one disk failed, but I could still access the other disk, which I moved to another server. mdadm recognized it during boot; after 'vgscan --mknodes; vgchange -ay' I saw the LUKS partitions in /dev/mapper/ and could mount them via luksOpen. This went well several times, and I did not use the disk otherwise, to avoid killing it too.

Just today, when I wanted to move the stuff to my new raid, this approach wouldn't work anymore.

First of all, dmesg reports a wrong size (500G instead of 1.5T).
I have a raid 5 array that appears to have died. I was just routinely looking at /var/log/messages and noticed that a drive in the array was complaining (via SMARTD).
This is the /home directory, and so is backed up, so it's not critical, but I'd like to get some things that changed after the last backup (the week before I noticed the failure).
Let me start by outlining what I know:
It's a 2TB array spread over three disks (mdadm software RAID5), here are the drives:
mdadm gives the drives:
Now, the array *was* up OK, but I unmounted it (/dev/md0 was mounted at /home). Yes, I know. I didn't want any changes being made to the array by anything; at least that was my thinking at the time. In hindsight, I would have killed any processes, locked out the server, backed up again and *then* unmounted it.
But we are where we are, I'm sure there'll be time for recriminations later.
When I try to remount it, I get:
OK, looks like it's lost the filesystem type. It has normally worked; maybe we'll give it a little hint: it's ext3 with a journal.
When I tell it it's ext3, I get:
Now, before I go charging off specifying superblocks further along the disk: I can't remember where they're stored.
Neither can I recall what blocksize I originally created the filesystem with (I have a feeling I specified 4K, but I could be wrong).
debugfs is only telling me:
I should also point out that this server is hosted 150 miles away from me, so I can't just whip the drives out and dd a copy.
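On the backup superblock locations: they are predictable from the filesystem geometry, and mke2fs can list them without touching the disk. A hedged sketch (read-only flags throughout; device name assumed):

Code:
# -n prints what mke2fs *would* do, including backup superblock locations, writing nothing
mke2fs -n /dev/md0
# then try fsck read-only against a listed backup (32768 is typical for 4K blocks)
e2fsck -n -b 32768 /dev/md0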
I keep getting a bunch of these warnings and was wondering what they mean and if they are anything that I should be worried about.

Code:
JFFS2 warning: (1038) jffs2_sum_write_data: Summary too big (-32 data, -1305 pad) in eraseblock at 00400000
I have a problem with Kontact since KDE 4.3.x (now I'm using 4.4.1). In the earlier version (KDE 3) I had a summary with calendar, holidays, notes and mail notifications in the left column, and RSS (Akregator) channels in the right column. I can't do the same thing with the new Kontact: the left column takes up the whole summary window. I don't have an Akregator plugin in the Summary configuration, but Akregator is of course installed:
I use the following command to find files which are older than 60 days:
find /myfolder/* -mtime +60 > /myfolder/file.list
Now I want to find the total size of all files contained in file.list using the du command:

less /myfolder/file.list | xargs du -sh

The du command prints the file sizes of every single file but not the summary, which is actually what I want.
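du's -c flag appends exactly that grand-total line. A sketch assuming GNU find/du, with NUL separation so filenames containing spaces survive:

Code:
# -c adds a "total" line; --files0-from=- reads NUL-separated names from stdin
find /myfolder -mtime +60 -print0 | du -ch --files0-from=- | tail -n 1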
I am planning on setting up a 4 x 1TB RAID5 with mdadm under Ubuntu 9.10. I tried installing mdadm using "sudo apt-get install mdadm"; all worked fine except for the following error:

Code:
Generating array device nodes... /var/lib/dpkg/info/mdadm.postinst: 170: /dev/MAKEDEV: not found
failed.

The end result is that the /dev/md0 device has not been created, as can be seen here:
Code:
windsok@beer:~$ mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory

After googling, I found the following bug which describes the issue: [URL] However, it was reported way back in April 2009 and it does not look like it will be fixed any time soon, so I was wondering if anyone knows a workaround for this bug, to get me up and running?
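Two workarounds that have been reported for this class of packaging bug (hedged; the md device numbering is an assumption):

Code:
# create the first md device node by hand (md devices use major number 9)
sudo mknod /dev/md0 b 9 0
# or give the postinst the MAKEDEV it expects, then let dpkg finish configuring
sudo ln -s /sbin/MAKEDEV /dev/MAKEDEV
sudo dpkg --configure -a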
We have some servers that run in very harsh environments (a research vessel) that need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc.), however we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAID'ed devices (from a live CD):

Code:
dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img

then reverse the process on the new PC, finally using

Code:
mdadm --assemble

to re-create and re-build the array.
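The restore half of that idea, as a hedged sketch (partition and device names are assumptions; a whole-disk image carries the partition table and md superblocks with it):

Code:
# unpack the image onto the first disk of the new machine
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096
# start the mirror degraded from that one disk, then let md rebuild onto the blank second disk
mdadm --assemble --run /dev/md0 /dev/sda1
mdadm --add /dev/md0 /dev/sdb1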
It's from a Synology box with 3 disks, one of which is damaged. But this disk wasn't in use. (Take a look at the raid size of 493 GB, and the two available disks with 250 GB each.) On the others there was a linear raid. When this disk was damaged, the Synology device told me that the volume had crashed. But it looks like this disk was not mounted into this volume. Quote:
DiskStation> mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90