I am working on:
SUSE Linux Enterprise Server 10 (ia64)
VERSION = 10
PATCHLEVEL = 2
Example 1: If RAID is enabled on the disk, then cat /proc/partitions shows the following:
104 0 35532720 cciss/c0d0
104 1 2104483 cciss/c0d0p1
104 2 33423232 cciss/c0d0p2
104 16 35532720 cciss/c0d1
104 32 35532720 cciss/c0d2
Example 2: With a normal (non-RAID) disk, the output is as follows:
8 0 35566480 sda
8 1 102383 sda1
8 2 409600 sda2
8 3 210925 sda3
8 4 2104515 sda4
8 5 32739023 sda5
For my application I need to find information about the disks. How can I get disk information when the system is RAID enabled (cat /proc/partitions shows the controller entries as in Example 1)? Can I get partition information for the disks when they are connected to the system through a controller?
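A minimal sketch, assuming the HP Smart Array (cciss) driver and a device node such as /dev/cciss/c0d0: the partitions behind the controller can be listed the same way as for sda, just using the controller's device node instead.
Code:
# List the partition table of the first logical drive on the controller
fdisk -l /dev/cciss/c0d0

# Partitions appear as p1, p2, ... under the same node
ls /dev/cciss/c0d0p*
In the Example 1 output, c0d0p1 and c0d0p2 are already the partitions of the first logical drive, so parsing /proc/partitions for names ending in "p<number>" is another option.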
I just assembled a PC: AMD X3 64-bit, 4 GB RAM, ATI HD4350, ASRock motherboard, and two WD 320 GB hard drives put into RAID 0.
To make this configuration, I did not enable RAID from the BIOS (since Ubuntu gave me problems with it) but instead followed this guide [URL] (official guides) to configure the software RAID offered by Ubuntu. Everything went OK and it works perfectly, but when 10.04 is released and the system asks me to update, do you think there will be a problem, or will the whole RAID configuration survive the upgrade unchanged?
My Fedora 11 system is not starting any longer. It stops with the message:
Code:
VFS: Can't find ext4 filesystem on dev dm-0
The system had been telling me for a while that a lot of sectors on one disk of the (software) RAID set had already failed. So I tried to disconnect each of the disks and start from them separately. Unfortunately this is not working (one of them does not work at all; the other gets exactly as far as when both are connected). When I tried to recover the system with the Fedora DVD, it said no distribution was found. I am quite new and do not know much about Linux systems, so I do not know what further information you could need. Maybe it is important that both disks are encrypted (the system gets far enough that I can type in the password).
I have an SiI hardware SATA RAID card with two 500 GB disks in a mirrored RAID configuration. When I first plugged them in and set it up, things seemed to work OK, but on boot the RAID controller told me that the RAID needed rebuilding and that it would happen automatically after POST. So I didn't worry about it, the drive mounted fine, and it's been that way for years. I just went in and manually rebuilt the RAID online in the controller's BIOS, and now when I boot into Ubuntu, both disks show up in fdisk, but neither shows up in /dev/disk/by-uuid. Am I missing something?
I had done a new Lucid install to a 1 TB RAID 1 array using the alternate CD a few weeks back. I messed up that system trying to get some hardware working that Lucid doesn't have drivers for yet, so I gave up on it and reinstalled to a single 80 GB disk that I now want to move over to the RAID array.
I moved all of the existing files on the array to a single folder, then copied all of the folders from the 80 GB disk over to the array with permissions and symlinks (minus the contents of /proc and /sys, which I created empty).
These are the commands I used:
Quote:
cp -a -d -R -v -t /media/raid_array /b*
cp -a -d -R -v -t /media/raid_array /d*
cp -a -d -R -v -t /media/raid_array /e*
cp -a -d -R -v -t /media/raid_array /h*
[Code]....
I tried to change fstab to use the 689a... for root, but when I try to boot, it's still trying to open /dev/disk/by-uuid/412d...
So then I booted from the single disk again and chrooted into the array, then ran update-initramfs -u. I got 3 "grep: /proc/modules: No such file or directory" errors, and "cat: /proc/cmdline: No such file or directory"- so I created directory /proc/modules, created an empty file /proc/cmdline, and ran the initramfs update again. Then I tried to shut down, which hung (probably because I was doing all of this from a terminal window in Gnome), so I killed the power after a couple of minutes.
It's still trying to use /dev/disk/by-uuid/412d... to boot.
What am I missing? I assume I just have to change the UUID to mount as root, but I don't know how.
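A hedged sketch of the usual chroot procedure, assuming the array's root filesystem is on /dev/md0 (adjust to your device) and is mounted at /media/raid_array: bind-mounting /dev, /proc and /sys into the chroot avoids the "No such file or directory" errors from update-initramfs, and update-grub rewrites the UUID that GRUB passes on the kernel command line.
Code:
# find the UUID of the array's root filesystem
sudo blkid /dev/md0

# bind the virtual filesystems into the chroot before regenerating anything
sudo mount --bind /dev  /media/raid_array/dev
sudo mount --bind /proc /media/raid_array/proc
sudo mount --bind /sys  /media/raid_array/sys
sudo chroot /media/raid_array

# inside the chroot: make sure /etc/fstab uses the new UUID, then
update-initramfs -u
update-grub
grub-install /dev/sda   # whichever disk(s) the BIOS boots from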
I need information about the capabilities and state of the disc... specifically, whether it's rewritable (capability) and whether it's currently blank (state). Currently, I'm using the output of hal-device to give me that information. The problem is that hald doesn't update the information in response to a burn event... this is a problem, because if I burn a blank disc, hal-device still lists it as blank... and if I blank a disc, HAL still lists it as non-blank.
Is there a method to force HAL to update a device, or is there a way to query the disk/drive directly to find out if the disk is blank and rewritable? Or even a method to determine if a disk is blank or not? Additionally, this is for a CD-burning application... so I'm intentionally trying to avoid anything hacky like ejecting the CD and reinserting it... anything of that nature.
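I'm not sure of a supported way to force hald to reprobe a drive, but as a hedged alternative the disc can be queried directly; assuming a drive at /dev/sr0, tools from the dvd+rw-tools and cdrkit packages report media type and blank status without going through HAL.
Code:
# prints "Disc status: blank" / "complete" / "appendable", plus the media type
dvd+rw-mediainfo /dev/sr0

# ATIP info for CD media: reports whether the disc is erasable (CD-RW)
wodim dev=/dev/sr0 -atip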
I am trying to get my head around my new server. I am using CentOS 5.4 x86_64 with a 300 GB hard drive.
The 300 GB has been partitioned as follows:
Device        Size   Used   Available   Percent Used   Mount Point
/dev/md0      99M    18M    77M         19%            /boot
/dev/md1      16G    8.7G   5.8G        61%            /
/dev/md2      246G   40G    194G        18%            /home
/dev/md3      4.8G   1.6G   3.0G        35%            /var
/usr/tmpDSK   3.9G   432M   3.3G        12%            /tmp
I have increased the tmpDSK as it was getting full very quickly. My question is: what are these md0, md1, md2 and md3? Are they hard drive partitions, and as md1 is getting full, will that have an impact on my sites?
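For what it's worth, md0-md3 are Linux software RAID (md) devices rather than plain partitions; what actually backs them, and how full they are, can be checked like this (device names as in the table above):
Code:
# shows which physical partitions make up each md device and their state
cat /proc/mdstat
mdadm --detail /dev/md1

# current usage of the filesystem on md1 (the / filesystem)
df -h /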
I need to write a script to report useful information on disk utilization for each user's home directory. For each directory I need to show: 1. the long listing of that directory entry (but not the files in the directory), so that I can see the rights and owners of the directory; 2. the amount of disk used by that directory, in human-readable format, including subdirectories. I need two lines for each user, one after the other. For example:
/home/user1 directory info
/home/user1 disk usage
/home/user2 directory info
/home/user2 disk usage
The script will assume that all users, except user root, have their home directories in the /home directory (no need to do anything with the /etc/passwd file). And if the administrator adds or removes users, the script should still work correctly (so the script shows the information for all current users).
Here's what I do know: the command "ls -ld /home/user's_name" will give me the info I need for #1, and the command "du -hs" will give me the info I need for #2. What I don't know is how to grab each individual directory in order to apply the above commands to each of them in turn.
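A minimal sketch of such a script, assuming every home directory lives directly under /home (as stated above) and that it is run with enough privileges to read those directories:
Code:
#!/bin/bash
# Report the directory listing and disk usage for each user's home directory.
for dir in /home/*/; do
    # strip the trailing slash so the commands operate on the directory entry
    dir=${dir%/}
    ls -ld "$dir"      # line 1: permissions, owner, group of the directory
    du -hs "$dir"      # line 2: total disk usage, human-readable
done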
I have a hypothetical situation in which I installed my operating system using a RAID1 mirror. At some point I decided that this setup was overkill, my machine isn't system critical, I value doubling my storage space more than speedy recovery, I'm doing routine backups, etc...
Short of backing up my system volume and repartitioning, or otherwise starting over, is there a way I can reconfigure my RAID1 array to only expect one disk so that mdadm no longer reports a Degraded state?
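Speaking hypothetically, mdadm can shrink a RAID1 array down to a single member so it stops reporting a degraded state; a sketch, assuming the array is /dev/md0 and the absent member has already been removed from it:
Code:
# tell the array it is only supposed to have one member from now on
mdadm --grow /dev/md0 --raid-devices=1 --force

# confirm the array now shows 1/1 devices and a clean state
mdadm --detail /dev/md0
cat /proc/mdstat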
Can someone point me in the direction of a step-by-step guide to setting up a RAID 5 using the Disk Utility and 3 spare drives? I have the main OS files on an 80 GB drive and I would like to mount the 3 drives as RAID 5. Just shooting in the dark now... a screenshot is attached. [URL]...
Has anybody ever used Disk Utility to set up software RAID? Here I am running terminal commands (I'm a terminal junkie) and I just happen to stumble across instructions that indicate "Or you can just set it up through Disk Utility."
Sure enough in disk utility, it looks like all of the configurable options are there. It makes me wonder, though... is this kind of GUI functionality something that isn't really solid? Or does it operate predictably and effectively?
I want to build a 6xSATA RAID 5 system with one of the disks as a spare disk. I think this gives me a chance of 2 of the 6 disks failing without losing data. Am I right? Hardware: Intel ICH10R. First I will create a 3xSATA RAID 5, then I will add the spare disk, and after that I will add the other disks. This is what I think I should do:
Step 1: Create the RAID device
Code:
mdadm --create --verbose /dev/md0 --metadata 1.2 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
I read that "--metadata 1.2" is the best option. Is that true? Then create a filesystem on the RAID device:
Using this method of calculation:
* chunk size = 128 kB (for RAID 5)
* block size = 4 kB (recommended for large files, and most of the time)
* stride = chunk / block = 128 kB / 4 kB = 32
* stripe-width = stride * ((number of disks in RAID 5) - 1) = 32 * (5 - 1) = 32 * 4 = 128
Then:
Code:
mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=128 /dev/md0
Step 2: Add the spare disk
Code:
mdadm --add /dev/md0 /dev/sdd1
Is this enough?
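That should register /dev/sdd1 as a hot spare. For the later step of adding the remaining disks and growing the array to 5 active members, a hedged sketch (device names here are assumptions) would be:
Code:
# add the two remaining disks; they join the array as spares first
mdadm --add /dev/md0 /dev/sde1 /dev/sdf1

# reshape from 3 to 5 active devices; mdadm promotes two of the spares,
# leaving one disk behind as the hot spare
mdadm --grow /dev/md0 --raid-devices=5

# once the reshape finishes, grow the filesystem to use the new space
resize2fs /dev/md0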
I have an IBM x306 server with an onboard RAID1 controller. One of the two disks is dead and I need to replace it. I'm using the "ServRaid Manager" to reconfigure the array. I shut down the server and replaced the dead disk with a new one, then started the server without doing anything manually, thinking that it would auto-sync, but it just didn't. I then restarted the server and logged in with ServRaid Manager. Now I want to reconfigure the array so that the new disk will sync. The problem is that in the array's properties the "Synchronize" option is grayed out; I just can't click on it. On the other hand, on the newly inserted disk's properties I was able to click "Rebuild", but after that things just stopped there; it's not synchronizing. How do I simply replace this new disk and get things working?
As you can see, they now show up as inactive, and for some reason sdi1 and sdh1 are not even listed. What can I do to get them back? To make matters worse, I placed some important data on them, and even though I was clever enough to keep an extra copy on another drive, guess which drive that was? So I need to get them activated as-is (at least so I can get the data off them) before I can rebuild them from scratch. I'm running Mandriva 2010.1 and created them using the built-in disk partitioner.
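I can't tell from here why the members disappeared, but as a hedged sketch the usual route for an inactive md array is to stop it and force a reassembly from whatever members are still readable (array and partition names below are assumptions based on the post):
Code:
# see what metadata mdadm can still find on the member partitions
mdadm --examine /dev/sdh1 /dev/sdi1

# stop the inactive array, then force-assemble and start it from its members
mdadm --stop /dev/md0
mdadm --assemble --force --run /dev/md0 /dev/sdh1 /dev/sdi1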
I have a system with data stored in multiple disk arrays. I have to come up with a solution that will maintain the disk order of the arrays whenever a stripe fails, is removed, and is then put back in. One solution I came up with was to stamp every stripe with the disk array it belongs to along with its stripe ID. I plan to put this stamp in the last 512 KB of each disk. I maintain all this information in a SQLite database (the disk array, stripe ID, the software disk name, etc.), so that whenever a disk is replaced, its stamp can be read and the corresponding entries in the database updated.
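A minimal sketch of reading and writing such a stamp with dd, assuming the reserved area is the last 512 KB of the disk; the device name and the stamp format are hypothetical:
Code:
DISK=/dev/sdb
RESERVED=$((512 * 1024))                 # 512 KB reserved at the end of the disk
SIZE=$(blockdev --getsize64 "$DISK")     # disk size in bytes
OFFSET=$(( (SIZE - RESERVED) / 512 ))    # start of the reserved area, in 512-byte sectors

# write the stamp (array id + stripe id) into the first sector of the reserved area
echo "array=array01 stripe=03" | dd of="$DISK" bs=512 seek="$OFFSET" conv=notrunc

# read it back when the disk is re-inserted
dd if="$DISK" bs=512 skip="$OFFSET" count=1 2>/dev/null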
The scenario: my server is in a data center on a different continent, with two disks and software RAID 1.
One day I see that a disk has failed (for example in /proc/mdstat). Of course I should replace the failed disk ASAP. Now that I think about it, I am not sure how. What should my email to the data center support person mention to make sure they don't replace the wrong disk?
With hardware RAID it is very easy, because the controller usually has some kind of red LED indicator. But what about software RAID?
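One hedged approach: software RAID reports the failed member by device name, and that device's serial number can be read from the drive itself, so the data center can match it against the label printed on the physical disk (device names below are examples):
Code:
# which member does the md layer consider failed?
cat /proc/mdstat
mdadm --detail /dev/md0

# read the model and serial number of the suspect drive
smartctl -i /dev/sdb
hdparm -I /dev/sdb | grep -i serial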
What I need is to mount several directories from other partitions (or file systems) as one merged file system that can grow or shrink depending on the free space, as if it were a dynamic RAID, so I can work with huge files distributed over the mounted partitions.
...concerning Linux, mdadm, and creating RAID arrays in Debian. I've done a lot of reading and research on RAID both on this board and elsewhere (The Linux Documentation Project's Software-RAID HOWTO is especially good), but I've run across something that no one seems to explain, and I'm not sure why. I'm instructed to create partitions on the drives I wish to add to my array. These partitions inevitably take up the whole disk, and always have their system IDs set to "Linux raid autodetect". What I don't understand is why, after creating these partitions, some guides then go on to create an array (say a RAID5 one) with just the disks themselves as members, while others create the RAID5 array with the previously created partitions as members. E.g.,
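Purely as an illustration of the two styles described above (hypothetical device names, not taken from any particular guide), the commands differ only in whether whole disks or the "Linux raid autodetect" partitions are passed as members:
Code:
# style 1: whole disks as members (no partition table is used by the array)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# style 2: one full-disk partition per drive as members
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1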
Purchased (4) 2 TB drives (actual disk space) and created a RAID5 array expecting to have 6 TB of usable disk space; however, the actual usable space is 5.46 TiB.
So, the question is where did the disk space go?
First off, I can say with certainty that the disks' actual usable capacity is 2 TB each, verified by mounting and formatting them on a non-Linux system (OS X).
Disks - 2 TB per disk, tested with HFS, actual 2 TB usable
root@server:/server# fdisk -l 2>/dev/null | egrep "sd[hijk]" | grep Disk
Disk /dev/sdh: 2000.4 GB, 2000398934016 bytes
Disk /dev/sdj: 2000.4 GB, 2000398934016 bytes
Disk /dev/sdk: 2000.4 GB, 2000398934016 bytes
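For what it's worth, the numbers above appear to add up once decimal TB and binary TiB are separated; a quick check of the arithmetic, using the byte counts from the fdisk output:
Code:
# RAID5 over 4 drives keeps (n-1) = 3 drives' worth of data capacity
echo $(( 3 * 2000398934016 ))            # 6001196802048 bytes
# 6001196802048 / 10^12 = 6.0  TB  (decimal, as printed on the drive box)
# 6001196802048 / 2^40  = 5.46 TiB (binary, as reported by most tools)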
I newly installed Debian Squeeze with software RAID. The way I did it is as given in this thread:
- I have 2 HDDs of 500 GB each. On each of them, I created 3 partitions (/boot, / and swap).
- I selected the hard drive and created a new partition table.
- I created a new partition of 1 GB, specified that it be used as a physical volume for RAID, used it for /boot, and enabled the bootable flag.
- I created another partition of 480 GB, specified that it be used as a physical volume for RAID, and used it for /.
- I created another partition and used it for swap.
Then the RAID configuration: through the Configure RAID menu -> Create MD device -> (2 for the number of drives, 0 for spare devices). Next, select the partitions you want to be members of /dev/md0: I selected /dev/sda1 and /dev/sdb1 (for /boot). Then select the partitions you want to be members of /dev/md1: I selected /dev/sda6 and /dev/sdb6 (for /). No RAID for the swap partitions.
'Finish partitioning and write changes to disk' -> finish the rest of the install as normal. Everything is OK now, except I am not sure how to test my RAID config. When I pull the power on one of the HDDs, it only boots from one disk. I read in some forum that I may have to install GRUB manually on the other drive, but in Debian Squeeze there is no grub command. I am not sure how to make my software RAID bootable from both disks. I configured the /boot partitions of both disks to be bootable; not sure whether that is OK.
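A hedged sketch, assuming Debian Squeeze's default grub-pc (GRUB 2) is in use: the command is grub-install rather than grub, and running it against each physical disk puts a boot loader on both halves of the mirror:
Code:
# install GRUB 2 into the MBR of both RAID member disks
grub-install /dev/sda
grub-install /dev/sdb

# regenerate the GRUB configuration (picks up the md root device)
update-grub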
Recently, the SMART utility said that one of the drives had failed and another drive was about to fail. I shut the box down and hooked the drives up to my Windows machine to run SeaTools on them (they are all Seagate drives). SeaTools said that the drives were fine, while Ubuntu said they were failing/dead. Yesterday I decided to try to fix one of the drives in the RAID. I turned the server off, took the failed drive out, and restarted. Of course the RAID didn't work because only 2 of the 3 drives were there; however, it had been working with only 2 of the 3 drives for a couple of months now (I'm a lazy college student). I turned it back off and on with the drive there just to see if I could get the RAID up again, but I haven't been able to get it to go. So far I've tried:
Code:
mdadm --assemble /dev/md0 /dev/sd[b,c,d]
mdadm: no recogniseable superblock on /dev/sdb
mdadm: /dev/sdb has no superblock - assembly aborted
[code]....
I'm looking for a way to trick the RAID into working with just 2 drives until I can warranty the Seagate and buy an external 1.5 TB drive to use as another backup, and for how to remove the bad drive from the array and replace it with a fresh drive without data loss.
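A hedged sketch of the usual sequence (device names are guesses based on the post and should be checked with mdadm --examine first): force-assemble the degraded array from the two members that still have superblocks, then add the replacement drive later so the array rebuilds onto it.
Code:
# check which devices still carry a usable RAID superblock
mdadm --examine /dev/sdb /dev/sdc /dev/sdd

# assemble and start the degraded array from the two good members
mdadm --assemble --force --run /dev/md0 /dev/sdc /dev/sdd

# later, after installing the replacement drive, add it so the array rebuilds
mdadm --manage /dev/md0 --add /dev/sde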
I am currently trying to do software RAID 1 on a running Ubuntu 9.10 system with mdadm. I might have done something wrong and I'm trying to go back to the beginning. Does anyone know how to remove all the RAID info from a hard disk and get it back to its original state?
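A hedged sketch, assuming the disk is no longer part of a running array: mdadm can wipe its RAID metadata, which is what makes the disk look like an ordinary disk again (replace the device names with your own):
Code:
# make sure the array is stopped and the member is no longer in use
mdadm --stop /dev/md0

# erase the md superblock from the member partition(s)
mdadm --zero-superblock /dev/sda1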
I have a fileserver which is running Ubuntu Server 6.10. I had a RAID5 array consisting of the following disks:
Code:
/dev/sda1
/dev/sdb1
/dev/sdd1
/dev/md0 - the RAID drive for the above three disks
The sda1 disk has failed and the array is running on 2 of 3 disks. The other disks in the system are:
/dev/sdc (OS disk)
/dev/sde (new 2 TB disk - unused)
/dev/sdf (new 2 TB disk - unused)
My plan is to rebuild the array using the two new disks as RAID1. Would the best way to do this be to create a new RAID1 device on /dev/md1 and then copy all the data over from /dev/md0? Also, this may sound stupid, but since all 3 drives in md0 are identical I'm not sure physically which disk is bad. I tried disconnecting each disk one by one and then rebooting, but the system doesn't appear to want to boot without the bad drive connected. I've already failed the disk in the array with mdadm, but I'm unsure of how to remove it properly.
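A hedged outline of both halves (device names are assumptions, and it assumes the new disks get a partition each first): remove the already-failed member from md0, identify the physical drive by its serial number, and create the new RAID1 from the two unused disks before copying the data across.
Code:
# remove the failed member from the old array (it was already marked failed)
mdadm /dev/md0 --remove /dev/sda1

# read the serial number so the physical drive can be matched by its label
hdparm -I /dev/sda | grep -i serial

# build the new mirror from partitions on the two unused 2 TB disks
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1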
What is the best way to install Windows and Linux on a two-hard-disk array? With fakeraid there are no problems in Windows, but Linux installation is almost impossible (I've tried, unsuccessfully). With software RAID, would it be impossible to share files between Windows and Linux? And finally, hardware RAID is possible, but cheap controllers have low performance. Is there any other way (apart from spending a lot of $$ on an Adaptec controller)?
I went to set up my Linux box and found that the OS drive had finally died. It was an extremely old WD Raptor drive in a hot box full of drives, so it was really only a matter of time before it just quit on me. Normally this wouldn't be such a big deal, but I had just recently constructed an md RAID5 array of 3 1 TB disks to act as an NFS mount for basically all of my important files. Maybe 2-3 weeks before the failure I had finished moving all of my most important stuff onto that array. Now, I know the array is intact; all the required data is sitting on those disks. Since only the OS-level disk failed, I should be able to get a new disk in there, reinstall Ubuntu, and then rebuild that array. How exactly do I go about doing that with mdadm? Do I create the array from the /dev devices like when I initially built the array?
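To be clear, this is only a sketch of the usual approach: an existing array is reassembled from the superblocks already on its member disks rather than re-created, since --create would rewrite the metadata (device names below are examples):
Code:
# scan all block devices for existing md superblocks and assemble what is found
mdadm --assemble --scan

# or assemble explicitly from the three member devices
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1

# record the array in mdadm.conf so it comes up automatically at boot
mdadm --examine --scan >> /etc/mdadm/mdadm.conf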