Slackware :: How To Create Raid Arrays With Mkfs.btrfs
Jun 21, 2011
What I have:
2x 150GB drives (sda) on a RAID card (RAID 1) for the OS (Slack 13.37)
2x 2TB drives (sdb) on that same RAID card (RAID 1, too)
2x 1.5TB drives (sdc, sdd) attached directly to the motherboard
2x 750GB drives (sde, sdf) attached to the motherboard as well
If I went about it the normal way, I'd create software RAID 1 out of the 1.5TB pair and the 750GB pair and LVM all the data arrays (2TB + 1.5TB + 750GB) together to get one unified disk. If I use btrfs, will I be able to do the same? I mean, I have read how to create RAID arrays with mkfs.btrfs and that some LVM capability is incorporated in the filesystem, but will it understand what I want it to do if I just say
Code:
mkfs.btrfs /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
probably not, eh?
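For what it's worth, mkfs.btrfs does accept a list of devices, but the data and metadata profiles have to be asked for explicitly (by default a multi-device filesystem stripes data). A hedged sketch, assuming everything should simply be mirrored at the btrfs level:
Code:
# Mirror both data and metadata across the member devices (btrfs "raid1" profile)
mkfs.btrfs -m raid1 -d raid1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
Note that btrfs decides block placement itself, so there is no direct equivalent of "mirror these two drives, then concatenate the results".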
View 3 Replies
Jun 10, 2011
With mdadm I was only able to add a drive back to the array using the --force option, and I don't feel comfortable using it that way. When I remove a disk in VMware, mdadm correctly reports that the drive is lost and the array is degraded (mdadm --detail /dev/md0). But after re-adding the drive, both mdadm and sfdisk immediately report the device as busy unless I use --force.
Recovery and repair of the degraded array worked fine with sfdisk --force and mdadm --add --force; it automatically started recovering and did not take long. What are the best practices for managing software RAID 1 arrays?
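For comparison, the usual remove/re-add cycle without --force looks roughly like the sketch below, with /dev/md0 and a hypothetical member /dev/sdb1; wiping the stale superblock before re-adding is the step that normally avoids the "device busy" complaint:
Code:
# Mark the member failed and pull it out of the array
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# Clear the old RAID metadata on the re-inserted (or replacement) disk
mdadm --zero-superblock /dev/sdb1
# Add it back and watch the resync
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat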
View 1 Replies
View Related
Apr 15, 2011
Have a customer who is due for a new system. As they just renewed their RHEL entitlement, they plan on ordering a Dell server without an OS preload. Two questions:
- Will RH let them download RHEL6 just by maintaining the entitlement when their current version is RHEL 3?
- The server will have two RAID arrays - one intended for /home, one for "everything else". As I've never done a clean load with two arrays, how do I select what file systems go on which array?
View 3 Replies
View Related
Jan 3, 2010
I am stuck partway through an LVM cleanup. I know a lot about AIX LVM, but this is the first time I am working with Linux LVM2. The problem is that I created two RAID arrays on the storage, which appeared as mpath0 and mpath1 devices (multipath) on RHEL. I created logical volumes and volume groups and everything was fine until I decided to clean up the storage arrays and ran the following script:
#!/bin/sh
# /scripts/numbers holds one array number per line (here: 1 and 2)
cat /scripts/numbers | while read numbers
do
    lvremove -f /dev/vg$numbers/lv_vg$numbers        # drop the logical volume
    vgremove -f vg$numbers                           # drop the volume group
    pvremove -f /dev/mapper/mpath${numbers}p1        # wipe the PV label on the multipath partition
done
Please note that numbers is a file in the same directory, containing the numbers 1 and 2 on separate lines. The script worked well and I was able to delete the definitions properly (however, I now think I missed a parted command to remove the partition definition from the mpath devices). When I created three new arrays, I got devices mpath2 to mpath5 on Linux and then created vg0 to vg2. By mistake, I ran the above script again for cleanup purposes, and now I get the following error message:
Can't remove physical volume /dev/mapper/mpath2p1 of volume group vg0 without -ff
After thinking it over, I realize that I have messed up (particularly because the mpath devices did not map in sequence to the vg devices; the mapping was mpath2 to vg0 and onwards). How can I clean up the LVM definitions now? Should I go for the pvremove -ff flag or investigate further? I am not concerned about the data; I just want to clean up these pv/vg/lv/mpath definitions so that LVM is in a clean state and I can start over with new RAID arrays from the storage.
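A sketch of the kind of cleanup that is usually involved, using standard LVM2 and multipath commands rather than anything from this thread; since the data is expendable, the force flags are acceptable here:
Code:
# See what LVM still thinks exists
pvs; vgs; lvs
# Deactivate and force-remove the volume group that still references the stale PV
vgchange -an vg0
vgremove -ff vg0
# Wipe the PV label, drop the leftover partition, and flush the multipath maps
pvremove -ff /dev/mapper/mpath2p1
parted /dev/mapper/mpath2 rm 1
multipath -F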
View 1 Replies
View Related
Feb 1, 2010
The mkfs.msdos symlink seems to be missing in Slackware 13.0 32-bit. However, it is there in 64-bit. This: [URL]...e_is_mkfs.vfat
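If only the symlink is missing, a hedged workaround — assuming the dosfstools package itself is installed and provides /sbin/mkdosfs — is simply to recreate the link by hand:
Code:
ln -s /sbin/mkdosfs /sbin/mkfs.msdos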
View 2 Replies
View Related
Jan 9, 2011
I'm trying to set up a RAID 5 array of 3x2TB drives and noticed that, besides having a faulty drive listed, I keep getting what looks like two separate arrays defined. I've set up the array using the following:
sudo mdadm --create /dev/md01 --verbose --chunk=64 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sde
So I've defined it as md01, or so I think. However, in the Disk Utility the array is listed as md1 (degraded) instead. Sure enough I get:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sde[3](F) sdc[1] sdb[0]
3907028992 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
So I tried getting info from mdadm on both md01 and md1:
user@al9000:~$ sudo mdadm --detail /dev/md1
/dev/md1:
Version : 00.90
Creation Time : Sun Jan 9 10:51:21 2011
Raid Level : raid5 ......
Is this normal? I've tried using mdadm to --stop then --remove both arrays and then start from scratch but I end up in the same place. I'm just getting my feet wet with this so perhaps I'm missing some fundamentals here. I think the drive fault is a separate issue, strange since the Disk Utility says the drive is healthy and I'm running the self test now. Perhaps a bad cable is my next check...
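This appears to be normal: the kernel names md arrays by minor number, so an array created as /dev/md01 shows up internally as md1 — they are the same device, not two arrays. A hedged sketch of starting over under the plain name so the two stop looking different (device names as in the post):
Code:
sudo mdadm --stop /dev/md1
sudo mdadm --create /dev/md1 --verbose --chunk=64 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sde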
View 3 Replies
View Related
Mar 30, 2010
I have created a software RAID 5 configuration on the second hard drive and it's working fine. I edited the fstab file so it auto-mounts at boot, but when I reboot the computer the RAID doesn't come up; I have to re-create the array by typing the "mdadm --create" command again and mount it manually. Is there any way to do this once, without retyping the commands after every reboot? I am also using Red Hat 5.
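Re-running mdadm --create after every boot is risky; the usual fix, sketched below with an assumed /dev/md0, is to record the array in /etc/mdadm.conf once so it gets assembled — not re-created — at boot:
Code:
# Save the existing array definition so the init scripts can find it at boot
mdadm --detail --scan >> /etc/mdadm.conf
# From then on the array is assembled rather than created
mdadm --assemble --scan
# With an ARRAY line in mdadm.conf, the fstab entry for /dev/md0 mounts normally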
View 1 Replies
View Related
May 29, 2010
I just did a fresh install of slack 13.1 on a separate drive to the one I was previously using. I've been having trouble getting lilo to work, so that I can choose between either drive. Lilo is currently installed to /dev/sda, with the old system on /dev/sda1 and the new installation on /dev/sdb1. I keep getting errors like these:
Code:
Fatal: Trying to map files from unnamed device 0x0011 (NFS/RAID mirror down ?)
I managed to install lilo from the old system by copying the kernel image from the new system into the /boot/ directory and running lilo. I am now on the new system and trying the same thing in reverse but it isn't working. I have searched around a bit and there's a lot of talk of chroot-ing into the other partition to run lilo. I don't understand why the process isn't working both ways though. I can't run lilo on my new installation even with the two kernel images in the local /boot/ folder. Is this something to do with btrfs or am I missing something to do with lilo? This is my lilo.conf file. I am trying to run lilo using this file from my new installation on /dev/sdb1 and getting the error given above.
[Code]...
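For reference, the chroot approach mentioned above usually looks something like this sketch, run from the new system to reinstall lilo for the old one on /dev/sda1; the bind mounts give lilo access to the device nodes it needs:
Code:
mount /dev/sda1 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
chroot /mnt lilo -v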
View 13 Replies
View Related
Feb 1, 2011
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
I was able to examine the disks though:
Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
code....
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
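One less destructive option worth trying before any re-creation — plain mdadm usage, not something from this thread — is a forced assembly with the original members, since --create with the wrong device order or chunk size can make the data unrecoverable:
Code:
mdadm --stop /dev/md1
# Try to bring the array up from the existing superblocks first
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
cat /proc/mdstat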
View 4 Replies
View Related
Jan 26, 2011
How can I create an initrd image using cpio instead of mkfs?
Right now I'm doing it like this:
Code:
dd if=/dev/zero of=initrd bs=1024 count=10000
mkfs -t ext2 -F -m 0 -b 1024 -i 1024 initrd
But I would like to move to cpio, because with dd, if you add something new, you might need to change the count. Also, cpio is what distros like Fedora and Ubuntu use.
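For reference, the cpio route usually looks roughly like the sketch below, assuming the initrd contents are staged in a directory tree; newc is the archive format the kernel expects for initramfs images:
Code:
cd /path/to/initrd-tree
# Pack the whole tree into a compressed cpio archive; no fixed-size image, so no count to adjust
find . | cpio -o -H newc | gzip -9 > /boot/initrd.gz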
View 2 Replies
View Related
Jun 5, 2010
I have never performed a rebuild of a RAID array. I am collecting resources that detail how to rebuild a RAID 5 array when one drive has failed. Does the BIOS on the RAID controller card start rebuilding the data onto the new drive once it is installed?
View 4 Replies
View Related
Mar 30, 2010
I want to remove the RAID 1 arrays on our CentOS server and use the drives standalone.
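A sketch of the usual teardown with mdadm, assuming a hypothetical /dev/md0 built from /dev/sda1 and /dev/sdb1; double-check device names before wiping anything:
Code:
umount /dev/md0
mdadm --stop /dev/md0
# Wipe the RAID metadata so the kernel stops treating the disks as array members
mdadm --zero-superblock /dev/sda1 /dev/sdb1
# Also remove the matching ARRAY line from /etc/mdadm.conf and the old fstab entry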
View 3 Replies
View Related
May 13, 2010
I have two SAS RAID controller cards in a Dell server in slots 2 and 3, each with an array hanging off it. I went to install a third card into slot 1, but when it boots it says two of my sd's have a bad magic number in the superblock and it wants me to create an alternative one, which I don't want to do. If I remove the new card, the server boots perfectly, like it did before I added the card. Is the new card trying to control storage that isn't hooked up to it because it's in slot 1, and confusing RHEL?
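One hedged guess: a controller added in slot 1 can shuffle the /dev/sdX ordering, so the system probes the wrong devices at boot. Referring to filesystems by UUID instead of device name sidesteps that; a sketch:
Code:
# List the filesystem UUIDs
blkid
# Then in /etc/fstab use, for example:
#   UUID=<uuid-from-blkid>   /data   ext3   defaults   1 2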
View 5 Replies
View Related
Jul 6, 2010
So I have a system that is about 6 years old, running Redhat 7.2 and supporting a very old app that cannot be replaced at the moment. The JBOD has 7 RAID 1 arrays in it, 6 of which are for database storage and one for the OS. We've recently run into some bad slowdowns and drive failures causing nearly a week of downtime. Apparently none of the people involved, including the so-called hardware experts, could really shed any light on the matter. Out of curiosity I ran iostat one day for a while and saw numbers similar to those below:
[Code]...
Some of these kind of weird me out, especially the disk utilization and the corresponding low data transfer. I'm not a disk I/O expert, so I'd appreciate any gurus out there willing to help explain what it is I'm seeing here. As a side note, the system is back up and running; it just runs sluggishly, and neither the database folks nor the hardware guys can make heads or tails of it. I've sent them the same graphs from iostat but so far no response.
View 1 Replies
View Related
Jul 12, 2009
I'm trying to figure out how to create a small array of 3x3 arrays such that I can do
Code:
SUM1(i) = SUM1(i) + SUM2
where i goes from 1 to 8 and SUM2 is a 3x3 array.
View 2 Replies
View Related
Sep 15, 2010
I'm trying to format a device with an XFS filesystem using the mkfs command. Suppose I have a /dev/sda1 device with 4096 blocks and I want to format the whole thing; how do I execute the command? I keep getting various errors while running it.
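Without the exact error it is hard to say more, but a typical invocation looks like the sketch below; -f is needed if the partition already carries an old filesystem signature, and the block-size flag is optional:
Code:
# Format the whole partition as XFS with a 4096-byte block size
mkfs.xfs -f -b size=4096 /dev/sda1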
View 7 Replies
View Related
Mar 20, 2011
I was interested in the idea of btrfs subvolumes, so I made a virtual machine and installed Slackware as per the instructions here: [URL] It all went very well, but when I tried to switch from the huge kernel to the generic kernel and use the initrd.gz generated from step 29 (except that I used 2.6.37.4-smp instead of whatever's there) in lilo.conf, it failed to boot. I also noticed that in the instructions themselves, the poster doesn't actually add the initrd.gz to lilo.conf, so I'm guessing the huge kernel has everything it needs to boot properly.
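A hedged sketch of building an initrd with btrfs support for the generic kernel and pointing lilo at it; the kernel version is the one mentioned above, and the root device is a guess:
Code:
# Build an initrd containing the btrfs module for the generic kernel
mkinitrd -c -k 2.6.37.4-smp -f btrfs -m btrfs -r /dev/sda1
# Then in /etc/lilo.conf, under the generic-kernel image stanza:
#   initrd = /boot/initrd.gz
# and re-run lilo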
View 13 Replies
View Related
Feb 28, 2011
If I have Windows installed in RAID 0, then install VirtualBox and install all my Linux OSes in VirtualBox, will they be a RAID 0 install without needing to install RAID drivers?
View 1 Replies
View Related
Jul 18, 2009
How can I create RAID 1+0 using two drives (one with data on it and a second one that is new)? Is it possible to synchronize the data drive with the empty drive and create RAID 1+0?
View 3 Replies
View Related
Oct 31, 2010
I've never worked with software RAID before, and I was a little worried about getting the hang of it before actually relying on it. So what I decided to do was create a RAID 5 on my Fedora 12 installation, then install Fedora 14 and see if the RAID 5 was still intact on the OTHER 4 disks that were NOT part of the OS drive. After installing Fedora 14 I noticed that the RAID 5 was broken, and I googled for hours and even started a thread here, but could not get it working again.
I decided to start from scratch and just delete all of the partitions and the RAID, then build a new RAID 5 using 4 disks of 2TB each. I googled for a long time and tried using both the CLI (mdadm) and the GUI (Disk Utility), but I was not able to get a new RAID 5 working. I've tried various examples, to no avail. Any link to a definitive resource on how to set up a RAID 5? I don't just mean the commands for mdadm, but also the actual disk formatting that's required before setting up a RAID, if it's even required. I say this for two reasons:
1) In Fedora 12, using Disk Utility, I didn't have to format the drives first or anything else. I just selected 4 drives, said to make them a RAID 5, and then formatted the array as ext4. But many examples I've seen say the drives must already be partitioned, and you use partition #1 of each drive to set up the RAID, whereas when I did it I didn't have any partitions (they were new drives) and I could format afterwards.
2) One example I've found says to use ext*, whereas another says to use xfs. I don't feel this matters; it's only a matter of choice/taste, right?
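A minimal mdadm sketch, assuming four blank 2TB drives sdb through sde, each carrying a single partition of type fd; mdadm can also use the whole disks, so pre-formatting the drives is not required:
Code:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# Filesystem choice (ext4 vs xfs) really is a matter of preference
mkfs.ext4 /dev/md0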
View 14 Replies
View Related
Apr 22, 2010
How can I create RAID 1 in RHEL 5?
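A minimal sketch with mdadm, assuming two spare partitions /dev/sdb1 and /dev/sdc1 of type fd:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md0
mkdir -p /mnt/raid1
mount /dev/md0 /mnt/raid1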
View 2 Replies
View Related
Jan 11, 2011
I'm using an Intel Mac Pro running Ubuntu Server 10.04 64-bit, and I have it working. Currently I'm just running Ubuntu via Boot Camp, not EFI. The OS drive is 250GB, which I know is more than enough, but it was what I had free at the time. I added three 1TB drives to the computer, but I'm not sure how to create a RAID with them. I've done some searching but still haven't been able to get it done successfully.
View 3 Replies
View Related
Aug 7, 2009
I have three hard drives in my computer that I want to make into a RAID 0. All of them already have partitions and data on them. What I want to know is whether I can add the disks to a RAID and then merge the partitions without losing data. All the partitions are of the same type. Or would it be easier/better/possible to do this with LVM? Even if I'd have to shrink partitions and copy data to a new LVM volume to get it set up properly, would it be better than RAID 0?
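RAID 0 cannot be layered under partitions that already hold data without destroying them, so LVM is the gentler route; a sketch, assuming one spare empty partition /dev/sdd1 to seed the volume group before data is migrated into it:
Code:
pvcreate /dev/sdd1
vgcreate datavg /dev/sdd1
lvcreate -l 100%FREE -n datalv datavg
mkfs.ext4 /dev/datavg/datalv
# Copy data over; each freed original partition can then be pvcreate'd and added with vgextend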
View 2 Replies
View Related
Jan 21, 2010
We can create normal RAID levels in CentOS using mdadm, but how can we create nested RAID levels (for example RAID 1+0 or RAID 0+1)?
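Nested levels are built the same way as plain ones, just stacked; a sketch of RAID 1+0 from four hypothetical partitions:
Code:
# Two RAID 1 pairs...
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
# ...striped together into RAID 0, giving RAID 1+0
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
mdadm also has a native --level=10 that builds the equivalent array in one step.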
View 2 Replies
View Related
Mar 10, 2010
I have a RAID 1 array which I set up on a CentOS box. I can configure the array fine; the problem is that every time I restart the machine I can no longer see the array and have to create it all over again. I tried doing a few searches but have come up with nothing so far.
View 19 Replies
View Related
Jan 1, 2010
I have a software RAID array using mdraid that consists of two 1.5TB drives that I use for storage, the array is mounted at /Storage. I am running out of space in the array so I ordered two more 1.5TB drives to create a 4 drive RAID 1+0 array which will be 3TB big. My question is how do I create the new array and not lose any data?
The drives and partitions are sdc1, sdd1, and soon to be sde1, sdf1. I currently have 4 RAID arrays (md0,md1,md2,md3). I think I can create the RAID 1+0 array with the two new drives, copy the data from my current array to the new one, remove the old array, then add the two original drives to the new array. But I wanted to ask on here first to make sure my data doesn't go poof.
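One common trick — sketched with the post's partition names and an assumed next array name of /dev/md4, and worth rehearsing on loop devices first — is to create the RAID 10 with two members missing, copy the data across, then attach the old drives. With the default near-2 layout adjacent slots form mirror pairs, so the present and missing slots are interleaved:
Code:
mdadm --create /dev/md4 --level=10 --raid-devices=4 /dev/sde1 missing /dev/sdf1 missing
mkfs.ext4 /dev/md4
# rsync the data from the old array, then stop it, zero its superblocks, and:
mdadm /dev/md4 --add /dev/sdc1 /dev/sdd1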
View 1 Replies
View Related
Jun 25, 2010
I've got 3 extra disks on OpenSuse 11.2 - all the same size. I've created a partition on all of them as type 0xFD. If I then try and add raid in yast I get "There are not enough suitable unused devices to create a RAID."
View 9 Replies
View Related
Nov 17, 2010
I have 10.04.1 on my server with a 250GB SATA drive. I have all my files on this hard drive. I'm running out of space, so I have another 250GB SATA drive I need installed. I want to create RAID 0 so I can expand my server's hard drive space. I don't want to lose the data on the original hard drive or reinstall to create the RAID. Is there a way to achieve this with mdadm without altering the first hard drive's data?
View 2 Replies
View Related
May 10, 2011
I have a server that has one drive with Ubuntu already loaded on it. I would like to add another drive and then create a mirrored RAID between the two.
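The usual approach — hypothetical device names, and it needs a backup first — is to build a degraded mirror on the new drive, copy the system over, boot from it, and only then attach the original drive:
Code:
# New drive is /dev/sdb; create a one-legged RAID 1 on it
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext4 /dev/md0
# Copy the running system across, fix fstab and the bootloader, boot from the mirror, then:
mdadm /dev/md0 --add /dev/sda1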
View 4 Replies
View Related