Debian Configuration :: Mixing Partitions In RAID 5
Mar 21, 2011
I have 2x 1.5TB hard disks and I'm going to buy a new 2TB drive soon. First, though, I wanted to check that I could partition off the first 1/4 to 1/3 of the 2TB drive (leaving 1.5TB or more free), install Debian to that part, and then use the remainder of the disk in combination with the 2x 1.5TB drives in RAID 5. In other words, can you mix whole drives with partitions from other drives in RAID 5, or is it best to stick with complete drives for the array? I only have room for 3 drives in the small mATX case that houses my NAS device, and I want to maximise storage capacity and minimise expense.
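If mixing is allowed, I imagine the array would be created along these lines (a minimal sketch; it assumes the 2TB drive's data partition ends up as sdc2 and the 1.5TB drives are sda and sdb, which are made-up names):

Code:
# create a RAID 5 set from two whole drives and one partition
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc2
mkfs.ext4 /dev/md0

Since RAID 5 capacity is (members - 1) times the smallest member, the sdc2 partition would need to be at least as large as the 1.5TB drives to avoid wasting space.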
I currently have a mirrored RAID array with 2x 1TB HDs (Western Digital Caviar Green WD10EADS SATA). They are both standard 512-byte-sector drives. I just ordered a new 1TB hard drive (Western Digital Caviar Green WD10EARS SATA), as I want to wipe the current mirror and use the 3 disks in a RAID 5 array.
But to my surprise, the new 1TB drive is an "advanced format" HD with 4K (4096-byte) sectors... Am I screwed, or can this 4K-sector drive be used in a software RAID 5 array along with the 512-byte-sector hard drives?
(The WD drive's label says you can jumper it to work with XP, which can only use 512-byte sectors... there's also a utility you can run that does something to make it XP-compatible too.... It makes me wonder whether these new drives are backwards compatible at all.)
I have two hard disks, sda and sdb. Is it possible to install the Debian root onto software RAID across partitions sda2 and sdb1, leaving all the other partitions 'normal' (non-RAID)? Do partitions sda2 and sdb1 need to be exactly the same size and in the same position on each disk?
I need to set up a RAID 1 array on Squeeze. I have 3 partitions: sda1 is root, sda5 is home, and sda6 is swap. (sda2 is the extended partition containing home and swap. This was a clean installation, so I don't know what happened to sda3 and sda4...)
All the information that I've been able to find recommends doing something like this:
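Presumably something like the usual degraded-array approach (a sketch only, assuming the second disk is sdb and has been partitioned to match sda; the device names are placeholders):

Code:
# create the mirrors in degraded mode on the new disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb5
# copy the data over, point /etc/fstab and the bootloader at the md devices,
# then add the original partitions so they resync
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda5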
We've started using Debian-based servers more and more at work and are getting the hang of them more every day. Right now I'm an ace at setting up partitions, software RAID and LVM volumes through the installer, but if I ever need to do the same thing once the system is up and running, I come unstuck.
Is there any way I can get to partman post-install, or any similar tools that do the same thing? Or, failing that, are there any simple guides to doing these things through the various command-line tools?
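For reference, the pieces partman drives in the installer can all be reached directly from a shell; a rough sketch of the equivalents (device and volume names are just examples):

Code:
# partitioning
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 1MiB 100%
# software RAID
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# LVM on top of the array
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 50G -n data vg0
mkfs.ext4 /dev/vg0/data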
Just set up Debian 8 (LXDE) a few weeks ago. The RAID 10 array was pre-existing.
It was working well: after booting I would need to go to 'Save As', then enter the root password, and everything would be good.
Now I can't access the array.
Under Debian 7.6 I used to mount the array with $ mount /dev/dm-0 /home/myspace/folder, but that no longer works. blkid lists a /dev/md0, but instead of a UUID it shows a PTUUID.
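Presumably the array just needs to be (re)assembled and then mounted by its md name; a sketch of what I would try (the mount point is made up, and a PTUUID on md0 suggests it holds a partition table rather than a bare filesystem):

Code:
# assemble any arrays listed in /etc/mdadm/mdadm.conf or found by scanning
mdadm --assemble --scan
cat /proc/mdstat                       # confirm md0 is active
blkid /dev/md0*                        # see whether the filesystem sits on md0 or md0p1
mount /dev/md0p1 /home/myspace/folder  # mount the partition, not the raw device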
Is there a way I can take around 50GB from my home partition (I have 375GB available but am only using 22GB) and give it to the root partition? Twice now my system has almost run out of space on root; luckily I was able to clear out old stuff, so I didn't have login issues after finding out the hard way the first time round. I just want to make sure I can log in without being forced back out because root doesn't have the space to let me log in.
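If / and /home happen to be LVM logical volumes, the move is fairly painless (a sketch only, with hypothetical volume names; /home must be unmounted for the shrink, and with plain partitions a live CD and gparted would be needed instead):

Code:
umount /home
e2fsck -f /dev/mapper/vg0-home
resize2fs /dev/mapper/vg0-home 300G    # shrink the filesystem first, with headroom
lvreduce -L -50G /dev/mapper/vg0-home  # then shrink the logical volume
lvextend -L +50G /dev/mapper/vg0-root
resize2fs /dev/mapper/vg0-root         # grow root's filesystem online
mount /home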
I have two partitions in LVM. They are added to /etc/fstab to be mounted automatically, but they are not being mounted. The mounting seems to happen before the /etc/init.d/lvm2 service is started. I can mount them with "mount -a", but not during boot. What should I do to get them mounted automatically on every boot?
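One common workaround, if the ordering itself can't be fixed, is to keep those entries out of the early boot mounts and mount them late instead (a sketch; the volume and mount point names are made up):

Code:
# /etc/fstab: mark the LVM filesystems noauto so the early "mount -a" skips them
/dev/mapper/vg0-data   /srv/data   ext4   defaults,noauto   0   2

# /etc/rc.local: activate the volume group and mount them at the end of boot
vgchange -ay vg0
mount /srv/data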
I have Debian Testing. I am testing XFCE and LXDE and I want to use a display manager other than GDM. I have tried SLiM and XDM, but when I use them I can't mount partitions or USB drives through Thunar, PCManFM or Nautilus - I get a message that I am not authorized (running 'groups' in a terminal gives: adm dialout fax cdrom floppy tape audio dip video plugdev games fuse powerdev netdev lpadmin scanner sambashare). When I install GDM everything works fine. I have installed FUSE, HAL, udev... I have tried a lot of suggestions from the ArchLinux forums but nothing has really worked.
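The usual explanation is that GDM registers a ConsoleKit session for you while SLiM and XDM do not, so PolicyKit refuses the mount. A sketch of the fix commonly suggested for that setup, assuming the session is started from ~/.xinitrc:

Code:
# ~/.xinitrc (or the command referenced by login_cmd in /etc/slim.conf)
exec ck-launch-session dbus-launch startxfce4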
What I need is to mount several directories from different partitions (or file systems) as a single merged file system that can grow or shrink depending on the free space, as if it were a dynamic RAID, so I can work with huge files distributed over the mounted partitions.
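This sounds like a job for a union/pooling filesystem; mhddfs is one FUSE option that behaves this way (a sketch, with made-up mount points: files land on whichever member still has free space, and the pool's size is the sum of the members):

Code:
apt-get install mhddfs
# pool two existing mount points into one tree
mhddfs /mnt/disk1,/mnt/disk2 /mnt/pool -o allow_other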
I'm trying to do some RAID management with mdadm. I would like to sync my spare disk and then remove it from the array to make a backup of it with dd (the best way I can think of to get a current image of the whole system, since it can't be done using the active RAID as the source because it is constantly in use and changing). So, I have a RAID 1 array with 1 spare and 2 active disks (configuration listed below). Now I would like to force the spare to sync and then remove it from the array, even though it is not faulty.
However, mdadm man page states: "Devices can only be removed from an array if they are not in active use. i.e. that must be spares or failed devices. To remove an active device, it must be marked as faulty first."
So I'd have to mark a disk as faulty (which it is not) to be able to remove it from the array. There seem to be several people reporting that they can't clear a faulty flag accidentally given to a drive, and mdadm does not provide a direct command for such an operation. Isn't there a way I could remove and add disks whenever I feel like it? One way would be to open the case and physically remove the disk, but I'm not taking that risk. The system is almost always in use, so there is little chance for me to power it off for a temporary disk removal.
RAID CONFIGURATION:

Code:
~# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Fri Aug  4 17:38:26 2006
     Raid Level : raid1
     Array Size : 238950720 (227.88 GiB 244.69 GB)
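For what it's worth, the usual sequence is to fail and remove the member explicitly, image it, and add it back; the faulty mark only lives until the device has been re-added and resynced (a sketch, with sdc1 standing in for whichever member gets pulled):

Code:
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
dd if=/dev/sdc1 of=/backup/md0-member.img bs=1M   # image the now-idle member
mdadm /dev/md0 --add /dev/sdc1                    # re-add; it resyncs and the faulty flag is gone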
This concerns Linux, mdadm, and creating RAID arrays in Debian. I've done a lot of reading and research on RAID both on this board and elsewhere (The Linux Documentation Project's Software-RAID HOWTO is especially good), but I've run across something that no one seems to explain, and I'm not sure why. I'm instructed to create partitions on the drives I wish to add to my array. These partitions inevitably take up the whole disk and always have their system IDs set to "Linux raid autodetect". What I don't understand is why, after creating these partitions, some guides then go on to create an array (say a RAID 5 one) with just the disks themselves as members, while others create the RAID 5 array with the previously created partitions as members. E.g.,
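the two variants I keep seeing look roughly like this (a sketch; the drive names are just examples):

Code:
# whole disks as members
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# versus partitions as members
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1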
If you want, skip straight to the QUESTION at the end of my post and refer to the EXPLANATION later. EXPLANATION: Using Debian 6.0.1 Squeeze 64-bit. I just put together a brand new 3.3GHz 6-core AMD box. I had a nightmare with my HighPoint 640 RAID controller, apparently because Debian Squeeze now handles RAID through sysfs rather than /proc/scsi; the solution, of course, is to recompile the kernel with the appropriate module for /proc/scsi support. So I thought "screw that", yanked out the RAID card and went with Debian's software RAID. This allowed me to basically complete my mission: the RAID is totally up and running, except for one final step... I can't get the RAID to automount at boot.
My hardware setup:
- Debian is running entirely on a 64GB SSD (sda).
- 3x 2TB hard drives are used for storage in a RAID 1 array (sdc, sdd, sde).
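The usual way to get an mdadm array assembled and mounted at boot is to record it in mdadm.conf, regenerate the initramfs, and add an fstab entry (a sketch, assuming the array is /dev/md0 with an ext4 filesystem and /mnt/storage as the mount point):

Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the ARRAY line
update-initramfs -u                              # so it is assembled early in boot
echo '/dev/md0  /mnt/storage  ext4  defaults  0  2' >> /etc/fstab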
I'm new to Linux in general and am having issues setting up a RAID 1 array for two disks on an HP ProLiant MicroServer, which I would like to be accessible from my Windows PC. I have installed the latest version of Debian successfully on the 250GB disk that came with the server. I have added 2x 2TB disks which I would like to have in a RAID 1 array, visible from Windows, to store music/videos etc. on. I have managed to partition the two disks as FAT32 (which I think is best) and have configured the array so that it shows as active when I run cat /proc/mdstat. I have been following the steps in this article [URl]... squeeze-p2 and trying to adapt it to my situation.
I am stuck on the step that creates the file system with mkfs. I try mkfs.vfat /dev/md0 and it comes up with the error "mkfs.vfat: command not found". I have also tried mkfs -t vfat /dev/md0 and it gives "mkfs.vfat: No such file or directory". So my question is: how can I continue with the process of setting up the array? Or maybe I should be asking whether it is even possible to set up an array with FAT32-formatted disks?
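The missing command is normally just a missing package: mkfs.vfat is shipped in dosfstools. A sketch (though exporting a native Linux filesystem over Samba would be the more common way to make the array visible to Windows):

Code:
apt-get install dosfstools
mkfs.vfat -F 32 /dev/md0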
I'm getting weird behaviour while setting up an mdadm RAID 1 array on Debian 8.2.
After I set up the array, lsblk shows:
Code:
simon@debian-server:~$ lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda         8:0    0 931.5G  0 disk
`-sda1      8:1    0 931.5G  0 part
  `-md0     9:0    0 931.4G  0 raid1
sdb         8:16   0 931.5G  0 disk
[Code] ....
After a reboot, lsblk shows:
Code:
simon@debian-server:~$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda           8:0    0 931.5G  0 disk
`-sda1        8:1    0 931.5G  0 part
  `-md0       9:0    0 931.4G  0 raid1
    |-md0p1 259:0    0 811.6G  0 md
[Code] ...
I don't know where the md0p1 and md0p2 partitions are coming from. My /etc/fstab and /etc/mdadm/mdadm.conf both have nothing about this in them.
parted shows one partition on md0:
Code:
simon@debian-server:~$ sudo parted /dev/md0 print
Model: Linux Software RAID Array (md)
Disk /dev/md0: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  1000GB  1000GB  ntfs
Where are the md0p1 and md0p2 partitions coming from?
I'm setting up the array as follows:
Delete existing device (I've done this a few times):
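Roughly this sequence (a sketch of what I mean; stopping the old array and wiping stale signatures on the members is the part that matters, and the partition names are just my example):

Code:
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1
wipefs -a /dev/sda1 /dev/sdb1        # clear leftover partition-table/filesystem signatures
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1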
Debian Jessie, 2 hard disks, each having 2 partitions: /dev/sda1, /dev/sda2, /dev/sdb1, /dev/sdb2. Partitions were paired during installation, so they form /dev/md0 and /dev/md1. /dev/md0 is the root (/) partition, /dev/md1 is for /home.
At the end of the install process, I chose /dev/sda1 to carry Grub. And I think this is where I screwed things up.
After removing one of the hard drives, the system would no longer boot. So I installed Grub on /dev/sdb too. Now it displays the boot menu but cannot find the kernel, and this is where I got lost.
Do I need to reinstall the OS or is there a way to fix it? I suppose I have to edit Grub.
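There should be no need to reinstall: with both halves of the RAID 1 present, re-running the Grub installation on each disk and regenerating its configuration is usually enough (a sketch):

Code:
grub-install /dev/sda
grub-install /dev/sdb
update-grub      # regenerate grub.cfg so it points at the kernel on the md devices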
I just expanded my RAID 5 array from 3x 2TB to 4x 2TB; mdadm performed the grow successfully and shows an md0 device with 6TB of usable space. Now my problem is that Debian (Lenny) doesn't show the right amount. See below.
######### MDADM DETAILS OF ARRAY ##########

Code:
> mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Dec 14 22:30:46 2009
     Raid Level : raid5
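mdadm only grows the block device; the filesystem on top of it has to be resized separately, and df will keep reporting the old size until that is done. A sketch, assuming an ext3/ext4 filesystem directly on /dev/md0:

Code:
cat /proc/mdstat        # wait for the reshape to finish first
resize2fs /dev/md0      # then grow the filesystem to fill the new array size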
I have my proxy running on Lenny and tried to upgrade to Squeeze. Originally the system was installed on Etch, and upgrading to Lenny was no problem. The system has two RAID 1 volumes, md0 for / and md1 for /home. For the upgrade I added the Squeeze sources to my APT configuration and started dist-upgrade. During the installation procedure, when installing udev, I was advised to install the new kernel first and continue the upgrade after booting into it, so I installed the kernel with "apt-get install linux-image-2.6-686". When generating the initramfs there was a message that there are no arrays defined in /etc/mdadm/mdadm.conf. I took a look and indeed there were none; mdadm seems to have been updated earlier in the process.
I then added the ARRAY lines for the RAID definitions, including the UUIDs, which I took from the output of "mdadm --detail /dev/md0". What I don't understand: blkid gives the same UUIDs for the first partitions of the RAID, but a different UUID for /dev/md0 and /dev/md1 than "mdadm --detail" does. The update of the initramfs for kernel 2.6.32 then gives this result:
Code:
update-initramfs -u -k 2.6.32-3-686
update-initramfs: Generating /boot/initrd.img-2.6.32-3-686
W: Possible missing firmware /lib/firmware/e100/d102e_ucode.bin for module e100
W: Possible missing firmware /lib/firmware/e100/d101s_ucode.bin for module e100
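On the UUID confusion: blkid on /dev/md0 and /dev/md1 reports the filesystem UUID, while "mdadm --detail" reports the array's own UUID (the one the member partitions share), so the values are expected to differ. mdadm can also write the ARRAY lines in the format mdadm.conf expects (a sketch):

Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u -k 2.6.32-3-686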
I've got an 8-disk raid-5 setup, and one of the disks failed. I shut the system down, replaced it, and powered the box back on again. Then, I made a catastrophic mistake; I 'failed' and removed the wrong disk (should have been sdj1, and I typed sdk1 by accident). I tried to re-add sdk1 back to the raid array, but it got listed as 'spare'. My raid array is off-line, since I now have 2 disks unavailable.
I know that the data still exists on sdk1; is there any way I can get the RAID array to recognise that it's a valid part of the array and not a spare disk? At least then I'd have a degraded but accessible array, and I could rebuild it onto the properly replaced disk.
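Stopping the array and force-assembling it from the original members is the usual way to convince md that a mistakenly failed device is still current (a sketch only; the device names are placeholders for the surviving members, excluding the blank replacement disk, and taking an image first is wise):

Code:
mdadm --stop /dev/md0
# --force tells md to accept members whose event counts no longer match exactly
mdadm --assemble --force /dev/md0 /dev/sd[b-i]1 /dev/sdk1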
I'm trying to install Debian Lenny on my new Dell XPS 8100 desktop with 2x 1TB SATA drives. (No Windows or any other OS is installed on the system.) The BIOS allows me to set the SATA mode to either "ATA" or "RAID".
- When the SATA mode is set to RAID, the installation goes without issues, but when it comes to booting into the system, I get the "Stage 1.5 Grub Loading... Error 2" problem. I assume this is due to the BIOS "RAID" configuration. I then switched the SATA mode to "ATA" in the BIOS and now I can see the menu that lets me boot my Debian install, but that part fails too, saying "ALERT /dev/sda1 does not exist".
- When SATA mode was set to ATA, I tried to re-install the system but this time my drive was not recognized by the installer: "No common CD ROM drive"
So, over the next couple of days I'm going to be building a new file server based on Ubuntu with three 2TB drives (and possibly a pair of 500GB drives I already own) in RAID 5 using mdadm. I'm under the impression that I would be able to add the 500GB drives using mdadm; while all drives need to be the same capacity for hardware RAID, software RAID is able to work with different-sized volumes. Is this correct?
Am I right in thinking that mdadm presents the RAID as a single drive which can then be used like any single drive would be? I'd like to create a logical partition within the RAID, formatted with HFS+, which can be used for network-based Time Machine backups for a few Macs in my house. Now, this is where I particularly show my ignorant side: is it possible to create a logical partition with a variable size? Similar to how a virtual hard drive works with some virtual machines: a "drive" may take up to 200GB, for example, but is only as large as the amount of data it stores. If that is possible, is it only for certain partition formats?
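A partition itself has a fixed size, but the thin-provisioned behaviour described here is easy to get with a sparse file sitting on the array, which the Macs would then mount over the network (a sketch; the path is made up):

Code:
# create a 200GB sparse image on the RAID; it only consumes space as data is written
truncate -s 200G /srv/raid/timemachine.img
du -h --apparent-size /srv/raid/timemachine.img   # apparent size: 200G
du -h /srv/raid/timemachine.img                   # actual usage: ~0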
I have three hard drives in my computer that I want to combine into RAID 0. All of them already have partitions and data on them. What I want to know is whether I can, without losing data, add the disks to a RAID and then merge the partitions. All the partitions are of the same type. Or would it be easier/better/possible to do this with LVM? Even if I had to shrink partitions and copy data onto a new LVM volume to get it set up properly, would that be better than RAID 0?
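For comparison, the LVM route lets the drives be absorbed one at a time instead of wiping everything at once (a sketch with placeholder names: empty one disk, build the volume group on it, copy data in, then absorb the next disk):

Code:
pvcreate /dev/sdc1                       # a disk that has been emptied first
vgcreate vg_data /dev/sdc1
lvcreate -l 100%FREE -n data vg_data
mkfs.ext4 /dev/vg_data/data
# after copying another disk's data into the LV, absorb that disk too
pvcreate /dev/sdb1
vgextend vg_data /dev/sdb1
lvextend -l +100%FREE /dev/vg_data/data
resize2fs /dev/vg_data/data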
I use Slackware 13.1 and I want to create a RAID level 5 array with 3 disks. Should I use the entire devices or partitions? What are the advantages and disadvantages of each? If I use the entire device, should I create any partition on it or leave all the space free?
Googling tells you how to resize RAID partitions but not how to resize the underlying disk partitions. In my particular case, I initially sized a RAID array far too large, and when I added another disk to the array I decided I was wasting too much space. I shrank the file system, then "grew" the array (/dev/md2) to the smaller size, and resized the file system again to fit. However, the actual disk partitions (/dev/sda2, /dev/sdb2, etc.) are still the original size; they are just mostly unused space. As I understand it, the superblocks are at the end of the partition. I believe this means the end of the space used by the array on each device, so that the superblock moved to a lower block number when I shrank the array. However, it also means that I need to get the new physical partition size right to avoid clobbering the superblock.
Is there an easy way to get any partition editor to shrink the physical partitions to the new array size? If not, is the superblock included in the space allocated to the array, so that the next partition can start in the very next block, or is it added after the array so I'd need to allow some space for it?
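One way to get a safe lower bound is to read the per-device size mdadm itself reports and shrink each member partition to no less than that, leaving a small margin for the metadata (a sketch):

Code:
# "Used Dev Size" is the space the array occupies on each member, in KiB
mdadm --detail /dev/md2 | grep 'Used Dev Size'
# classic fdisk approach: delete the partition and recreate it with the SAME
# starting sector and a new end that covers that size plus a few MB of margin
fdisk /dev/sda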
I am trying to set up a H/W RAID 1 (matrix) array, but I am unsuccessful. I am trying to get the partitions presented as /dev/md0, /dev/md1, but the installer keeps going for /dev/mapper/isw... The reason is that I have R1Soft backup, and it needs to hook the partitions as seen in /proc/partitions from /dev, not /dev/mapper/isw. I have tried booting the installation with various options, but nothing works!
We have a server with RAID 0 across 4 hard disks, each 250GB. The Linux kernel should then see one hard disk named /dev/sda with 1TB capacity, right? We also have 2 partitions on sda: sda1 and sda2. We want to add another partition, but we don't have enough space.
Now the problem: if we add another hard disk and run fdisk -l, will the /dev/sda space be incremented automatically so we can add new partitions, or must we do something else first?
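With hardware RAID 0 the new disk has to be added to the logical drive by the controller first (if the controller supports capacity expansion at all); the kernel won't grow sda by itself, although it can be told to re-read the device's new size without a reboot (a sketch):

Code:
# after the controller has expanded the logical drive
echo 1 > /sys/block/sda/device/rescan   # let the kernel pick up the new capacity
fdisk -l /dev/sda                       # should now show the larger size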
I have been trying to upgrade my server to Ubuntu 10.04 since it came out, but I hit a roadblock with my hardware RAID. I have two JBODs that work perfectly in Ubuntu 9.10 x64, but they show up as separate, unformatted partitions (one per HDD) in Ubuntu 10.04 x64. Here's the relevant portion of my fstab: