Server :: Creating LVM With Existing Raid - With Data On It
Apr 24, 2009
I have a system that has the following partitions:
Now sdc is a new drive I added. I would like to pool that new drive with the RAIDed drives to give myself more space on my existing system (and structure). Is this possible since my RAID already has data on it?
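If the existing RAID device is already an LVM physical volume inside a volume group, the new drive can usually be pooled in with a few LVM commands. A minimal sketch, using hypothetical names (vg0/lv0 for the volume group and logical volume, sdc1 for the new drive's partition):
Code:
# turn the new disk into an LVM physical volume
pvcreate /dev/sdc1
# add it to the existing volume group that sits on top of the RAID
vgextend vg0 /dev/sdc1
# grow the logical volume into the new space and resize the filesystem
lvextend -l +100%FREE /dev/vg0/lv0
resize2fs /dev/vg0/lv0   # for ext3/ext4; use the matching grow tool for other filesystems
Keep in mind the new drive has no redundancy of its own, so a failure of sdc would take part of the pooled volume with it.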
Currently I have three hard drives: two 10 GB drives (almost identical) and one 20 GB drive.
I have a layout of:
Code:
(10.2 GB)  /dev/hda1  boot  104391    83  Linux
           /dev/hda2        9912105   8e  Linux LVM
(10.1 GB)  /dev/hdb1        9873328+  8e  Linux LVM
(20.4 GB)  (unpartitioned)
The two 10 GB drives are set up as LVM, and I want to make a RAID1 using the 20 GB drive. Almost everything I find on the internet assumes the RAID1 is created first.
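One commonly suggested route (a sketch only, assuming verified backups and hypothetical device names) is to build the RAID1 degraded on the new 20 GB drive using the literal word missing for the absent member, move the data onto it, and only then attach old storage as the second half of the mirror. Bear in mind a mirror member must be at least as large as the array, so with one 20 GB and two 10 GB drives the layout needs some thought.
Code:
# create a one-legged mirror on the new drive; "missing" stands in for the second member
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdc1 missing
# put a filesystem (or an LVM physical volume) on /dev/md0 and copy the data across,
# then complete the mirror with a freed-up device of sufficient size
mdadm --add /dev/md0 /dev/hdb1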
I have a RAID5 with 5x 2 TB HDDs and I'm planning a major hardware overhaul. My server currently runs on a Pentium 4 3.2 GHz (pre multicore technology) on a SuperMicro mobo. I'm planning to switch to an AMD Phenom II X6 1100T Black Edition Thuban 3.3 GHz, 3.7 GHz Turbo 6.
So here's the question. Can I just plug my drives into the new board and restart the RAID like nothing happened? I don't have space to back up all my data if I have to recreate the RAID from scratch. Intel and AMD should be binary compatible (I mean the RPMs work), so I should be able to invoke mdadm to assemble the RAID after I install Linux on the new server. Right?
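In principle yes: Linux software RAID metadata lives in superblocks on the member disks themselves, not on the motherboard, so moving the disks between an Intel and an AMD box is normally fine. A hedged sketch of the reassembly on the new install:
Code:
# scan all attached disks for md superblocks and assemble whatever is found
mdadm --assemble --scan
# check that the RAID5 came up with all five members
cat /proc/mdstat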
Alternate Heading: Unable to use openSUSE partitioner to fully format a Seagate 1 TB drive. OK. I swapped over a motherboard to increase the number of slots available for hard drives so that I could expand my RAID array (4x 1 TB drives). I discovered I had no thermal paste, so everything was delayed for 24 hours while I bought some, and I miscalculated on the rebuild and had to reinstall openSUSE, but in the end the system is now up and running. Unfortunately, when I formatted my new 1 TB drive (Seagate) it formats to 931.50 GB, while the other 4 drives formatted to 931.51 GB (they are WD). I'm now in the position that when I try to add this new drive I get:
mdadm: /dev/sdc1 not large enough to join array
Is there any way I can resize the existing devices down to 931.50 GB so that I can add in the new drive without having to restore the array?
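mdadm can shrink the amount of space it uses on each member, but the filesystem on the array has to be shrunk first, and the whole operation is risky enough that a backup is essential. A rough sketch, assuming an ext3/ext4 filesystem on /dev/md0 and using illustrative numbers rather than exact values for these drives:
Code:
# 1. unmount, check, and shrink the filesystem to comfortably below the target array size
umount /dev/md0
e2fsck -f /dev/md0
resize2fs /dev/md0 2700G
# 2. reduce the per-device space used by the array (value is in KiB per member;
#    pick one no larger than the smallest drive)
mdadm --grow /dev/md0 --size=976744000
# 3. the slightly smaller Seagate partition should now be large enough to join
mdadm --add /dev/md0 /dev/sdc1
# 4. afterwards, grow the filesystem back to fill the array
resize2fs /dev/md0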
We have been using Ubuntu Server in our department for several months now. It hosts a website, e-mail and NFS (intranet only).
It was set up as RAID 0 with two 1 TB hard drives, but I want to change it to RAID 1 for fault tolerance. Is it possible to change the level of an existing RAID? If yes, can someone point me to the proper place?
I tried "mdadm" documentation and level set option is available but no explanation available that whether it is only while creating the array or it can change the level too.
It's from a Synology box with 3 disks, one of which is damaged. But this disk wasn't in use. Take a look at the RAID size of 493 GB, and the two available disks of 250 GB each; on the others there was a linear RAID. When this disk failed, the Synology device told me that the volume had crashed, but it looks like that disk was not mounted into this volume. Quote:
DiskStation> mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90
I want to upgrade from another distro to Ubuntu Server for a few reasons. The only problem is I have a lot of data that needs to survive. Here is how my computer is set up. I have 5 drives in the computer:
A - 10 GB drive for OS and swap only, no data
B, C, D, E - 4x 500 GB drives in an LVM. They make up one large volume with XFS, and this volume has about 1.2 TB of data. There is nothing fancy on it: no encryption and no software RAID. Of course the little 10 GB drive can be formatted, no problem, but the LVM needs to be migrated over intact.
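LVM metadata lives on the physical volumes themselves, so after a fresh Ubuntu install on the 10 GB drive the old volume group should simply reappear once the LVM tools are installed. A sketch with example names (the real VG/LV names will be whatever the old system used):
Code:
# install the LVM and XFS userland, then look for the old volume group
apt-get install lvm2 xfsprogs
vgscan
# activate it and mount the existing logical volume read-only for a first check
vgchange -ay
mount -o ro -t xfs /dev/bigvg/media /mnt/data   # bigvg/media are placeholder names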
We have some servers that run in very harsh environments (a research vessel) and need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc.), but we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAIDed devices (from a live CD):
Code:
dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img
then reverse the process on the new PC, finally using
Code:
mdadm --assemble
to re-create and re-build the array.
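For the restore direction, a minimal sketch (assuming the image was taken exactly as above and the replacement disk is again /dev/sda) would be to reverse the pipe and then assemble:
Code:
# write the compressed image back onto the replacement disk
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096
# re-read the partition table and assemble whatever the superblocks describe
partprobe /dev/sda
mdadm --assemble --scan
# then re-add the second half of the mirror so it rebuilds (names illustrative)
mdadm --add /dev/md0 /dev/sdb1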
In a nutshell, our RAID 1 array was rendered broken and we were advised that core lib files were missing and the OS needed to be reloaded. A quote from our server host: "The OS is not healthy. This server will need a reinstall. Libs are missing." This was after having replaced what we thought was a faulty /dev/sdb. So they reloaded the OS (Debian 5.0.2 x86_64) on 2 fresh drives, and installed the old /dev/sda as /dev/sdc once the reload was completed. Here's the output of /etc/fstab on the fresh install so we know what we're working with:
The one problem I see myself running into is that /dev/md1 and /dev/md2 are currently in use by the new system, so I cannot mount the old array there. I should also note that reloading the OS is a viable option if needed, as we haven't started configuring the server yet. So if we need to reinstall the OS and assign the new RAID arrays to something other than /dev/md1 and /dev/md2, we can do that.
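If the old array on /dev/sdc can be assembled under an unused md number, the clash with the new system's /dev/md1 and /dev/md2 goes away. A hedged sketch (the partition numbers are guesses):
Code:
# see which arrays the old disk's superblocks describe
mdadm --examine /dev/sdc1 /dev/sdc2
# assemble the old array under a new device name; --run lets a lone mirror half start degraded
mdadm --assemble --run /dev/md3 /dev/sdc2
# mount it read-only somewhere out of the way and copy the data off
mount -o ro /dev/md3 /mnt/oldraid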
I rebuilt a server and am now trying to recover my large data arrays. The server was Ubuntu 10.04 LTS before. I decided to rebuild it with CentOS simply because I am more familiar with it. I had 2 RAID5 arrays on the old server:
4x 1 TB -> md0
5x 2 TB -> md1
The newly built server does not know about these arrays yet. How can I reassemble the arrays without losing my data? I know the data can still be accessed, because booting the server with a live CD mounts and shows the arrays just fine. Should I boot with a live CD and copy the mdadm config file?
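Copying the old config usually isn't necessary: the superblocks on the member disks are enough for the new CentOS install to rediscover both arrays, and a fresh config can be generated from them. A sketch:
Code:
# assemble everything the superblocks describe
mdadm --assemble --scan
cat /proc/mdstat
# write the discovered arrays into the new install's config so they come up at boot
mdadm --examine --scan >> /etc/mdadm.conf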
How can I create RAID 1+0 using two drives (one with data on it and one new)? Is it possible to synchronize the data drive with the empty drive and create RAID 1+0?
Right now I have a 320 GB system drive and a 3 TB data drive. I want to add two more 3 TB drives and do a software RAID5 of 3x 3 TB. Is that possible without losing the data that is already on the data drive? Just want to make sure before I buy the two drives. I'm not looking for instructions on how to do it, but if you want to include some that would be great too. Just making sure it will work.
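It should work, with the caveat that you run without redundancy until the last step finishes. A frequently used pattern (hypothetical device names: the old data drive here is sdb, the two new ones sdc and sdd) is to build the RAID5 degraded and fold the old drive in afterwards:
Code:
# create a 3-device RAID5 with one member deliberately absent
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 missing
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/newarray
# copy everything off the old data drive onto the degraded array and verify it
rsync -a /mnt/olddata/ /mnt/newarray/
# only then repartition the old drive and add it as the missing third member
mdadm --add /dev/md0 /dev/sdb1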
Can we create a mirror of an existing volume in Linux? If yes, please let me know the procedure to create a mirror of an existing logical volume in Linux.
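If the volume group has (or can be given) a second physical volume with enough free extents, LVM can convert an existing linear logical volume into a mirror in place. A sketch with example names (vg0/lv_data):
Code:
# add a second disk to the volume group if one is not already available
pvcreate /dev/sdb1
vgextend vg0 /dev/sdb1
# convert the existing logical volume to a two-copy mirror
lvconvert -m 1 /dev/vg0/lv_data
# watch the synchronisation progress and the placement of the copies
lvs -a -o +devices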
I have an existing Unix user that somehow didn't make it into the copy over to our LDAP server. How do I add an existing Unix user to an existing LDAP directory? Will ldapadd work? I was under the impression ldapadd required an LDIF file to work properly.
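ldapadd does indeed want an LDIF. One option is to write a small posixAccount entry for the missing user from their /etc/passwd and /etc/shadow lines; the DN suffix, IDs and values below are placeholders for your directory:
Code:
# user.ldif - example entry; adjust dn, uidNumber, gidNumber and the password hash
dn: uid=jsmith,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: jsmith
cn: John Smith
sn: Smith
uidNumber: 1042
gidNumber: 1042
homeDirectory: /home/jsmith
loginShell: /bin/bash
userPassword: {CRYPT}$1$replacewithrealhash
# load it with something like:
#   ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f user.ldif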
I have just finished installing (after hard work) CentOS 5.4 x86 configured with Snort & Snorby as the frontend web interface. I would like to create from this installation some kind of image that could fit almost any hardware type.
I'm setting up my profile in GNOME right now, with things like fonts, themes, wallpapers, Iceweasel settings and menu settings, and I'm going to be adding a couple of new users. Rather than re-do everything again, I thought I'd just create the new user, copy over my home dir, chown it to the new user and then log in to make minor final adjustments, like specifying where the music dir is and such. Just wondering if there'd be any problems with this, since it's just an idea I think should work but have never tried. Any experience, or warnings?
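That approach generally works; a sketch of it follows (usernames are examples), with the main caveat being per-user caches and dotfiles that embed the old username or absolute paths:
Code:
# create the account, then copy the existing profile over it and fix ownership
sudo adduser newuser
sudo cp -a /home/olduser/. /home/newuser/
sudo chown -R newuser:newuser /home/newuser
# clearing session caches avoids the most common post-copy oddities
sudo rm -rf /home/newuser/.cache /home/newuser/.dbus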
I'm looking for a way to create a live CD from the existing image. I'd like to include some sort of installer; I've found GUI remaster utilities, but none for the shell only. I need to set up the image to automatically log in, so the user could just pop in the CD and start it up without a monitor or keyboard.
I'm having some (well, a lot actually) of problems trying to get openSUSE 11.2 installed on my home PC. I am attempting to set up a dual-boot configuration with Windows 7 installed on a BIOS Nvidia RAID 0. I was able to shrink the partition in Windows and rebooted onto the net install for openSUSE (the MD5-validated DVD install failed multiple burns with "Repository not found"). I get into the graphical installer portion with no problems off the net install CD. However, the installer is not recognizing that there is an existing RAID 0. It lists the 2 SATA disks in the RAID separately. I can click on sda1, and both sdb and sdb1, and it shows the disks, but it does not recognize any existing partitions. If I click on sda I get an immediate segfault in YaST and drop back to the text-mode installer menu. It is loading the nv_sata module just fine.
From forum searches and Google it seems that this is not usually a problem. My motherboard is a Gigabyte GA-M57SLI-S4 with the Nvidia nForce 570 chipset running an AMD Athlon 64 X2 3800+. Removing the stripe set and redoing it as Linux software RAID is not an option; I do not have enough space for a total backup/restore. Anything I do has to be nondestructive to the existing Windows installation. I really want to have a Linux installation, but between the DVD installer failing and now this issue I am about ready to give up on it.
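One thing that may be worth checking from a rescue shell before giving up is whether dmraid, which the installer leans on for BIOS/fakeRAID sets like this Nvidia one, can see and activate the stripe set at all:
Code:
# list any BIOS RAID (fakeRAID) metadata found on the disks
dmraid -r
# try to activate the set; the striped device should appear under /dev/mapper
dmraid -ay
ls /dev/mapper/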
I have recently had a problem with my 10.04 server machine. It will not boot; it seems to take forever on the loading screen (it's normally a headless server, but I connected a monitor when I couldn't SSH in). But that's not why I'm here.
Knowing that I do rsync backups of my machine every night at midnight, I just bit the bullet and formatted my / partition. The reinstall went fine, and I turned off automatic updates (I suspect an update caused the problem). But now I cannot mount my JMicron RAID 1, which is where my rsync backup is (doh!).
sudo fdisk -l
Code:
WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
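A JMicron RAID 1 is BIOS fakeRAID, so on a fresh install the dmraid package has to be present before the mirror shows up as a single mountable device rather than two bare disks. A sketch for Ubuntu (the mapper name is illustrative):
Code:
sudo apt-get install dmraid
# activate the JMicron set and see which device node it creates
sudo dmraid -ay
ls /dev/mapper/
# then mount the partition that appears, e.g.
sudo mount /dev/mapper/jmicron_backup1 /mnt/backup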
I have an old server I built 7 years ago. I have a RAID5 with 5x 2 TB HDDs and I'm planning a major hardware overhaul. My server currently runs on a Pentium 4 3.2 GHz (pre multicore technology) on a SuperMicro mobo. I'm planning to switch to an AMD Phenom II X6 1100T Black Edition Thuban 3.3 GHz, 3.7 GHz Turbo 6.
So here's the question. Can I just plug my drives into the new board and restart the RAID like nothing happened? I don't have space to back up all my data if I have to recreate the RAID from scratch.
I've been thinking about this pretty heavily and am considering using LVM to store my media collection. The USB hard drives (3x 1 TB) are plugged into my Mythbuntu server and are never unplugged. The only issue I foresee is that the USB drives spin down when not in use. Will this cause any issues with LVM? I have about 750 GB of data on one of the drives and, obviously, want to keep this data. I think the existing data makes it impossible to use an LVM mirror (correct me if I'm wrong).
I was thinking that I could create a two-disk LVM with the two unused drives. Then, copy the existing data to the logical volume. Lastly, add the third disk into the volume group. I know doing this does not add any redundancy. Would it be wise to add RAID1 or RAID5 on top of the volume group? Does using USB drives make this unstable?
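A sketch of that sequence with placeholder device names (sdb/sdc empty, sdd holding the 750 GB), with the usual caveat that nothing here adds redundancy:
Code:
# build the volume group from the two empty USB drives first
pvcreate /dev/sdb1 /dev/sdc1
vgcreate media_vg /dev/sdb1 /dev/sdc1
lvcreate -l 100%FREE -n media_lv media_vg
mkfs.ext4 /dev/media_vg/media_lv
# copy the existing 750 GB onto the new logical volume, verify it, wipe the third
# drive, then fold it into the pool and grow the volume and filesystem into it
pvcreate /dev/sdd1
vgextend media_vg /dev/sdd1
lvextend -l +100%FREE /dev/media_vg/media_lv
resize2fs /dev/media_vg/media_lv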
I'm running 10.04 x86 server with a really simple installation on a single 250 GB boot disk. I then have a RAID5 array as /dev/md0 (set up using mdadm with 4x 2 TB disks). All is working well. My mdadm.conf file looks like this:
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
[code]....
If I were to lose the boot disk and needed to remount the RAID array on a fresh installation, what steps would I need to go through? My assumption is that the superblocks on the RAID disks will be used and I don't need to keep any additional information - is this right?
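That assumption is essentially correct: the superblocks on the member disks describe the array, so on a rebuilt boot disk a handful of commands is normally all it takes (Ubuntu paths shown; adjust to taste):
Code:
# assemble the array from the on-disk superblocks
sudo mdadm --assemble --scan
# record it in the new install's config and initramfs so it assembles at boot
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
sudo update-initramfs -u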
I have two SAS RAID controller cards in a Dell server in slots 2 & 3, both with an array hanging off them. I went to install a third card into slot 1, but then when it boots it says two of my sd devices have a bad magic number in the superblock and it wants me to create an alternative one, which I don't want to do. If I remove the new card, the server boots perfectly like it did before I added it. Is the new card trying to control stuff that isn't hooked up to it because it's in slot 1, and so confusing RHEL?
I have a home Samba server with a 3ware Escalade 8506-8. I have 5x 500 GB hard drives in a RAID 5 array. Recently, my 8506 died and I need to get a new one. However, I saw a 3ware Escalade 9500S-12 on eBay for about $20.00 more than a replacement 8506-8.
My question is, if I put my drives on the 9500S, will it recognize my existing RAID array? Or will it want to build a new RAID array and format all of my data? Hope I have asked this question clearly; a little short on sleep this week.
I'm in the process of preupgrading to FC12. Towards the end of the process I get a warning that my /boot partition isn't big enough (FC12 recommends a minimum of 300 MB).
Is there a tool I can use to resize my existing partitions WITHOUT data loss? I've been using GParted up to now for sorting out partition stuff; does it maintain data when resizing (assuming I run it from a boot CD or USB rather than a running system)?
If I have a partition like /dev/hd1 that is unencrypted and want it to be encrypted, but want to keep everything currently in that partition, how can I do that?
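LUKS formatting is destructive, so the classic route is to back the data up, format, and restore. A sketch assuming the partition is currently mounted at /mnt/hd1 (a placeholder mount point) and that there is room elsewhere for the backup:
Code:
# back up the partition's contents somewhere outside the partition
tar -C /mnt/hd1 -czf /root/hd1-backup.tar.gz .
umount /mnt/hd1
# format with LUKS, open the mapping, and rebuild a filesystem inside it
cryptsetup luksFormat /dev/hd1
cryptsetup luksOpen /dev/hd1 hd1_crypt
mkfs.ext4 /dev/mapper/hd1_crypt
mount /dev/mapper/hd1_crypt /mnt/hd1
# restore the data into the now-encrypted filesystem
tar -C /mnt/hd1 -xzf /root/hd1-backup.tar.gz
Newer cryptsetup releases also offer in-place re-encryption, but the back-up/format/restore route uses only the standard tools.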
I have installed Ubuntu Studio 9.10 on my Dell Dimension 1100 desktop and I'm trying to set up RAID1, because I'm constantly worried that my hard disk is going to fail. I have 2 drives: one 40 GB and one 80 GB. So I created a 40 GB partition on the 80 GB drive, and I want to RAID this partition with the 40 GB drive. Is this possible? And am I right in thinking that I can RAID everything including /boot?
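It is possible, and /boot can sit on RAID1 as long as the array uses metadata GRUB can read (0.90 puts the superblock at the end, which legacy GRUB tolerates). A rough sketch with hypothetical partition names, using the degraded-mirror trick so the current install survives the move:
Code:
# build the mirror on the new 40 GB partition only, leaving the second slot empty
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdb1 missing
mkfs.ext4 /dev/md0
# copy the running system onto /dev/md0, update fstab and GRUB on both disks,
# then attach the original 40 GB drive as the second half of the mirror
mdadm --add /dev/md0 /dev/sda1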