I'm looking for a way to do a bare metal backup of our server using a tool such as Ghost or Clonezilla. The limitation is that / is on an mdadm RAID 5. The only relevant info I could find on Clonezilla's site was:
# Software RAID/fake RAID is not supported by default. It can be done manually only.
So far I have tried PING, rsync, Clonezilla and tar. All have one problem or another. I'm sure that this is partly due to my own ignorance and partly due to running the 64-bit version of Karmic, but I cannot seem to find anyone who has real answers.
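Since Clonezilla only says software RAID "can be done manually", one hedged approach (a sketch, not Clonezilla's documented procedure) is to boot the Clonezilla live environment or any live CD that includes mdadm, drop to a shell, assemble the array yourself, and image the assembled md device rather than the raw member disks. Device names and the destination mount are placeholders.
Code:
# Run as root from the live shell.
mdadm --assemble --scan                 # bring up /dev/md0 from the existing members
mount /dev/sdX1 /mnt                    # destination disk for the image -- placeholder device
# Image the assembled array; restoring is the same pipeline in reverse.
dd if=/dev/md0 bs=4M | gzip -c > /mnt/md0.img.gz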
I have two identical servers, one has RHEL 5 and Zimbra installed and the other is currently not really doing anything. Both have hardware RAID (Adaptec) set to RAID10, identical hard drives, etc. The RHEL/Zimbra machine is set up with LVM2. Is it possible for me to hook them up on the secondary NICs and boot the second machine with Knoppix or something else, and easily tell it to duplicate the first machine onto the second, down to the last bit, or do I need to make all the partitions beforehand and dd each one separately?
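Because the Adaptec controller presents the whole RAID 10 volume as a single device, a raw copy of that device also carries the partition table and LVM metadata, so there is no need to pre-create partitions. A rough sketch of doing it over the secondary NICs with dd and netcat, with both machines booted from live media; the IP address, device name, port, and netcat option syntax (which varies between netcat versions) are all assumptions:
Code:
# On the blank server (booted from Knoppix), listen and write straight to the controller's volume:
nc -l -p 9000 | dd of=/dev/sda bs=4M
# On the RHEL/Zimbra server (also booted from live media so the filesystems are quiet),
# stream the whole device across the secondary NIC:
dd if=/dev/sda bs=4M | nc 192.168.100.2 9000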
I would like to try putting some kind of free "bare metal" virtualization for desktop usage on my laptop. I've been googling the possibilities, but I'm still not sure which would actually work in my case. I've seen VMware ESXi, which looks OK, but unfortunately it is meant for servers and I can't have ESXi and the vSphere Client on the same laptop. Another candidate I found is KVM, but as far as I've seen it requires VT-x/VT-d support from the hardware, which my laptop can't provide. The same requirements apply to Citrix XenClient, which is meant for desktop virtualization, but because of the lack of VT-x and VT-d it can't be used in my case. Is there any other possibility? Currently I'm using VirtualBox and VMware Player for virtualization purposes, but I would like to pull more performance out of it, and a heavy OS on top of another heavy OS just isn't the best way.
Can I install VMware ESXi on my Dell Inspiron laptop, which has a Core i3 processor and 4 GB of RAM? I can allocate 100 GB of hard disk space for it. This is just to practice and explore the features available. Can I install the VMware ESXi ISO on a KVM hypervisor, like installing a guest OS?
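Running ESXi as a KVM guest generally needs nested hardware virtualization exposed to that guest, so it is worth checking whether the host's KVM module has nesting enabled. A rough sketch for an Intel host (on AMD the module and parameter are kvm_amd/nested); this only checks and toggles the module parameter, it is not a full nested-ESXi recipe:
Code:
# Prints Y/1 if the kvm_intel module currently exposes nested VT-x to guests, N/0 otherwise.
cat /sys/module/kvm_intel/parameters/nested
# Enable it by reloading the module with nesting turned on (no VMs may be running):
modprobe -r kvm_intel
modprobe kvm_intel nested=1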
I am trying to figure out how to get an almost bare-metal virtualization setup running, and I'm having a hard time getting it going. I tried Virtual Machine Manager, but it won't let me do full virtualization.
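Virtual Machine Manager usually only offers full (KVM) virtualization when the CPU advertises hardware virtualization and it hasn't been disabled in the BIOS. A quick check:
Code:
# A non-zero count means the CPU advertises Intel VT-x (vmx) or AMD-V (svm).
egrep -c '(vmx|svm)' /proc/cpuinfo
# Confirm the KVM modules are loaded and the device node exists:
lsmod | grep kvm
ls -l /dev/kvm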
I have worked with Xen as shipped with RHEL 5.4. Is it possible to install the Xen hypervisor directly on bare metal, so that we can save resources? I searched the official Xen site, but could not identify the product that can be installed directly on bare metal like VMware ESX.
I have a server with an old version of Fedora on it, Fedora 7. I know it's old and that I should have upgraded it, but I haven't. I planned on doing it now, but I ran into a hardware failure and had to switch to a different set of hardware. I tried going into rescue mode using the Fedora 7 install disc, but st0 for tape drives was not available. So I tried the installation disc of the newest Fedora release, Fedora 13, and st0 is still not available there. How do I do a restore from tape?
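One thing worth trying from the rescue shell is loading the SCSI tape driver by hand; the device often only appears once the st module is loaded. The sketch below assumes the rescue environment mounted the installed system under /mnt/sysimage, that mt is available (it comes from the mt-st package), and that the backup was written with tar; use restore(8) instead if it was made with dump(8).
Code:
modprobe st
dmesg | tail                       # check whether the drive was detected as st0/nst0
mt -f /dev/st0 rewind
tar -tvf /dev/st0 | head           # list the archive before extracting anything
tar -xvf /dev/st0 -C /mnt/sysimage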
I'm running an x86-based Debian Squeeze LAMP server which is also my gateway between my home network and the big wide world (Shorewall/Shorewall6). As you can guess, when (as happened just recently) the hardware dies and needs to be replaced, the rebuild of a machine which has been tuned and tweaked over years is "interesting", to say the least. I am looking for some software which will allow me to do a bare-metal restore of the software and setup (data is accommodated already, so that part can be ignored). I'd like to use something to create a boot disc (CD/DVD) that I can put in a new machine and get my original setup installed on the new tin automatically.
I looked at Mondo, and it looks ideal, but Google hints at problems with GRUB and incompatibilities with Debian Squeeze... so the questions are:
1) Has anyone run Mondo on Debian Squeeze successfully?
2) Is there a good howto for Mondo on Debian?
3) Is there an alternative that runs on Debian and fits my requirements?
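For reference, a typical mondoarchive invocation looks roughly like the sketch below; the destination path and media size are placeholders, and the exact flags should be checked against mondoarchive(8) for the packaged version, since options have shifted between releases.
Code:
# Back up the live system to bootable ISO images, excluding the backup directory itself;
# -s sets the media size (single-layer DVD here).
mondoarchive -Oi -d /var/cache/mondo-backup -E "/var/cache/mondo-backup" -s 4480m
# Restore is done by booting the first ISO on the new machine and running mondorestore.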
Are there any open source or third-party applications that could do a bare-metal recovery on Debian or any Linux machine? We are looking for a solution that won't need a shutdown or reboot.
One of my desires is to set up my main workstation as a bare metal hypervisor because:
- Legacy issues mean I can't migrate to Linux in one step.
- The flexibility offered by virtualization is appealing.
Are there any resources out there that explain how to set up and manage Fedora as a hypervisor and guests within that environment? I expect that I'll need to install a number of packages and possibly rebuild the kernel, but I don't know enough of the details to get started.
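As a rough sketch of the KVM/libvirt route on Fedora (package group and guest details such as the name, RAM, and ISO path are placeholders): KVM ships as modules with the stock Fedora kernel, so no kernel rebuild should be needed.
Code:
yum groupinstall "Virtualization"     # pulls in qemu-kvm, libvirt, virt-manager, virt-install
service libvirtd start
chkconfig libvirtd on
# Manage guests graphically with virt-manager, or create one from the command line:
virt-install --name legacy-guest --ram 2048 \
  --disk path=/var/lib/libvirt/images/legacy.img,size=40 \
  --cdrom /path/to/install.iso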
I'm building a new desktop computer, on which I plan to install Debian Squeeze. I'll have a 1 TB SATA hard drive in the system. I'm also considering using two 500 GB external USB drives, but I'm debating about how I want to use them. Running them all separately for 2 TB of space could be a nightmare, with three potential points of failure, so I was thinking of using the two external drives as a backup system instead.
I'm considering linking the two external drives in a RAID 0 array, then linking that array and the internal drive in a RAID 1 array. I would use mdadm software RAID for all of this so I could use individual partitions in the arrays, avoid hardware dependency, and have greater software control. So now is this feasible to do (having a partial RAID 0+1 setup)? Moreover, what kind of performance could I expect from using potentially slow external drives (one of which I know has a very long spin-up time after idle periods) in a mirroring setup with the internal drive? Would I be far better off using a filesystem backup daemon instead?
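Nesting arrays like this is feasible with mdadm, since an md device can itself be a member of another array. A minimal sketch, assuming /dev/sda1 is the internal drive's partition and /dev/sdb1 and /dev/sdc1 are the two USB drives:
Code:
# Stripe the two external USB drives together first:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
# Then mirror the internal partition against that stripe:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/md0
mkfs.ext4 /dev/md1
Performance-wise, every write has to complete on both mirror halves, so write speed is bounded by the USB stripe; reads can be served from the faster internal disk.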
EDIT: After some more research and brainstorming, I've decided I might just end up using rsync+cron, lsyncd, or DRBD (assuming it can easily make backups locally). I'd probably have to link up the external drives in RAID 0 (or use some filesystem link trickery). But I suppose such a setup would offer greater control, flexibility in disk capacities (the full system isn't so strictly limited to the capacity of the smallest member of the array), and granularity than RAID 0+1 would. I'm still open to thoughts on the mdadm RAID 0+1 solution, but does anyone have any advice on choosing backup software? For some background on my needs, I'll be using this computer as both an everyday desktop and a personal LAMP server (MySQL database files would be included in the backups).
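If the rsync+cron route wins out, a minimal daily job might look like the sketch below; the script path, mount point, and directory list are assumptions, and MySQL is dumped first because rsyncing raw database files from a running server can produce an inconsistent copy.
Code:
#!/bin/sh
# /etc/cron.daily/backup -- illustrative only.
# Assumes MySQL credentials are in /root/.my.cnf.
mysqldump --all-databases --single-transaction > /mnt/backup/mysql-$(date +%F).sql
# Mirror the directories that matter onto the external drives.
rsync -a --delete /home /etc /var/www /mnt/backup/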
We have some servers that run in very harsh environments (a research vessel) that need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc.), however we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAIDed devices (from a live CD):
Code:
dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img
then reverse the process on the new PC, finally using mdadm --assemble to re-create and rebuild the array.
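The restore side of that approach could look roughly like this, again from a live CD on the replacement machine; device and partition names are assumptions, and the second (blank) disk is rebuilt by md rather than imaged:
Code:
# Write the image back onto the first disk:
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096
# Start the mirror degraded from that one member, then add the blank disk and let it rebuild:
mdadm --assemble --run /dev/md0 /dev/sda1
mdadm --add /dev/md0 /dev/sdb1
cat /proc/mdstat        # watch the resync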
I have a scheduled backup that runs on our server at work, and since 7/12/09 it has been making 592 KB files instead of 10 MB files. In MySQL Administrator (the GUI tool) I have a stored connection for the user 'backup'; the user has select and lock rights on the databases being backed up. I have a backup profile called 'backup_regular', and in the third tab along it's scheduled to back up at 2 in the morning every weekday. If I look at one of the small backup files generated I see the following:
Code:
-- MySQL Administrator dump 1.4
--
-- ------------------------------------------------------
-- Server version
[code]....
It seems that MySQL can open and write to the file fine; it just can't dump the actual data.
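One way to see the real error is to run the same dump by hand as the backup user, since the GUI tends to swallow it; a sketch (the host part of the account and the exact missing privilege are assumptions, SHOW VIEW being a common culprit on MySQL 5.x when a dump stops after the header):
Code:
# Reproduce the dump manually; any error is printed to the terminal.
mysqldump -u backup -p --all-databases > /tmp/manual_dump.sql
# If a privilege turns out to be missing, grant it and retry:
mysql -u root -p -e "GRANT SELECT, LOCK TABLES, SHOW VIEW ON *.* TO 'backup'@'localhost'; FLUSH PRIVILEGES;"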
I'm setting up a DIY NAS system, running Ubuntu Server from a CF card and using two SATA drives. I only need RAID 1, so that should do. Setting up RAID 1 with mdadm is straightforward, and all of my tests of failure/recovery scenarios work fine in VirtualBox. Most of the tutorials on the net talk about using mdadm in conjunction with LVM2. What is the reasoning behind using LVM2 on top of mdadm?
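The two layers do different jobs: mdadm provides the redundancy, while LVM on top adds flexibility (resizable volumes, snapshots, splitting the mirror into several filesystems later). A minimal sketch of the stacking, with device names and sizes as placeholders:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 200G -n data vg0
mkfs.ext4 /dev/vg0/data
If you will never need to resize, snapshot, or subdivide the storage, a filesystem directly on /dev/md0 is perfectly fine and one less layer to manage.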
Does anyone know of any decent enterprise-level backup solutions for Linux? I need to back up a few servers and a bunch of desktops onto one backup server. Using rsync/tar.gz won't cut it. I need things like bi-monthly full HDD backups, with a nice GUI interface to add/remove systems from the backup list. I basically need something similar to CommVault or Veritas. Veritas I've used before, but it has its issues, such as leaving 30 GB cache files. CommVault, I have no idea how much it costs, or whether it supports backing up to a hard drive rather than tape.
I'm operating an Ubuntu 9.10 server, set up with software mirroring (RAID 1). When I checked cron, I found the mdadm entry in the cron.d directory:
Code:
57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet
checkarray is supposed to run on the first Sunday of every month, so I just want to know:
1. What does checkarray do exactly?
2. Does it put stress on the system?
3. Is there any problem if I remove the script from cron?
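To the best of my understanding, the Debian/Ubuntu checkarray script is a wrapper around md's built-in consistency check in sysfs: it makes the kernel read every member of the array and compare the mirrors (or parity), which surfaces latent bad sectors before a rebuild needs them. The equivalent manual commands, run as root with md0 as a placeholder:
Code:
echo check > /sys/block/md0/md/sync_action     # start a check
cat /proc/mdstat                               # progress shows up here
cat /sys/block/md0/md/mismatch_cnt             # inconsistent blocks found (0 is ideal)
echo idle > /sys/block/md0/md/sync_action      # cancel a running check
It does generate sustained read I/O, though the rate is throttled by md's speed limits; dropping it from cron mainly means unreadable sectors may only be discovered during a real rebuild.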
I've got a strange problem. I have the following system:
[Code]...
After doing this install everything works fine as expected. I can reboot, shut down and boot up as much as I want to and the system will work. Now, I proceed to do the following (as root obviously - sudo bash)
[Code]...
When I try to restart the system now, I get to the GRUB boot loader and then it just breaks with the following message. I've identified 'mdadm' as being the culprit here. Any idea why this would happen? Just a side note: the reason I'm installing mdadm is to create a soft RAID as follows with the remaining space on each drive:
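One hedged observation: on Debian/Ubuntu, installing mdadm regenerates the initramfs with whatever /etc/mdadm/mdadm.conf says at that moment, and boot can break at GRUB/initramfs if that configuration doesn't match the arrays that actually exist. A sketch of bringing everything back in sync after the array has been created (run as root):
Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the real array definitions
update-initramfs -u                              # rebuild the initramfs with that config
update-grub                                      # regenerate the GRUB configuration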
I created my own file server/NAS, but ran into a problem after a couple of months. I have a server with 4x 1.5 TB disks, all connected to SATA ports, and one 40 GB ATA133 disk, running Ubuntu 9.10 amd64. I've created a RAID 5 array using mdadm. It all worked great for a couple of months, but lately the RAID 5 array is degraded. Disk sdd1 is faulting every few days. I have checked the drive but it is fine. If I re-add the disk and wait for 6 hours my RAID 5 array is all fine again, but after a few shutdowns it is degraded.
my mdadm detail:
Quote:
root@ubuntu: sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Mon Dec 14 13:00:43 2009
     Raid Level : raid5
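Before trusting the disk again it is worth ruling out the drive itself and the cabling, then re-adding the member; a rough sketch (smartctl comes from the smartmontools package, and the device names follow the question above):
Code:
smartctl -a /dev/sdd | less         # reallocated/pending sector counts, self-test log
dmesg | grep -i 'sdd\|ata'          # look for link resets or timeouts pointing at the cable/port
mdadm /dev/md0 --re-add /dev/sdd1   # put the member back (use --add if --re-add is refused)
cat /proc/mdstat                    # watch the rebuild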
I'm setting up Ubuntu 10.04 Server x64 on a Gateway DX4710. I installed on a 500 GB SATA drive, using encrypted LVM, added Webmin, and used ufw to configure iptables. All seemed fine. I then set up RAID1 on two 1 TB SATAs. Using Webmin, I created Linux RAID partitions on sdb and sdc. I then ran ...
All still seemed fine. I could see /data in the Webmin filesystem list, and had ca. 1.4 TB total local disk space. At that point, I decided that I really wanted an encrypted filesystem on /dev/md0. I also needed to tweak the fan setup. And so I shut down, without adding /dev/md0 to fstab. And it was probably still syncing. Now /dev/md0 is semi-missing. That is ...
sudo mdadm -D /dev/md0   => doesn't exist
sudo mdadm -E /dev/sdb1  => part of RAID1 with sdc1
sudo mdadm -E /dev/sdc1  => part of RAID1 with sdb1
What do I do now? Can I recover /dev/md0? Is it just that I didn't add it to fstab? Can I just do that now? Or do I need to delete sdb1 and sdc1, and start over?
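Since --examine shows intact superblocks on both partitions, the array can usually just be reassembled rather than recreated; a sketch of bringing it back and making it persist across reboots (not adding it to fstab only means it isn't mounted automatically, it doesn't break the array):
Code:
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
cat /proc/mdstat                                   # an interrupted initial sync should resume
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
# Then add an fstab entry (or mount by hand) once the filesystem on md0 is created.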
Intending to set up an all-in-one server, I threw in the Ubuntu Server 10.04 (amd64) CD. During the text install, I set up the device topology below, and it worked.
[Code]....
Then I tested my RAID by hot-pulling the sda cable (ouch). It worked fine: the system still worked, and it also managed to reboot from the remaining sdb (which of course showed up as sda, lacking the first drive). Now I am trying to recover this pre-crash state. Adding the first disk (showing up as sdb), I can add it to md0 and let it synchronize for 2 hours. But... I can't boot anymore with the recovered first disk being sda...
At first, booting got stuck at an initrd prompt after complaining it couldn't find my sys logical volume. After a lot of trial and error I don't even get any complaints, just a black screen which would let me wait for a boot for weeks... So, my system does not boot from my first disk, whether I plug in the second or not. My second disk still boots. My last attempt to get booting working again has been:
- zero sda's first and last gigabyte to kill any IDs
- duplicate sdb's first cylinder to sda to make it bootable
- reinitialize sdb's partition table using command 'o' in fdisk for a new disk ID
- recreate the sda1 partition
- add sda1 to md0
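A hedged guess at the missing piece: copying the first cylinder does not reliably give the rebuilt disk a working GRUB 2 installation, and the initramfs also has to be able to assemble md0 and activate LVM before the sys logical volume can be found. A sketch of reinstalling the boot pieces from the system booted off the surviving disk, after the rebuilt partition has been re-added to md0 (device names as seen by the running system are assumptions):
Code:
sudo grub-install /dev/sda     # boot loader onto the rebuilt disk's MBR
sudo grub-install /dev/sdb     # and make sure the surviving disk has one too
sudo update-grub
sudo update-initramfs -u       # so the initramfs knows about mdadm + LVM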
I have Ubuntu Server 10.04 on a server with a 2.8 GHz CPU and 1 GB DDR2, with the OS on a 2 GB CF card attached to the IDE channel, and a software RAID 5 with 4x 750 GB drives. On a Samba share using these drives I am only getting around 5 MB/s, connected via wireless N at 216 Mbps, with my router and server both having gigabit ports. Is a RAID 5 supposed to be that slow? I was seeing speeds of anywhere from 20-50 MB/s reported by other people and am just wondering what I am doing wrong to be so far below that.
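Note that 216 Mbps of wireless signalling is at most about 27 MB/s (216 / 8), and real-world wireless throughput is usually well under half of that, so ~5 MB/s may have little to do with the array. A rough way to measure each piece separately (the mount point is a placeholder):
Code:
# Raw read speed of the array, measured locally on the server:
dd if=/dev/md0 of=/dev/null bs=1M count=2048
# Write speed through the filesystem (writes 1 GB; delete the file afterwards):
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=1024 conv=fdatasync
rm /mnt/raid/testfile
# Then repeat the Samba copy from a machine on a wired gigabit port to take
# the wireless link out of the equation.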
I'm just about to commence a full reinstall of my home media server. I'm planning on using 1x 1 TB and 7x 1.5 TB drives in RAID 6. I notice the version of mdadm distributed in Ubuntu is 2.6.7.1, but versions exist up to 2.6.9 (excluding all the 3.x ones). Is it worth using a later version? Or is 2.6.7.1 used for a particular reason?
I have a RAID1 array where mdadm states that one of the disks is "removed." Naturally, I assume one of the drives has failed. The mdadm --detail command tells me that the sda drive has failed. However, further inspection with the mdadm -E /dev/sdb1 command says that the sdb1 disk has been removed. I am a bit confused. Can someone clarify which drive has failed? Am I misreading the command outputs?
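A hedged way to untangle conflicting outputs: /proc/mdstat shows which slots are actually active in the running array, --detail shows the running array's view, and --examine reads each member's own on-disk superblock, which stops being updated once that disk drops out and can therefore show stale state.
Code:
cat /proc/mdstat                 # the [UU]/[U_] pattern is the quickest ground truth
mdadm --detail /dev/md0
mdadm --examine /dev/sda1
mdadm --examine /dev/sdb1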
My fileserver initially had 3x 1 TB drives in RAID 5, configured with mdadm as /dev/md1. (The system root is a mirrored RAID on /dev/md0.) I went to add a 4th 1 TB drive to /dev/md1 and grow the RAID 5 accordingly. I was initially following this guide: [URL] but ran into issues on the 3rd and 4th commands. I've been trying a few things to remedy the issue since, but no luck. The drive seems to have been added to /dev/md1 properly, but I can't get the filesystem to resize to 3 TB. I also am not entirely sure how /dev/md1p1 got created, but it appears to be the primary partition on the logical device /dev/md1. Relevant information:
The filesystem originated as ext3; I believe it's showing up as ext2 in some of these results because I disabled the journal when doing some initial troubleshooting. I'm not sure what the issue is, but I didn't want to blindly perform operations on the filesystem and risk losing my data.
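For comparison, the usual grow sequence looks roughly like the sketch below (the new disk's partition name is a placeholder). The key caveat for this situation: if the filesystem actually lives on the partition /dev/md1p1 rather than directly on /dev/md1, that partition has to be enlarged (e.g. with parted) before resize2fs can see any of the new space.
Code:
mdadm --add /dev/md1 /dev/sde1            # add the new member
mdadm --grow /dev/md1 --raid-devices=4    # reshape to 4 devices; this can take many hours
cat /proc/mdstat                          # wait for the reshape to finish
umount /dev/md1
e2fsck -f /dev/md1
resize2fs /dev/md1                        # or resize2fs /dev/md1p1 after growing that partition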
One of the hard drives in my server failed the other day, backups saved the day and downtime was only a few hours, but when setting up the new drive I went ahead and migrated to software RAID, in the hopes it may give me less downtime in the future when a drive fails. It all went rather well, but my main root partition won't finish syncing for some reason.
sda was the original drive, with sda4 as /, sda1 as /boot, and sda2 as swap. sdb was the drive that failed and was replaced with the new drive. So I set up sdb with the same partitions as sda, added it to a RAID1 array, copied files from sda, and rebooted with md4 as /, md1 as /boot, and md2 as swap. I added the sda partitions to the array, and the sync went off without a hitch on md1 and md2. md4 progresses well, but after a few hours /proc/mdstat just shows this:
When I start my RAID 5, only 2 disks of 3 are active on md0; the 3rd disk is inactive on md_d0. When I do mdadm --examine, the two active disks report 2 active, 2 working, 1 failed. The inactive disk reports 3 active, 3 working, 0 failed.
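A hedged reading: md_d0 is the partitionable-array device name, and this pattern usually means the third disk was auto-assembled into its own stray array at boot instead of joining md0. A sketch of putting it back (the member's device name is an assumption; run as root):
Code:
mdadm --stop /dev/md_d0                      # release the disk from the stray array
mdadm /dev/md0 --add /dev/sdc1               # add it back to the real array and let it resync
cat /proc/mdstat
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # helps stop the same thing happening next boot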
I am trying to create a new mdadm RAID 5 device /dev/md0 across three disks where such an array previously existed, but whenever I do, it never recovers properly and tells me that I have a faulty spare in my array. More specific details below. I recently installed Ubuntu Server 10.10 on a new box with the intent of using it as a NAS sort of thing. I have 3 HDDs (2 TB each) and was hoping to use most of the available disk space as a RAID 5 mdadm device (which gives me a bit less than 4 TB).
I configured /dev/md0 during OS installation across three partitions on the three disks - /dev/sda5, /dev/sdb5 and /dev/sdc5, which are all identical sizes. The OS, swap partition etc. are all on /dev/sda. Everything worked fine, and I was able to format the device as ext4 and mount it. Good so far.
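As an aside on recreating an array over partitions that previously held one: leftover superblocks are a common source of confusion, and the usual preparation is to wipe them first. A sketch, only for the case where nothing on the array needs to be kept, since it destroys the md metadata:
Code:
# WARNING: this wipes the existing array metadata on these partitions.
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda5 /dev/sdb5 /dev/sdc5
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda5 /dev/sdb5 /dev/sdc5
# A freshly created RAID 5 builds parity onto one member first, so that member showing
# as spare/rebuilding in /proc/mdstat is normal until the initial sync completes.
cat /proc/mdstat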
Then I thought I should simulate a failure before I started keeping important stuff on the RAID array - no point having RAID 5 if it doesn't provide some redundancy that I actually know how to use, right? So I unplugged one of my drives, booted up, and was able to mount the device in a degraded state; test data I had put on there was still fine. Great. My trouble began when I plugged the third drive back in and re-booted. I re-added the removed drive to /dev/md0 and recovery began; things would look something like this: