General :: Backup Daemons Or Mdadm RAID Across Internal And External HDDs?
Aug 15, 2010
I'm building a new desktop computer, on which I plan to install Debian Squeeze. I'll have a 1 TB SATA hard drive in the system. I'm also considering using two 500 GB external USB drives, but I'm debating about how I want to use them. Running them all separately for 2 TB of space could be a nightmare, with three potential points of failure, so I was thinking of using the two external drives as a backup system instead.
I'm considering linking the two external drives in a RAID 0 array, then linking that array and the internal drive in a RAID 1 array. I would use mdadm software RAID for all of this so I could use individual partitions in the arrays, avoid hardware dependency, and have greater software control. So is this feasible to do (a partial RAID 0+1 setup)? Moreover, what kind of performance could I expect from using potentially slow external drives (one of which I know has a very long spin-up time after idle periods) in a mirroring setup with the internal drive? Would I be far better off using a filesystem backup daemon instead?
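For what it's worth, mdadm does allow nesting like this, since an md device can itself be a member of another array. A minimal sketch of the idea, assuming the external partitions are /dev/sdb1 and /dev/sdc1 and the internal 1 TB partition is /dev/sda3 (all device names are placeholders):

Code:
# Stripe the two 500 GB external partitions into one ~1 TB device
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
# Mirror the internal 1 TB partition against that stripe
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/md0
# Record both arrays so they assemble at boot (Debian path)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Keep in mind the mirror only writes as fast as its slower side, so the USB stripe would cap write performance for the whole md1 device.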
EDIT: After some more research and brainstorming, I've decided I might just end up using rsync+cron, lsyncd, or DRBD (assuming it can easily make backups locally). I'd probably have to link up the external drives in RAID 0 (or use some filesystem link trickery). But I suppose such a setup would offer greater control, more flexibility in disk capacities (the full system isn't so strictly limited to the capacity of the smallest member of the array), and finer granularity than RAID 0+1 would. I'm still open to thoughts on the mdadm RAID 0+1 solution, but does anyone have any advice on choosing backup software? For some background on my needs, I'll be using this computer as both an everyday desktop and a personal LAMP server (MySQL database files would be included in the backups).
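As a rough illustration of the rsync+cron route (the script path and the /mnt/backup mount point are just assumptions), a nightly job might look something like the sketch below; for MySQL it is generally safer to dump the databases first rather than copying the live data files:

Code:
#!/bin/sh
# /usr/local/bin/nightly-backup.sh - run from cron, e.g.:
#   30 2 * * * root /usr/local/bin/nightly-backup.sh
# Dump MySQL first so the backup is consistent (credentials assumed in /root/.my.cnf)
mysqldump --all-databases > /var/backups/mysql-all.sql
# Mirror the system to the external backup volume, preserving permissions/ACLs
rsync -aAX --delete \
    --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp --exclude=/mnt \
    / /mnt/backup/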
We have some servers that run in very harsh environments (a research vessel) that need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes, etc.), however we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAID'ed devices (from a live CD): Code: dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img then reverse the process on the new PC, finally using Code: mdadm --assemble to re-create and re-build the array.
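A sketch of what that round trip might look like, assuming the image lives on an external drive mounted at /mnt/external and the replacement disk shows up as /dev/sda on the new box (both are assumptions):

Code:
# On the old machine, from a live CD: image one RAID member, compressed
dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img
# On the replacement machine, also from a live CD: restore the image
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096
# Start the mirror degraded from the restored member
mdadm --assemble --run /dev/md0 /dev/sda1
# Later, add the second disk so the mirror rebuilds onto it
mdadm --manage /dev/md0 --add /dev/sdb1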
I'm writing a monitoring plugin for a home server RAID, mdadm on Ubuntu 10.04. code...
I'm looking for the possible values of "state" but can't seem to find them anywhere; neither the man page nor the online documentation I have found seems to have a list.
Does anyone know where to find a list of possible states?
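In case it helps, the kernel itself exposes the array state through sysfs, and the possible values there are documented in the kernel's md documentation (clear, inactive, suspended, readonly, read-auto, clean, active, write-pending, active-idle). This is the kernel's array_state rather than exactly the string mdadm --detail prints, but the two are closely related. A small sketch, assuming the array is /dev/md0:

Code:
# Raw kernel state for md0 (values per the kernel's Documentation/md.txt)
cat /sys/block/md0/md/array_state
# mdadm's own health summary line, handy for a plugin
mdadm --detail /dev/md0 | grep -i 'state'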
I have a RAID 5 across 10 disks (750 GB each), and it has worked fine with GRUB for a long time under Ubuntu 10.04 LTS. A couple of days ago I added a disk to the array, grew it, and then resized the filesystem. BUT, I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer! So the resize process was cancelled in the middle and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it, finished the resize process, and the RAID seems to be working fine, so I tried to boot normally again. Same problem. Back on the rescue CD, I updated GRUB and got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc, and grub-common, removed /boot/grub, and installed GRUB again. Same problem.
I have tried to erase the MBR (# dd if=/dev/null of=/dev/sdX bs=446 count=1) on sda (IDE disk, system) and sdb (SATA, new RAID disk). Same problem. I removed and reinstalled Ubuntu 11.04 and am now getting error: no such device: (hdd id). Again I tried to reinstall GRUB on both sda and sdb, no luck. update-grub still generates the error about RAID version 0.91, and a normal boot is back to a blinking cursor. When you're resizing an array, mdadm changes the metadata version from 0.90 to 0.91 to guard against exactly the kind of interruption that happened here. But since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch at [URL], but I can't compile it; I get various errors about dpkg. So my problem is: I can't get GRUB to work. It just gives me a blinking cursor and unsupported RAID version: 0.91.
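Since the on-disk metadata is reportedly back to 0.90, one thing that sometimes helps is reinstalling GRUB from a chroot so it re-reads the current array metadata. A rough sketch from a rescue CD, assuming the root filesystem lives on /dev/md0 and GRUB should go on the boot disk /dev/sda (both assumptions):

Code:
# Assemble the array and mount the installed system
mdadm --assemble --scan
mount /dev/md0 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
# Reinstall and reconfigure GRUB from inside the installed system
chroot /mnt grub-install /dev/sda
chroot /mnt update-grub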
I'm running a Debian home server with a 3-disk (1 GB each) RAID 5 array using mdadm (the OS is on a separate disk). Now, smartmontools has noticed some bad sectors on one of the disks, and I'm not sure what to do next (apart from backing up the valuable data). I found some articles on how to fix these sectors, but I'm unsure what the effect on the whole array will be.
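One generally non-destructive first step, assuming the array is /dev/md0 and the suspect member is /dev/sdb (both assumptions), is to let md read-check every stripe itself; unreadable sectors on a member are then rewritten from parity, which usually makes the drive reallocate them. A sketch of that idea:

Code:
# Check the SMART counters of the suspect member
smartctl -A /dev/sdb | grep -Ei 'reallocated|pending'
# Ask md to read-check the whole array; read errors are repaired
# from parity on the other members
echo check > /sys/block/md0/md/sync_action
# Watch progress
cat /proc/mdstat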
I have an external hard drive with an XFS partition on it. It was using an external journal, but while re-installing Slackware I removed the partition holding the external journal, forgetting what it was at the time. I didn't touch the contents of the external hard drive, but now I can't mount it, and the various XFS programs seem to demand that it be mounted before they will change anything. Does anyone have any ideas on how to change an XFS partition from an external log to an internal one? Failing that, how do I get the information off it?
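I don't believe the filesystem can simply be switched to an internal log in place, but as an assumption-laden data-recovery sketch: give XFS a fresh external log device at least as large as the old one, zero the (lost) log with xfs_repair, then mount with that log device and copy the data off. Device names below are placeholders:

Code:
# /dev/sdb1 = the XFS data partition, /dev/sdc1 = a newly created partition
# to serve as a replacement external log
xfs_repair -L -l /dev/sdc1 /dev/sdb1   # -L forces zeroing of the lost log
mount -t xfs -o logdev=/dev/sdc1 /dev/sdb1 /mnt/recover
# Copy the data somewhere safe
rsync -a /mnt/recover/ /mnt/backup/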
I have a 4-disk RAID 5 array on my Ubuntu 10.10 box. The members are /dev/sd[c,d,e,f]. smartctl started notifying me that /dev/sde had some bad sectors and the number of errors was increasing each day. To mitigate this I decided to buy a new drive and replace it. I have an external 4-bay disk enclosure. I failed /dev/sde via mdadm.
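For reference, the usual replace-a-failing-member sequence with mdadm looks roughly like the sketch below; the array name /dev/md0 and the partition numbers are assumptions, and the new drive is assumed to appear as /dev/sdg:

Code:
# Mark the failing member as faulty and pull it out of the array
mdadm --manage /dev/md0 --fail /dev/sde1
mdadm --manage /dev/md0 --remove /dev/sde1
# Copy the old partition layout (MBR) to the new disk, then add it;
# the array rebuilds onto it automatically
sfdisk -d /dev/sdc | sfdisk /dev/sdg
mdadm --manage /dev/md0 --add /dev/sdg1
cat /proc/mdstat   # watch the rebuild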
I would like to build a NAS from a PC (D510MO) running Debian. I have two HDDs (one 3.5" 1 TB and one 2.5" 500 GB). On the 3.5" HDD I already have two partitions (100 MB + 40 GB) dedicated to Win7-64. Now I want to install Debian (as a second OS) on this PC and have some kind of software RAID or disk mirror of 500 GB. I am planning to create a third partition of 500 GB on the 3.5" HDD (identical to the 2.5" HDD's size) in order to have a mirrored 500 GB space.
Please send me some suggestions on where I should install Debian: on the 500 GB 2.5" HDD or on the 3.5" HDD? Will Debian boot from both HDDs (3.5" or 2.5") after I create the mirror? What Linux software should I use for mirroring (mdadm)?
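mdadm is indeed the usual tool for this. A minimal sketch of mirroring two same-sized partitions, assuming the 500 GB partition on the 3.5" disk is /dev/sda3 and the 2.5" disk's partition is /dev/sdb1 (both names are assumptions):

Code:
# Create a two-way mirror from the two 500 GB partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb1
# Put a filesystem on it and record the array for boot-time assembly
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

Note that this only mirrors the data partition; if /boot or / ends up on the mirror, you would also install GRUB onto both disks so either one can boot.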
I'm running Debian 8.2 and trying to set things up so I can plug in a couple of external hard drives that will be used to sync data between systems using rsync.
I've got the rsync bit working how I want, that's not an issue. But what I can't seem to get to work properly is that when I plug the devices in, they don't mount automatically.
I've tried various methods to no avail so far: systemd.automount in fstab doesn't seem to want to work, for some reason it gives an I/O error. I've tried setting up udev rules and they don't work either, so I'm at a bit of a loss now.
Not sure what info to provide that would be relevant at this time, but I can add logs as required easily enough.
This machine is headless, so command-line-only suggestions would be best. I can access X over the network if I have to, but I'd rather do it via the CLI for ease of access.
My fstab file:
Code:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=9b4e9dae-ea53-439a-a7fe-87c371c03803 /               xfs     defaults        0       1
# /home was on /dev/sda9 during installation
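For comparison, an on-demand automount entry for a removable drive in fstab on a systemd-based Debian usually looks something like the line below; the UUID and mount point are placeholders, and noauto/nofail keep boot from hanging when the drive isn't plugged in:

Code:
# /etc/fstab entry (one line): mounted on first access rather than at boot
UUID=xxxx-xxxx  /mnt/usb-backup  ext4  noauto,nofail,x-systemd.automount,x-systemd.device-timeout=10  0  2

After editing fstab, running systemctl daemon-reload makes systemd pick up the new automount unit.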
I have a (virtual) server with 3 NICs: 1 external (internet), 1 local, and 1 DMZ. This server is my gateway. I would like my internal network, where every server has a static 192.168.0.x IP, to access the internet via the gateway. That means the traffic has to pass from the 'local' NIC to the 'external' NIC, which is connected to the internet. Which setting do I change to accomplish this?
Please check the screenshot (attachment) for my current setup.
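In broad strokes this is IP forwarding plus NAT on the gateway. A sketch, assuming the external NIC is eth0 and the local NIC is eth1 (interface names are assumptions):

Code:
# Allow the kernel to route packets between interfaces
sysctl -w net.ipv4.ip_forward=1          # make permanent in /etc/sysctl.conf
# Masquerade traffic leaving via the external interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# Permit forwarding from the local LAN out, and replies back in
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT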
I have a fairly standard home network setup with a router and a couple of machines on the internal network (with private IP addresses 10.0.0.x). One of these machines is running my Subversion server, which is in turn used by my laptop. I am now trying to configure the laptop so that I can have one working Subversion copy connected to the repository that works both when the laptop is connected to my home network and when it is connecting from the internet. I configured a "virtual server" on the router, so that port 443 goes to the machine with Subversion, and this works fine. Now I don't know how to configure the laptop to reach the same machine, because the IP is different depending on whether I access it from outside or from inside. I tried connecting to the external IP of my network, but the router refuses to let the connection go "out and in again". How do I get this configured?
I am using Debian Linux with wpa_supplicant on the laptop.
I am using Postfix and Dovecot installed on one machine running Linux CentOS 5.4, and I have two LAN cards, eth0 and eth1; eth0 has my IP from the ISP and eth1 has my internal IP.
Now, Postfix and Dovecot have started without any errors; what I mean is, I am able to send to Yahoo, Gmail, etc., and I can also receive email from outside. My question is: how can I restrict the address prinzz@prinzz.com so that it is denied mail to and from the outside and can only send and receive internally, while prinzz2@prinzz.com is allowed to send and receive both outside and inside?
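One common way to handle the sending direction in Postfix is a restriction class keyed on the sender address, along the lines of the pattern in Postfix's RESTRICTION_CLASS_README. This is only a sketch: the class name local_only and the map file names are my own placeholders, and blocking inbound mail from outside to that address would need a separate check_recipient_access rule.

Code:
# /etc/postfix/main.cf (excerpt)
smtpd_restriction_classes = local_only
local_only = check_recipient_access hash:/etc/postfix/local_domains, reject
smtpd_recipient_restrictions =
    check_sender_access hash:/etc/postfix/restricted_senders,
    permit_mynetworks,
    reject_unauth_destination

# /etc/postfix/restricted_senders:   prinzz@prinzz.com   local_only
# /etc/postfix/local_domains:        prinzz.com          OK

postmap /etc/postfix/restricted_senders
postmap /etc/postfix/local_domains
postfix reload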
I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04. I then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't; when I look at the array using Disk Utility, it says the array is not running. If I try to start the array it says: Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error mdadm: Not enough devices to start the array. I have tried mdadm --assemble --scan and it gives this output: mdadm: /dev/md0 assembled from 2 drives - not enough to start the array. I know that all 4 drives are present as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0 which gave:
root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
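When only some members are being picked up, the usual next step is to compare the superblocks and, if the event counts are close, force an assembly from the members that look consistent. A sketch, assuming the four members are /dev/sd[bcde]1 (partition names are assumptions):

Code:
# Compare the superblocks; look at the Events count, array UUID and state
mdadm --examine /dev/sd[bcde]1 | grep -E 'Events|UUID|State'
# If the event counts are close, stop the half-assembled array and force it
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[bcde]1

Be aware that --force can overwrite a small amount of the most out-of-date member during resync, so it is worth double-checking the examine output (and having backups) first.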
I have done lots of searching and I haven't been able to find anyone else with the same problem. Whenever I create a RAID with mdadm, regardless of level (I've tried linear, 0, and 5), the command I use is:
I have a previously defined RAID 5 (4 disks). This worked in Ubuntu 8.04. I recently moved to CentOS 5 and I cannot seem to get the array back online. cat /proc/mdstat shows no RAID levels (personalities) and no drives listed. mdadm --detail --scan returns nothing. mdadm -QE returned a UUID string and the ARRAY output. I can mdadm --examine all the members of the original array. I am not versed in mdadm enough to really understand what I can run, and what I should not run, that would erase the data on the drives. Please assist; I will try to post the exact output of the commands, but the system is kind of unreachable and being rebuilt... I just want to ensure my data on the array is not lost.
I have CentOS 5.5 with qmail installed on it. This qmail is used for internal mail only; we do not send mail from internal users to external addresses (Gmail, Yahoo, etc.). qmail is installed on a 192.x.x.x server, and this IP is not a public IP. My problem is that for a few days now mail has been going out from internal users to external addresses like indiatimes and Yahoo.
If I install any distro on an IDE external hard drive, will I still get GRUB installed on my internal Windows MBR as the boot loader? I am thinking the answer is yes.
My only goal is to have a RAID 5 that auto-assembles and auto-mounts. Hardware: 4 x 2 TB SATA (RAID disks), 1 x 500 GB IDE (OS disk), 1 IDE DVD drive, all plugged directly into the motherboard (nForce 750i SLI).
Starting partitions on the RAID disks: GPT, ext4. The problem occurs when I restart my computer after building the array for the first time. I am able to see it assemble, I am able to partition it, I even mounted it once. This is the second time I've built it, so I have watched everything that happened. I don't know if this has anything to do with my problem, but when I created the RAID my drive designations were: sda - 500 GB (OS), sd[bcde] - 2 TB (RAID). When I restarted: sd[abcd] - 2 TB (RAID), sde - 500 GB (OS).
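Because md identifies members by their superblock UUIDs rather than by device names, the sda/sde shuffle by itself shouldn't matter, provided the array and the mount are pinned by UUID. A sketch of making both persistent (the array name, mount point, and the Debian/Ubuntu config path are assumptions; other distros use /etc/mdadm.conf):

Code:
# Record the array by UUID so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
# Find the filesystem UUID and mount by it instead of /dev/sdX names
blkid /dev/md0
# e.g. add to /etc/fstab:
#   UUID=<uuid-from-blkid>  /srv/raid  ext4  defaults,nofail  0  2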
I HAD a Fedora 11 server with md RAID 1 across two 1 TB SATA drives. The md0 space was set up to be an LVM PV, and the single LVM VG was carved up into 5 or 6 LVs. The motherboard on this system died and I wound up buying a new one.
Now I want to recover the data from the RAID 1 setup on the new server. However, when I attach the two 1 TB drives to a new Fedora 13 setup, mdadm is only able to find one of the two drives. The partition on the second drive shows "busy" during an mdadm -A -s -v scan for md volumes.
Well, one drive should be enough since this is RAID 1, right? Yet when I do a pvscan -v, the other drive shows up as a "NEW" PV not allocated to any VG. In addition, vgscan prints "Invalid metadata header checksum" when it runs, but it doesn't point at any particular PV. I'm afraid to go any further with LVM since I can't afford to lose the data on this system. It is backed up offsite, but the restore would take several days and I can't afford to be down that long.
Are there any tools or techniques that would let me dig deeper into what each drive in the RAID 1 pair has right and wrong with it, so I can pick one that I can force into a usable VG and recover the data?
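A cautious, mostly read-only way to poke at this (device, VG and LV names below are placeholders) is to inspect each member's md superblock, check what is holding the "busy" one, then bring the mirror up degraded from the healthier-looking member before letting LVM at it:

Code:
# See which md superblock each partition carries (UUID, update time, state)
mdadm --examine /dev/sdb1 /dev/sdc1
# If one partition is "busy", check whether an auto-started md device or
# dmraid has already grabbed it
cat /proc/mdstat
dmsetup ls
# Assemble the mirror degraded from the member that looks healthiest
mdadm --assemble --run /dev/md0 /dev/sdb1
# Then let LVM re-scan, activate the VG, and mount read-only to inspect
pvscan
vgscan
vgchange -a y vg_name
mount -o ro /dev/vg_name/lv_name /mnt/recover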
I have a server that was running a hardware (isw) RAID on the system (root) disk. This was working just fine until I started getting sector errors on one of the disks. So I shut down the system, removed the failing drive, and installed a new drive (same size). On reboot I went into the Intel RAID setup; it did show the new drive and I was able to set it to rebuild the RAID. Continuing the reboot, everything came up just fine except the RAID 1 on the system disk. I have tried many times to get the system to rebuild the RAID using dmraid, but to no avail; it would not start a rebuild. In order to get the system back up and make sure that the disk was duplicated, I was able to 'dd' the working disk to the newly installed disk. At present the system does not show a RAID setup on the system disk (this comprises the entire 1 TB disk, with two partitions: sda1 as / and sda2 as swap). Problem: I have decided to forego the Intel RAID and just use mdadm. I have a test system set up to duplicate the server setup (not the software, but the disk partitions).
Code:
[root@kilchis etc]# fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
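For the mdadm side of the test setup, the usual trick for converting an already-installed single-disk system is to build the mirror degraded using the new disk and the keyword "missing", copy the system over, and only then add the original disk. A compressed sketch, assuming the current system disk is /dev/sda and the new disk is /dev/sdb (both assumptions):

Code:
# Copy the partition layout to the new disk (set the partition type to
# Linux raid autodetect afterwards)
sfdisk -d /dev/sda | sfdisk /dev/sdb
# Create degraded RAID 1 arrays using only the new disk for now
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # future /
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2   # future swap
# Make filesystems on md0/md1, copy the running system across, install GRUB
# on sdb, boot from the arrays, then add the original disk to complete them:
mdadm --manage /dev/md0 --add /dev/sda1
mdadm --manage /dev/md1 --add /dev/sda2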
I have a 7-drive RAID array on my computer. Recently, my SATA PCI card died, and after going through multiple cards to find another one that worked with Linux, I now can't assemble the array. The drives are no longer in the order they were in previously, and mdadm can't seem to reassemble the array. It says there are 2 drives and one spare, even though there were 7 drives and no spares. I know for a fact that none of the drives are corrupted, because one of the non-working RAID cards was still able to mount the array for a short period, but would lose the drives during resyncing (I later found out that the chipset on the card had extremely limited Linux support). I have tried running "mdadm --assemble --scan", and after the array is partially assembled, I add the other drives with "mdadm --add /dev/md0 /dev/sdc1". Both of these return errors and will not complete on the new RAID card.
Code:
aaron-desktop:~ aaron$ sudo mdadm --assemble /dev/md0
mdadm: /dev/md0 assembled from 2 drives and 1 spare - not enough to start the array.
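Since md reads each member's role from its superblock, the new device order shouldn't matter; what usually helps here is naming every member explicitly and comparing event counts before forcing assembly. Be careful with --add on a broken array: re-adding original members as if they were new disks can mark them as spares and discard their contribution. A sketch with placeholder partition names:

Code:
# Check every candidate member: array UUID, state and event count
mdadm --examine /dev/sd[bcdefgh]1 | grep -E 'UUID|Events|State'
# Stop the half-assembled array, then force-assemble naming all members
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 \
      /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1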
I am new to Fedora 13. My problem is that I cannot hear any sound from my laptop's internal speakers (they work fine in Windows, so no hardware problem!), but when I connect external headphones, everything is heard. I have tried several commands from various internet sources over the past week. I don't understand many of the commands that I have run, but that has somehow helped me to run media files and hear sound on HEADPHONES.
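One simple thing worth checking from the command line is whether the speaker channel itself is muted or turned down in ALSA; the exact control names vary by codec, so "Speaker" below is an assumption:

Code:
# List the mixer controls the codec exposes
amixer scontrols
# Unmute and raise the channels that commonly affect internal speakers
amixer sset Master 80% unmute
amixer sset Speaker 80% unmute
amixer sset PCM 80% unmute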
I'm looking for a way to do a bare-metal backup of our server using a tool such as Ghost or Clonezilla. The limitation is that / is on an mdadm RAID 5. The only relevant info I could find on Clonezilla's site was:
# Software RAID/fake RAID is not supported by default. It can be done manually only.
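One manual approach for an mdadm root, sketched under the assumption that the array is /dev/md0, the filesystem on it is ext4, and an external destination is mounted at /mnt/backup, is to save the pieces needed to rebuild the array plus an image of the filesystem that lives on it:

Code:
# Save what is needed to recreate the member disks and the array
sfdisk -d /dev/sda > /mnt/backup/sda.parts       # repeat per member disk
mdadm --examine --scan > /mnt/backup/mdadm-arrays.conf
# Image the filesystem that lives on the assembled array
# (partclone stores only used blocks; dd of /dev/md0 also works)
partclone.ext4 -c -s /dev/md0 -o /mnt/backup/md0-root.img

Restoring would be the reverse: repartition from the saved dumps, recreate or assemble the array from the saved config, then restore the image with partclone.ext4 -r.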
Got a little problem after installing Fedora 12. At first there was no problem opening the RAID device, but after I tried to automount it with crypttab and fstab I'm no longer able to open it. Here are some outputs: code...
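For reference, a working crypttab/fstab pair for an encrypted volume on top of md usually looks something like the lines below; the names cryptraid and /mnt/raid, and the ext4 filesystem type, are placeholders:

Code:
# /etc/crypttab   (name  underlying-device  keyfile  options)
cryptraid  /dev/md0  none  luks

# /etc/fstab      (mount the mapped device, not /dev/md0 itself)
/dev/mapper/cryptraid  /mnt/raid  ext4  defaults  0  2

A common cause of the symptom described is fstab pointing at /dev/md0 directly instead of the /dev/mapper device, or crypttab referring to a device name that changes between boots (using UUID= for the source device is more robust).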
Recently, the SMART utility said that one of the drives had failed and another drive was about to fail. I downed the box and hooked the drives up to my Windows machine to run SeaTools on them (they are all Seagate drives). SeaTools said the drives were fine, while Ubuntu said they were failing/dead. Yesterday I decided to try to fix one of the drives in the RAID. I turned the server off, took the failed drive out, and restarted. Of course the RAID didn't work because only 2 of the 3 drives were there; however, it had been working with only 2 of the 3 drives for a couple of months now (I'm a lazy college student). I turned it back off and back on with the drive there, just to see if I could get the RAID up again, but I haven't been able to get it to go. So far I've tried:
Code:
mdadm --assemble /dev/md0 /dev/sd[b,c,d]
mdadm: no recogniseable superblock on /dev/sdb
mdadm: /dev/sdb has no superblock - assembly aborted
[code]....
I'm looking for a way to trick the RAID into working with just 2 drives until I can warranty the Seagate and buy an external 1.5 TB drive to use as another backup, and for how to remove the bad drive from the array and replace it with a fresh drive without data loss.
I just had a whole 2 TB software RAID 5 blow up on me. I rebooted my server, which I hardly ever do, and lo and behold I lose one of my RAID 5 sets. It seems like two of the disks are not showing up properly. What I mean by that is that the OS picks up the disks, but it doesn't see the partitions.
I ran smartctl -l on all the drives in question and they're all in good working order.
Is there some sort of repair tool I can use to scan the busted drives (since they're available) to fix any possible errors that might be present?
Here is what the "good" drive looks like when I use sfdisk:
Quote:
sudo sfdisk -l /dev/sda
Disk /dev/sda: 121601 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End    #cyls    #blocks   Id  System
/dev/sda1          0+ 121600  121601-  976760001   83  Linux
/dev/sda2          0       -       0           0    0  Empty
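Since the "bad" drives are readable but show no partitions, and RAID members are normally partitioned identically, one possible (and fairly aggressive) recovery step is to rewrite their partition tables from the good member's layout; testdisk is the more cautious alternative, since it scans for the old partition boundaries first. This sketch only rewrites the partition table, not the data area, and the device names are placeholders:

Code:
# Save the good drive's layout, then apply it to a drive whose table is gone
sfdisk -d /dev/sda > sda-layout.txt
sfdisk /dev/sdb < sda-layout.txt
# Re-read the partition table and check whether the md superblock is back
partprobe /dev/sdb
mdadm --examine /dev/sdb1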
I have a fileserver which is running Ubuntu Server 6.10. I had a RAID5 array consisting of the following disks:
Code:
/dev/sda1
/dev/sdb1
/dev/sdd1
/dev/md0 - the RAID device for the above three disks

The sda1 disk has failed and the array is running on 2 of 3 disks.
/dev/sdc (OS disk)
/dev/sde (new 2 TB disk - unused)
/dev/sdf (new 2 TB disk - unused)
My plan was to rebuild the array using the two new disks as RAID 1. Would the best way to do this be to create a new RAID 1 device on /dev/md1 and then copy all data over from /dev/md0? Also, this may sound stupid, but since all 3 drives in md0 are identical I'm not sure which physical disk is the bad one. I tried disconnecting each disk one by one and then rebooting, but the system doesn't appear to want to boot without the bad drive connected. I've already failed the disk in the array with mdadm, but I'm unsure of how to remove it properly.
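As a sketch of that plan (the partition names, mount points, and the failed member being /dev/sda1 are assumptions; the serial-number check is just one way to match a Linux device name to a physical drive label):

Code:
# Build the new mirror from the two unused 2 TB disks (after creating one
# partition on each), then copy the data across
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
mkfs.ext3 /dev/md1
mount /dev/md1 /mnt/newraid
rsync -a /mnt/oldraid/ /mnt/newraid/
# Identify the bad drive physically by its serial number before pulling it
smartctl -i /dev/sda | grep -i serial
# The member is already marked failed, so it can now be removed cleanly
mdadm --manage /dev/md0 --remove /dev/sda1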