I currently have an Ubuntu 10.04 Server with ten 2TB hard drives (hot-swappable). I discovered that a software RAID array larger than 16TB is not supported, so I split the drives into two sets and have two software RAID arrays storing my movies, audio, pictures, and other software. Total current usage is around 7TB.
Since backing the files up to DVDs or even Blu-ray is laughable, I am going to back the system up to 2TB hard drives, probably four of them. The problem is that I can only hook one backup drive at a time into the system using a hot-swap tray. I know I can do this manually by copying files to a drive until it is full, swapping the drive out, and repeating, but I am hoping for an automated solution: start the backup, plug the first drive in, the system fills up the drive, swap and repeat. It would also be nice if the system remembered what had already been backed up, so that when I add files to the system I only need to attach the last drive rather than start the process over.
Our company just bought faster hard drives for our web server. A lot of the software and services set up on this machine have had config files set up, etc. It would take a while to rebuild it from scratch, which I may have to do. I know most config files are in /etc, and I can use apt to spit out a list of installed packages.
Any tips to avoid gotchas here? We need to minimize downtime, of course, and get everything back up the way it is now.
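A minimal sketch of the package-list approach, assuming a straightforward apt-based setup (the file names are just examples):

Code:
# On the old server: record installed packages and save /etc
dpkg --get-selections > packages.list
sudo tar -czpf etc-backup.tar.gz /etc

# On the new install: feed the list back and let apt install everything
sudo dpkg --set-selections < packages.list
sudo apt-get dselect-upgrade

Anything outside the package system (/usr/local, /opt, web roots, databases) still has to be copied over separately.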
I'm breaking into the OS drive side with RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1) and have been testing the fail-over and rebuild process. Works great physically failing out either drive. Great! My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: Fail one of the disks out of the RAID-1, then image it to a file, saved on an external disk, using the dd command (if memory serves, it would be something like "sudo dd if=/dev/sda of=backupfilename.img") Then, re-add the failed disk back into the array. In the event I needed to roll back to one of those snapshots, I would just use the "dd" command to dump the image back on to an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on-hand, and using something like a tape drive is also not an option.
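For reference, a rough sketch of the fail/image/re-add cycle described above, assuming the array is /dev/md0 and the member being imaged is /dev/sdb (device names and the image path are examples):

Code:
# Fail and remove one member from the mirror
sudo mdadm /dev/md0 --fail /dev/sdb1
sudo mdadm /dev/md0 --remove /dev/sdb1
# Image the whole disk to the external drive
sudo dd if=/dev/sdb of=/mnt/external/os-backup.img bs=4M
# Re-add the member and let the array resync
sudo mdadm /dev/md0 --add /dev/sdb1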
I'm looking for a way to set up a backup with two 2TB hard drives. What I want to do is basically mirror the two drives: whatever I copy to one drive, I want "cloned" to the other drive.
Is there any software that could help me with this? Does anyone know of a better way to do something like this?
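One low-tech sketch, assuming both drives are already mounted (the paths are examples): a periodic rsync keeps the second drive identical to the first. True real-time mirroring would instead be a software RAID 1 (mdadm) of the two drives.

Code:
#!/bin/sh
# /etc/cron.hourly/mirror-drives -- keep drive2 an exact copy of drive1
rsync -a --delete /mnt/drive1/ /mnt/drive2/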
Because a lot of users are on laptops now, and many want external hard drives for backups, is there a program in Ubuntu (cross-platform with Windows would be nice) that backs up files to an external hard drive when the drive is plugged in, or on a schedule? All the backup systems I have seen are purely timer-based, and they throw annoying pop-ups if the backup location doesn't exist (e.g. Deja Dup).
Use case 1: I plug in my external, the program recognizes that and starts a backup.
Use case 2: I leave my external in all day and every 6 hours, my laptop backs up my files to it.
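A rough sketch of use case 1 with a udev rule, assuming you identify the drive by its filesystem UUID (the UUID, script paths, and rule file name here are placeholders):

Code:
# /etc/udev/rules.d/99-usb-backup.rules
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="1234-ABCD", RUN+="/usr/local/bin/start-backup.sh"

# /usr/local/bin/start-backup.sh
#!/bin/sh
# udev kills long-running RUN programs, so hand the real work to at(1)
echo "/usr/local/bin/do-backup.sh" | at now

Use case 2 is then just a cron entry whose script checks that the mount point exists before running, which also avoids the pop-up problem.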
I am using software RAID in Ubuntu Server Edition 9.10 to mirror (RAID 1) two 1TB hard drives. These are used for data storage and websites. I also have an 80GB hard drive for the operating system. This drive has no backup or RAID at all. Should this drive crash and the system therefore become unbootable, will I be able to recover the data on the 1TB drives, or should I back up the 80GB drive as well?
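The mirror itself is normally recoverable from a fresh install or a live CD; a sketch, assuming the array comes up as /dev/md0 and mdadm is installed:

Code:
sudo mdadm --assemble --scan     # find and assemble existing arrays
cat /proc/mdstat                 # confirm the mirror is up
sudo mount /dev/md0 /mnt/data

Backing up /etc from the 80GB drive (especially mdadm.conf, fstab, and the web server configuration) still saves a lot of reconstruction work.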
I have this server board by Supermicro: H8QM3-2. I would like to know if there is a SAS driver for the hard drives. I want to use Ubuntu 64-bit but have failed to find a driver for my SAS hard drives.
I have a second hard drive that I plan on placing in a computer I wish to use as a server. My question is: how could I use these two hard drives as one? Say one is 80GB and the other is 40GB; how could they be shown as 120GB instead of being separate? The server installation I plan on using is 10.04.
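The usual way to pool two disks into one volume is LVM; a minimal sketch, assuming the drives are /dev/sdb and /dev/sdc with one partition each (names and mount point are examples):

Code:
sudo pvcreate /dev/sdb1 /dev/sdc1
sudo vgcreate storage /dev/sdb1 /dev/sdc1
sudo lvcreate -l 100%FREE -n data storage
sudo mkfs.ext4 /dev/storage/data
sudo mount /dev/storage/data /srv/data

Note that spanning like this means losing either disk can take the whole volume with it.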
The 400GB drive has the OS and swap, while the two 1TB drives are going to be used as storage.
Now for the problem: when I have both 1TB drives in, I cannot format or mount either of them. It says the device is in use, or "The device file '/dev/sdc1' does not exist".
If I take one of the 1TB drives out, I can format, partition, and mount it with no problem. It only seems to be a problem when both drives are connected.
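A few checks that may show what has claimed the disks when both are present (these are diagnostic guesses, not a confirmed cause; md, fakeRAID metadata, or LVM grabbing the drives at boot can all produce "device is in use"):

Code:
cat /proc/mdstat                 # any md arrays holding the disks?
sudo dmsetup ls                  # device-mapper / dmraid mappings?
sudo mdadm --examine /dev/sdc1   # leftover RAID metadata on the partition?
sudo pvs                         # LVM physical volumes?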
Here's the background. 1x Windows Vista laptop (laptop1) 1x Windows 7 laptop (laptop2) 1x Mac laptop (laptop 3)
I am running Ubuntu Server with 3 hard drives and have Webmin installed. So far, the three laptops can connect to Samba and access /home/insert_user_here. All laptop users have access to my /media/data2 (photographs, videos). That's all good. At first, I couldn't get any user but laptop 3 to access /media/sdb1, but I fixed that by changing permissions to 755, so I guess everyone can access it. At the moment, I want to allow only laptop 3 to connect to /media/sdd1 (read/write/etc.) while laptops 1 and 2 can't even see the files. Also, laptops 1 and 2 can't seem to read and write through the file share.
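A sketch of a share restricted to a single account, assuming laptop 3 logs in to Samba as a user called laptop3user (the share and user names are placeholders):

Code:
[data-sdd1]
   path = /media/sdd1
   valid users = laptop3user
   browseable = no
   read only = no

The Unix permissions on /media/sdd1 still have to allow that user to write; the read/write failures on the other shares are often the same mismatch between the Samba user and the directory ownership/mode.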
I recently moved my Ubuntu 9.10 server over to a new case with more memory. When I did, the web server worked like a charm, but now apt-get is broken. I checked my 70-persistent-net.rules and saw the new NIC was there, so I edited my interfaces file to use eth1 for the new NIC, but apt is still not working, failing to fetch everything. Here is my 70-persistent-net.rules
I installed Fedora 10 a short time after it came out. Now I am having problems unmounting these drives on restart or shutdown: it hangs at the 'unmounting file system' stage. I've looked into this and discovered that those drives are automatically mounted and shown in the GNOME file browser. As /etc/fstab indicates, they are not mounted by it. I must have done something to have all the hard drives shown in the file browser, and now Fedora seems unable to unmount them.
Quote:
#
# /etc/fstab
# Created by anaconda on Mon Sep 7 20:25:11 2009
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
I have a 3-year-old PC with 4 internal SATA ports. My old SATA hard drives, all smaller than 2TB, work fine. If I buy a 3TB SATA hard drive, will it work in Linux? Will Linux with GRUB be able to boot from such a drive without a BIOS upgrade? With a BIOS upgrade? It's fine for me to upgrade my Linux to the newest kernel.
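As a data drive it mainly needs a GPT partition table rather than MBR, since MBR tops out at 2TiB; a sketch with parted (the device name is an example). Booting is the harder question: GRUB 2 can boot a GPT disk on a BIOS machine if you add a small bios_grub partition, but whether the firmware sees the whole drive at all depends on the BIOS itself.

Code:
sudo parted /dev/sdb mklabel gpt
sudo parted /dev/sdb mkpart primary 1MiB 100%
sudo mkfs.ext4 /dev/sdb1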
My questions begin with VirtualBox. I have Windows XP installed via VirtualBox. Ordinarily I hate everything about Windows, but unfortunately some things related to my job still require access to some form of Windows. I want it to recognize all of the hard drives installed on this system, 4 of them to be exact, so that I can use files from all of them.
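One approach that may fit is VirtualBox shared folders rather than exposing the raw disks to the guest; a sketch, assuming the VM is named "WinXP" and the drives are mounted on the host (the VM name, share names, and paths are placeholders):

Code:
VBoxManage sharedfolder add "WinXP" --name disk1 --hostpath /mnt/disk1
VBoxManage sharedfolder add "WinXP" --name disk2 --hostpath /mnt/disk2

With the Guest Additions installed, the shares show up in the guest as \\vboxsvr\disk1 and so on, and can be mapped to drive letters.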
I have been researching the web for a program that will let me back up my entire hard drive so that I can restore my system if need be. I am, however, unsure which is the best one to use to achieve this: somehow I want to back up the hard drive containing my Ubuntu system byte for byte, so that if the drive were to fail I could simply go to the store, get a new hard drive, restore my backup, and be up and running again without having to reinstall Ubuntu or any other programs.
What is the easiest program that does this? I would like it to support incremental backups. rsync with the "Back In Time" interface? Bacula?
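The two requirements usually end up as two tools; a rough sketch (device and paths are examples): a dd image gives the byte-for-byte restore, while rsync with --link-dest gives space-efficient incremental snapshots, which is essentially what Back In Time does underneath.

Code:
# Full byte-for-byte image of the system disk, taken while booted from a live CD
sudo dd if=/dev/sda of=/mnt/external/ubuntu-disk.img bs=4M

# Incremental snapshot: unchanged files are hard-linked to yesterday's copy
rsync -a --delete --link-dest=/mnt/external/snap-yesterday /home/ /mnt/external/snap-today/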
Imaging a hard drive with Ubuntu 9.10: I need a complete backup and restore, not just saved files, because I am changing over hard drives. The image should be restorable onto any hard drive and return the computer to its original state as of when it was imaged.
I just finished a build of a new GNU/Linux boxen with openSUSE 11.2. I have a MSI Big Bang Xpower X58 motherboard which has two SATA controller chips, one is the standard Intel ICH10R chip for SATA 3.0 Gb/s and one is the Marvell 9128 chip for SATA 6.0 Gb/s. The BIOS recognizes the Western Digital Caviar Black 6.0 Gb/s drive on either SATA controller chips, /however/ I am unable to install (and boot) when the drive is connected to the Marvell controlled ports. As you can guess, I'd like to boot from the faster interface!
1. The BIOS allows me to select the Western Digital drive as a secondary boot device, so I know, at least at the BIOS level, it's there. This is true whether I have the drive connected to the Intel or Marvell ports. (The DVD drive is the primary boot device.)
2. When trying to install openSUSE 11.2 from DVD, the installer says that it can't find any hard drives on my system when I have the drive connected to the Marvell port. The installer finds the drive fine when it is connected to the Intel port.
3. I installed everything with the drive connected to the Intel port. I switched the drive to the Marvell port afterward and the system refuses to boot completely, stalling at some point where it starts to look for other filesystem partitions. This led me to conclude that perhaps the problem is with openSUSE and not hardware weirdness with the system having two separate SATA controllers?
I have a personal Ubuntu server that provides Apache, GlassFish, firewall, routing, email, CVS, MySQL, etc. This server has been running for a while with two hard drives configured as a RAID 1 array. The array has two partitions, one for swap and one for the data. I currently back up the data with a removable hard drive: I use dd to create an image of one drive and of the MBRs (partition tables) of each drive. In a disaster situation I can use this to recreate one drive and then re-mirror it to the second, or just boot the backup. I like this solution because I can easily recover from bare metal and the backup is transparent; I can browse it if needed since it's an uncompressed image of the drive. The one drawback is that I need to reboot the system with a Linux CD to do the backup.
My hard drive space is almost at capacity, so I want to add a third drive to the array and migrate it to RAID 5. However, this will break my current backup method. How can I back up this RAID 5 array? I need to back up the entire system, not just the data; I have made many tweaks over the years that I can't lose if a restore is needed. I have seen a large thread here where people use tar. My concern with tar is how you use a tar archive to restore a system to a new array; I'm assuming you would set up the array and then just restore the archive? Also, I don't have much faith in using tar on a running system. Doesn't that open you up to corrupted backups? My second idea is rsync. While I consider myself experienced in Linux from 10 years of personal and professional use, I have not had much experience with this utility. Would rsync provide a more reliable way to back up a running system that would enable a bare-metal restore later? I once read about people using rsync with hard links to keep many incremental backups. My main concern with both rsync and tar is not being able to restore the OS to the state it was in at the time of the backup.
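A sketch of the file-level approach, assuming the backup drive is mounted at /mnt/backup (the exclusions and paths are examples). For a bare-metal restore you would recreate the RAID 5 array and filesystems, rsync the tree back, reinstall GRUB, and fix the UUIDs in /etc/fstab and mdadm.conf.

Code:
# Copy the whole running system, skipping pseudo-filesystems and other mounts
sudo rsync -aAXH --delete \
  --exclude="/proc/*" --exclude="/sys/*" --exclude="/dev/*" \
  --exclude="/tmp/*" --exclude="/run/*" --exclude="/mnt/*" \
  --exclude="/media/*" --exclude="/lost+found" \
  / /mnt/backup/rootfs/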
I have tried several places for help but I am getting nowhere, so here is my background. I have spent all weekend trying to replicate my development server back at home. I have a remote Apache server with 3 IP-based virtual hosts pointing to
[URL]
Now I have been able to set up a VM on my desktop, installed the OS, the applications, the db server, apache etc. Everything is looking good so far. So right now I have,
[URL]
So when I go to 192.168.0.111, I get to [URL], so I guess Apache is working as well. What I want to do is, instead of going to [URL], change it to another address such as a.me.add1. How can I do this? I am looking through the virtual hosts section; I have changed the ServerName entry etc., but it's not working. Can you tell me, in big-picture terms, what I would need to do to set that up? My current setup doesn't really help me much once the site gets the www address. Also, if the DocumentRoot of IP address 192.168.0.111 points to [URL], will it always resolve to that web address? That is, if I enter 192.168.0.111, will the browser redirect it to [URL]?
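A rough sketch of a name-based virtual host plus a hosts-file entry, using the a.me.add1 name from the question; the document root is a placeholder. Without public DNS, every client that should use the name needs the hosts entry (or a record on a local DNS server).

Code:
# /etc/apache2/sites-available/a.me.add1
<VirtualHost *:80>
    ServerName a.me.add1
    DocumentRoot /var/www/mysite
</VirtualHost>

# /etc/hosts on each client machine
192.168.0.111   a.me.add1

On a Debian/Ubuntu-style Apache 2.2 you also need a NameVirtualHost *:80 line for name-based hosts, plus a2ensite and a reload to activate the site.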
I am researching how to make an effective backup of an Ubuntu Server. This server will have vsftpd, VPN, Samba, many other added packages, plus many printers, many users, and data. I know I can use tar for the data on /u, no problem.
1. I was testing tar on the /home directory with a few user directories. I then created a new directory and restored the users' directories into it. I noticed the /home/user owner and group were root, although the files inside each directory kept their ownership. This gave me concern: if I had a crash and had to restore these to a new hard drive, I would have to change these. What else would I need to change?
2. Since I have many config files, how do I back them up? I know I can do a dump, but then users shouldn't be on the system. The system files will change as I add users, printers, etc., and asking users not to work while dump is running is not really an option. I thought I could tar the whole system (via cron late at night, when there are not as many users). Then, in the event of a crash of the hard drive:
1. Boot from a live CD
2. Format the new drive
3. Restore the whole system from the tar archive
Will this work right? Is there something I am missing?
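A minimal sketch of a tar round-trip that keeps ownership, assuming both steps run as root (the archive path is an example). Extracting as a non-root user, or creating the target directories by hand as root, is a common way to end up with root-owned home directories like in point 1.

Code:
# Create: -p keeps permissions, --numeric-owner avoids UID/name mismatches on restore
sudo tar --numeric-owner -cpzf /backup/home.tar.gz /home
# Restore: run as root, tar then restores the original owners by default
sudo tar --numeric-owner -xpzf /backup/home.tar.gz -C /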
Attempting to create a backup script to copy files from one file system to a remote file system.
When I try this I get:
Quote:
# tar -cf - /mnt/raid_md1 | gzip -c | ssh -i ~/.ssh/key -l user@192.168.1.1 "cat > /mnt/backup/fileserver.md1.tar.gz"
tar: Removing leading `/' from member names
Pseudo-terminal will not be allocated because stdin is not a terminal.
ssh: Could not resolve hostname cat > /mnt/backup/fileserver.md1.tar.gz: Name or service not known
I know that the remote file system dir is RW and the access is working fine. I am stumped...
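For what it's worth, the error reads as if ssh is parsing the arguments oddly: -l already supplies the username, so with "-l user@192.168.1.1" the quoted remote command ends up being treated as the hostname, hence "Could not resolve hostname cat > ...". A corrected sketch of the same pipeline (host, key, and paths as in the original):

Code:
tar -czf - /mnt/raid_md1 | ssh -i ~/.ssh/key user@192.168.1.1 "cat > /mnt/backup/fileserver.md1.tar.gz"
# or keep -l, but pass the host separately:
tar -cf - /mnt/raid_md1 | gzip -c | ssh -i ~/.ssh/key -l user 192.168.1.1 "cat > /mnt/backup/fileserver.md1.tar.gz"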
I am building a home server that will host a multitude of files, from MP3s to ebooks to FEA software and data files. I don't know if RAID is the right thing for me. This server will have all the files I have accumulated over the years, and if the drive fails I will be S.O.L. I have seen discussions where someone set up RAID 1 but kept the drives outside the case, buying two separate external hard drives with eSATA to minimize the risk of an electrical failure taking out both drives (I guess this is a good idea). I have also read about having one drive and using a second to rsync the data every week. I planned on purchasing two enterprise hard drives of 500GB to 1TB, but I don't have any experience with how I should handle my data.
Having problems with external hard drives. I may be wrong, but I suspect they originated with an upgrade to 10.04 last Christmas. Around that time I also started using Amazon's S3 storage system, and, as a consequence, I stopped using my WD 80G external drive, previously used to backup my important files.
A week or so ago I decided to start using the WD drive again. I can't remember exactly what I did, but it wasn't happy; it never caused any problems before. When I plug it in, the on-off light at the front keeps flashing, and when I try to remove the drive I get the message: Error unmounting volume: An error occurred while performing an operation on "My Book" (Partition 1 of WD 800BB External): The device is busy
Details: Cannot unmount because the file system on the device is busy. Assuming the device had died (it's about 5 years old), I bought a 160GB Samsung S-Series drive; my, but they do look neat! Unfortunately, this doesn't seem to have solved my problem. I plugged the new drive in, and it happily appeared on my desktop. It seemed a good idea at the time, but I then started to format the drive, using the default option of FAT. All went well at first, but then the format process stopped.
My new Samsung drive now behaves much like the WD device: I can't copy to it, and attempts to unmount it give a response similar to the WD drive's. Currently, although it is plugged in, I can't see the drive on my desktop, though it appears under Places. When I try to mount it, I get the message: Unable to mount SAMSUNG: A job is pending on /dev/sdb1
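A couple of checks that may show what is keeping the devices busy (the mount point and device names here are guesses based on the errors above):

Code:
sudo lsof /media/SAMSUNG     # processes with files open under the mount point
sudo fuser -vm /dev/sdb1     # processes using the filesystem on that device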
I have Fedora 14 installed on my main internal drive. I have one Fedora 14 and one Fedora 15 installed on two separate USB drives. When I boot into any of these drives, I can't access any of the other hard drives from the other installs. Well, I can, but just the boot partitions. Is there any way of mounting the other partitions so I can access the information? I guess even an explanation of why I can't view them would be good too.
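A sketch of mounting the other installs by hand; the volume group and logical volume names are placeholders. Default Fedora installs keep everything except /boot inside LVM, which is one plausible reason only the boot partitions show up, especially if the installs share a volume group name.

Code:
sudo fdisk -l                    # what partitions does the kernel see?
sudo pvs && sudo vgs && sudo lvs
sudo vgchange -ay                # activate any inactive volume groups
sudo mkdir -p /mnt/internal
sudo mount /dev/vg_main/lv_root /mnt/internal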
I have a scheduled backup that runs on our server at work, and since 7/12/09 it has been making 592KB files instead of 10MB files. In mysql-admin (the GUI tool) I have a stored connection for the user 'backup'; the user has SELECT and LOCK rights on the databases being backed up. I have a backup profile called 'backup_regular', and in the third tab it is scheduled to back up at 2 in the morning every weekday. If I look at one of the small backup files generated, I see the following:
Code:
-- MySQL Administrator dump 1.4
--
-- ------------------------------------------------------
-- Server version`
It seems that MySQL can open and write to the file fine, it just can't dump
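A dump that stops right after the header often points at a privilege the backup user is missing (views and routines need more than SELECT/LOCK TABLES). A hedged check, assuming the account is 'backup'@'localhost' as described above:

Code:
# Compare the grants with what the dump actually needs
mysql -u root -p -e "SHOW GRANTS FOR 'backup'@'localhost';"
mysql -u root -p -e "GRANT SELECT, LOCK TABLES, SHOW VIEW ON *.* TO 'backup'@'localhost'; FLUSH PRIVILEGES;"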
I have a SATA drive that worked fine. Then I installed two more hard drives into my system. When these hard drives are installed, if I try to access the SATA drive in Linux, it will start lightly clicking and then the drive will become unavailable. If I power on the machine without the other two hard drives then it works fine. What could be causing this to happen? I don't think it's heat because the two hard drives are far away from the SATA drive.
For our workgroup I set up a server, basically 10.04.2 with kernel 2.6.32-32-server on an SSD, with all the data on a RAID 5 consisting of four 2TB hard disks, i.e. a maximum of 6TB of data space on the RAID. Having multiple users with different amounts of data from different scientific data sources, I set up LVM on top of the RAID:
--- Physical volume ---
PV Name               /dev/sdb2
VG Name               home-data
PV Size               5,45 TiB / not usable 3,00 MiB
Here is the problem: The volume Genomes (or /genomes) is half full
but the system reports it as full whenever I try to add more data (tried cp and rsync). There is no quota set on the volume (I have quotas in place for users' home folders, but these only limit disk space, not file count, and I am still able to move/add files elsewhere, so there seems to be no interference).
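One quick thing worth checking (a guess from the symptoms, not a confirmed diagnosis): a filesystem can report "no space left" while plenty of blocks are free if it has run out of inodes, which happens with very large numbers of small files.

Code:
df -h /genomes   # free blocks
df -i /genomes   # free inodes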
Does anyone know of any decent enterprise level backup solutions for Linux? I need to backup a few servers and a bunch of desktops onto one backup server. Using rsync/tar.gz won't cut it. I need like bi-monthly full HDD backups, and things such as that, with a nice GUI interface to add/remove systems from the backup list. I need basically something similar to CommVault or Veritas. Veritas I've used before but it has its issues, such as leaving 30GB cache files. CommVault, I have no idea how much it is, and if it supports backing up to a hard drive rather than tape.