General :: Rsync Incremental Backups Rather Than Full Backups?
Nov 12, 2009
How do you get rsync to do incremental backups rather than full backups? At the moment I have a script that will create a backup folder (if it doesn't already exist) and then copy the source files into the backup directory with the command
rsync $VERBOSE --exclude=$TARGET/ $EXCLUDE --exclude '/Ls-wtgl1c8/**' -rt --delete $source/ $TARGET/$source/ >> $LOG_FILE
TARGET is where the files will be backed up to, SOURCE is the dir(s) to be backed up, EXCLUDE is the list of files not to back up, and LOG_FILE is where the output will be saved. At the moment it only does full backups, but I would like it to do incremental backups instead. How would this be achieved? Am I missing a required option in the rsync command?
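One common approach (not confirmed by the poster's setup) is rsync's --link-dest option: each run becomes a dated snapshot in which unchanged files are hard links into the previous backup, so only changed files consume space and transfer time. A minimal sketch, assuming the same TARGET, SOURCE and LOG_FILE variables as the command above:
Code:
#!/bin/bash
# Sketch: dated snapshots; unchanged files are hard-linked against the
# previous run via --link-dest, so only changed files are copied.
DATE=$(date +%Y-%m-%d)
rsync -rt --delete --link-dest="$TARGET/latest" \
    "$SOURCE/" "$TARGET/$DATE/" >> "$LOG_FILE"
# Repoint "latest" at the snapshot just taken (the symlink is an assumption).
ln -snf "$TARGET/$DATE" "$TARGET/latest"
Note that a relative --link-dest path is taken relative to the destination; an absolute path like the one above avoids surprises. On the first run "latest" does not exist yet, so rsync simply does a full copy.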
View 9 Replies
Jan 21, 2010
I am using rsync to back up dirs on my Ubuntu server onto a NAS (which is mounted onto the filesystem), but the problem is that it is constantly doing full backups rather than incrementals, and I am not really sure why. After doing a bit of experimenting with the script I noticed that if I just backed up a home dir (/home/user) the incremental backups work fine. If, however, I back up a dir like /home/domain/user, it always does full backups. I have tried various different scripts but still get the same end result. The latest script is a variation on a script found on the Samba rsync examples web page, see below...
#!/bin/bash
# rsyncbu.sh -- backup to nas using rsync
# This script backs up the files listed in BDIR to BSERVER. The verbose output, along with the date, is written to the LOG_FILE specified
# verbose output
[code]....
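For what it's worth, one frequent cause of rsync re-copying everything is that the receiving side is not preserving modification times (for example a FAT-formatted NAS mount, or a missing -t flag), which defeats rsync's size-and-mtime quick check. A hedged sketch, with placeholder paths:
Code:
# Preserve times (-t, implied by -a) so unchanged files are skipped;
# --modify-window=1 tolerates the 2-second timestamp resolution of
# FAT-formatted NAS shares.
rsync -a --delete --modify-window=1 /home/domain/user/ /mnt/nas/backup/user/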
View 4 Replies
View Related
Jan 29, 2010
Can I use rsync for incremental backups of a running Linux server?
View 5 Replies
View Related
Dec 2, 2010
With the --backup and --backup-dir= options on rsync, I can tell it another tree where to put files that are deleted or replaced. I'm hoping it fills out the tree with a replica of the original directory paths (at least for the files put there) or else it's a show stopper. What I'm wanting to find out applies when I'm restoring files. Assuming each time I run rsync (once a day) I make a new directory tree (named by the date) for the backup directory. For each file name/path in the tree, I would start with whatever is in the main tree (the rsync target) and work through the incremental trees going backwards until I reach the date of interest to restore to. If along the way I encounter a file in an incremental, I would replace the previous file at that path with this next one. So by the time I get back to a given date, I should have the version of the file which was present at that date. Do this for each file in the tree and it should be a full restore.
But ... and this is the hard part, it seems. What about files that did not exist at the intended restore date, but do exist (were created) on a date after the intended restore date? What I'd want for a correct restore would be for such files to be absent in the restored tree (just as they were absent in the source tree on that date). How can such a restore be done to correctly exclude these files? Wouldn't rsync have to store some kind of sentinel that indicates that on prior dates the file did not exist? I suspect someone might suggest I just make a complete hard-linked replica tree for each date, so that absent files would clearly be absent. I can assure you this is completely impractical, because I have actually done it before. I ended up with backup filesystems that have so many directories and nodes that it could take over a day, maybe even days, to just do something like "du -s" on it. I'm intending to keep daily changes for at least a couple of years, if not more. That means the 40-million-plus files would be multiplied by over 700, making programs like "du -s" have to check over 28 BILLION file names (and that's assuming the number of files does not grow over the next two years).
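On the first question: rsync does recreate the relative directory paths under the --backup-dir tree for the files it puts there, so the per-date increment trees described above are navigable replicas. A minimal sketch of the daily run, with made-up paths:
Code:
# Mirror /data into /backup/current; anything replaced or deleted today
# is moved into a dated increment tree, preserving its relative path.
DATE=$(date +%Y-%m-%d)
rsync -a --delete --backup --backup-dir="/backup/increments/$DATE" \
    /data/ /backup/current/
The deleted-file problem is real, though: the increments record old contents, not "this file did not yet exist", so a faithful point-in-time restore needs an extra record (for instance a daily "find /data > filelist" snapshot) to know which restored files to prune.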
View 2 Replies
View Related
Mar 20, 2010
I've been using dump/restore for backups, for quite some time. It's worked fine, but the process of recovering from a HD failure takes too long. What with eSATA and external drive docks, what I'd really like is to use rsync to maintain a current clone of my entire system drive. That is, start with a full disk clone, and then use rsync to keep it current.
I've seen plenty of instructions on how to do this with a directory tree, but I've seen none for doing it with a copy of the entire disk. If, for example, I copy /etc/fstab, then the copied disk would have entries with the same UUIDs as the original disk. Which would mean that if the clone disk were to be bootable, its partitions would need the same UUIDs as the original disk. Which they would have, if the cloned disk started as a full-disk clone, I think. Am I wrong? But that means that when the clone disk was attached, I'd have partitions with duplicated UUIDs. Is this going to cause problems? When I boot, will I get the correct partitions loaded?
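On the UUID question: duplicate filesystem UUIDs on two attached disks can indeed confuse UUID-based mounting. One way out (a sketch, device name assumed) is to give the clone a fresh UUID and then update the clone's own /etc/fstab to match:
Code:
# Assign a new random UUID to the cloned partition (ext2/3/4 filesystems).
tune2fs -U $(uuidgen) /dev/sdb1
# Verify the new UUID before editing the clone's fstab:
blkid /dev/sdb1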
View 4 Replies
View Related
Jan 22, 2010
One would have to exclude certain folders/directories, but would the backup be possible if the system is up and running in its native "live" state? Which directories could be excluded? Does swap need to be turned off? I would like to make incremental backups on a separate partition of the same hard drive. I will endeavour to back up the MBR/partition table using dd.
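For reference, here is a commonly used exclude set for rsync'ing a live system (the mount point and exact list are assumptions; swap does not need to be turned off, since swap contents are not part of the filesystem tree):
Code:
# Pseudo-filesystems and volatile dirs are excluded; the trailing /* keeps
# the (empty) mount points themselves in the backup.
rsync -aAXH --delete \
    --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
    / /mnt/backuppart/
The brace expansion in --exclude={...} is a bash feature; it expands to one --exclude per pattern.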
View 6 Replies
View Related
Nov 29, 2010
I've had several HDD crashes on my personal server over the years and it's just gotten to be a real pain in the rear. Crashed again this morning. Currently, I make monthly tarball backups of the entire filesystem using my script:
Code:
#!/bin/sh
# Removes the tarball from the previous execution.
rm -rf /backup/data/*.tar.gz
[code].....
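If monthly full tarballs are proving painful, GNU tar can also do incrementals against a snapshot file; a sketch with assumed paths:
Code:
# Level-0 (full) backup when the .snar file is absent; incremental on
# every later run, recording state in the snapshot file.
tar --listed-incremental=/backup/data/home.snar \
    -czf /backup/data/home-$(date +%Y%m%d).tar.gz /home
Restoring then means extracting the full archive followed by each incremental in order.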
View 13 Replies
View Related
Feb 3, 2011
I've been a DOS/Windows guy for 20 years, and recently became a SW test lab helper. My company uses CentOS for a lot, so I've become familiar with it, but obviously not as comfortable as I am with Windows.
Here's what I have planned:
machine: Core 2 Duo E8400, 8GB DDR2, 60GB SSD OS drive, ATI 4650 video card, other storage is flexible (I have 3 1TB drives and 4 750GB drives around that can be used in this machine.)
uses: HTPC, Network Storage, VMWare server host: SMTP, FTP server, and Web server virtual machines
I've figured out how to do much of this, but I haven't figured out how to do backups in Linux. I've been spoiled by Windows, with its built-in backup system that is so simple to use. I find myself overwhelmed by the array of backup software, and unable to determine which to use. None of them seem to do everything I need them to do, but some come close, I think. I'm hoping someone here can help me figure out which program to use and how to use it.
Here is what I need the backup software to do:
1. scheduled unattended backups, with alerts if the backups fail
2. a weekly full backup with incremental every 12 hours
3. removal of old backups when the new full backup runs; I would prefer to keep 2 weeks of backups, but that's not necessary
4. a GUI would be preferable, since my arthritic fingers don't always do as I want them to do. I typo things a lot, and the label worn off my backspace key can attest to that.
View 7 Replies
View Related
May 9, 2010
I am currently backing up my data but find that it takes way too long to do an rsync; it takes forever just to find the differences and transfer them. Out of 3 separate rsyncs, the main one that is slow is my www.skins.be mirror directory, which is 41GB and has 392,200 files sorted into multiple directories, growing by around 100 files every couple of days. I think something that could track changes via inotify on directories would speed it up, since Picasa sure finds changes fast when I open it, and it is tracking over 26,200 pictures. I just don't know of a backup solution that does that.
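As a stopgap while looking for an inotify-based tool, limiting each run to recently changed parts of the tree can help with a mirror that mostly grows by new files. A rough sketch (paths assumed; note that -r must be given explicitly because --files-from suppresses the recursion normally implied by -a):
Code:
# Sync only directories whose mtime changed in the last 2 days.
# Caveat: directory mtimes change on file creation/deletion/rename,
# not on in-place edits, so this suits append-mostly mirrors.
find /data/mirror -mindepth 1 -type d -mtime -2 -print0 |
    rsync -ar --files-from=- --from0 / /mnt/backup/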
View 4 Replies
View Related
Apr 27, 2011
There are multiple servers to be backed up. Different access rights exist on each server. There are two backup servers with plenty of disk space, one local and one offsite. The local one feeds the offsite one. The rsync command is being used to make a replica of backed-up data. Deleted data is also being archived. There are two methods that have been considered: one is to have each individual server run rsync, logging in to the backup server to push data; the other is to have the backup server run rsync, logging in to each individual server to pull data. Because system data is involved and meta information (like owning user) must be stored, root is required both to access the data and to store it. That means everything runs as root at both ends. So method one was quickly dismissed, because each server would effectively have rights to access ALL the data on the backup server, since it logs into the backup server as root. The security containment here involves different groups using different servers, and they need to be isolated from each other.
But even method two involves some risks that are a concern. It means one machine has access rights to every server: if the backup server were compromised, every machine could be compromised. What I'd like to find is some way to allow backups to be run without either machine granting root access to the other, while still running as root, or something equivalent, that allows accessing all data and storing all metadata. So I was looking at setting up an rsync daemon on each individual server (running as root so it can access what it is specified to access), and running an rsync client on the backup server (as root so it can store metadata). This opens network access issues. Any user on the network can connect to the rsync daemon, so password protection is needed. But this communication is also not encrypted, which exposes the password and the data should the network be sniffed.
So now I'm thinking about a non-root ssh login between machines. The backup server would log in to a non-privileged user on each individual server and set up a secure forwarding channel to the rsync daemon. Is this the best that can be done? Is there a way to run rsync via SSL with key verification so it can all be done together? I'd like to have the rsync daemons configured to always talk SSL, and always verify the client's key against a list of authorized keys, and likewise have the client verify the server's key against the known public key for that server.
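One workable pattern along those lines: keep each rsync daemon bound to localhost only, and reach it through an ssh tunnel established as an unprivileged user. A sketch with assumed names, ports and module name:
Code:
# The non-root ssh login only carries the tunnel; rsyncd (running as
# root, listening on 127.0.0.1:873) does the actual transfer.
ssh -N -L 8730:127.0.0.1:873 backup@server1 &
TUNNEL=$!
sleep 5   # crude wait for the tunnel to come up
rsync -a rsync://127.0.0.1:8730/rootfs/ /backups/server1/
kill $TUNNEL
Native SSL is not built into stock rsync, but stunnel in front of each daemon, with client-certificate verification enabled, would give the mutual key checking described.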
View 14 Replies
View Related
Apr 5, 2011
I am backing up my Debian server with rsnapshot, which actually uses rsync to perform the backup. The backups are located on an external storage device of size 1.4T.
[code]....
I tried to understand what this error message means and I found that error code 12 is "Error in rsync protocol data stream". I understand that when rsync finds that a file on the target has changed, it sends only the block(s) that contain the changes, and on the destination rsync creates a new file rather than updating the old one (new inode...). I want to know if the error I get is due to a full disk, or perhaps some other factor.
View 2 Replies
View Related
Sep 30, 2010
I need to log in as root, or at least get root privileges, in a cron-triggered backup run. The straightforward way to do this would be the backup server making an ssh connection to the server to be backed up (done this way because I want to avoid many servers being backed up in parallel, with the backup server itself managing this diversity), via the rsync command which would be performing the backup's synchronization step.
I'm looking for alternatives to this in some form. I'd like to disallow direct root login on my ssh port (which is not 22). One idea I have is to have the backup server initiate an ssh login as a non-root user, to either the actual source server or to a server that can reach the source server, and set up port forwarding. Over the forwarded port, then initiate the rsync that logs in as root via another port that allows direct root but cannot be reached from the internet at all (because the border firewall doesn't include this port as allowed in). FYI, these logins will be using ssh keys, not passwords. I do need to keep ownership metadata for the files being backed up, which is why I am using root. Also, rsync is needed to get the incremental updates and keep bandwidth usage lower (otherwise I could just transfer a tarball each day). Anyone have any other ideas or comments, for security issues, based on experience doing things like this (backups, routine data replication, etc.)?
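One option in this direction (a sketch, not a full recipe): OpenSSH's PermitRootLogin forced-commands-only allows key-based root logins only when the key carries a forced command, so the backup server's key can be pinned to rsync and nothing else. rsync ships a restricted wrapper, rrsync, in its support/ directory for exactly this; the install path below is an assumption:
Code:
# /etc/ssh/sshd_config on the machine being backed up:
#   PermitRootLogin forced-commands-only
#
# /root/.ssh/authorized_keys entry (key body elided); rrsync restricts
# the key to read-only rsync access under /:
command="/usr/local/bin/rrsync -ro /",no-pty,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... backup@backupserver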
View 5 Replies
View Related
Jul 13, 2011
I am using rsync for incremental backups. I am backing up to a second hard drive in my computer. When I check the individual backup directories (backup.0 through backup.4) with du -hs, they each show 12G; when I check the parent directory, squeeze, it shows 15G. So over 4 backups I have added 3G. I haven't made many changes to the directories I'm backing up, and I am using hard links. I have included some info below.
Quote:
Backup script:
#!/bin/bash
mount /mnt/backup
cd /mnt/backup/squeeze/
rm -rf backup.7
[code]....
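On the numbers: du counts each hard-linked file only once per invocation, so du -hs run separately on each backup.N charges the shared data to every tree, while a single du over all of them (or over the parent) counts it once. That is consistent with the parent showing 15G while each snapshot claims 12G. A quick check, using the paths from the post:
Code:
# Counts hard-linked blocks only once across all five trees;
# the total should match what the parent directory reports.
du -hsc /mnt/backup/squeeze/backup.{0,1,2,3,4}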
View 2 Replies
View Related
May 9, 2011
So I am using rsync (3.0.7 on Mac OS X) to back up one hard drive to a folder on another one. This is USB drive to USB drive, and I have done the initial backup from one drive to a newly formatted other drive with the following command:
Code:
rsync -avX --progress /Volumes/Source /Volumes/Destination
This all appears to be going smoothly as I type. I am going to write a script to do subsequent backups in the
[code]....
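For the subsequent runs, the usual pattern is the same command plus --delete, so that removals on the source propagate to the mirror; a sketch using the volume names above (worth a --dry-run first to preview what would be deleted):
Code:
# Mirror subsequent changes; --delete removes files gone from Source.
rsync -avX --delete --progress /Volumes/Source /Volumes/Destination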
View 2 Replies
View Related
Oct 6, 2010
I'm trying to set up rsync backups on my ReadyNAS and I'm getting the following error: "ERROR: The remote path must start with a module name not a /". This error is accompanied by the following information:
[Code]...
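That error normally means the remote target was given in daemon-mode syntax (host:: or rsync://host) followed by an absolute path; daemon mode expects a module name defined in the NAS's rsyncd.conf instead. A sketch with assumed names:
Code:
# Wrong (daemon mode plus absolute path):
#   rsync -av /data/ nas::/backup/data/
# Right (module name first; "backup" must be a module on the NAS):
rsync -av /data/ nas::backup/data/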
View 1 Replies
View Related
May 16, 2011
I am in the process of writing an rsync script to run unattended backups of my entire file system to another system located on my local network using ssh and password-less rsa keys.
I absolutely will not use password-less keys with the root account, and this is the limitation preventing me from accomplishing my goal, because root is required by rsync to access the / tree and copy it to another location. I decided that if I compiled the script into a binary I wouldn't have a problem with the password being contained within the binary itself, but from what I've read there is no way to elevate to root and then drop back down to user level from within the script/binary.
I can create the script as the user and use chown to make it owned by root while retaining execute permission for the user, but it will still cause the ssh login to be under root, and therefore require either that I am there to enter my password or the use of password-less keys under the root account, which I reiterate I will NOT do. Currently the script is executed by the user on the machine containing the files to be backed up.
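A commonly suggested middle ground (a sketch under assumptions about this setup, not a claim it fits every constraint here): log in over ssh as an unprivileged user with the password-less key, and let sudo elevate only the remote rsync binary:
Code:
# On the backup host, in /etc/sudoers (edit via visudo):
#   backupuser ALL=(root) NOPASSWD: /usr/bin/rsync
#
# On the source machine (local side still needs root to read all of /);
# the remote side elevates via sudo so ownership metadata can be written:
rsync -aAX --rsync-path="sudo rsync" / backupuser@backuphost:/backups/thismachine/
This way neither machine grants a password-less root login, and the sudoers rule limits the key to running rsync and nothing else.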
View 9 Replies
View Related
Jun 7, 2011
I currently have a setup which allows me to connect to all computers on my home network via SSH and RSA keys. I'm very security-conscious, so all of my keys are passphrase-protected. I'd like to essentially set something up where I'm running Unison on a cron job to back up to a file server on my network, which we'll call timmy. I've noticed that the first time I try to use a key on my Ubuntu laptop teeks, a dialog pops up asking me to type in my key passphrase. I've heard that servers needing to make automated backups like this should use ssh-agent to ask for the key passphrase at login/server start. How can I set this up on teeks?
I'd essentially like to have the following happen: when I boot and come into the OS, prompt visually for the passphrase, as is done when I first use a key. If I SSH into this computer (as it's internet-facing) and I haven't provided the SSH passphrase yet, then prompt for it. (Sometimes I might need to remotely reboot the machine over SSH, so I'll be SSH'ing into it after it reboots, and I'd like to be able to authenticate the key without having to VNC in and do it manually.)
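One way to get that behavior is the keychain wrapper (assuming the keychain package is available), which prompts for the passphrase once at the first login after boot, whether local or over ssh, and then shares the running agent with later sessions, including cron:
Code:
# In ~/.bash_profile on teeks: prompts on the first login after boot,
# then reuses the existing agent for every later session.
eval $(keychain --eval --quiet id_rsa)
# Cron jobs can pick up the same agent by sourcing the file keychain
# writes for this host:
#   . ~/.keychain/$(hostname)-sh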
View 2 Replies
View Related
Mar 17, 2011
I currently use cp to back up data. I prefer it over rsync. I use the -b switch to make a backup of data, and recently found you can use --backup=t to create numbered backups. Using --backup=t means that I could end up having 100 versions of a file if I change it 100 times; with the -b switch I will only ever have 2 versions. Is it possible to limit the numbered backups to 5, for example, so I would only ever have 5 versions?
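cp itself has no retention option, but the numbered backups follow a predictable name pattern (file.~1~, file.~2~, ...), so a cleanup pass after the copy can enforce a limit. A sketch for a single file with a made-up name (assumes GNU ls/head/xargs and no whitespace in the names):
Code:
# Keep only the 5 highest-numbered backups of "myfile".
# ls -v sorts numerically, so .~2~ sorts before .~10~;
# head -n -5 drops everything except the last 5 entries.
ls -v myfile.~*~ | head -n -5 | xargs -r rm --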
View 2 Replies
View Related
Mar 12, 2011
I have just purchased a 1 TB external hard disk to be used for backups. The backups will be performed with rsync and since I don't really care about accessing the data from other operating systems, I think I'll use ext3 on the partition. I'll just be backing up my home directory and probably /etc as well. In my home directory, I have a small number of files that are several GB, but most are tens of MB in size or less.
I'm just wondering if there are any special options I should pass when I create the filesystem with mkfs.ext3.
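The defaults are generally fine for a backup disk, but two options often suggested for a single-purpose data drive are a label (handy for fstab) and a smaller reserved-block percentage, since the 5% root reservation mainly matters on system disks. A sketch, device name assumed:
Code:
# -m 1: shrink the 5% root-reserved blocks to 1%; -L: filesystem label.
mkfs.ext3 -m 1 -L backup /dev/sdb1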
View 6 Replies
View Related
Mar 3, 2010
I have a small IT consulting business and I am finding many of my clients can't afford Microsoft solutions and thus go without... So a revelation hit me that I could offer Linux solutions; it's just that I'm not a Linux guru... So after much research and installations of nearly every latest Linux distro, I decided on ClearOS as a good option for my clients that just require a good file/print server, firewall and VPN solution... Everything was going fine until I got to the point where I was trying to get it to do backups to a USB drive... ClearOS does not come with the ability to do that, so from further research I found that I could install Webmin to handle that task and that it would not break ClearOS... Great, it's just that it's not working... It says it's working, but it's not...
So here's what I got, ClearOS 5.1 with Webmin 1.5
Here's what I have done to try to make it work after a good installation...
Under "Hardware" and "Partitions on Local Disks", it shows the USB drive as Device B... So I create a partion called "/dev/sdb1" Great, I'm thinking...
So I go to "System" and under "Filesystem backup", I select the FlexShare directory that I want backed up and then I select "/dev/sdb1" as the backup to device...
Next I hit the "Save and Backup Now" button to test it out... all says it goes good... But when I check the device, there's nothing saved to it...
So I create another "Filesystem backup", I select the FlexShare directory that I want backed up and this time I select a local directory for the backup to go to... I hit the "Save and Backup Now" button to test it out... all says it goes good... But once again when I check the directory, there's nothing saved to it...
View 3 Replies
View Related
Nov 19, 2010
I have been a Mac user since 1988. I recently discovered Ubuntu Linux and love it, so much so that I use it about 95% of the time. On the Mac there is an application I use called SuperDuper which makes a bootable backup to an external USB drive.
Can I do the same kind of thing using the dd command? I use the excellent cron GUI, Scheduled Tasks. I was hoping that maybe I could use that to schedule nightly bootable backups. Is dd the right tool to use? Once the initial backup is done (which I understand can take a long time), does dd do incremental backups after that?
Looking forward to learning how I can set this up so that I can just set it and forget it, reassured that bootable backups are occurring overnight.
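For what it's worth, dd on its own is not incremental; every run rewrites the whole device. A sketch of the full-image run (device names are assumptions; the destination must be at least as large, and both disks should be unmounted while it runs):
Code:
# Byte-for-byte clone of the internal disk to the external one.
dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync
For nightly incremental refreshes of a bootable clone, the more usual approach is an initial full clone followed by rsync updates, which is essentially what SuperDuper's "Smart Update" does.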
View 13 Replies
View Related
Dec 19, 2010
I have inherited a network server running Red Hat Enterprise (4?) which uses an external USB drive for backups. These have been scheduled to run at midnight each night. I want to use 2 external drives to hold the backups (exchanging the USB drive each day). My question is: is there anything I need to do to a new USB drive before exchanging the drives (e.g. formatting, etc.), or can I simply plug the new one in and let it run? I apologise for the very basic nature of this question, but I have no clue about Linux.
View 6 Replies
View Related
Apr 14, 2010
I'd like to create backups from some copy protected DVD's, for my private use only.
Does it work this way to circumvent the copy protection mechanisms?
Code:
# dd if=/dev/dvd of=dvd.iso
and then burn dvd.iso on an empty DVD.
View 3 Replies
View Related
Jul 29, 2010
Just trying to set up a new backup using tar, but there are a few things I don't want to include. Using --exclude I can exclude sub-directories, but how do I exclude specific files in a subdirectory that are (for instance) executables or have a specific extension?
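GNU tar's --exclude takes shell-style patterns matched against path names, so extensions and individual files work the same way as directories; a sketch with made-up names:
Code:
# Skip all .o files anywhere, one specific file, and any dir named cache.
tar -czf backup.tar.gz \
    --exclude='*.o' \
    --exclude='home/user/bin/myprog' \
    --exclude='cache' \
    /home/user
Excluding by permission bits (e.g. executables) has no direct tar option; the usual workaround is generating a list first, e.g. find /home/user -type f -perm /111 > skip.txt, then passing --exclude-from=skip.txt.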
View 5 Replies
View Related
Mar 20, 2010
What do I need to add to my .emacs to get it to save all of my autosaves and backups into one directory? I don't do a lot of .emacs configuration, and I just can't get the variables from EmacsWiki to play right. Anybody mind sharing how they do it? I would prefer to have the saves placed in /tmp/emacs/{username}/{autosaves | backups}.
BONUS: configuration to do the same for TRAMP.
View 2 Replies
View Related
Nov 8, 2010
I am getting the databases from MySQL, and my database names have the form username_something.
I am extracting the username and then putting the respective backups into corresponding folders, like
tar bala bla /backups/sql/username/username_something.tar.sql.gz
The problem is that the system works if the username folder is already there, but for new databases I get an error like "unknown file path".
How can I make it so that if the username folder is not there, it gets created?
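The usual fix is to create the directory path unconditionally before writing the archive; mkdir -p succeeds whether or not the folder already exists. A sketch using the path from the post (mysqldump here stands in for whatever dump command the script already uses):
Code:
# Ensure the per-user folder exists, then write the dump into it.
mkdir -p "/backups/sql/$username"
mysqldump "$database" | gzip > "/backups/sql/$username/$database.sql.gz"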
View 2 Replies
View Related
Jan 25, 2010
I have a script that moves files from one directory to another, based on the numeric date in the file name (i.e., 20091212 would go to the December directory). Since this script will be run at the beginning of the following month (December's files will be tarred and gzipped on the first day of January, January's in February, etc.), it appears to me that the script will be tricky when it comes time to do the December files in January.
Here's part of the script:
Code:
# Define working directory and target directory.
DIR=/var/log
target=$DIR/month
hostname=`uname -n` .....
I can't seem to figure out a way to carry the output of the date command to the next command, and the year for the December files will always be wrong.
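GNU date can compute the previous month directly, which keeps the year correct across the January boundary. Anchoring on the 15th avoids the end-of-month normalization quirk (e.g. "last month" evaluated on March 31 normalizes back into March). A sketch:
Code:
# Year+month and month name of the previous month, safe on any day.
lastmonth=$(date -d "$(date +%Y-%m-15) -1 month" +%Y%m)   # e.g. 200912 in Jan 2010
month_dir=$(date -d "$(date +%Y-%m-15) -1 month" +%B)     # e.g. December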
View 2 Replies
View Related
Feb 26, 2010
this is my structure:
[root@ iso]# fdisk -l
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
[Code]....
And I want to restore some files from /dev/VolGroup00/LogVol00.
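Assuming a normal filesystem lives inside that logical volume, it can be mounted read-only and copied from directly; a sketch using the volume name from the listing (destination path is an assumption):
Code:
# Mount the LV read-only and copy the wanted files out.
mkdir -p /mnt/restore
mount -o ro /dev/VolGroup00/LogVol00 /mnt/restore
cp -a /mnt/restore/path/to/files /root/recovered/
umount /mnt/restore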
View 14 Replies
View Related
May 12, 2010
I've run into a problem when trying to restore a file using AMANDA. My setup is as follows: I have a server that runs on Red Hat Enterprise Linux Server release 5.4 and I have a client that runs on Red Hat Enterprise Linux AS release 4. I successfully backed up files on the client using AMANDA.
Now I am trying to restore a file from the backup and restore it to the client running Red Hat Enterprise Linux AS release 4. In order to do this, I have to go through the server which runs Red Hat Enterprise Linux Server release 5.4.
Here is the problem I encounter: when I try the restore, it freezes at the extraction process. I've left it running overnight and it is still stuck at the same place. The restore never completes.
So I'm wondering if anyone knows what the issue might be. Could it be that the AMANDA build I installed does not allow restores between different versions of Red Hat? Or is there an update for Red Hat 4 that would fix this issue?
View 7 Replies
View Related