Ubuntu Servers :: Rsync - Backup Only Changed Blocks?
Aug 10, 2010
I'm going to make a nightly backup copy from one server to another, using rsync. If I have a sufficiently large file, say 4+ GB or so, I'm not interested in copying the whole file if only a small change has been made. Can rsync detect small changes at the block level and back up only those if needed?
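For what it's worth, a hedged sketch of how this usually plays out: over a network, rsync's delta-transfer algorithm is on by default, so only the changed blocks of a big file cross the wire; for local copies it defaults to --whole-file, and by default the receiver rebuilds the file into a temporary copy unless told to update in place.

Code:
# a minimal sketch (host and paths are assumptions): over ssh only the changed
# blocks of the 4+ GB file are sent; --inplace updates the destination file
# directly instead of rewriting the whole file on the receiving side
rsync -av --inplace /data/bigfile.img backupserver:/backups/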
I'm also wondering how rsync will handle disk images. Will rsync copy only the changed blocks of a VHD or a LUN? This is what I've been told, but wouldn't this require overlaying a filesystem on the VHD? How would rsync handle copying a 500GB LUN?
I have a Linux host acting as an iSCSI server for a Windows box. I want to keep an off-site backup, so I figure rsync will keep the iSCSI server synced with an off-site Linux host. I understand that rsync does block-level incremental transfer to conserve bandwidth; OK, awesome. The trick is that I also want an archival copy kept. Say I want to go back to a revision of a file from 10 days ago; I need to be able to do that.
I was planning on using Backup Exec, since we currently have a licensed copy: throw the archives from Backup Exec onto the iSCSI server as well, and have it keep a rotating 30-day backup, or something like that. The issue I see here is that this will be creating and deleting files as it does its daily backup rotation. I'm guessing rsync will see these as new files, and likely retransmit everything on a daily basis. The question then becomes: is this assumption correct, or will it still know to do a block-level incremental transfer even when file names and such are changing?
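The assumption is most likely correct: rsync matches files by name, so archive files that get renamed or rotated daily are retransmitted in full (the --fuzzy option can soften this a little by looking for a similarly named basis file). A hedged alternative is to let rsync keep the revisions itself with --link-dest, so a 10-day-old revision is just a dated directory:

Code:
# a minimal sketch (paths are assumptions): each day's run hard-links files
# unchanged since yesterday's snapshot, so old revisions stay browsable at
# almost no extra space cost, while changed files are stored in full
rsync -av --link-dest=/backups/$(date -d yesterday +%Y%m%d)/ /iscsi/store/ offsite:/backups/$(date +%Y%m%d)/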
I support a small business which has an Ubuntu server running as a file server. The server is running Ubuntu 10.04. There is one hard drive, which is mounted as /media/hdd. Each night this is backed up to an external USB hard drive mounted as /media/backup. The backup is carried out using the command:
Code: rsync -av /media/hdd/ /media/backup/
Is there a way to encrypt this back-up so that if the USB hard drive is plugged into another machine it cannot be read?
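rsync itself doesn't encrypt the destination. One common route, as a hedged sketch, is to put LUKS on the USB drive so the whole backup volume is unreadable on another machine without the passphrase (device name below is an assumption):

Code:
# a minimal sketch; WARNING: luksFormat destroys existing data on /dev/sdc1
cryptsetup luksFormat /dev/sdc1
cryptsetup luksOpen /dev/sdc1 backup
mkfs.ext4 /dev/mapper/backup
mount /dev/mapper/backup /media/backup
rsync -av /media/hdd/ /media/backup/
umount /media/backup && cryptsetup luksClose backup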
I have setup Rsync as a daemon on a Ubuntu 10.04 box and the setup was successful. Here are my configs
Code:
root@hurricane:~# nano /etc/default/rsync
# defaults file for rsync daemon mode
# start rsync in daemon mode from init.d script?
#  only allowed values are "true", "false", and "inetd"
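For reference, a minimal daemon-mode module definition (the module name and paths below are assumptions, not the poster's actual config) would look something like:

Code:
# /etc/rsyncd.conf -- a minimal sketch, not the poster's actual config
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log
[backup]
    path = /srv/backup
    read only = false
    uid = nobody
    gid = nogroup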
I wrote a script to wake up my Windows machine and do an rsync backup of some of my files. I wanted to make this command accessible through the local bin, so I made it executable. However, the problem is that when it copies files it copies them with root permissions, and I can't edit or delete them. How can I set the files so they transfer with the proper permissions for my Ubuntu user?
Code:
#!/bin/bash
# Description: This script first wakes up the client machine and syncs the appropriate folders.
# Finally the script shuts down the client if it was off to begin with.
if [ "$(whoami)" != "root" ]; then
    echo "Permission Denied"
    exit 1
fi
.....
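Run as root, the files will arrive owned by root. A hedged sketch of two ways around it ("youruser" and the paths are placeholders):

Code:
# a minimal sketch; --chown requires rsync >= 3.1.0
rsync -av --chown=youruser:youruser windowsbox:/share/ /home/youruser/backup/
# on older rsync versions, fix ownership after the sync instead:
# chown -R youruser:youruser /home/youruser/backup/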
Thought I'd post it here because it's more server related than desktop... I have a script that does:
[Code]....
This is used to sync my local development snapshot with the live web server. There has to be a more compact way of doing this. Can I combine some of the rsyncs? Can I make rsync set or keep the user and group affiliations? Can I exclude .* yet include .htaccess?
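On the last point, a hedged sketch: rsync applies filter rules in order and the first match wins, so listing the include before the exclude does it; -a preserves user and group, though only when the receiving side runs as root (or the same accounts exist there).

Code:
# a minimal sketch (paths are placeholders): the --include must come before
# the --exclude because rsync uses the first rule that matches; -a keeps
# owner/group where the receiving side permits it
rsync -av --include='.htaccess' --exclude='.*' /local/snapshot/ user@liveserver:/var/www/site/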
During my backups I'm finding that rsync is copying all files, instead of just what's changed.
I'm rsyncing between 2 USB external hard drives. One hard drive is FAT32 and one is NTFS. I've examined some of the files and believe that the difference is that there's a 1-second modtime difference developing in some of the files somehow.
Here's an example. These duplicity files were synced from /media/BACKUPHD (the NTFS drive) to /media/VIDEOHD (the FAT32 drive) only a few hours ago this morning. They have not been touched or changed since then, but that 1-second difference in their time stamps has appeared:
Code:
tim@localhost:~> stat /media/BACKUPHD/backups/duplicity/duplicity-full.20110107T145955Z.vol10.difftar.gpg
  File: `/media/BACKUPHD/backups/duplicity/duplicity-full.20110107T145955Z.vol10.difftar.gpg'
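FAT32 stores modification times with 2-second resolution, so a 1-second drift against the NTFS copy is expected rather than mysterious. rsync has a flag for exactly this case; a hedged sketch:

Code:
# --modify-window=1 treats timestamps within 1 second of each other as equal,
# which absorbs FAT32's coarse mtime granularity and stops the full re-copy
rsync -av --modify-window=1 /media/BACKUPHD/backups/ /media/VIDEOHD/backups/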
I'm using rsync to create a mirror of the data files on our main server every day. I've looked at the man page, and can't see it; can I get a listing of the files that have been changed on or added to the mirror when it's completed? Can it just log what it's doing to a file?
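Yes on both counts, as a hedged sketch: -i (--itemize-changes) lists each changed or added file and why it was transferred, and --log-file writes the run to a file.

Code:
# a minimal sketch (paths are placeholders): -i itemizes every change,
# --log-file records the whole run for later review
rsync -avi --log-file=/var/log/mirror-$(date +%Y%m%d).log /srv/data/ /mnt/mirror/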
I'm configuring an rsync between 2 machines, A_Server --> B_Server, using the following script:
Code:
#!/bin/bash
#
# Backup script via rsync from RMP-1 to RMP-2.
#
[Code]....
The rsync is working OK. What I need is to change one of the lines of /tmp/prueba.txt before sending it to the remote machine (obviously without changing the file on the local machine); I mean, send prueba.txt to the remote machine with one line deleted and another one added. How can I do this?
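rsync can't rewrite file contents in flight, so one hedged approach is to make an edited temporary copy with sed and send that under the original name (the sed expressions below are placeholders for whichever line must change):

Code:
# a minimal sketch; the local /tmp/prueba.txt is never modified
sed -e '/old line text/d' -e '$a new line text' /tmp/prueba.txt > /tmp/prueba.edited
rsync -av /tmp/prueba.edited B_Server:/tmp/prueba.txt
rm -f /tmp/prueba.edited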
I've got quite a decent rsync script set up; however, I'd like to invoke it whenever there's a change to a file. My initial idea was to use find; however, this has two major flaws. The first is that my particular unix variant can't understand -print0, which means this doesn't work; the second is that I'm not 100% sure how to put variables into quotation marks so ls can understand the target:
Code:
# -exec avoids both the backtick loop and the need for -print0, and copes
# with spaces in file names
find /shares/ -mtime -1 -exec ls -ltr {} \;
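If the system supports inotify, a hedged alternative is inotifywait (from the inotify-tools package, assuming it's available on this unix variant), which avoids polling with find entirely:

Code:
# a minimal sketch: blocks until something under /shares/ changes, then runs
# the existing rsync script (the script name is a placeholder)
while inotifywait -r -e modify,create,delete,move /shares/; do
    /usr/local/bin/rsync-backup.sh
done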
I saw a reference in a magazine to using rsync to keep identical copies of folders. This looks like something I could find useful, as I have a large number of items in need of safe backup.
I have the folders on an old system on a home network and would like to copy these over to a USB Hard Drive.
Currently the folders reside on SFTP xxx.xxx.xxx.xxx and I wish to sync them to a USB port on my laptop.
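rsync doesn't speak SFTP itself, but if the old system allows a full SSH login, a hedged sketch of the pull:

Code:
# a minimal sketch (user and paths are placeholders; the address is elided as
# in the post): rsync runs over ssh, not sftp, so an ssh login must be allowed
rsync -av user@xxx.xxx.xxx.xxx:/path/to/folders/ /media/usbdrive/folders/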
I have a Samba share to a Windows 7 computer. I don't know if I will be able to use Back In Time or not, so I want to know how to have rsync do my backup. I read the man page but I'm not sure I understand it. This is on the same computer, to a different hard drive, to run every hour in a script. Leanne is the Windows 7 share and backup is the other hard drive in the computer: rsync -arvRzEP /media/leanne /media/backup.
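A hedged sketch of the hourly part via cron (the rsync flags are simplified from the post's command; adjust to taste):

Code:
# a minimal sketch: crontab -e, then an entry like this runs the sync at the
# top of every hour (a trailing slash on the source copies its contents)
0 * * * * rsync -a /media/leanne/ /media/backup/leanne/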
I'm trying to learn how rsync works to back up my system. I tried:

Code: rsync -azvv /home /media/Elements

I get a folder called home on my external hard drive, but when I use ls -l to see the permissions they are all wrong. On my /home folder the permissions for /nathan are drwxr-xr-x 48 nathan nathan; the permissions on the backup /nathan folder are drwx------ 1 nathan nathan.
I also tried using the long version of -a which is -rlptgoD and that didn't work either. What do the 48 and 1 mean when I used ls -l? When I look in the /nathan folder the permissions are all screwed up too. A lot of the files are backed up as executable and the permissions are all screwed up. I also ran it with sudo, and that didn't work either. The permissions were still screwed up and ownership is messed up too.
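On the numbers: the column after the permissions in ls -l is the hard-link count (for a directory, roughly its number of subdirectories plus 2), and NTFS/FAT mounts typically report 1 there. The "screwed up" permissions usually just mean the destination filesystem can't store Unix permissions at all; a hedged sketch that sidesteps the fight:

Code:
# a minimal sketch, assuming /media/Elements is NTFS/FAT and so cannot hold
# Unix permissions or ownership: -rltD is -a without -p, -o and -g
rsync -rltDv /home /media/Elements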
This should be a quick one. I'm trying to back up a single directory and its subdirectories on my Lucid server to a FreeNAS box across my network. This is what I'm using to do that: rsync -r -a -v -z * --delete freenas:DSIBackups. It almost works perfectly except for one problem: when a file is deleted at the source, this command doesn't seem to delete it on the receiving end. I assumed that --delete would do that, but apparently not.
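The glob is the problem: the shell expands * before rsync runs, so a file deleted at the source simply isn't among the arguments any more, and rsync never learns it existed. Syncing the directory itself fixes it; a hedged sketch:

Code:
# sync the directory itself instead of a shell glob, so --delete can see what
# disappeared from the source
rsync -avz --delete ./ freenas:DSIBackups/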
When rsync has finished the update, or in the meantime, I need to move the updated files to a different location, like a directory named with date +%Y%m%d or something similar. The reason is that, because of development, I need the modified files, all of them, not just the last one, so I have to store them daily. But I don't want to store the whole directory, just the few files which were updated. Does that make sense?
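It does; a hedged sketch: have rsync print the names of the files it actually transfers, then copy just those into a dated directory, preserving the tree (paths below are placeholders):

Code:
# pass 1 does the normal sync and records which files were transferred;
# pass 2 copies only those into a dated archive, keeping their relative paths
changed=$(mktemp)
rsync -a --out-format='%n' /src/ /dest/ > "$changed"
rsync -a --files-from="$changed" /src/ /archive/$(date +%Y%m%d)/
rm -f "$changed"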
I'm hoping somebody can find something here that I haven't. I'm trying to use rsync to back up home directories to a NAS. First, I NFS-mounted the NAS and ran an rsync, and everything worked out fine; the transfer completed after a few hours and everything was transferred (lots of stuff!). I then decided that I don't want to leave the NAS mounted all the time, and I didn't want to automate mounting and unmounting of the NAS, as I didn't think I could produce a script that would work reliably enough. So I decided to start an rsync daemon on the NAS and update via that. I run the following command (results are included; the ^C is me killing it after it hangs).
Well, I know there are issues when using rsync to copy files to an NTFS partition, like file permissions and so on. The thing is, I need to back up my music files periodically onto an NTFS partition from ext4. I really don't care about file permissions or any other stuff. When I use rsync, it should update the mp3 files on my NTFS (external) disc with the new ones. Can I give this operation a go? I have lots more important files on the external disc, and I don't want this rsync to corrupt or delete those files, because they are highly important.
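Hedged, but this should be safe as long as --delete is absent: rsync never removes anything at the destination without it, and permission handling can simply be skipped since NTFS can't store Unix permissions anyway. A sketch (paths are placeholders):

Code:
# -rtv copies recursively with timestamps and no permission/ownership
# handling; without --delete, nothing already on the NTFS disc is removed
rsync -rtv /home/user/Music/ /media/external/Music/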
Our backup script was working fine (ssh to the server, back up /home to a second hard drive on my computer). Then, right after an Ubuntu update, it quit working. I investigated and found that "something" had changed the label on the backup hdd to what looked like gibberish to me. But the script identifies the backup hdd by its UUID, which didn't change. Yet here is the error I get when the backup fails:

Code:
receiving file list ... done [took about 5 seconds]
rsync: mkdir "/media/14D9-3B1F/server-backup" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(594) [receiver=3.0.6]
Note that the backup hdd IS mounted, the UUID is correct, and the folder 'server-backup' DOES exist. Does anyone have a clue for me? I'm moderately experienced in Linux and Ubuntu. Our server runs CentOS 5. And as stated, the backup ran fine for several weeks. I think there was a new Linux kernel in that update, but at this point, a while later, I don't know which one. The current kernel is 2.6.31-22-generic.
2. I run the application & it creates a list of all files (size & time-stamp) without actually storing them. Let's call this the "snapshot list".
3. I update some of the files on the laptop.
4. Now I run the application & it only copies the files which have changed on the laptop (those with a different size/time-stamp from the snapshot list) onto some external media, such as a memory card. Of course, the files should be copied to their proper location in the directory tree & not just pile up in one place.
Why is this useful? Although the laptop has a 200GB HD, I typically only update a small number of files, whose total size is maybe 10MB or so. If I could back up only those which have changed, I could do this with a tiny SD card instead of lugging around an external USB HD.
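A hedged sketch: rsync can do this without a stored snapshot list via --compare-dest, which skips files identical to a reference copy and writes only the changed ones into their proper tree (paths below are placeholders):

Code:
# files identical (by size/time-stamp) to the reference copy in /backups/full
# are skipped; only changed/new files land in the dated tree on the SD card,
# at their proper relative paths
rsync -av --compare-dest=/backups/full/ /home/user/ /media/sdcard/incr-$(date +%Y%m%d)/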
I switched last summer from Windows (I'd used it since Windows 95) to Debian. I've been using Debian Jessie for a couple of months now and I'm getting used to it a little.
There are problems here and there, but I can solve them with some reading on the web. Not really a big problem... till now.
I run Debian 8.2 on my PC (PC1). I bought an older PC (PC2) that I want to use as a backup server.
I'm using PC2 only for making backups; after the backup I switch it off again.
So I installed Debian 8.2 (net-install, without a DE and with SSH) on PC2 and tried to configure it to work as my backup location. I made a public SSH key and exported it to the root account (no problem) and to the user account (sensdeb), but there was an "Access Denied" error.
I gave the user (sensdeb) sudo rights via visudo:

Code:
# User privilege specification
root    ALL=(ALL:ALL) ALL
sensdeb ALL=(ALL:ALL) ALL
I installed rsync.
The problem is that rsync only works when I use the root account. I don't know how to give the user sensdeb the rights so that I can use that account for my backup tasks. Right now it's only possible to sync with the root account, but that's not the way to do it, as I've read many times.
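Hedged, but root-only rsync usually means the target directory on PC2 is owned by root. Giving sensdeb ownership of it lets the backup run without root at all (the /srv/backup path below is an assumption):

Code:
# a minimal sketch: give sensdeb ownership of the backup target once, then
# run the backup as that user (-t so sudo can ask for the password)
ssh -t sensdeb@pc2 'sudo chown -R sensdeb:sensdeb /srv/backup'
rsync -av --delete /home/user/data/ sensdeb@pc2:/srv/backup/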
I want to save a backup of my data on a remote server, but never want the backup server to see the data unencrypted. Editing a single file and backing up should not result in everything being encrypted and sent again. The remote server should preferably not even know the directory structure (and especially not the directory names).
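One hedged approach: encfs's reverse mode presents an encrypted view of the plaintext, with file and directory names encrypted too, and rsync then re-sends only the files whose ciphertext changed (the shape of the tree is still visible, but not the names). A sketch, assuming encfs is installed:

Code:
# --reverse exposes an encrypted view of ~/data without storing a second copy;
# editing one file means only that file's ciphertext changes and is re-sent
encfs --reverse ~/data ~/data.enc
rsync -av --delete ~/data.enc/ user@remote:/backups/data/
fusermount -u ~/data.enc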
I've been trying to make a three-stage backup, with stage 0 being a full monthly backup, stage 1 a weekly backup, and stage 2 a daily backup. I've been trying very hard to use rsync for this, but sorting files by date is proving to be problematic. Sometimes it seems to work from the command line directly, but the same command causes errors and warnings from a script while entirely failing to sort the correct files.
The common example I see for this involves commands like this:
Code: rsync -Rav `find /home/ -ctime -7 -print` /path/to/home_backup

The problem seems to be that, since the user directories in /home contain files that have been altered within the time frame specified, the whole directory is matched first, which means that the whole directory is recursively archived as opposed to just the changed files.
I've also seen examples using the --files-from option with the same find parameters, and that one seems to ALMOST work, but it gives me strange warnings and fails to run at all when launched from a script.
Many of the things I've googled about using rsync to back up by date modified involve a rather snarky "You're missing the point of rsync!", to which I respond by yelling at my computer monitor, followed by "JUST TELL ME WHAT I NEED TO KNOW!" I understand that rsync is meant to take care of incremental backups on its own; that's exactly why I want to use it for a traditional 3-stage backup scheme.
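A hedged sketch: the directory-matching problem goes away if find is restricted to plain files, and --files-from handles the list more robustly than backticks (whitespace in file names is also the usual reason the backtick version breaks inside a script):

Code:
# -type f keeps directories out of the list, so rsync copies only the changed
# files instead of recursing into whole matched directories; --files-from
# paths are taken relative to the source argument ("/" here)
find /home/ -type f -ctime -7 > /tmp/changed.list
rsync -av --files-from=/tmp/changed.list / /path/to/home_backup/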
To copy from production to standby over the internet I use a cron job doing rsync -avze 'ssh -p 8022' --exclude-from= ....
My question is: should the cron job run on the production or the standby system? Root access to the remote system is given by a passphrase-less SSH key. Currently I run rsync on the production system. I guess that this is more secure, because the standby needs no SSH login to production. Running rsync on the standby would use fewer resources on production, but I am concerned that in this case there would be passphrase-less access from standby to production.
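If the job does move to the standby, a hedged mitigation is a forced command in authorized_keys on production, so the passphrase-less key can only run a restricted rsync; rsync ships an rrsync helper script for exactly this.

Code:
# a minimal sketch of an authorized_keys line on production (key material
# elided); adjust the rrsync path to wherever it lives on your system, often
# /usr/share/doc/rsync/scripts/rrsync
command="/usr/bin/rrsync -ro /home",no-port-forwarding,no-pty,no-X11-forwarding ssh-rsa AAAA... standby-backup-key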
I'm doing an rsync backup to an external drive in order to take a shot at setting up partition encryption. My rsync command is, as root: Code: rsync -av / /external1/backup. Once I've finished my cryptsetup and done a fresh Linux install, what command should I use to properly restore my backup (without messing up the encryption setup)?
Code:
rsync: link_stat "/av" failed: No such file or directory (2)
skipping directory home
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1060) [sender=3.0.7]
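The link_stat "/av" error usually means the dash and the flags got separated ("- av" instead of "-av"), so rsync parsed /av as a path. For the restore itself, a hedged sketch: from a live session, open and mount the new encrypted root, then run the backup command with the arguments reversed.

Code:
# a minimal sketch, assuming the freshly installed (encrypted) root is opened
# and mounted at /mnt/newroot from a live environment
rsync -av /external1/backup/ /mnt/newroot/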
So I am using rsync (3.0.7 on Mac OS X) to back up one hard drive to a folder on another one. This is USB drive to USB drive, and I have done the initial backup from one drive to a newly formatted other drive with the following command:
Code: rsync -avX --progress /Volumes/Source /Volumes/Destination

This all appears to be going smoothly as I type. I am going to write a script to do subsequent backups in the
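A hedged sketch of the follow-up script: the same command with --delete keeps the destination an exact mirror on every subsequent run.

Code:
#!/bin/bash
# a minimal sketch for the subsequent-backup script (same flags as the post);
# --delete removes files from the destination that no longer exist on the source
rsync -avX --delete --progress /Volumes/Source /Volumes/Destination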
I want to back up Windows PCs on my network to my Ubuntu 11.04 PC, using rsync. Rsync is working, but I have to mount the PCs first. A few details:

My server is named: server
The Windows PC is named: \\PC_OF_MARTIJN
The folder where the mount goes is: /home/bastiaan/backup/mounts
Credentials are in /home/bastiaan/backup/credentials and they're called: martijn
So what I'm going to add to /etc/fstab is this:

Code:
//server \PC_OF_USER /home/bastiaan/backup/mounts/user cifs credentials=/home/bastiaan/credentials/user,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0

Will this work?
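Probably not as written; mount.cifs expects a forward-slash UNC of the form //hostname/sharename, and the credentials path should match where the file actually lives. A hedged sketch (the share name "backup" is an assumption; it must be whatever PC_OF_MARTIJN actually exports):

Code:
//PC_OF_MARTIJN/backup /home/bastiaan/backup/mounts/martijn cifs credentials=/home/bastiaan/backup/credentials/martijn,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0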