Ubuntu :: Rsync For Initial Backup Then Maintenance Backups?
May 9, 2011
So I am using rsync (3.0.7 on Mac OS X) to back up one hard drive to a folder on another one. This is USB drive to USB drive, and I have done the initial backup from one drive to a freshly formatted drive with the following command:
Code:
rsync -avX --progress /Volumes/Source /Volumes/Destination
This all appears to be going smoothly as I type. I am going to write a script to do subsequent backups in the
[code]....
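For the subsequent runs, here is a minimal sketch of what a maintenance pass over the same volumes could look like (the --delete flag is an addition here: it removes destination files that no longer exist on the source, so preview with -n first):
Code:
#!/bin/sh
# Maintenance backup: only changed files are transferred; --delete prunes removals.
rsync -avX --delete --progress /Volumes/Source /Volumes/Destination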
View 2 Replies
Nov 12, 2009
How do you get rsync to do incremental backups rather than full backups? At the moment I have a script that will create a backup folder (if it doesn't already exist) and then copy the source files into the backup directory with the command
rsync $VERBOSE --exclude=$TARGET/ $EXCLUDE --exclude '/Ls-wtgl1c8/**' -rt --delete $source/ $TARGET/$source/ >> $LOG_FILE
Target is where the files will be backed up to, Source is the dir(s) to be backed up, and Exclude is the list of files not to back up. The log file is where the output will be saved. At the moment it only does full backups, but I would like to do incrementals. How would this be achieved? Am I missing an option in the rsync command that is required?
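One common way to get incrementals is --link-dest: each run becomes a dated snapshot that hard-links unchanged files to the previous snapshot, so only changed files take space. A rough sketch reusing the same variables (the dated-directory layout is an assumption):
Code:
#!/bin/bash
# Dated snapshots under $TARGET; unchanged files are hard-linked to the previous snapshot.
# -a preserves the attributes that --link-dest compares when deciding to hard-link.
TODAY=$(date +%Y-%m-%d)
PREV=$(ls -1d "$TARGET"/20* 2>/dev/null | tail -n 1)   # most recent snapshot, if any
rsync -a --delete $EXCLUDE \
    ${PREV:+--link-dest="$PREV"} \
    "$source/" "$TARGET/$TODAY/" >> "$LOG_FILE"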
View 9 Replies
View Related
Feb 22, 2011
I decided to create a file server for my family. I have set up a RAID 6 (4 disk) array. My thoughts were to back up the array to a hard drive monthly. Store the drive in a WiebeTech Drivebox, off site, in a "fire proof" box. (The kind for papers sold at Staples or Office Depot.) After a year, I would have 12 back-ups. I would then overwrite the previous hard drive. (i.e. HDD from March 2011 would be overwritten on March 2012.)
Additionally, I was wondering if there was recommended maintenance to verify the array is working properly. Right now, I am moving data to the array so quickly that I am backing up every few days between three hard drives. (Back-up #4 was written to Drive #1 after Drive #1 was reformatted.) I am aware that I could use rsync. (Which I currently use for backing up my portable USB HDD to the RAID array.)
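On the maintenance side, if the RAID 6 is Linux software RAID (md) - an assumption - the kernel can be asked to scrub the array periodically; a minimal sketch, assuming the array is /dev/md0:
Code:
# Trigger a consistency check of the array (e.g. monthly from cron).
echo check > /sys/block/md0/md/sync_action
# Watch progress, then check overall health afterwards.
cat /proc/mdstat
mdadm --detail /dev/md0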
View 1 Replies
View Related
May 9, 2010
I am currently backing up my data but find that it takes way too long to do an rsync; it takes forever just to find the differences and transfer them. Out of 3 separate rsyncs, the main one that is slow is my www.skins.be mirror directory, which is 41GB and has 392,200 files sorted into multiple directories, and which grows by around 100 every couple of days. I think that something able to track changes via inotify on directories would speed it up, since Picasa finds changes fast when I open it and it is tracking over 26,200 pictures. I just don't know of a backup solution that does that.
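One way to get change-driven syncs instead of full scans is inotify-tools: let inotifywait block until something changes under the mirror directory, then run rsync. A rough sketch with placeholder paths (the watch limit, fs.inotify.max_user_watches, may need raising for ~400k files):
Code:
#!/bin/bash
# Requires inotify-tools; sync only after something has actually changed.
SRC=/data/mirror/www.skins.be
DST=/backup/www.skins.be
while inotifywait -r -e create,modify,delete,move "$SRC"; do
    rsync -a --delete "$SRC/" "$DST/"
done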
View 4 Replies
View Related
Jan 21, 2010
I am using rsync to back up dirs on my Ubuntu server onto a NAS (which is mounted onto the filesystem), but the problem is that it is constantly doing full backups rather than incrementals, and I am not really sure why. After doing a bit of experimenting with the script I noticed that if I just backed up a home dir (/home/user) the incremental backups work fine. If however I back up a dir like /home/domain/user it always does full backups. I have tried various different scripts but still get the same end result. The latest script is a variation on a script found on the Samba rsync examples webpage; see below...
#!/bin/bash
# rsyncbu.sh -- backup to nas using rsync
# This script backs up the files listed in BDIR to BSERVER. The verbose output, along with the date, is written to the LOG_FILE specified
# verbose output
[code]....
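To see why rsync keeps re-sending files under a path like /home/domain/user, an itemized dry run is a quick diagnostic; the flag letters show whether it is size, timestamp, permissions or ownership that differs (paths below are placeholders):
Code:
# -n = dry run, -i = itemize changes; nothing is transferred.
# Output like ">f..t...." means only the timestamp differs; ">f+++++++++" means the
# file is missing on the destination entirely (i.e. the paths don't line up as expected).
rsync -ani /home/domain/user/ /mnt/nas/backup/home/domain/user/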
View 4 Replies
View Related
Apr 27, 2011
There are multiple servers to be backed up. Different access rights exist in each server. There are two backup servers with plenty of disk space, one local, and one offsite. The local one feeds to the offsite one. The rsync command is being used to make a replica of backed-up data. Deleted data is also being archived. There are two methods that have been considered: One is to have the individual servers run rsync which logs in to the backup server to push data. Two is to have the backup server run rsync which logs in to each individual server to pull data. Because system data is involved and meta information (like owning user) must be stored, root is required to access the data as well as to store it. That means everything runs as root at both ends. So method one was quickly dismissed because each server would effectively have rights to access ALL the data on the backup server since it logs into the backup server as root. The security containment here involves different groups using different servers, and they need to be isolated from each other.
But even method two involves some risks that are a concern. This means one machine has access rights to every server. If the backup server were compromised, every machine could be compromised. What I'd like to find is some way to allow backups to be run without either machine granting root access to the other, while still running as root, or something equivalent, that allows accessing all data and storing all metadata. So I was looking at setting up an rsync daemon on each individual server (running as root so it can access what it is specified to access), and running an rsync client on the backup server (as root so it can store metadata). This opens network access issues. Any user on the network can connect to the rsync daemon. So password protection is needed. But this communication is also not encrypted, which exposes the password and the data should the network be sniffed.
So now I'm thinking about a non-root ssh login between machines. The backup server would login to a non-privileged user on each individual server and set up a secure forwarding channel to the rsync daemon. Is this the best that can be done? Is there a way to run rsync via SSL with key verification so it can all be done together? I'd like to have the rsync daemons configured to always talk SSL, and always verify the client's key against a list of authorized keys, and likewise the client verify the server's key against the known public key for that server.
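Short of a native SSL mode, the tunnelled-daemon idea above can be sketched like this: rsyncd on each server binds only to localhost, and the backup server reaches it through a non-root ssh login with local port forwarding (module name, port and user are assumptions):
Code:
# /etc/rsyncd.conf on the individual server (daemon runs as root but only listens locally):
#   address = 127.0.0.1
#   [sysdata]
#       path = /
#       uid = root
#       read only = true

# On the backup server: forward a local port to that daemon over a non-root ssh login,
# then pull through the tunnel.
ssh -f -N -L 8730:127.0.0.1:873 backupagent@server1
rsync -aHAX --numeric-ids rsync://127.0.0.1:8730/sysdata/ /backups/server1/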
View 14 Replies
View Related
Jan 29, 2010
Can I use rsync for incremental backups of the running linux server?
View 5 Replies
View Related
Apr 5, 2011
I am backing up my Debian server with rsnapshot, which actually uses rsync to perform the backup. The backups are located on an external storage device of size 1.4T.
[code]....
I tried to understand what this error message means and I found that error code 12 is "Error in rsync protocol data stream". I understand that when rsync finds that a file on the target was changed, it will send only the block(s) that contain the changes, and on the destination rsync will create a new file rather than update the old one (new inode...). I want to know if this error is due to a full disk or perhaps some other factor.
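Error 12 often does turn out to be the receiving side running out of space or inodes mid-transfer, so a quick check is worthwhile before digging deeper (the mount point is an assumption):
Code:
# Free space and free inodes on the backup storage; either one reaching zero can
# kill the receiver and abort the rsync protocol stream.
df -h /mnt/backup
df -i /mnt/backup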
View 2 Replies
View Related
Sep 30, 2010
I need to log in as root, or at least get root privileges, in a cron-triggered backup run. The straightforward way to do this would be for the backup server to make an ssh connection to the server to be backed up (this way because I want to avoid many servers being backed up in parallel; the backup server itself would manage this scheduling), with the rsync command performing the backup's synchronization step.
I'm looking for alternatives to this in some form. I'd like to disallow direct root login to my ssh port (not 22). One idea I have is to have the backup server initiate an ssh login as a non-root user, to either the actual source server, or to a server that can reach the source server ... and set up port forwarding. Then, over the forwarded port, initiate the rsync that logs in as root via another port that allows direct root but cannot be reached from the internet at all (because the border firewall doesn't allow this port in). FYI, these logins will be using ssh keys, not passwords. I do need to keep ownership metadata for files being backed up, so this is why I am using root. Also, rsync is needed to get the incremental updates to keep bandwidth usage lower (otherwise I could just transfer a tarball each day). Anyone have any other ideas or comments, for security issues, based on experience doing things like this (backups, routine data replication, etc)?
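A sketch of that forwarding idea, with made-up port numbers: sshd on the source box listens on an extra, firewall-blocked port that permits root, and the backup server only reaches it through a tunnel opened as a non-privileged user:
Code:
# 1. Open the tunnel as the unprivileged user (2222 = the internal-only sshd that allows root).
ssh -f -N -L 2222:127.0.0.1:2222 backupagent@source.example.com

# 2. Run the root-level rsync through the forwarded port, key-based logins only.
rsync -aHAX --numeric-ids \
    -e "ssh -p 2222 -i /root/.ssh/backup_key" \
    root@127.0.0.1:/ /backups/source.example.com/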
View 5 Replies
View Related
Jul 13, 2011
I am using rsync for incremental backups. I am backing up to a second hard drive on my computer. When I check the individual backup directories (backup.0 through backup.4) with du -hs they each show 12G; when I check the parent directory squeeze it shows 15G. Over 4 backups I have added 3G. I haven't made very many changes to the directories I'm backing up, and I am using hard links. I have included some info below.
Quote:
Backup script:
#!/bin/bash
mount /mnt/backup
cd /mnt/backup/squeeze/
rm -rf backup.7
[code]....
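One thing to keep in mind when sizing these: du only counts a hard-linked file once per invocation, so running du -hs on each backup.N separately makes every snapshot look like a full copy. Comparing a combined run against the parent shows the real growth:
Code:
cd /mnt/backup/squeeze
du -hs backup.0              # each directory alone reports the full ~12G
du -hsc backup.*             # one invocation: hard-linked files are counted once, total ~15G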
View 2 Replies
View Related
Dec 2, 2010
With the --backup and --backup-dir= options on rsync, I can tell it another tree where to put files that are deleted or replaced. I'm hoping it fills out the tree with a replica of the original directory paths (at least for the files put there) or else it's a show stopper. What I'm wanting to find out applies when I'm restoring files. Assuming each time I run rsync (once a day) I make a new directory tree (named by the date) for the backup directory. For each file name/path in the tree, I would start with whatever is in the main tree (the rsync target) and work through the incremental trees going backwards until I reach the date of interest to restore to. If along the way I encounter a file in an incremental, I would replace the previous file at that path with this next one. So by the time I get back to a given date, I should have the version of the file which was present at that date. Do this for each file in the tree and it should be a full restore.
But ... and this is the hard part, it seems. What about files that did not exist at the intended restore date, but do exist (were created) on a date after the intended restore date. What I'd want for a correct restore would be for such files to be absent in the restored tree (just as they were absent in the source tree on that date). How can such a restore be done to correctly exclude these files? Wouldn't rsync have to store some kind of sentinel that indicates that on dates prior, the file did not exist. I suspect someone might suggest I just make a complete hard linked replica tree for each date, and this way absent files will clearly be absent. I can assure you this is completely impractical because I have actually done this before. I ended up with backup filesystems that have so many directories and nodes that it could take over a day, maybe even days, to just do something like "du -s" on it. I'm intending to keep daily changes for at least a couple years, if not more. So that means the 40 million plus files would be multiplied by over 700, making programs like "du -s" have to check over 28 BILLION file names (and that's assuming the number of files does not grow over the next two years).
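The walk-backwards restore described above can be sketched as a loop over the dated --backup-dir trees, newest first (layout and variable names are assumptions); it has exactly the stated limitation that files created after the restore date are not removed:
Code:
#!/bin/bash
# Rebuild the tree as it looked on $RESTORE_DATE.
RESTORE_DATE=2010-11-15
MAIN=/backup/current      # the rsync target (latest state)
INCR=/backup/incr         # one YYYY-MM-DD directory per run (--backup-dir)
OUT=/restore

rsync -a "$MAIN/" "$OUT/"
for d in $(ls -1 "$INCR" | sort -r); do
    [[ "$d" > "$RESTORE_DATE" ]] || break    # only runs made after the restore date matter
    rsync -a "$INCR/$d/" "$OUT/"             # overlay the pre-run versions of changed files
done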
View 2 Replies
View Related
Oct 6, 2010
I'm trying to set up rsync backups on my ReadyNAS and I'm getting the following error: "ERROR: The remote path must start with a module name not a /". This error is accompanied by the following information:
[Code]...
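That error usually means the destination was given as a plain path while rsync is talking to the NAS's daemon (double-colon or rsync:// syntax), which expects a module name from its rsyncd.conf instead. A sketch of the difference, with an assumed module name:
Code:
# Daemon syntax: the remote path starts with a module defined on the ReadyNAS, not with /
rsync -av /home/user/ backupuser@readynas::backup/user/
# Equivalent URL form
rsync -av /home/user/ rsync://backupuser@readynas/backup/user/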
View 1 Replies
View Related
Mar 20, 2010
I've been using dump/restore for backups, for quite some time. It's worked fine, but the process of recovering from a HD failure takes too long. What with eSATA and external drive docks, what I'd really like is to use rsync to maintain a current clone of my entire system drive. That is, start with a full disk clone, and then use rsync to keep it current.
I've seen plenty of instructions on how to do this with a directory tree, but I've seen none for doing it with a copy of the entire disk. If, for example, I copy /etc/fstab, then the copied disk would have entries with the same UUIDs as the original disk. Which would mean that if the clone disk were to be bootable, its partitions would need the same UUIDs as the original disk. Which they would have, if the cloned disk started as a full-disk clone, I think. Am I wrong? But that means that when the clone disk was active, I'd have partitions with duplicated UUIDs. Is this going to cause problems? When I boot, will I get the correct partitions loaded?
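On the UUID question: an ext partition on the clone can be given a fresh UUID after the copy, and the clone's own /etc/fstab (and boot config) updated to match, so the two disks never present duplicate IDs at the same time. A sketch, assuming the clone is /dev/sdc:
Code:
# Show the UUIDs currently in use on both disks.
blkid
# Give the cloned ext partition a new random UUID (ext2/3/4 only).
tune2fs -U random /dev/sdc1
# Then edit the clone's /etc/fstab and boot entries to reference the new UUID.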
View 4 Replies
View Related
May 16, 2011
I am in the process of writing an rsync script to run unattended backups of my entire file system to another system located on my local network using ssh and password-less rsa keys.
I absolutely will not use password-less keys with the root account, and this is the limitation preventing me from accomplishing my goal, because root is required by rsync to access the / tree and copy it to another location. I decided that if I compiled the script into a binary I didn't have a problem with the password being contained within the binary itself, but from what I've read there is no way to elevate to root and then drop back down to user level from within the script/binary.
I can create the script as the user and use chown to make it owned by root while retaining execution permission for the user, but it will still cause the ssh login to be under root and therefore require either that I am there to enter my password or the use of password-less keys under the root account, which I reiterate I will NOT do. Currently the script is executed by the user on the machine containing the files to be backed up.
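One workaround that avoids password-less root keys entirely is to log in as an ordinary user and let sudo elevate only the remote rsync process, with sudoers restricted to the rsync binary; the local side still needs root to read /. A sketch (user name, key path and host are assumptions):
Code:
# On the destination, /etc/sudoers would need something like (assumption):
#   backupuser ALL=(root) NOPASSWD: /usr/bin/rsync

sudo rsync -aAXH --numeric-ids \
    --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*"} \
    --rsync-path="sudo rsync" \
    -e "ssh -i /home/backupuser/.ssh/backup_key" \
    / backupuser@backuphost:/backups/thismachine/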
View 9 Replies
View Related
May 24, 2010
I'm looking for a standalone backup manager with the following properties:
1) Easy scheduling.
2) Automatic encryption of backups
3) Ability to remove old backups based on total size, not just backup age. (to avoid overfilling backup media)
4) Since this is going onto a business-critical machine used by a techno-peasant, it needs to have a snazzy, graphical interface for easy monitoring and configuration.
I am sure I *could* write this myself, but I find it hard to believe that there isn't one out there already, and I am lazy. Unfortunately, there are also a very large number of backup programs out there with less than complete descriptions, and I am getting tired of installing each in turn to see what it does. Has anyone stumbled across something like what I just described?
View 2 Replies
View Related
Feb 26, 2010
this is my structure:
[root@ iso]# fdisk -l
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
[Code]....
And I want to restore some files from /dev/VolGroup00/LogVol00.
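If the restore is being done from a rescue or second system, the volume group has to be activated and the LV mounted before anything can be copied out; a minimal sketch:
Code:
# Make the volume group visible and active, then mount the LV read-only to pull files out.
vgscan
vgchange -ay VolGroup00
mkdir -p /mnt/restore
mount -o ro /dev/VolGroup00/LogVol00 /mnt/restore
cp -a /mnt/restore/path/to/files /root/recovered/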
View 14 Replies
View Related
Jan 1, 2011
I've picked up an HP Simplesave external drive. It comes with some fancy software that is of no use to me because I don't use Windows. Like many current consumer-targeted backup drives, the backup software is actually contained on the drive itself. I'd like to save the drive's initial state so that I can restore it if I decide to sell it.
The backup box itself is somewhat customized: in addition to the hard drive device, it presents a CDROM-like device on /dev/sr0. I gather that the purpose of this cdrom device is to bootstrap via Windows autoplay the backup application which lives on the disk itself. I wouldn't suppose any guarantees about how it does this, so it seems important to preserve the exact state of the disk.
The drive is formatted with a single 500GB NTFS partition. My initial thought was to use dd to dump the disk (/dev/sdb) itself, but this proved impractical, as the resulting file was not sparse. This seemed to be because the NTFS empty space is not filled with zeroes, but with a repeating series of 16 bytes.
I tried gzipping the output of dd. This reduced the file to a manageable size — the first 18GB was compressed to 81MB, versus 47MB to tarball the contents of the mounted filesystem — but it was very slow on my admittedly somewhat derelict Pentium M processor. The time to do that first 18GB was about 30 minutes.
[Code]...
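For the NTFS partition itself, ntfsclone can dump only the blocks that are in use, which tends to be both smaller and faster than dd piped through gzip; the partition table and the start of the disk can be saved separately with dd. A sketch, assuming the drive is still /dev/sdb:
Code:
# Save the MBR/partition table and the first megabyte of the disk separately.
dd if=/dev/sdb of=simplesave-head.bin bs=512 count=2048

# Image only the blocks NTFS actually uses, compressed on the fly.
ntfsclone --save-image --output - /dev/sdb1 | gzip > simplesave-sdb1.img.gz

# Restore later with:
#   gunzip -c simplesave-sdb1.img.gz | ntfsclone --restore-image --overwrite /dev/sdb1 -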
View 1 Replies
View Related
May 4, 2010
I notice when Linux boots in maintenance mode the filesystem is read-only.
Is there a way to change this, perhaps remounting as writable?
An example of this being a problem is that I was unable to open vi because there were too many session files....
Not to mention it would be nice to actually fix problems....
What are you meant to be able to do if you can't make any changes to the filesystem? What kind of maintenance can be expected?
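The read-only root in maintenance (single-user) mode can normally just be remounted read-write once you decide it is safe to make changes:
Code:
# Remount the root filesystem read-write from the maintenance shell.
mount -o remount,rw /
# ...make the repairs, then optionally flip it back before rebooting:
mount -o remount,ro /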
View 1 Replies
View Related
Mar 2, 2010
I saw in a magazine a reference to using rsync to keep identical copies of folders. This looks like something I could find useful, as I have a large number of items in need of safe backup.
I have the folders on an old system on a home network and would like to copy these over to a USB Hard Drive.
Currently the folders reside on SFTP xxx.xxx.xxx.xxx and I wish to sync them to a USB port on my laptop.
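Since the old system is reachable over SSH/SFTP, rsync can pull straight from it onto the mounted USB drive; a sketch with placeholder paths:
Code:
# Pull the folders from the old machine onto the USB drive mounted on the laptop.
rsync -av --progress user@xxx.xxx.xxx.xxx:/path/to/folders/ /media/usbdrive/backup/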
View 9 Replies
View Related
Aug 2, 2010
I have a Samba share to a Windows 7 computer. I do not know if I will be able to use Back In Time or not, so I want to know how to have rsync do my backup. I read the man page but I'm not sure I understand it. This is on the same computer, to a different hard drive, to run every hour in a script. leanne is the Windows 7 share and backup is the other hard drive in the computer: rsync -arvRzEP /media/leanne /media/backup.
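A sketch of running that same command from cron every hour (the log path is an assumption):
Code:
# crontab -e entry: sync the share to the backup drive at the top of every hour.
0 * * * * rsync -arvRzEP /media/leanne /media/backup >> /var/log/leanne-backup.log 2>&1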
View 1 Replies
View Related
Jun 18, 2010
I'm trying to learn how rsync works so I can back up my system. I tried:
Code:
rsync -azvv /home /media/Elements
I get a folder called home on my external hard drive but when I use ls -l to see the permissions they are all wrong.
On my /home folder the permissions for /nathan are
drwxr-xr-x 48 nathan nathan
The permissions on the backup /nathan folder are
drwx------ 1 nathan nathan
I also tried using the long version of -a, which is -rlptgoD, and that didn't work either. What do the 48 and 1 mean in the ls -l output? When I look inside the /nathan folder the permissions are all screwed up too: a lot of the files are backed up as executable, and ownership is messed up as well. I also ran it with sudo, and that didn't work either.
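Before fighting the flags any further, it is worth checking what filesystem the external drive uses; FAT and NTFS cannot store Unix owners or modes at all, so every -a/-p combination will come back looking wrong. A quick check:
Code:
# Show the filesystem type of the backup drive; vfat/ntfs cannot hold Unix permissions.
df -T /media/Elements
# If it turns out to be ext3/ext4, preserving ownership needs root on the receiving side:
sudo rsync -aH /home /media/Elements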
View 3 Replies
View Related
Jul 20, 2010
This should be a quick one. I'm trying to back up a single directory and its subdirectories on my Lucid server to a FreeNAS box across my network. This is what I'm using to do that: rsync -r -a -v -z * --delete freenas:DSIBackups. It almost works perfectly except for one problem. When a file is deleted at the source, this command doesn't seem to delete it on the receiving end. I assumed that --delete would do that, but apparently not.
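One likely culprit is the shell glob: with * the deleted file's name never appears in the argument list, so rsync has nothing on the sending side to compare against. Syncing the directory itself (note the trailing slash) lets --delete see removals; a sketch with a placeholder source path:
Code:
# Sync the directory's contents rather than a glob, so deletions propagate to the far side.
rsync -avz --delete /path/to/sourcedir/ freenas:DSIBackups/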
View 1 Replies
View Related
Oct 20, 2010
When rsync has finished the update (or in the meantime), I need to move the updated files to a different location - something like a date +%Y%m%d directory. The reason is that, because of development, I need the modified files - all of them, not just the latest ones - so I have to store them daily, but I don't want to store the whole dir, just the few files which were updated. Does that make sense?
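rsync can do this in a single extra pass with --compare-dest: files identical to the existing mirror are skipped, so only new or changed files land in the dated directory. A rough sketch (all paths are placeholders):
Code:
#!/bin/bash
# Copy only the files that differ from the mirror into a per-day directory, then refresh the mirror.
SRC=/path/to/source
MIRROR=/backup/mirror
CHANGED=/backup/changed/$(date +%Y%m%d)

mkdir -p "$CHANGED"
rsync -a --compare-dest="$MIRROR" "$SRC/" "$CHANGED/"
rsync -a --delete "$SRC/" "$MIRROR/"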
View 5 Replies
View Related
Apr 14, 2010
I'm hoping somebody can find something here that I haven't. I'm trying to use rsync to back up home directories to a NAS. First, I NFS-mounted the NAS and ran an rsync and everything worked out fine. The transfer completed after a few hours and everything was transferred (lots of stuff!). I then decided that I don't want to leave the NAS mounted all the time, and I didn't want to automate mounting and unmounting of the NAS, as I didn't think I could produce a script that would work reliably enough. So I decided to start an rsync daemon on the NAS and update via that. I ran the following command (results are included; the ^C is me killing it after it hangs).
Code:
ryan@server:/etc/backup$ sudo rsync -ax --stats --progress --delete /data root@192.168.0.98:backups1
root@192.168.0.98's password:
sending incremental file list
data/home/user/Documents/
data/home/user/Documents/The File.wmv
[Code]...
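Worth noting: with a single colon that command goes over ssh, not to the daemon at all. Talking to the daemon started on the NAS uses a double colon (or rsync://) plus a module name from its rsyncd.conf; a sketch with the module name assumed:
Code:
# Daemon connection (no ssh involved); "backups1" must be a module defined in the NAS's rsyncd.conf.
rsync -ax --stats --progress --delete /data/ 192.168.0.98::backups1/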
View 3 Replies
View Related
May 31, 2010
Well, I know there are issues when using rsync to copy files to an NTFS partition, like file permissions and so on. The thing is, I need to back up my music files periodically onto an NTFS partition from ext4. I really don't care about file permissions or any other stuff; when I use rsync, it should update the mp3 files on my NTFS (external) disc with the new ones. Can I give this operation a go? I have lots of more important files on the external disc and I don't want this rsync to corrupt or delete those files, because they are highly important.
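A cautious way to do it: point rsync only at the music directory on the NTFS disc, skip the permission options entirely, and leave --delete out until a dry run looks right. A sketch with placeholder paths:
Code:
# Preview first: -n shows what would change without touching anything.
rsync -rtvn --modify-window=1 /home/user/Music/ /media/external/Music/
# Then the real run; nothing outside /media/external/Music/ is ever touched.
rsync -rtv --modify-window=1 /home/user/Music/ /media/external/Music/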
View 2 Replies
View Related
Sep 15, 2010
I have a Linux host acting as an iSCSI server for a Windows box. I want to keep an off-site backup, so I figure rsync will keep the iSCSI server synced with an offsite Linux host. I understand that rsync does block-level incremental transfer to conserve bandwidth - OK, awesome. The trick is that I also want an archival copy kept. Say I want to go back to a revision of a file from 10 days ago; I need to be able to do that.
I was planning on using Backup Exec, since we currently have a licensed copy. Throw the archives from Backup Exec onto the iSCSI server as well, and have it keep a rotating 30-day backup, or something like that. The issue I see here is that this will be creating and deleting files as it does its daily backup rotation. I'm guessing rsync will see these as new files, and likely retransmit everything on a daily basis. The question then becomes: is this assumption correct, or will it still know to do a block-level incremental transfer even when file names and such are changing?
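As a rule rsync matches files by path and name, so a renamed or rotated archive is treated as brand new; --fuzzy can sometimes find a similarly named file in the same destination directory to use as a delta basis, but it is not a guarantee. A sketch of the relevant options (paths are placeholders):
Code:
# -y/--fuzzy: look for a similar-named basis file when the destination copy is missing,
# so a renamed archive may still go across as a delta rather than in full.
rsync -avy --delete-after /iscsi/backupexec/ offsite:/backups/backupexec/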
View 5 Replies
View Related
Jan 29, 2011
Our backup script was working fine (ssh to the server, back up /home to a second hard drive on my computer). Then right after an ubuntu update, it quit working. I investigated and found that "something" had changed the label on the backup hdd to what looked like gibberish to me. But the script identified the backup hdd by its uuid, which didn't change. Yet, here is the error I get when the backup fails: receiving file list ... done [took about 5 seconds] rsync: mkdir "/media/14D9-3B1F/server-backup" failed: No such file or directory (2) rsync error: error in file IO (code 11) at main.c(594) [receiver=3.0.6]
Note that the backup hdd IS mounted, the uuid is correct, and the folder 'server-backup' DOES exist. Does anyone have a clue for me? I'm moderately experienced in Linux and Ubuntu. Our server runs CentOS 5. And as stated, the backup ran fine for several weeks. I think there was a new Linux kernel in that update, but at this point, a while later, I don't know which one. The current kernel is 2.6.31-22-generic.
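Since /media/14D9-3B1F is an auto-mount path derived from the drive's label/UUID, a relabel can silently move it. Mounting the disk explicitly by UUID to a fixed mount point inside the script sidesteps that; a sketch with the UUID and paths as placeholders:
Code:
#!/bin/bash
# Mount the backup disk at a fixed location before rsync runs; bail out if that fails.
BACKUP_UUID=xxxx-xxxx
MNT=/mnt/server-backup-disk
mkdir -p "$MNT"
mountpoint -q "$MNT" || mount UUID="$BACKUP_UUID" "$MNT" || exit 1
rsync -av server:/home/ "$MNT/server-backup/"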
View 1 Replies
View Related
Feb 4, 2011
I support a small business which has an Ubuntu server running as a file server. The server is running Ubuntu 10.4. There is one hard drive which is mounted as /media/hdd. Each night this is backed up to an external USB hard drive mounted as /media/backup. The backup is carried out using the command:
Code:
rsync -av /media/hdd/ /media/backup/
Is there a way to encrypt this back-up so that if the USB hard drive is plugged into another machine it cannot be read?
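One approach is to make the USB drive itself a LUKS-encrypted volume: the rsync line stays exactly as it is, and the disk is unreadable on another machine without the passphrase. A sketch (the device name is an assumption, and luksFormat destroys existing data):
Code:
# One-time setup: encrypt the USB drive and create a filesystem inside (DESTROYS current contents).
cryptsetup luksFormat /dev/sdc1
cryptsetup luksOpen /dev/sdc1 backup_crypt
mkfs.ext4 /dev/mapper/backup_crypt
cryptsetup luksClose backup_crypt

# Each nightly run: unlock, mount, back up as before, then lock again.
cryptsetup luksOpen /dev/sdc1 backup_crypt
mount /dev/mapper/backup_crypt /media/backup
rsync -av /media/hdd/ /media/backup/
umount /media/backup
cryptsetup luksClose backup_crypt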
View 5 Replies
View Related
Jul 30, 2011
I am trying to keep a backup of my root filesystem on a second partition using rsync.
sda1 system
sda2 system-BAK
sdb1 /home
[code]....
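A sketch of how the root-to-backup-partition mirror could look, assuming sda2 (system-BAK) is mounted at /mnt/system-BAK; the virtual and volatile trees are excluded so the copy doesn't recurse into /proc, /sys or the backup itself:
Code:
# -x stays on the root filesystem; --delete keeps the copy an exact mirror.
rsync -aAXHx --delete \
    --exclude="/dev/*" --exclude="/proc/*" --exclude="/sys/*" \
    --exclude="/run/*" --exclude="/tmp/*" --exclude="/mnt/*" --exclude="/media/*" \
    --exclude="/lost+found" \
    / /mnt/system-BAK/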
View 3 Replies
View Related
Jan 18, 2016
I switched last summer from Windows (which I had used since Windows 95) to Debian. I have been using Debian Jessie for a couple of months now and I'm getting used to it a little.
There are problems here and there, but I can solve them with some reading on the web. Not really a big problem... till now.
I run Debian 8.2 on my PC (PC1). I bought an older PC (PC2) that I want to use as a backup server.
I'm using PC2 only for making backups; after the backup I switch it off again.
So I installed Debian 8.2 (net-install without a DE and with SSH) on PC2 and tried to configure it to work as my backup location. I made a public SSH key and exported it to the root account (no problem) and to the user account (sensdeb), but there I got an "Access Denied" error.
I gave the user (sensdeb) sudo rights via the visudo file:
# User privilege specification
root ALL=(ALL:ALL) ALL
sensdeb ALL=(ALL:ALL) ALL
I installed rsync.
The problem is that Rsync only works when I use the root account.
Code:
rsync -r -n -t -v --progress --delete -u -l -H -s /media/Data/Mp3/Anastacia root@192.168.1.102:/test/Mp3
When I try to sync with the normal user account sensdeb:
Code:
rsync -r -n -t -v --progress --delete -u -l -H -s /media/Data/Mp3/Anastacia sensdeb@192.168.1.102:/test/Mp3
I get "Access Denied" errors.
I don't know how to give the user sensdeb the rights needed so that I can use that account for my backup tasks. Right now it is only possible to sync with the root account, but I have read many times that that is not the way to do it.
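One likely cause is simply that /test on PC2 is owned by root, so sensdeb cannot write there. Handing the backup area over to sensdeb (or using a directory inside its home) lets the key-based login do the job without root; a sketch:
Code:
# On PC2, as root (one time): give the backup area to the backup user.
mkdir -p /test/Mp3
chown -R sensdeb:sensdeb /test

# Then from PC1 the same rsync works as sensdeb (-n removed for the real run):
rsync -r -t -v --progress --delete -u -l -H -s /media/Data/Mp3/Anastacia sensdeb@192.168.1.102:/test/Mp3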
View 7 Replies
View Related