Ubuntu Security :: Rsync Automated Backups Of Entire File Structure Over Ssh?
May 16, 2011
I am in the process of writing an rsync script to run unattended backups of my entire file system to another system on my local network, using ssh and passwordless RSA keys.
I absolutely will not use passwordless keys with the root account, and this is the limitation preventing me from accomplishing my goal, because rsync needs root to read the entire / tree and copy it to another location. I decided that if I compiled the script into a binary I wouldn't have a problem with the password being contained within the binary itself, but from what I've read there is no way to elevate to root and then drop back down to user level from within the script/binary.
I can create the script as the user and use chown to make it owned by root while keeping execute permission for the user, but the ssh login would still be made as root, and would therefore require either that I am there to enter my password or the use of passwordless keys under the root account, which I reiterate I will NOT do. Currently the script is executed by the user on the machine containing the files to be backed up.
I'm trying to find a secure way to back up files from my Prod Server to my Backup Server. It must be automated, so I will need a cron job that logs in to the Prod Server from the Backup Server and backs up the data.
1. Do you think it would be secure enough to create a passwordless RSA private key on the Backup Server and add its public key to the authorized_keys file on the Prod Server? I can't think of a way to automate this without entering any passwords other than a passwordless RSA key. Is there another, more secure way?
2. Should I create a special user for backups, which will only have read access to all files in the directory that I am backing up? If so, how can I check that this backup user really does have read access to ALL files in the folder I intend to back up, so the backup process won't skip files due to a permission problem? (A quick check is sketched below.)
3. I'm thinking of using the rsnapshot tool, which uses rsync.
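For question 2, one hedged way to check readability ahead of time is to run find as the backup user and list anything it cannot read or descend into. A minimal sketch, assuming GNU find, an account called "backupuser", and /srv/data as the tree being backed up (all three are assumptions):
Code:
# list anything the backup user cannot read, or directories it cannot enter
sudo -u backupuser find /srv/data \( ! -readable -o \( -type d ! -executable \) \) -print
Any output, or "Permission denied" errors from find itself, marks paths the backup would skip.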
There are multiple servers to be backed up. Different access rights exist on each server. There are two backup servers with plenty of disk space, one local and one offsite; the local one feeds the offsite one. The rsync command is being used to make a replica of the backed-up data, and deleted data is also being archived. Two methods have been considered. Method one is to have the individual servers run rsync and log in to the backup server to push data. Method two is to have the backup server run rsync and log in to each individual server to pull data. Because system data is involved and meta information (like the owning user) must be stored, root is required both to access the data and to store it, which means everything runs as root on both ends. So method one was quickly dismissed, because each server would effectively have rights to access ALL the data on the backup server, since it logs into the backup server as root. The security containment here involves different groups using different servers, and they need to be isolated from each other.
But even method two involves some risks that concern me: it means one machine has access rights to every server. If the backup server were compromised, every machine could be compromised. What I'd like to find is some way to allow backups to run without either machine granting root access to the other, while still running as root, or something equivalent, that allows accessing all data and storing all metadata. So I was looking at setting up an rsync daemon on each individual server (running as root so it can access what it is configured to serve), and running an rsync client on the backup server (as root so it can store metadata). This opens up network access issues: any user on the network can connect to the rsync daemon, so password protection is needed. But that communication is also not encrypted, which exposes both the password and the data if the network is sniffed.
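For reference, the daemon side of that idea might look something like the sketch below; the module name, path, user, and allowed address are assumptions, and the password would live in /etc/rsyncd.secrets as a "backup:somesecret" line with mode 600.
Code:
# minimal sketch of /etc/rsyncd.conf on one of the individual servers
uid = root
gid = root
[sysdata]
    path = /
    read only = yes
    auth users = backup
    secrets file = /etc/rsyncd.secrets
    hosts allow = 192.168.1.10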
So now I'm thinking about a non-root ssh login between machines. The backup server would log in as a non-privileged user on each individual server and set up a secure forwarding channel to the rsync daemon. Is this the best that can be done? Is there a way to run rsync over SSL with key verification so it can all be done together? I'd like the rsync daemons configured to always talk SSL, always verify the client's key against a list of authorized keys, and likewise have the client verify the server's key against the known public key for that server.
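Short of SSL, the forwarding-channel idea can be sketched as tunnelling the daemon port over the non-root ssh login and pulling through the tunnel. A minimal sketch, where "tunneluser", the module name "sysdata", the local port 8730, and the paths are all assumptions:
Code:
# forward the rsync daemon port over a non-root ssh login, then pull through it
ssh -f -N -L 8730:localhost:873 tunneluser@server1.example.com
rsync -aAXH --password-file=/etc/rsyncd.client.secret \
      rsync://backup@localhost:8730/sysdata/ /backups/server1/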
I need to log in as root, or at least get root privileges, in a cron-triggered backup run. The straightforward way to do this would be for the backup server to make an ssh connection to the server being backed up (this direction because I want to avoid many servers being backed up in parallel, and the backup server itself would manage that scheduling), via the rsync command performing the backup's synchronization step.
I'm looking for alternatives to this in some form. I'd like to disallow direct root login on my ssh port (not 22). One idea I have is to have the backup server initiate an ssh login as a non-root user, either to the actual source server or to a server that can reach the source server, and set up port forwarding. Over the forwarded port, then initiate the rsync that logs in as root via another port that allows direct root but cannot be reached from the internet at all (because the border firewall doesn't include this port as allowed in). FYI, these logins will be using ssh keys, not passwords. I do need to keep ownership metadata for the files being backed up, which is why I am using root. Also, rsync is needed to get incremental updates and keep bandwidth usage lower (otherwise I could just transfer a tarball each day). Does anyone have any other ideas or comments, on security issues, based on experience doing things like this (backups, routine data replication, etc.)?
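A minimal sketch of that forwarded-port variant; the user names, host names, ports, and paths are placeholders, and it assumes sshd on the source also listens on port 2222 bound to localhost only, permitting root there with key auth:
Code:
# forward the localhost-only root ssh port, then rsync as root through the tunnel
ssh -f -N -L 2222:localhost:2222 backupagent@source.example.com
rsync -aAXH --numeric-ids -e "ssh -p 2222" root@localhost:/ /backups/source/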
How do you get rsync to do incremental backups rather than full backups? At the moment I have a script that will create a backup folder (if it doesn't already exist) and then copy the source files into the backup directory with the command
Target is where the files will be backed up to; Sources is the dir(s) to be backed up; Exclude files is the list of files not to back up; log file is where the output will be saved. At the moment it only does full backups, but I would like to do only incrementals. How would this be achieved? Am I missing an option in rsync that is required?
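One common approach, if the script's variables roughly match the description above, is rsync's --link-dest option, which hard-links unchanged files against the previous backup so each run only stores what changed. A minimal sketch; the variable values and the "previous" symlink are assumptions, not part of the original script:
Code:
TARGET=/mnt/backup
SOURCES=/home
EXCLUDES=/etc/backup.excludes
LOG=/var/log/backup.log

# unchanged files become hard links into the previous run's tree
rsync -av --delete --exclude-from="$EXCLUDES" \
      --link-dest="$TARGET/previous" \
      "$SOURCES" "$TARGET/$(date +%F)" >> "$LOG" 2>&1
ln -sfn "$TARGET/$(date +%F)" "$TARGET/previous"
Each dated directory then looks like a complete backup, but only changed files consume new space.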
What would be the best way to have automated system backups? I'm trying to get my Xubuntu box to automatically back up the entire system, including user settings, at regular intervals. What would be the best way to do it? I have 2 hard drives, one of which I do not use, that I'd like to back up to.
I jumped into a Linux class in college with only 3 weeks left in the course. I thought I would be able to catch on, and go figure, it didn't exactly happen that way. I was given an assignment to do, and I am so far lost it isn't even funny. I need to create a directory structure, set up file security, create a step-by-step instruction manual on how to copy/delete said files, and create a guide to common Linux commands. How would I create these files as root and share them with the other users? And where can I find a list of common commands and their functions?
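For the sharing part, a minimal sketch of one conventional approach, assuming a group called "students" and placeholder names throughout:
Code:
sudo groupadd students
sudo usermod -aG students alice        # repeat for each user who needs access
sudo mkdir -p /srv/coursework
sudo chgrp -R students /srv/coursework
sudo chmod -R 2775 /srv/coursework     # setgid bit keeps new files in the group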
I am currently backing up my data but find that it takes way too long to do an rsync; it takes forever just to find the differences and transfer them. Out of 3 separate rsyncs, the main one that is slow is my www.skins.be mirror directory, which is 41GB and has 392,200 files sorted into multiple directories, and grows by around 100 files every couple of days. I think something that tracks changes via inotify on directories would speed it up, since Picasa sure finds changes fast when I open it, and it is tracking over 26,200 pictures. I just don't know of a backup solution that does that.
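One hedged way to approximate that is inotifywait from the inotify-tools package, used to trigger targeted rsync runs only for directories that actually change. A minimal sketch; the source and destination paths are assumptions, and --relative recreates the full source path under the destination:
Code:
inotifywait -m -r -e close_write,create,delete,move --format '%w' /data/mirror |
while read -r dir; do
    [ -d "$dir" ] && rsync -a --delete --relative "$dir" /mnt/backup/
done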
I am using rsync to back up dirs on my ubuntu server onto a NAS (which is mounted onto the filesystem), but the problem is that it is constantly doing full backups rather than incrementals, and I am not really sure why. After doing a bit of experimenting with the script I noticed that if I just backed up a home dir (/home/user) the incremental backups work fine. If however I back up a dir like /home/domain/user, it always does full backups. I have tried various different scripts but still get the same end result. The latest script is a variation on a script found on the samba rsync examples webpage, see below...
#!/bin/bash
# rsyncbu.sh -- backup to nas using rsync
# This script backs up the files listed in BDIR to the BSERVER. The verbose
# output, along with the date, is written to the LOG_FILE specified.
# verbose output
So I am using rsync (3.0.7 on Mac OS X) to back up one hard drive to a folder on another one. This is USB drive to USB drive, and I have done the initial backup from one drive to a newly formatted drive with the following command:
Code:
rsync -avX --progress /Volumes/Source /Volumes/Destination

This all appears to be going smoothly as I type. I am going to write a script to do subsequent backups in the
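For the subsequent runs, a minimal sketch of a follow-up command; --delete is an addition (not in the original command) that keeps the destination a true mirror, and the paths keep the same no-trailing-slash form as the initial copy:
Code:
rsync -avX --delete --progress /Volumes/Source /Volumes/Destination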
I recently installed Fedora 11 64-bit and I am curious about encrypting my entire file system for security purposes. I've been on Google for a while now and I keep finding info on how to encrypt a specific folder or home directories, but nothing on the entire file system (or I'm missing something big here). It's hard for me to imagine that it isn't possible. If it is, do I need to encrypt the partition my file system is on before installing it? What software should I use? There seem to be so many that it's difficult to keep them all straight.
I am backing up my debian server with rsnapshot, which actually uses rsync to perform the backup. The backups are located on an external storage device of size 1.4T.
[code]....
I tried to understand what this error message means, and I found that error code 12 is "Error in rsync protocol data stream". I understand that when rsync finds that a file on the target has changed, it sends only the block/blocks that contain the changes, and on the destination rsync creates a new file rather than updating the old one (new inode...). I want to know if this error is due to a full disk or perhaps some other factor.
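Before blaming the protocol itself, it may be worth ruling out a full disk or exhausted inodes on the backup volume, either of which commonly kills the stream. A minimal check, where the mount point is an assumption:
Code:
df -h /mnt/backup    # free space
df -i /mnt/backup    # free inodes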
I am using rsync for incremental backups. I am backing up to a second hard drive on my computer. When I check the individual backup directories (backup.0 through backup.4) with du -hs, they each show 12G; when I check the parent directory, squeeze, it shows 15G. Over 4 backups I have added 3G. I haven't made very many changes to the directories I'm backing up, and I am using hard links. I have included some info below.
Quote:
Backup script:
#!/bin/bash
mount /mnt/backup
cd /mnt/backup/squeeze/
rm -rf backup.7
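For context, a minimal sketch of how such a rotation typically continues after the lines above; the source path and the --link-dest reference are assumptions, not the poster's actual script:
Code:
# shift the existing snapshots up by one
for i in 6 5 4 3 2 1 0; do
    [ -d backup.$i ] && mv backup.$i backup.$((i+1))
done
# new snapshot hard-links unchanged files against the previous one
rsync -a --delete --link-dest=/mnt/backup/squeeze/backup.1 /home/ backup.0/
cd /
umount /mnt/backup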
With the --backup and --backup-dir= options on rsync, I can tell it another tree where to put files that are deleted or replaced. I'm hoping it fills out that tree with a replica of the original directory paths (at least for the files put there), or else it's a showstopper. What I want to figure out applies when I'm restoring files. Assume each time I run rsync (once a day) I make a new directory tree (named by the date) for the backup directory. For each file name/path in the tree, I would start with whatever is in the main tree (the rsync target) and work through the incremental trees going backwards until I reach the date I want to restore to. If along the way I encounter a file in an incremental, I would replace the previous file at that path with this one. So by the time I get back to a given date, I should have the version of each file that was present on that date. Do this for each file in the tree and it should be a full restore.
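A minimal sketch of the kind of dated --backup-dir run described (paths are assumptions); rsync does recreate the relative directory paths inside the backup dir for the files it moves aside:
Code:
rsync -a --delete \
      --backup --backup-dir=/backups/increments/$(date +%F) \
      /data/ /backups/current/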
But ... and this is the hard part, it seems. What about files that did not exist at the intended restore date, but do exist (were created) on a date after the intended restore date? What I'd want for a correct restore is for such files to be absent in the restored tree (just as they were absent in the source tree on that date). How can such a restore be done so it correctly excludes these files? Wouldn't rsync have to store some kind of sentinel indicating that on prior dates the file did not exist? I suspect someone might suggest I just make a complete hard-linked replica tree for each date, so that absent files are clearly absent. I can assure you this is completely impractical, because I have actually done it before. I ended up with backup filesystems that had so many directories and nodes that it could take over a day, maybe even days, just to do something like "du -s" on them. I intend to keep daily changes for at least a couple of years, if not more. That means the 40 million plus files would be multiplied by over 700, making programs like "du -s" have to check over 28 BILLION file names (and that's assuming the number of files does not grow over the next two years).
I have several copies of a file set with different organizational structures, but the same files, i.e.:
On client1 the files can be found in ~/foo/bar/file1, ~/foo/bar/file2, ~/foo/tavern/file3, ~/foo/tavern/file4
On client2 the files can be found in ~/foo/bar/file1, ~/foo/bar/file2, ~/foo/tavern/file5, ~/foo/tavern/file6
On client3 the files can be found in ~/file1, ~/file3, ~/file5, ~/file7
I have access to one client and the server where I'd like all the files to be synced. I'm not worried about conflicts, just about having a complete copy of all files[1-7]. Is there a way to make rsync strip the directory structure, so that I get something like:
client1% rsync files server:backup
client2% rsync files server:backup
etc., where at the destination all files are checked against the destination set regardless of the source directory structure?
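One hedged way to get that effect is to flatten on the sending side so only basenames reach the server. A minimal sketch that assumes the files live under ~/foo, that "server:backup" is the destination from the post, and that one transfer per file is acceptable:
Code:
cd ~/foo
# each file is sent by its basename, so the destination stays a single flat set
find . -type f -exec rsync -av {} server:backup/ \;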
I've been using rsync in a bash script to make hourly copies of jpeg files that are created every few minutes. The images are contained in a number of subdirectories, with the filenames using the date and time
At the moment, my source and target directories are identical, and rsync is easy:
Quote:
rsync -av data/images/ /mnt/data/images
Note that the source directory is purged at regular intervals to stop it filling up. So the target directory has all the images, but the source doesn't.
I need to change my script so that the target directory has a different structure from the source directory, like this:
I'm trying to set up rsync backups on my ReadyNAS and I'm getting the following error:
ERROR: The remote path must start with a module name not a /
This error is accompanied by the following information:
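That error usually means the destination was written as an absolute path while the ReadyNAS end is running in rsync daemon mode, which expects a module name instead. A minimal sketch, where the hostname "readynas" and module name "backup" are assumptions:
Code:
# daemon-mode destinations start with a module name, not /
rsync -av /home/user/ admin@readynas::backup/user/
# equivalent URL form
rsync -av /home/user/ rsync://admin@readynas/backup/user/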
I've been using dump/restore for backups for quite some time. It's worked fine, but the process of recovering from an HD failure takes too long. What with eSATA and external drive docks, what I'd really like is to use rsync to maintain a current clone of my entire system drive. That is, start with a full disk clone, and then use rsync to keep it current.
I've seen plenty of instructions on how to do this with a directory tree, but I've seen none for doing it with a copy of the entire disk. If, for example, I copy /etc/fstab, then the copied disk would have entries with the same UUIDs as the original disk. That would mean that for the clone disk to be bootable, its partitions would need the same UUIDs as the original disk. Which they would have, if the cloned disk started as a full-disk clone, I think. Am I wrong? But that means that when the clone disk was attached, I'd have partitions with duplicated UUIDs. Is this going to cause problems? When I boot, will I get the correct partitions mounted?
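If the duplicated UUIDs do turn out to be a problem, one hedged workaround is to give the cloned partition its own UUID and point the clone's fstab and bootloader at it. A minimal sketch, where the device name is an assumption and an ext2/3/4 filesystem is assumed:
Code:
sudo tune2fs -U "$(uuidgen)" /dev/sdb1
sudo blkid /dev/sdb1     # read back the new UUID
# then edit the clone's /etc/fstab and grub configuration to match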
Is there a way to force rsync to not make directories in its destination directory, i.e., to simply dump all of the files from the source directory directly into the destination without copying any of the folders that the files were originally in? I tried --no-dirs, but that seems to only be for empty directories.
I am trying to create a simple bash script to rsync some folders within a directory structure. I am using wildcards in the rsync source path, but my command always fails. I believe it is the way I am using wildcards within my for loop. Here is my command:
Code:
for seq in `cat test.txt` ; do rsync -nvP /folder/folder/folder/folder/folder/**/$seq /folder/folder/folder/ ; done

This always fails, whereas if I do an ls on the same path to test it, it always works.
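One guess at the cause: in a script, bash's ** does not match recursively unless globstar is enabled, so the glob expands differently than at an interactive prompt where globstar may already be set. A minimal sketch of the same loop with that assumption made explicit (the /folder paths are the placeholders from the post):
Code:
#!/bin/bash
shopt -s globstar
while read -r seq; do
    rsync -nvP /folder/folder/folder/folder/folder/**/"$seq" /folder/folder/folder/
done < test.txt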
I'm fully aware that Linux and Mac OS X are a far cry from Windows when it comes to viruses. I run Xubuntu 11.04 on the netbook, and Mac OS X 10.4 on a Mac Mini. But, and I now wish I had kept the email and not deleted the account, the following happened recently: I received an email, from myself (I was on my own contacts list), that was sent to a few others on my contacts list, an email I did not send. There was no subject, and an unclickable link that, once copy/pasted and visited, was "Forbidden"; even when you took the /s away and went to the main site, it was "Forbidden". It was a .be website (Belgium). Since one of the recipients was "noreply@craigslist.com", I can only assume that this email was automated.
It was only sent to a few addresses, all of which were tied to my account. I changed the password, deleted my contacts, and deleted the address (and have a new one elsewhere). I ran ClamAV on here and an antivirus on my Mac Mini (and I'm running a different one now), and so far, absolutely nothing. So my question is: is it even reasonably possible that either of my computers is infected, and if not, how did this occur? I'm just taken aback a little, considering I don't use or have Windows on any machine that I own, and extremely rarely use Windows elsewhere (and not at all recently).
I'm in the process of building a security team and want each member of the team to concentrate on the GIAC certifications mentioned on the [URL] website. I was wondering if anyone has input on how I can structure this team and how I can target customers?
I've been a DOS/Windows guy for 20 years, and recently became a SW test lab helper. My company uses CentOS for a lot, so I've become familiar with it, but obviously not as comfortable as I am with Windows.
Here's what I have planned:
machine: Core 2 Duo E8400, 8GB DDR2, 60GB SSD OS drive, ATI 4650 video card, other storage is flexible (I have 3 1TB drives and 4 750GB drives around that can be used in this machine.)
uses: HTPC, network storage, VMware server host (SMTP, FTP server, and web server virtual machines)
I've figured out how to do much of this, but I haven't figured out how to do backups in Linux. I've been spoiled by Windows, with its built-in backup system that is so simple to use. I find myself overwhelmed by the array of backup software, and unable to determine which to use. None of them seem to do everything I need, but some come close, I think. I'm hoping someone here can help me figure out which program to use and how to use it.
Here is what I need the backup software to do:
1. scheduled unattended backups, with alerts if the backups fail
2. a weekly full backup with incrementals every 12 hours (the scheduling side is sketched below)
3. removal of the old backups when the new full backup runs; I would prefer to keep 2 weeks of backups, but that's not necessary
4. a GUI would be preferable, since my arthritic fingers don't always do as I want them to do. I typo things a lot, and the worn-off label on my backspace key can attest to that.
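Whichever tool ends up doing the work, the scheduling side in cron might look something like this; backup.sh and its full/incr arguments are hypothetical placeholders for the chosen program:
Code:
# weekly full backup, Sunday 01:00
0 1 * * 0       /usr/local/bin/backup.sh full
# incrementals every 12 hours the rest of the week
0 1,13 * * 1-6  /usr/local/bin/backup.sh incr
0 13 * * 0      /usr/local/bin/backup.sh incr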
I'm looking for software to encrypt my entire OS. I want something so that nobody can erase the hard drive or tamper with it. I'm using Ubuntu Server x64.
I have just been bothered by a fairly small issue for some time now. I am trying to search (using find -name) for some .jpg files recursively. This is a Redhat environment with bash.
I get this job done, but I need to copy ALL of them and put them in a separate folder, AND I also need to keep the directory structure intact after copying.
For example, if I get a JPG file under /home/usr/new/1/, then the destination also needs to be /test/old/new/1/.
At the moment, I am simply putting all files under /test/old/, and I can't figure out how to get the trailing /new/1/ folder path created under /test/old/.
I understand this could well be done using a while or if-else loop, but if someone can just give me a hint, I would be really grateful.
I will complete the rest of the steps myself; I am asking here since I am still not comfortable with shell/bash scripts yet and am planning to get really good at them over the next couple of months.
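As a hint rather than a full solution: GNU cp's --parents flag recreates each file's relative path under the destination, so paired with find it covers the structure-preserving copy. A minimal sketch using the paths from the post (the cd is an assumption):
Code:
cd /home/usr
find . -name '*.jpg' -exec cp --parents -t /test/old {} +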
I've been using Ubuntu for about half a year, currently version 10.10. My next problem is with Nautilus: I have it in list view. If I want to rename a file, the entire filename is selected and not only the first part, so the file extension is also selected. I think this is a bug, which can be found on the Internet, but I cannot find a solution. Does anyone here have a solution?
As the title says, I've just given my entire Ubuntu filesystem full permissions. I used the following command thinking it would change the permissions of only the folder I was in.
sudo chmod -R 0777

Is there any way of reverting the permissions without doing a full reinstall?
Having said that, I'm doing a full reinstall anyway, just in case.
Does anyone know the best way to encrypt an entire HD with both Fedora and Windows 7 already on it? At the very least I would want to encrypt the Linux partition, as that has the most sensitive stuff on it.
I am trying to create a script to do the following: log in to an ftp server and download a file with the naming convention xmtvMMDDYY.xml.gz on a daily basis, followed by extracting that file. I can do this easily enough with a static filename, but the variable filename is throwing me off. I was planning on doing an mget with a wildcard to just grab the entire directory, but this turned out not to be as clean as I had hoped, mainly because the admins of the ftp site keep multiple days of the above-mentioned file on it.
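One hedged alternative to mget is to build the dated filename first and fetch only that file. A minimal sketch, where the host, credentials, and remote directory are assumptions:
Code:
# build today's filename in the xmtvMMDDYY.xml.gz convention, fetch it, and unpack it
FILE="xmtv$(date +%m%d%y).xml.gz"
wget --user=ftpuser --password=secret "ftp://ftp.example.com/feeds/$FILE" -O "/tmp/$FILE"
gunzip "/tmp/$FILE"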