To copy from production to standby over the internet, I use a cron job running rsync -avze 'ssh -p 8022' --exclude-from= ....
My question is: should the cron job run on the production system or on the standby system? Root access to the remote system is granted by a passphrase-less SSH key. Currently I run rsync on the production system; I assume that is more secure because the standby then needs no SSH login to production. Running rsync on the standby would use fewer resources on production, but I am concerned that it would mean passphrase-less access from the standby to production.
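For reference, the cron entry on the production box looks roughly like this (the time, host name, paths and exclude file below are placeholders, not my real values):
Code:
0 3 * * * rsync -avze 'ssh -p 8022' --exclude-from=/root/rsync.exclude /srv/data/ root@standby.example.com:/srv/data/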
I have a small network at my office (3 workstations, 1 Ubuntu desktop that I'm using as a file server). I'm using a WRT54G2 router for networking and internet connectivity. Here's what I'm trying to accomplish: I want to be able to access my little file server from home, across town. I think SSH might be the best way to go. What I don't know: how do I set up the SSH server on my machine/network without compromising my network security and the security of my server? Do I just set up port/IP forwarding on my router, install OpenSSH, and that's it?
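To make the question concrete, this is the sort of setup I'm picturing: forward one external port on the WRT54G2 to the Ubuntu box, and tighten sshd_config along these lines (the port numbers, internal IP and user name are only examples, not my real values):
Code:
# /etc/ssh/sshd_config (excerpt)
Port 22                     # router forwards e.g. external 2222 -> 192.168.1.10:22
PermitRootLogin no
PasswordAuthentication no   # key-based logins only
AllowUsers myuser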
I'd like to run a Tor relay, but I'm trying to understand the security implications. For some time I've run my torrent client in a VirtualBox virtual machine, which runs as a very unprivileged user, bridges directly to the internet, and writes to only one directory on the host. My belief is that this is about as secure as it can be, but I'm open to suggestions. If I run a relay in the VM, it wouldn't be associated with my use of Tor as a client, which is fine since there is no technical need for them to be connected, and keeping them separate is desirable for security. I've read that chroot jails can be broken out of, particularly when run as root, so I don't really trust that. I've also looked at a vserver, but it must share the host's network setup, which doesn't strike me as isolated enough.
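For what it's worth, the relay configuration inside the VM would be nothing more exotic than a minimal non-exit torrc along these lines (the nickname and port are placeholders):
Code:
# /etc/tor/torrc (sketch of a non-exit relay)
ORPort 9001
Nickname myrelaynickname
ExitPolicy reject *:*     # relay traffic only, never an exit
SocksPort 0               # no local client use from this instance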
This one being Ubuntu 9.10 (yes, I know I really should upgrade). I keep a number of confidential files in a TrueCrypt container, which is a standalone file in my Documents folder. I'd like to delete some of these files as securely as I can, but I believe that if I simply hit 'Delete' with a file selected, it will move the file to the Deleted Items folder. This, I assume, means the file is taken out of the encrypted volume and stored unencrypted in the Deleted folder.
I've been reading a little about the shred command, and there seems to be some question about whether it works effectively on a journalled file system; since I have no idea whether I'm using a journalled file system, or how to find out, I'm treating shred and other overwriting secure-deletion tools as ineffective for now.
With this in mind, can anyone advise me how I can protect the file stored in the TrueCrypt volume and delete it in place, without taking it out of the encrypted area? And further to that, can anyone tell me whether the file is actually secure while it's in the encrypted volume? For all I know, just opening the volume may result in copies being made somewhere (apart from RAM).
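If it helps, I gather the filesystem type can be checked with something like the following (ext3 and ext4 are journalled, ext2 is not); /home is just the mount point I'd look at:
Code:
df -T /home     # the "Type" column shows e.g. ext4 (journalled) or ext2 (not)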
My company needs to send sensitive data across to another company: 800 GB of .dpx files. The approach I have thought of is: eSATA / 1 TB WD Black drive; TrueCrypt-encrypted with hardware-accelerated AES (3 machines being built with 2600K CPUs); sha1sum on each file.
The main goals are to make sure that: 1. the files transferred off the server onto the drive are exactly the same; 2. the transfer is secure; 3. it is fast.
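The checksum step I have in mind is simply a manifest generated on the source and re-checked on the receiving end, roughly like this (run from the directory holding the .dpx files, which must have the same relative layout on both ends; paths are placeholders):
Code:
# on the source machine
find . -type f -name '*.dpx' -exec sha1sum {} \; > manifest.sha1
# on the destination, after copying
sha1sum -c manifest.sha1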
I saw a reference in a magazine to using rsync to keep identical copies of folders. This looks like something I could find useful, as I have a large number of items in need of safe backup.
I have the folders on an old system on a home network and would like to copy these over to a USB Hard Drive.
Currently the folders reside on an SFTP server at xxx.xxx.xxx.xxx, and I wish to sync them to a USB drive attached to my laptop.
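From what I've read, the command would be something along these lines, assuming the USB drive is mounted at /media/usbdrive and the server allows regular SSH logins (rsync itself doesn't run over SFTP); the user name and remote path are placeholders:
Code:
rsync -av user@xxx.xxx.xxx.xxx:/path/to/folders/ /media/usbdrive/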
I have a Samba share to a Windows 7 computer. I don't know whether I will be able to use Back In Time or not, so I want to know how to have rsync do my backup. I've read the man page, but I'm not sure I understand it. The backup is on the same computer but on a different hard drive, and should run every hour from a script. Leanne is the Windows 7 share and backup is the other hard drive in the computer: rsync -arvRzEP /media/leanne /media/backup.
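For the hourly part, I was thinking of either a crontab entry or a small script called from cron, something like the following (using the same paths as my command above; the log file is just an example):
Code:
# run the sync at the top of every hour
0 * * * * rsync -arvRzEP /media/leanne /media/backup >> /var/log/leanne-backup.log 2>&1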
I switched last summer from Windows (which I had used since Windows 95) to Debian. I've been using Debian Jessie for a couple of months now and I'm slowly getting used to it.
There are problems here and there, but I can solve them with some reading on the web. Nothing really big... until now.
I run Debian 8.2 on my PC (PC1). I bought an older PC (PC2) that I want to use as a backup server.
I use PC2 only for making backups; after the backup I switch it off again.
So I installed Debian 8.2 (net-install, without a DE and with SSH) on PC2 and tried to configure it to work as my backup location. I generated an SSH key pair and copied the public key to the root account (no problem) and to the user account (sensdeb), but for the user I got an "Access denied" error.
I gave the user (sensdeb) sudo rights via the visudo file:
# User privilege specification
root    ALL=(ALL:ALL) ALL
sensdeb ALL=(ALL:ALL) ALL
I installed rsync.
The problem is that Rsync only works when I use the root account.
I don't know how to give the user sensdeb the rights so that I can use that account for my backup tasks. Right now it is possible to sync with the root account, but that should not be the way to do it, as I've read many times.
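What I'd like to end up with is something like the following, run as my normal user on PC1, with sensdeb owning the target directory on PC2 (the paths and host name are placeholders, not my real setup):
Code:
# on PC2, once: give sensdeb a writable backup location
sudo mkdir -p /srv/backup && sudo chown sensdeb:sensdeb /srv/backup
# on PC1: back up over ssh as the unprivileged user
rsync -av -e ssh /home/myuser/ sensdeb@pc2:/srv/backup/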
I want to save a backup of my data on a remote server, but never want the backup server to see the data unencrypted. Editing a single file and backing up should not result in everything being encrypted and sent again. The remote server should preferably not even know the directory structure (and especially not the directory names).
I'm trying to learn how rsync works to back up my system. I tried: Code: rsync -azvv /home /media/Elements. I get a folder called home on my external hard drive, but when I use ls -l to check the permissions they are all wrong. In my /home folder the permissions on /nathan are drwxr-xr-x 48 nathan nathan; the permissions on the backup /nathan folder are drwx------ 1 nathan nathan.
I also tried using the long version of -a, which is -rlptgoD, and that didn't work either. What do the 48 and the 1 mean in the ls -l output? When I look inside the /nathan folder the permissions are all wrong too: a lot of the files are backed up as executable, and ownership is messed up as well. I also ran it with sudo, and that didn't help either.
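In case it matters, this is how I'd check what filesystem the external drive uses, since I understand that FAT/NTFS destinations can't store Linux ownership and permission bits at all:
Code:
df -T /media/Elements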
This should be a quick one. I'm trying to back up a single directory and its subdirectories on my Lucid server to a FreeNAS box across my network. This is what I'm using to do that: rsync -r -a -v -z * --delete freenas:DSIBackups. It almost works perfectly except for one problem: when a file is deleted at the source, this command doesn't seem to delete it on the receiving end. I assumed that --delete would do that, but apparently not.
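In case the shell wildcard is the culprit, the variant I'm planning to test next syncs the parent directory itself rather than a glob (the local path is a placeholder), since I gather --delete can only remove files that are missing from the directories actually given as sources:
Code:
rsync -ravz --delete /path/to/dir/ freenas:DSIBackups/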
When rsync has finished the update (or while it is running), I need to move the updated files to a different location, like a date +%Y%m%d directory or similar. The reason is that, because of development, I need all of the modified files, not just the latest version, so I have to store them daily, but I don't want to store the whole directory, just the few files that were updated. Does that make sense?
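One option I've been looking at (purely my assumption that it fits the need) is rsync's --backup/--backup-dir pair, which on each run moves the previous version of any file that gets replaced into a dated directory; the paths are placeholders:
Code:
rsync -av --backup --backup-dir=/srv/changed/$(date +%Y%m%d) /src/dir/ /dst/dir/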
I've been trying to set up a three-stage backup, with stage 0 being a full monthly backup, stage 1 a weekly backup, and stage 2 a daily backup. I've been trying very hard to use rsync for this, but selecting files by date is proving to be problematic. Sometimes it seems to work from the command line directly, but the same command causes errors and warnings from a script while entirely failing to select the correct files.
The common example I see for this involves commands like this:
Code: rsync -Rav `find /home/ -ctime -7 -print` /path/to/home_backup The problem seems to be that since the user directories in /home contain files that have been altered within the specified time frame, the whole directory is matched first, which means the whole directory is recursively archived rather than just the changed files.
I've also seen examples using the --files-from option with the same find parameters, and that one ALMOST seems to work, but it gives me strange warnings and fails to run at all when launched from a script.
Many of the things I've googled about using rsync to back up files by date modified involve a rather snarky "You're missing the point of rsync!", to which I respond by yelling at my computer monitor, followed by "JUST TELL ME WHAT I NEED TO KNOW!" I understand that rsync is meant to take care of incremental backups on its own; that's exactly why I want to use it for a traditional three-stage backup scheme.
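For completeness, the --files-from variant I've been testing looks roughly like this, with -type f added so that only files (not whole directories) get matched; the backup path and list file are placeholders:
Code:
find /home/ -ctime -7 -type f -print > /tmp/changed.list
rsync -av --files-from=/tmp/changed.list / /path/to/home_backup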
I'm doing an rsync backup to an external drive in order to take a shot at setting up partition encryption. My rsync command, run as root, is: Code: rsync -av / /external1/backup. Once I've finished my cryptsetup work and done a fresh Linux install, what command should I use to properly restore my backup (without messing up the encryption setup)?
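My first guess for the restore, assuming I boot a live session and mount the new encrypted root at /mnt (that mount point is just my assumption), would be the mirror of the backup command, presumably excluding /boot, /etc/fstab and /etc/crypttab so the fresh install's encryption setup isn't overwritten, though that last part is exactly what I'm unsure about:
Code:
rsync -av /external1/backup/ /mnt/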
I'd like to back up my whole system to a second disk using rsync (other tools are not possible). Which paths should I exclude from the backup? I was thinking about /proc, /dev, and the lost+found directories. What other paths am I forgetting?
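The fuller exclude list I've seen suggested elsewhere looks roughly like this (the destination mount point is a placeholder):
Code:
rsync -aAXv --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* --exclude=/run/* --exclude=/mnt/* --exclude=/media/* --exclude=/lost+found / /mnt/backup/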
I'm syncing a server over the internet with rsync, but it only works for a few hours before the backup fails with "No route to host". I can restart the job and it will pick up where it left off, but is there an automated way to do this, or to protect against a connection failure? I have about 170 GB to copy initially, but I can only get through about 4-5 GB before the connection drops; manually restarting the sync every time it drops will make the initial backup take days.
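The workaround I'm considering is just wrapping the job in a retry loop so it resumes automatically after a drop (the host, paths and timings are placeholders):
Code:
# keep retrying until rsync exits successfully
until rsync -avz --partial --timeout=300 /data/ user@remotehost:/backup/data/
do
    echo "rsync dropped, retrying in 60s..." >&2
    sleep 60
done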
I'm hoping somebody can spot something here that I haven't. I'm trying to use rsync to back up home directories to a NAS. First, I NFS-mounted the NAS and ran an rsync, and everything worked out fine; the transfer completed after a few hours and everything was transferred (lots of stuff!). I then decided that I don't want to leave the NAS mounted all the time, and I didn't want to automate mounting and unmounting it, as I didn't think I could produce a script that would work reliably enough. So I decided to start an rsync daemon on the NAS and update via that instead. I run the following command (results are included; the ^C is me killing it after it hangs).
Well, I know there are issues when using rsync to copy files to an NTFS partition, such as file permissions and so on. The thing is, I need to back up my music files periodically from ext4 onto an NTFS (external) partition. I really don't care about file permissions or anything like that; when I use rsync, it should just update the MP3 files on my NTFS disc with the new ones. Can I go ahead with this operation? I have a lot of other important files on the external disc and I don't want rsync to corrupt or delete them, because they are highly important.
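To be specific, the command I had in mind is limited to the music directory, skips permission and ownership handling, and I'd run it with --dry-run first so it can't touch anything else on the disc (the paths are placeholders):
Code:
rsync -rtv --dry-run /home/me/Music/ /media/external/Music/
# drop --dry-run once the output looks right; no --delete, so nothing gets removed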
I have a Linux host acting as an iSCSI server for a Windows box, and I want to keep an off-site backup, so I figure rsync will keep the iSCSI server synced with an off-site Linux host. I understand that rsync does block-level incremental transfer to conserve bandwidth; OK, awesome. The trick is that I also want an archival copy kept: say I want to go back to a revision of a file from 10 days ago, I need to be able to do that.
I was planning on using Backup Exec, since we currently have a licensed copy: throw the archives from Backup Exec onto the iSCSI server as well, and have it keep a rotating 30-day backup, or something like that. The issue I see here is that this will be creating and deleting files as it does its daily backup rotation. I'm guessing rsync will see these as new files and will likely retransmit everything on a daily basis. The question then becomes: is this assumption correct, or will it still know to do a block-level incremental transfer even when file names and such are changing?
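For the "revision from 10 days ago" requirement, the pattern I keep seeing is dated snapshot directories hard-linked against the previous run with --link-dest, run on whichever machine holds the archive; the paths are placeholders:
Code:
rsync -av --delete --link-dest=/backups/latest /data/ /backups/$(date +%Y%m%d)/
ln -sfn /backups/$(date +%Y%m%d) /backups/latest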
Our backup script was working fine (ssh to the server, back up /home to a second hard drive on my computer). Then, right after an Ubuntu update, it quit working. I investigated and found that "something" had changed the label on the backup hdd to what looked like gibberish to me, but the script identifies the backup hdd by its UUID, which didn't change. Yet here is the error I get when the backup fails:
receiving file list ... done [took about 5 seconds]
rsync: mkdir "/media/14D9-3B1F/server-backup" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(594) [receiver=3.0.6]
Note that the backup hdd IS mounted, the UUID is correct, and the folder 'server-backup' DOES exist. Does anyone have a clue for me? I'm moderately experienced with Linux and Ubuntu. Our server runs CentOS 5, and as stated, the backup ran fine for several weeks. I think there was a new Linux kernel in that update, but at this point, a while later, I don't know which one. The current kernel is 2.6.31-22-generic.
I support a small business which has an Ubuntu server running as a file server. The server is running Ubuntu 10.04. There is one hard drive, mounted as /media/hdd. Each night this is backed up to an external USB hard drive mounted as /media/backup. The backup is carried out using the command:
Code: rsync -av /media/hdd/ /media/backup/
Is there a way to encrypt this backup so that if the USB hard drive is plugged into another machine it cannot be read?
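One direction I've wondered about (LUKS via cryptsetup is purely my own assumption here, and /dev/sdX1 is a placeholder for the USB drive's partition) would be to encrypt the drive itself and keep the rsync command unchanged:
Code:
# one-time setup: encrypt the partition, then create a filesystem inside it
sudo cryptsetup luksFormat /dev/sdX1
sudo cryptsetup open /dev/sdX1 backup
sudo mkfs.ext4 /dev/mapper/backup
# each night: open and mount at /media/backup, run the same rsync, then close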
I'm going to make a nightly backup copy from one server to another using rsync. If I have a sufficiently large file, say 4+ GB, I'm not interested in copying the whole file if only a small change has been made. Can rsync detect small changes at the block level and back up only those when needed?
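My understanding so far is that rsync's delta-transfer algorithm is used automatically for transfers over the network (it is only skipped for local copies, where --whole-file is the default), so the nightly job would just be something like the following (the host and paths are placeholders):
Code:
rsync -av --partial --inplace /srv/data/ backup@otherserver:/srv/backup/data/
# --inplace updates large files in place instead of rebuilding a full copy on the destination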
rsync: link_stat "/av" failed: No such file or directory (2)
skipping directory home
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1060) [sender=3.0.7]
So I am using rsync (3.0.7 on Mac OS X) to back up one hard drive to a folder on another one. This is USB drive to USB drive, and I did the initial backup from one drive to a freshly formatted other drive with the following command:
Code: rsync -avX --progress /Volumes/Source /Volumes/Destination This all appears to be going smoothly as I type. I am going to write a script to do subsequent backups in the