Server :: Making Incremental Copies/transfers With Rsync In Cygwin?
Mar 21, 2011
As an example, I have two servers, sm-i222 and fileserv. sm-i222 is a Win2k3 system running Cygwin; fileserv is a Linux box running RHEL 4.7. On sm-i222, /cygdrive/c maps to the C: drive and /cygdrive/d maps to the D: drive (actually a single 4TB RAID). From /cygdrive/c on sm-i222 I call a small script from the crontab. The internal IP for fileserv is 10.0.0.7. See below.
These three lines perform well in that they make a full transfer of the /home/ directory on fileserv to the appropriate place on sm-i222 using rsync. I use rsync instead of scp because I have to traverse subdirectories and symbolic links in the /home/... filesystem on fileserv. What I'm looking to do is use rsync to make an incremental transfer/backup of only the files that have changed since the last full backup. I'll manage the times I do this manually or in crontab. A colleague says this is doable, but not how. rsync.org says this is doable, but not how. Cygwin says this is doable... see rsync.org. I believe what I'm looking for is a single rsync line like the ones I have above that only transfers the changed files on fileserv to sm-i222.
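For what it's worth, rsync already behaves incrementally by default: on each run it only sends files whose size or modification time differ from the copy on the destination. A minimal sketch of the kind of one-liner described above, assuming a made-up local user and destination path (the 10.0.0.7 address is the one from the post):

Code:
# hypothetical example; adjust the user and destination path to the real setup
# -a preserves times/permissions and recurses, -v is verbose, -z compresses;
# add -L (or --copy-unsafe-links) if symlinks on fileserv should be followed
rsync -avz -e ssh backupuser@10.0.0.7:/home/ /cygdrive/d/backups/fileserv-home/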
When rsync has finished the update (or in the meantime), I need to move the updated files to a different location, e.g. a directory named with date +%Y%m%d or something similar. The reason is that, because of development, I need the modified files (all of them, not just the latest version), so I have to store them daily. But I don't want to store the whole directory, just the few files that were updated. Does that make sense?
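One rough way to get just the day's changed files set aside, assuming rsync 3.x (older releases call --out-format by its old name, --log-format), is to capture the names rsync actually transferred and copy those into a date-stamped directory. A hypothetical sketch with invented paths:

Code:
#!/bin/bash
# hypothetical sketch (not the actual script): mirror as usual, but also keep
# a dated copy of every file rsync actually transferred today
SRC=user@fileserv:/home/
MIRROR=/backup/current
DATED=/backup/changed/$(date +%Y%m%d)
mkdir -p "$DATED"

# --out-format='%n' prints the relative path of each transferred item
rsync -a --out-format='%n' "$SRC" "$MIRROR"/ |
while IFS= read -r name; do
    # rsync also lists directories, so only copy regular files
    if [ -f "$MIRROR/$name" ]; then
        ( cd "$MIRROR" && cp -a --parents -- "$name" "$DATED/" )
    fi
done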
I am using rsync to back up dirs on my Ubuntu server onto a NAS (which is mounted onto the filesystem), but the problem is that it is constantly doing full backups rather than incrementals, and I am not really sure why. After doing a bit of experimenting with the script I noticed that if I just backed up a home dir (/home/user) the incremental backups work fine. If, however, I back up a dir like /home/domain/user, it always does full backups. I have tried various different scripts but still get the same end result. The latest script is a variation on a script found on the Samba rsync examples webpage, see below...
#!/bin/bash
# rsyncbu.sh -- backup to NAS using rsync
# This script backs up the files listed in BDIR to BSERVER. The verbose
# output, along with the date, is written to the LOG_FILE specified.
# verbose output
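# --- hypothetical continuation: the rest of the script did not survive the
# --- paste, so the variable values below are guesses, not the poster's own
VERBOSE="-v"
BSERVER=/mnt/nas/backup                 # mounted NAS target (guess)
BDIR="/home /etc /var/www"              # directories to back up (guess)
LOG_FILE=/var/log/rsyncbu.log

echo "Backup started: $(date)" >> "$LOG_FILE"
for dir in $BDIR; do
    # -a keeps times/permissions so later runs only send changed files;
    # --delete mirrors removals onto the NAS copy
    rsync -a $VERBOSE --delete "$dir" "$BSERVER/" >> "$LOG_FILE" 2>&1
done
echo "Backup finished: $(date)" >> "$LOG_FILE"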
With the --backup and --backup-dir= options on rsync, I can tell it another tree where to put files that are deleted or replaced. I'm hoping it fills out that tree with a replica of the original directory paths (at least for the files put there), or else it's a show stopper. What I want to find out applies when I'm restoring files. Assume that each time I run rsync (once a day) I make a new directory tree (named by the date) for the backup directory. For each file name/path in the tree, I would start with whatever is in the main tree (the rsync target) and work through the incremental trees going backwards until I reach the date of interest to restore to. If along the way I encounter a file in an incremental, I would replace the previous file at that path with this next one. So by the time I get back to a given date, I should have the version of the file which was present at that date. Do this for each file in the tree and it should be a full restore.
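As a concrete illustration of that scheme (paths here are invented), a daily run along these lines keeps the main mirror current while replaced and deleted files land in a per-date tree; rsync does recreate the relative directory paths underneath the --backup-dir, so the increments mirror the original layout:

Code:
#!/bin/sh
# hypothetical daily job for the scheme described above
TODAY=$(date +%Y-%m-%d)
rsync -a --delete \
      --backup --backup-dir=/backup/increments/$TODAY \
      /data/source/ /backup/current/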
But... and this is the hard part, it seems. What about files that did not exist at the intended restore date, but do exist (were created) on a date after the intended restore date? For a correct restore I would want such files to be absent in the restored tree (just as they were absent in the source tree on that date). How can such a restore be done to correctly exclude these files? Wouldn't rsync have to store some kind of sentinel that indicates that on prior dates the file did not exist? I suspect someone might suggest I just make a complete hard-linked replica tree for each date, so that absent files will clearly be absent. I can assure you this is completely impractical, because I have actually done it before. I ended up with backup filesystems with so many directories and inodes that it could take over a day, maybe even days, just to do something like "du -s" on them. I intend to keep daily changes for at least a couple of years, if not more. That means the 40-million-plus files would be multiplied by over 700, making programs like "du -s" check over 28 BILLION file names (and that's assuming the number of files does not grow over the next two years).
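A restore of the kind described, starting from the current mirror and overlaying the increment trees from newest back to the target date, could be sketched roughly as below (it assumes the dated layout from the previous example, and, as the post points out, it cannot remove files that were created after the target date):

Code:
#!/bin/bash
# rough sketch only; increments are assumed to be named YYYY-MM-DD
TARGET=2011-03-01                      # example date to restore to
cp -a /backup/current /restore/work    # start from the present state

# each increment holds the versions replaced or deleted on that day, so
# overlaying them newest-first rolls the tree back through time
for inc in $(ls -1 /backup/increments | sort -r); do
    [[ "$inc" > "$TARGET" ]] || break  # only roll back increments newer than the target
    cp -a /backup/increments/"$inc"/. /restore/work/
done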
rsync -r -v -e ssh root@nn.nn.nn.nn:/usr/local/websites/* /usr/local/websites

Each time I run it, it copies everything (all files). I thought rsync was only supposed to copy files that had been added or modified.
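A guess about the command above: with only -r, rsync does not preserve modification times on the destination, so its quick check (size plus mtime) fails on the next run and everything is sent again. Using -a (which implies -t) normally makes later runs incremental, and a trailing slash on the source covers dotfiles that the shell glob misses:

Code:
# -a = -rlptgoD: recurses and keeps times/permissions, so unchanged files are skipped
rsync -av -e ssh root@nn.nn.nn.nn:/usr/local/websites/ /usr/local/websites/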
Whenever I transfer large files using cp, mv, rsync or Dolphin, the system slows down to the point that it's unusable. It will sometimes hang completely and not accept any input until the file is transferred. I have no clue what could be causing this problem, but I know it shouldn't be happening. I am using Fedora 15 (2.6.40.3-0.fc15.x86_64) with KDE 4.6. I have a Phenom II 955 processor and 6 GB of system RAM, and the OS and swap file are on an 80 GB SSD. Copying files on the SSD doesn't cause any problem, but moving files between my other two large HDDs causes the extreme slowdown. Using htop I can see that my system load jumps to between 3 and 4, but my RAM and CPU usage stay low during the transfer. Here are two commands that take about 10 minutes to run and make the system unusable while they're running. They usually transfer around 2-20 GB of data:
cp -a /media/data.1.5/backup/Minecraft/backups/* /media/data.0.5/backup/Minecraft/backups/
rsync -a /media/data.1.5/backup/ /media/data.0.5/backup/

/media/data.1.5/ is the mount point for a 1.5 TB internal SATA drive, and /media/data.0.5/ is the mount point for a 500 GB internal SATA drive.
I received the following output from an rsync (3.0.0) command that was executed:

sending incremental file list
sent 77214 bytes  received 484 bytes  155396.00 bytes/sec
total size is 254531170  speedup is 3275.90

What does "sending incremental file list" mean?
I'm trying to set up rsync to only copy new songs from my computer to another. I'm using the "--ignore-existing" argument, but it appears to copy all files anyway. The client (source) is Windows 7 64-bit running DeltaCopy Client and the server (destination) is Synology DS410 (running rsyncd).
I installed Cygwin with rsync on a Win XP machine. My goal is to back up a folder from one hard drive to another (both on the XP machine).
I run the following command from a batch file:
It works fine except that the --delete flag is not working: it copies everything from the source to the destination, but doesn't delete some extra files that are present on the destination but aren't on the source, which it's supposed to. I looked at the rsync man page, and I'm doing everything right, such as not using a wildcard.
The same command works perfectly on another computer (XP machine; source and destination both on the XP machine).
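For comparison, an illustrative form of a command that does honor --delete between two local drives (these paths are made up, not the actual batch-file line) looks like this; trailing slashes matter, since --delete only removes extras inside directories that rsync is actually comparing:

Code:
# illustrative only; source and destination paths are invented
rsync -av --delete /cygdrive/c/source/ /cygdrive/d/backup/source/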
For some time now I have been making back-up copies of my DVDs using dvdrip. Lately I tried to back up the Iron Man 2 DVD and was surprised to see so many titles under the Table of Contents: 93 titles, all of which run 2 hours plus. So I ripped the recommended one, which is the longest-running title. Surprisingly, the output is all screwed up, meaning scene one is at the end along with some scenes from the middle of the film, and so on. Is there any way to correctly choose which of the 93 titles is the right one? Obviously I can't go through the motions of doing it one by one. Or would it be more practical to just copy the whole DVD image and burn it to another DVD instead of making a HD back-up?
I want to take a graphics file and make 10 copies of it in the same directory, each with 001, 002, or some such designation at the end of the file name so they have discrete file names. Is this possible using cp?
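cp on its own won't number the copies, but a small shell loop around it will; a sketch with a made-up file name:

Code:
# creates picture_001.png through picture_010.png in the same directory
for i in $(seq -f '%03g' 1 10); do
    cp picture.png "picture_$i.png"
done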
How do you get rsync to do incremental backups rather than full backups? At the moment I have a script that will create a backup folder (if it doesn't already exist) and then copy the source files into the backup directory with the command
Target is where the files will be backed up to.
Sources is the dir(s) to be backed up.
Exclude files is the list of files not to back up.
Log file is where the output will be saved to.

At the moment it only does full backups, but I would like to do only incremental ones. How would this be achieved? Am I missing an option in the rsync command that is required?
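Hard to say without the actual command, but the usual way to make this incremental with that set of parameters is archive mode plus --link-dest pointing at the previous run, so unchanged files are hard-linked instead of re-copied. A hypothetical sketch; the variable names simply stand in for the items listed above:

Code:
#!/bin/bash
# hypothetical sketch; TARGET, SOURCES, EXCLUDE_FILE and LOG_FILE are assumed
# to hold the values the post describes
TODAY="$TARGET/$(date +%Y-%m-%d)"
mkdir -p "$TODAY"
rsync -a --delete \
      --exclude-from="$EXCLUDE_FILE" \
      --link-dest="$TARGET/latest" \
      $SOURCES "$TODAY/" >> "$LOG_FILE" 2>&1
ln -nsf "$TODAY" "$TARGET/latest"     # remember this run for the next one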
I'm making my own yum repository: firstly so that all the machines I administer can be updated via the internal network, secondly so that I can test any updates on a spare machine before passing them on, and thirdly so that I can add my own repo for internal software.
I've created the necessary folders under my webserver, and used rsync to update them from my local CentOS mirror, following the instructions at [URL]
I notice it says to run "createrepo" on the base repository, created by copying the rpms from the release DVD.
When I rsynced the updates repo, I noticed that the files in repodata are very small. In fact, having a look inside them, filelists.xml contains no file details. But if I run createrepo in the updates directory, filelists.xml gets lots of file details inside it.
I wondered if maybe the local mirror hadn't been updated properly, but checking against mirror.centos.org shows that has the same files.
How does the (real, live, CentOS) updates repo work when there is nothing in the filelists?
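For reference, whatever the mirror's repodata contains, the maintenance sequence implied above (shown here only as a generic guess, not the exact commands from the linked page) is to sync the packages and then rebuild the metadata locally:

Code:
# paths are placeholders: mirror the packages, then regenerate repodata locally
rsync -av --delete /path/to/local/mirror/updates/ /var/www/html/repo/updates/
createrepo --update /var/www/html/repo/updates/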
I have a program that is complete under Linux; it requires the tidy, PCRE and libcurl libraries, which can be found in Cygwin too.
I can compile my Linux program through Cygwin and produce an EXE file; however, it requires 'cygwin1.dll' to be installed by the users.
I am wondering if there is some way to produce a stand-alone EXE file that can run independently of Cygwin? (I don't mind combining that cygwin1.dll and the EXE together into a larger EXE file.)
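Not a definitive answer, but one common route is to build with the MinGW-w64 cross compiler that Cygwin packages (rather than Cygwin's own gcc), which produces a native Windows EXE with no cygwin1.dll dependency, provided tidy, PCRE and libcurl are also available as MinGW builds. The package and path names below are from memory and may differ by Cygwin version:

Code:
# hypothetical example: cross-compile from inside Cygwin with MinGW-w64
# (the mingw64-x86_64-gcc-core package provides this compiler)
x86_64-w64-mingw32-gcc -o myprog.exe myprog.c \
    -I/usr/x86_64-w64-mingw32/sys-root/mingw/include \
    -L/usr/x86_64-w64-mingw32/sys-root/mingw/lib \
    -ltidy -lpcre -lcurl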
I'm syncing a server over the internet with rsync, but it only works for a few hours before the backup fails with a "No route to host". I can restart the job and it will pick up where it left off, but is there an automated way to do this, or to protect against a connection failure? I have about 170 GB to copy over initially, but I can only get through about 4-5 GB before the connection drops; manually restarting the sync every time it drops will make the initial backup take days...
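A common workaround is to let rsync keep partial files and wrap it in a retry loop, so a dropped connection just resumes instead of killing the whole job. A minimal sketch, with an invented host and paths:

Code:
#!/bin/bash
# retry until rsync exits cleanly; --partial keeps half-transferred files so
# each retry resumes roughly where the previous attempt stopped
until rsync -az --partial --timeout=60 -e ssh \
        user@remote.example.com:/data/ /backup/data/; do
    echo "rsync dropped (exit $?), retrying in 60 seconds..." >&2
    sleep 60
done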
I was wondering if there is a way to tell rsync to apply changes (delete, overwrite, create) only if all files in the file list transferred successfully. Just to clarify, this would essentially be putting a transaction around the transfer.
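rsync has no true transaction mode. --delay-updates gets part of the way (transferred files are staged and only renamed into place at the end of the run), and pulling into a staging tree first, then applying it locally only on success, approximates the rest. A rough sketch with invented paths:

Code:
#!/bin/bash
# approximation only, not a real transaction
# step 1: pull everything into a staging tree; the live copy is untouched
rsync -a --delete user@source.example.com:/data/ /srv/staging/ || exit 1
# step 2: only if step 1 succeeded, fold the changes into the live tree
rsync -a --delete /srv/staging/ /srv/live/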
I have a tiny shell script to rsync files between two servers and remove the source files.
This script works fine when it is initiated manually, or even when the rsync command is executed directly on the command line.
But the same script doesn't work when I try to automate it through crontab.
I am using the 'abc' user to execute this rsync instead of root, as root logins are restricted on all of our servers.
As I mentioned earlier, manual execution works like a charm!
When rsync.sh is initiated through crontab, it runs the first command (chown abc.abc ...) perfectly, without any issues. But the second line is not executed at all, and there is no log entry I can find at /mnt/xyz/folder/rsync.log.
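Without seeing the script this is only a guess, but the usual culprits under cron are its minimal environment (PATH, no ssh agent) and relative paths. Spelling everything out with absolute paths and capturing stderr, roughly as below (the script path, key path and destination host are examples, not the real ones), usually shows what is going wrong:

Code:
# crontab entry for user 'abc'; script path is an example
0 2 * * * /bin/bash /home/abc/rsync.sh >> /mnt/xyz/folder/rsync.log 2>&1

# and inside rsync.sh, use absolute paths and an explicit key
# ('destserver' is a placeholder hostname):
/usr/bin/rsync -av -e "/usr/bin/ssh -i /home/abc/.ssh/id_rsa" \
    /mnt/xyz/folder/ abc@destserver:/mnt/xyz/folder/ >> /mnt/xyz/folder/rsync.log 2>&1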
A complete backup using tar consumes a lot of time, so is there any way to take incremental backups using tar? I also want to take an incremental backup dump of my databases. Any suggestions and links will be very helpful. I keep googling for this, but couldn't find anything exact.
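GNU tar can do incrementals with --listed-incremental: it records file state in a snapshot file and, on later runs with the same snapshot, only archives what has changed. A sketch with example paths:

Code:
#!/bin/bash
SNAP=/backup/home.snar                      # snapshot metadata kept between runs
DEST=/backup/home-$(date +%Y%m%d).tar.gz

# the first run with a fresh .snar file is a full backup; subsequent runs
# with the same .snar file only archive files changed since the last run
tar --listed-incremental="$SNAP" -czf "$DEST" /home

# databases are best dumped first and the dump included in the backup, e.g.
# (MySQL shown purely as an example):
# mysqldump --all-databases | gzip > /backup/db-$(date +%Y%m%d).sql.gz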
I've got a server running CentOS 5.5. I used the automated iptables config tool included in the operating system to allow traffic for vsftpd, Apache and UnrealIRCd. When I send large files to FTP, even from the local network, it works fine for a while and then completely times out... on everything. IRC disconnects, FTP can't find it, and when I try to ping it I get "Reply from 10.1.10.134: Destination host unreachable", where .134 is the host address for the Win7 box I'm pinging from. This is especially frustrating as it's a headless server, and as I can't SSH into it to reboot, I'm forced to resort to the reset switch on the front, which I really don't like doing.
Edit: the timeouts are global, across all machines both on the local network and users connecting in from outside.
I'm using Ubuntu 10.04 LTS server and Postgresql 8.4. I have a .sh script that is run by cron every other hour. That works fine. The .sh script includes an rsync command that copies a postgresql dump .tar file to a remote archive location via ssh. That fails when run by cron; I think because it is (quietly) asking for the remote user's password (and not getting it). I set up the public/private ssh key arrangement. The script succeeds when run manually as the same user that the cron job uses, and does not ask for the password. I am able to ssh to the remote server from the source server (using the same username) and not get the password prompt (both directions), so why doesn't rsync work? I even put a .pgpass file in the root of that user's directory with that user's password, and the user/password are identical on both servers.
I think the problem is rsync is not able to use the ssh key correctly. I tried adding this to my script but it didn't help.
Code:
Here is the rsync command embedded in the .sh script.
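The command itself did not survive the paste. For comparison only, a typical invocation that pins rsync to a specific key and refuses to prompt (paths and host invented) would be:

Code:
# illustrative only, not the actual command from the script
rsync -az -e "ssh -i /home/backupuser/.ssh/id_rsa -o BatchMode=yes" \
      /var/backups/pg_dump.tar backupuser@archive.example.com:/archive/pgsql/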
I've had several HDD crashes on my personal server over the years and it's just gotten to be a real pain in the rear. Crashed again this morning. Currently, I make monthly tarball backups of the entire filesystem using my script:
Code:
#!/bin/sh
# Removes the tarball from the previous execution.
rm -rf /backup/data/*.tar.gz
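# --- hypothetical continuation: the rest of the script was not included in
# --- the post, so the paths and exclude list below are guesses.
# Create a dated tarball of the whole filesystem, skipping pseudo-filesystems
# and the backup area itself so the archive doesn't swallow its own output.
tar -czpf /backup/data/full-$(date +%Y%m%d).tar.gz \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/tmp --exclude=/backup /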
I am trying to run two different copies of the vsFTPd service on the same server, one for IPv4 and the other for IPv6, because as far as I know you can't run one vsFTPd instance for both IPv4 and IPv6 at the same time.
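For reference, the usual pattern (file names here are examples) is two config files that differ only in their listen directives, each started as its own instance:

Code:
# /etc/vsftpd/vsftpd-ipv4.conf contains:   listen=YES
#                                          listen_ipv6=NO
# /etc/vsftpd/vsftpd-ipv6.conf contains:   listen=NO
#                                          listen_ipv6=YES
# then start one instance per config file:
/usr/sbin/vsftpd /etc/vsftpd/vsftpd-ipv4.conf &
/usr/sbin/vsftpd /etc/vsftpd/vsftpd-ipv6.conf &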
I've seen how to do this from one native Linux server to another; not a problem. My question has to do with the ssh/scp key exchange between a Windows Cygwin server and a Linux server. There seems to be no /home/root/... to hold the key exchange files. I've tried this between a Cygwin server with a /home/administrator/... subdirectory and the /root subdirectory on the Linux server. Is this how I should do it? Someone else set this up between these two servers earlier but forgot to document how it was done in his notes. I don't want to break the existing systems by setting up the key generation incorrectly on the functioning pair of servers.
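As far as I can tell, the key files simply live under each account's home directory, so on the Cygwin side that is /home/administrator/.ssh and on the Linux side /root/.ssh (root's home is /root, which is why there is no /home/root). A generic setup using the accounts mentioned in the post would be:

Code:
# on the Cygwin box, as 'administrator' (keys end up in /home/administrator/.ssh)
ssh-keygen -t rsa
# copy the public key to the Linux server's root account
# ('linuxserver' is a placeholder hostname)
ssh-copy-id root@linuxserver       # or append ~/.ssh/id_rsa.pub to
                                   # /root/.ssh/authorized_keys by hand
# test: this should now log in without a password prompt
ssh root@linuxserver hostname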
I'm using Postfix and the always_bcc option to back up the emails that are passed through my MTA. The problem is that (with SpamAssassin and ClamAV running as virtual SMTP agents) I get three copies bcc'ed through to the backup account. Is there something I can do to stop the bcc from being applied by the internal filters and have it only apply on the final send or mailbox delivery?
CentOS 5.4, Cygwin CYGWIN_NT-6.0-WOW64 1.7.5(0.225/5/3). I'm trying to set up password-less login from my CentOS server to a Win 2008 Server via ssh. I have followed the fab walk-through here and many others. When I try to connect I get this message after a few seconds' delay...
Code:
Connection closed by 10.8.0.6

When run with ssh -vv...
I have a 64-bit Linux server with 5 virtual hosts on it. When someone fills out a contact form on one of the sites, I get 15-20 copies of the same email. At first I thought it was the kids clicking send multiple times, because the first emails were coming from the children's ministry "Email The Cast" section. But then I started getting multiples from the adult sites too. All contact forms are set to come to me.
What's stranger is that my registration section for one of the sites uses the SAME php script (different file) to email me a notification that someone has registered, but I only get 1 copy of that.