Programming :: Shell Script To Copy Newly Changed Files With Rsync?
Apr 20, 2010
I've got quite a decent rsync script set up; however, I'd like to invoke it whenever there's a change to a file. My initial idea was to use find, but this has two major flaws: the first is that my particular unix variant can't understand -print0, which means this doesn't work; the second is that I'm not 100% sure how to put variables into quotation marks so ls can understand the target:
for i in `find /shares/ -mtime -1 -print`; do ls -ltr $i;done
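A hedged sketch that sidesteps both problems without -print0, assuming a find that supports the POSIX -exec ... {} + form; letting find hand the names to ls directly means no word splitting and no quoting headaches:

Code:
# List yesterday's modified files; find passes each path to ls itself,
# so names with spaces survive intact.
find /shares/ -mtime -1 -type f -exec ls -ltr {} +
# On very old finds that lack '{} +', the slower '{} \;' form works.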
During my backups I'm finding that rsync is copying all files, instead of just what's changed.
I'm rsyncing between 2 USB external hard drives. One hard drive is FAT32 and the other is NTFS. I've examined some of the files and believe the difference is that a 1-second modtime discrepancy is somehow developing in some of the files.
Here's an example. These duplicity files were synced from /media/BACKUPHD (the NTFS drive) to /media/VIDEOHD (the FAT32 drive) only a few hours ago this morning. They have not been touched or changed since then, but that 1-second difference in their time stamps has appeared:
Code:
tim@localhost:~> stat /media/BACKUPHD/backups/duplicity/duplicity-full.20110107T145955Z.vol10.difftar.gpg
  File: `/media/BACKUPHD/backups/duplicity/duplicity-full.20110107T145955Z.vol10.difftar.gpg'
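If the drift comes from FAT32's coarse timestamp resolution (FAT stores modification times in 2-second steps), rsync's --modify-window option may be the fix; a sketch using the paths from the post:

Code:
# Treat modification times differing by at most 1 second as equal,
# so rounding on the FAT32 side no longer triggers full re-copies.
rsync -av --modify-window=1 /media/BACKUPHD/backups/ /media/VIDEOHD/backups/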
I'm using rsync to create a mirror of the data files on our main server every day. I've looked at the man page and can't see it: can I get a listing of the files that have been changed on or added to the mirror when it's completed? Can it just log what it's doing to a file?
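Yes on both counts, as far as I know: -i itemizes every change and --log-file records the run (the source and mirror paths below are placeholders):

Code:
# -i (--itemize-changes) prints one line per created/updated file;
# --log-file keeps the same record in a log for later review.
rsync -avi --log-file=/var/log/mirror.log /srv/data/ /srv/mirror/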
I'm wondering how rsync will handle disk images. Will rsync copy only the changed blocks of a VHD or a LUN? This is what I've been told, but wouldn't that require overlaying a filesystem on the VHD? How would rsync handle copying a 500GB LUN?
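For what it's worth, a hedged note: rsync's delta algorithm works on raw file contents, so no filesystem overlay is needed; over a network only changed blocks are sent, but both sides must still read and checksum the whole 500GB. A sketch (the image path and host are made up):

Code:
# --inplace updates the image block-by-block instead of rebuilding a
# temp copy; --no-whole-file forces delta transfer even locally.
rsync -av --inplace --no-whole-file /vm/disk.vhd user@backuphost:/vm/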
I have recently purchased an external hard drive in order to back up my home partition. In my PC I have a "1.5T" drive with several partitions on it, containing OSes and the home partition. The home partition is 1.3T according to df; the external drive contains one partition that spans the entire disk, which df reports as 1.4T in size. Both partitions are ext3. When I use rsync to copy files from the home partition to the external partition, the external disk becomes full, despite the destination supposedly being larger than the source. I don't understand why copying files from one partition to a slightly bigger partition should need more space than on the source partition. Does anyone know what is happening?
Details: I created the partition on the external drive with gparted; gparted reported that it already had several gigabytes of used space immediately after the partition's creation. I thought at the time that this must be normal. The home partition contains many files of all sorts, including lots of big audio and video files. If you are wondering: for all my important files this external disk is only a secondary backup, as they are also backed up to the "internet".
These are the mount points:
/mnt/tmp/ : home partition, /dev/sdb6
/mnt/external/ : external partition, /dev/sdc1
I used rsync to copy the files; I know there are more efficient ways to do this, but I wanted to use the same command that I will subsequently run to sync the backup.
Next I tried adding the --sparse switch, as I was wondering if the problem might come from sparse files. I don't know, however, whether rsync would go back and shrink the sparse files just by adding the switch and re-running the command. I also added --one-file-system, for good measure. Here is what I ran next:
Code:
rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
rsync: write failed on "abcd.avi": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(302) [receiver=3.0.6]
Looking at the destination after a partial copy seems to indicate that the problem is not symbolic links being "expanded". I have not checked the source filesystem for sparse files, nor the destination to see if these files could be larger there, as this does not seem trivial.
Here is some additional info:
Code:
$ df /mnt/tmp/
Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sdb6      1415342836 1414173740     369096 100% /mnt/tmp
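Two hedged checks that might explain the missing space, assuming GNU tools: ext3 reserves about 5% of blocks for root by default (roughly 70 GB on a 1.4T partition, which would also explain the "used" space gparted showed right after creation), and sparse source files occupy more room when copied without --sparse:

Code:
# Show ext3's reserved-block count on the external partition; this
# space is invisible to ordinary users and shrinks df's "Available".
tune2fs -l /dev/sdc1 | grep -i 'reserved block count'
# Compare apparent vs on-disk size of the source; a large gap
# suggests sparse files that grow when copied non-sparsely.
du -sh --apparent-size /mnt/tmp/
du -sh /mnt/tmp/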
On my Ubuntu box, I have a mounted Windows share connected via gvfs called graphics. I want to back up everything on a nightly basis from graphics to backupserver/graphics. If I use rsync, it will not copy files that have parent directories with funky characters in their names (but the directories themselves do get copied!). Everything else gets rsynced just fine.
graphics/test/macdir/picture.psd ...when rsynced over to ... backupserver/graphics/
gives the error:
Code:
rsync: mkstemp "/home/administrator/.gvfs/drobo on x.x.x.x/linux_backups/graphics/test/macdir/.picture.psd" failed: Operation not supported (95)
The directory macdir gets created but there is nothing in it. This happens for all files underneath dirs with funky names. cp -Rf works perfectly! Directories and child files all get copied over no matter how strange the characters in the directory names get.
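One hedged workaround: mkstemp failures on gvfs mounts often come from rsync's temporary-file strategy, which cp never uses; --inplace writes straight to the destination file, and --temp-dir=/tmp would instead put the temp files on a filesystem that supports mkstemp:

Code:
# --inplace skips the ".picture.psd"-style temp file whose creation
# gvfs rejects, writing directly to the destination file instead.
rsync -av --inplace graphics/ "/home/administrator/.gvfs/drobo on x.x.x.x/linux_backups/graphics/"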
I am trying to create a simple bash script to rsync some folders within a directory structure. I am using wildcards in the rsync source directory structure, but my command always fails. I believe it is the way I am using wildcards within my for loop. Here is my command:
for seq in `cat test.txt`; do rsync -nvP /folder/folder/folder/folder/folder/**/$seq /folder/folder/folder/; done
This always fails, whereas if I do an ls on the destination to test the path, it always works.
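A likely culprit, offered as a guess: in bash, ** only recurses when the globstar option is enabled (bash 4+); otherwise it is treated as a plain *, which matches just one directory level. A sketch:

Code:
#!/bin/bash
# globstar makes ** match any depth of directories (bash 4+).
shopt -s globstar
while read -r seq; do
    # Quote only $seq: the unquoted ** part still globs, while the
    # sequence name is protected from word splitting.
    rsync -nvP /folder/folder/folder/folder/folder/**/"$seq" /folder/folder/folder/
done < test.txt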
Using C++, I want to sequentially process sub-folders in my home folder, each with a special naming format and containing some binary files:
Code: 1/ 2/ 3/ 4/ 5/ 6/ ...
Given the above folders, I will process the files in 1/ first, 2/ second, 3/ third, and so on.
For some folder n/, if I realize that n/ does not actually exist in the local file system, I do not want to wait for it; I will just keep processing folder (n+1)/, and so on.
However, when processing some (n+m)/ folder, previously not processed n/ folder may have been created on local file system. In this case, I do not want to miss processing it, but somehow detect its creation and process it. After processing n/ folder, I want to continue from (n+m+1)/.
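The question asks for C++, but the revisit logic itself is small enough to sketch in shell, the language of the rest of this thread; process_folder is a placeholder for the real binary-file processing:

Code:
#!/bin/bash
process_folder() {
    # Placeholder: replace with the real processing of binaries in $1.
    echo "processing $1"
}
missed=()   # numbers of folders that did not exist when their turn came
for n in $(seq 1 100); do
    # Before each new folder, retry the ones that were missing earlier.
    still=()
    for m in "${missed[@]}"; do
        if [ -d "$HOME/$m" ]; then
            process_folder "$HOME/$m"
        else
            still+=("$m")
        fi
    done
    missed=("${still[@]}")
    if [ -d "$HOME/$n" ]; then
        process_folder "$HOME/$n"
    else
        missed+=("$n")
    fi
done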
Well, my drive has errors such that I can't mount it using a GUI, but it will mount and let me see my files using the shell (or whatever it is that I'm using while in recovery mode). I have managed to change my directory to the one that has ALL of the files that I want and need to copy. I was wondering: what's the easiest way to copy ALL of the files, which are around 20 GB, onto my 500 GB external HDD? And how will I know everything is done copying?
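A hedged sketch, assuming the external drive is mounted somewhere like /media/external (adjust both paths to taste):

Code:
# -a preserves modes and timestamps, -v prints each file as it copies.
cp -av /path/to/rescued/files/ /media/external/rescue/
# To confirm nothing was missed, a checksum-based rsync dry run
# prints only files that are absent or differ; silence means done.
rsync -rcn --out-format='DIFFERS: %n' /path/to/rescued/files/ /media/external/rescue/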
I am trying to install rsync on my newly installed Debian (booting from SD). I ran the command apt-get install rsync and I got the following error:

Code:
WARNING: This version of glibc requires that you be running kernel version 2.6.12 or later. Earlier kernels contained bugs that may render the system unusable if a modern version of glibc is installed.
The installation of a 2.6 kernel _could_ ask you to install a new libc first, this is NOT a bug, and should *NOT* be reported. In that case, please add etch sources to your /etc/apt/sources.list and run: apt-get install -t etch linux-image-2.6
Then reboot into this new kernel, and proceed with your upgrade
dpkg: error processing /var/cache/apt/archives/libc6_2.7-18lenny1_arm.deb (--unpack): subprocess pre-installation script returned error exit status 1
INIT: version 2.86 reloading
Errors were encountered while processing: /var/cache/apt/archives/libc6_2.7-18lenny1_arm.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

When I ran uname I found out the kernel version of my Debian is 2.4.26. Then I ran apt-get install -t etch linux-image-2.6, and I get this error:

Code:
Reading Package Lists... Done
Building Dependency Tree... Done
Package linux-image-2.6 is a virtual package provided by: linux-image-2.6.26-2-orion5x 2.6.26-21
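A virtual package cannot be installed by name, so the usual next step (just a hedged guess given the output above) is to ask for the concrete package that provides it:

Code:
# Install the real kernel package listed as the provider of the
# linux-image-2.6 virtual package on this ARM (orion5x) system.
apt-get install linux-image-2.6.26-2-orion5x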
I am a novice at shell scripting. On my system, the db server generates log files with the names log1.txt, log2.txt, and so on. It keeps 10 files at a time in a dir, e.g. /db/sis/log1.txt. I want to copy log1.txt to another directory whenever it is generated, attaching a time stamp to it for backup purposes. These files will be there for a period of 24 hours; after that the backup dir should be cleared, and copying of the fresh file from the same dir should start again.
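A minimal sketch, assuming the backup directory is /db/backup (any of these paths may need adjusting; -delete is a GNU find option):

Code:
#!/bin/bash
src=/db/sis/log1.txt
bakdir=/db/backup
# Copy the log with a timestamp suffix, e.g. log1.txt.20100420-1530.
cp "$src" "$bakdir/$(basename "$src").$(date +%Y%m%d-%H%M)"
# Clear out copies older than 24 hours.
find "$bakdir" -type f -mtime +0 -delete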
I want to write a shell script to copy database archivelog files sequentially from one directory to another within a server. I am enclosing a sample archivelog file name. Archive log filename: log_0000118432_1.arc (here the number 0000118432 is incremented by 1 for the next filename). The catch is that all archivelogs must be copied to the destination directory, and previously copied files must not be copied again.
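A hedged sketch that remembers the last sequence number it copied in a small state file, so previously copied logs are skipped (all three paths are assumptions):

Code:
#!/bin/bash
src=/oracle/arch          # where log_0000118432_1.arc etc. appear
dst=/oracle/arch_backup   # destination directory
state=/var/tmp/last_arch_seq
last=$(cat "$state" 2>/dev/null || echo 0)
for f in "$src"/log_*_1.arc; do
    # Field 2 of the name is the zero-padded sequence number;
    # 10# forces base-10 so leading zeros aren't read as octal.
    num=$((10#$(basename "$f" | cut -d_ -f2)))
    if [ "$num" -gt "$last" ]; then
        cp "$f" "$dst"/ && last=$num
    fi
done
echo "$last" > "$state"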
I need to create a script that will compare the differences between two folders and then copy only the updated and new files to another directory. I know I need to use rsync here; I can write scripts, so it is not really how to create a script, it is how to accomplish the transfer of only new or changed files between two folders to a new location. Do I need to link these two folders first and then use the "--compare-dest" switch?
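No linking is needed first; --compare-dest takes the reference folder directly (the three paths below are examples):

Code:
# Files identical to those under /path/to/reference are skipped, so
# only new or changed files from /path/to/source land in /path/to/out.
rsync -av --compare-dest=/path/to/reference/ /path/to/source/ /path/to/out/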
I'm going to make a nightly backup copy from one server to another, using rsync. If I have a sufficiently large file, say 4+ GB or so, I'm not interested in copying the whole file if only a small change has been made. Can rsync detect small changes at the block level and back up only those if needed?
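A hedged note: over the network rsync's delta-transfer algorithm is on by default, so only changed blocks cross the wire, although both ends still read the whole file to checksum it. A sketch (file path and host are placeholders):

Code:
# Only the changed blocks of the 4+ GB file are transferred; --inplace
# updates the existing destination file instead of rebuilding it.
rsync -av --inplace /data/bigfile.img backup@otherserver:/nightly/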
I just made an ext4 partition with the help of gparted. Ubuntu is my only OS, no dual boot; I'm using Ubuntu Maverick. The problem is that the partition must be opened as root to do any work, or else it won't even allow me to open files, create folders, cut/copy/paste, or anything.
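A common fix, sketched under the assumption that the partition is mounted at /media/data: hand the mount point's ownership to your user once, after mounting:

Code:
# A fresh ext4 filesystem is owned by root; chown the mounted root
# directory so your normal user can create and edit files there.
sudo chown -R $USER:$USER /media/data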
I'm configuring an rsync between 2 machines, A_Server --> B_Server, using the following script:
Code:
#!/bin/bash
#
# Backup script via rsync from RMP-1 to RMP-2.
#
The rsync is working OK. What I need is to change one of the lines of /tmp/prueba.txt before sending it to the remote machine (obviously without changing the file on the local machine); I mean, send prueba.txt to the remote machine with one line deleted and another one added... how can I do this?
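A hedged sketch of one way: filter the file through sed into a temporary copy, send the copy under the original name, and delete it afterwards (the sed expressions are placeholders; the one-line `$a` form is a GNU sed extension):

Code:
#!/bin/bash
# Build a modified copy without touching the local original.
tmp=$(mktemp)
sed -e '3d' -e '$a this line is appended at the end' /tmp/prueba.txt > "$tmp"
# Send it to B_Server under the original name, then clean up.
rsync -av "$tmp" user@B_Server:/tmp/prueba.txt
rm -f "$tmp"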
I have been searching for a solution to the following problem:
When my distro of choice updates Firefox web browser, the directory name is '/usr/lib/firefox-<version>'. The problem here is that the directory name is dynamic by nature and doesn't allow a simple static solution, e.g. 'cp -rf /usr/local/files/bookmarks.html /usr/lib/firefox/defaults/profile'.
The same quandary applies when adding extensions, changing prefs, etc. I have looked at the following commands: find, sed, xargs, grep, awk, fprint. Unfortunately my grasp of syntax and programming is very basic at best.
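One hedged approach: resolve the versioned directory at run time with a glob, so the script keeps working after each Firefox update (sort -V, which orders version numbers, is a GNU extension):

Code:
#!/bin/bash
# Pick the highest-versioned firefox directory currently installed.
dir=$(ls -d /usr/lib/firefox-* | sort -V | tail -n 1)
cp -f /usr/local/files/bookmarks.html "$dir/defaults/profile/"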
What I want to do is create a script that will interpret the following string and save parts of its name into variables:
m02_+1+7_London_0000$01.cfg, where X=1, Y=7, and City=London (the +1+7 piece carries X and Y, and London is the City)
Then I want to copy all the files that share the same City, X, and Y into the same subfolder City/MX.Y. I will need some help to start doing that, and I think the first step would be to get parts of the filename strings into variables.
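A sketch using bash's built-in regex matching; the pattern below is derived only from the one sample name, so it may need widening:

Code:
#!/bin/bash
# Extract X, Y and City from names like m02_+1+7_London_0000$01.cfg,
# then sort each file into City/MX.Y/.
for f in m*_+*+*_*.cfg; do
    if [[ $f =~ ^m[0-9]+_\+([0-9]+)\+([0-9]+)_([^_]+)_ ]]; then
        X=${BASH_REMATCH[1]}
        Y=${BASH_REMATCH[2]}
        City=${BASH_REMATCH[3]}
        mkdir -p "$City/M$X.$Y"
        cp "$f" "$City/M$X.$Y/"
    fi
done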
I am trying to make a small kernel. I have written many programs and produced many .bin and .o files, but what I want is to load every file from a specific location into specific sectors, and I don't know how to do that in Linux; in DOS the same can be done with the debug command. If it is not possible to achieve the specific-location criterion, please tell me how I can just copy many files serially to a floppy image. I have another question: if files are copied to the floppy, how can I know in which sector a file has been loaded, so that I can retrieve it with BIOS interrupt INT 13h?
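dd can place a file at an exact sector of a floppy image, which also answers the retrieval question: you choose the sector, so you know what to ask INT 13h for (the file and image names are examples):

Code:
# Write kernel.bin starting at LBA sector 5 of the image; bs=512
# matches the floppy sector size and conv=notrunc preserves the rest.
dd if=kernel.bin of=floppy.img bs=512 seek=5 conv=notrunc
# On a 1.44MB floppy (18 sectors/track), LBA 5 is cylinder 0, head 0,
# sector 6 for INT 13h, since CHS sector numbers start at 1.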
I've been trying to sort this out for several hours and I'm totally lost. I've been searching around but haven't found the solution to my problem. I have a directory with 100 files. I need to copy 10 lines of each file (let's say from line 45 to 55) into one unique file. So I guess I could use sed's w command, but I didn't manage to write the right script. I also tried using a loop to create 100 different files (each one with the 10 lines) to concatenate them later on, but I only got 1 file, not 100.
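A minimal sketch: sed -n '45,55p' prints just that line range, and looping per file keeps the numbering per file rather than across the concatenated stream (the directory path is an example):

Code:
#!/bin/bash
# Append lines 45-55 of every file in the directory to one output file.
for f in /path/to/dir/*; do
    sed -n '45,55p' "$f"
done > combined.txt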