Ubuntu :: Rsync Copy Results In Incomplete File
May 26, 2011
Using Ubuntu Server 10.04 LTS. I'm new to Ubuntu and testing rsync. I successfully copied 3TB of data from a Win7 machine to an mdadm RAID5 array, using a Windows app for the copy; all appears to be fine. I then deleted a 250GB folder on the RAID5 array and recopied the data using rsync, executed via a PuTTY session from a WinXP machine. The source was an eSATA-attached drive (the same drive used for the big 3TB copy) and the destination was the same RAID5 array. That copied just fine; I bit-verified it with a Win7 app. Perfect.
I then used the following Rsync script to copy a single 26GB file from that same eSata drive back to (what I intended to be) the Raid5 array:
Code:
neil@ANTECUBSV:/mnt$ rsync -r -a -v -e ssh --delete /mnt/disk1/Test/ /mnt/Test
sending incremental file list
created directory /mnt/Test
./
C_VOL-S300-b001.spf
sent 3020267622 bytes received 34 bytes 50760800.94 bytes/sec
total size is 3019898880 speedup is 1.00
neil@ANTECUBSV:/mnt$ cd raid
Note that only about 3GB copied. No error messages were posted to the PuTTY session. I made a mistake in the rsync command, creating the Test folder directly in the /mnt folder rather than on the RAID array as I intended. That is a little strange, yes, but I wouldn't think it would cause a partial copy? The /mnt folder is on my system drive, which had about 34GB of available space before the copy, so it would comfortably have had 6GB or so left afterwards.
The eSata disk is mounted as /mnt/disk1
The Raid5 array is mounted as /mnt/raid
I then recopied the file to the correct intended destination on the Raid5 array, which has about 400GB free space (plenty).
Code:
neil@ANTECUBSV:/mnt/raid$ rsync -r -a -v -e ssh --delete /mnt/disk1/Test/ /mnt/raid/Test
sending incremental file list
./
deleting 2010-07-05 Backyard Birds/Thumbs.db
deleting 2010-07-05 Backyard Birds/
C_VOL-S300-b001.spf
sent 11105775462 bytes received 34 bytes 50366328.78 bytes/sec
total size is 11104419840 speedup is 1.00
neil@ANTECUBSV:/mnt/raid$ df -h
Note that only about 11GB was copied, confirmed with an ls -l command. Now I am copying the file to the correct intended destination on the RAID array, yet it is still incomplete. I then copied the file back to the /mnt folder to see if the problem would reproduce:
Code:
neil@ANTECUBSV:/mnt/raid$ rsync -r -a -v -e ssh --delete /mnt/disk1/Test/ /mnt/Test
sending incremental file list
created directory /mnt/Test
./
C_VOL-S300-b001.spf
sent 26327927554 bytes received 34 bytes 56558383.65 bytes/sec
total size is 26324713984 speedup is 1.00
neil@ANTECUBSV:/mnt/raid$ cd /mnt/test
This time I got my full 26GB file. Why might I be getting inconsistent results? This is quite troubling, of course. I'd also be interested in a basic command-line Linux diff app (one that does directory-level as well as bit-level comparison) if one is available.
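From reading around, something like this might cover both levels (a sketch; assumes GNU diffutils/coreutils and my paths above):
Code:
# compare two directory trees (reports missing and differing files)
diff -r /mnt/disk1/Test /mnt/raid/Test
# bit-level comparison of the two copies of one file
cmp /mnt/disk1/Test/C_VOL-S300-b001.spf /mnt/raid/Test/C_VOL-S300-b001.spf
# or checksum both copies and compare
md5sum /mnt/disk1/Test/C_VOL-S300-b001.spf /mnt/raid/Test/C_VOL-S300-b001.spf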
View 9 Replies
Oct 16, 2010
I have set up an FTP server on my Windows machine with FileZilla Server. Now, if I try to copy files from it using Ubuntu 10.04 (Lucid), it downloads incomplete files unless I switch to binary mode.
Is there some config issue on the Ubuntu client, or does something need to be changed on the Windows side?
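For reference, this is what forcing binary mode looks like with the stock command-line ftp client (a sketch; the host and file name are made up):
Code:
$ ftp 192.168.1.10
ftp> binary
200 Type set to I
ftp> get somefile.zip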
View 4 Replies
Dec 6, 2010
I'm having to rebuild my home server following a failed upgrade from 8.04 to 10.04. All data lives in /shared on this server, the contents of which are mirrored weekly to a USB HDD mounted at /backup, using rsync:
sudo rsync -av --progress --delete /shared /backup
I recovered the contents of the backup drive to the rebuild server's /shared directory using the cp command with the archive flag set to preserve ownership, timestamps etc. Everything looks fine to me. However, when I do a test rsync (adding -n to the command above) then it looks like rsync wants to recopy everything, and I'm at a loss to see why. For example, below is a test on the subdirectory /shared/backgrounds. The file attributes look identical:
Code:
chris@quadra:~$ ls -la /shared/backgrounds
total 8112
drwxr-xr-x 2 chris chris 4096 2009-04-12 11:06 .
[code]....
Is there some condition that rsync can detect that I can't see? Is rsync sensitive to the way the HDD is mounted? (The USB HDD is actually mounted at /media/backup, and /backup is a soft link to it.)
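One diagnostic I intend to try is a dry run with --itemize-changes, which prints a flag string per file showing exactly which attribute rsync thinks differs (a sketch, same paths as above):
Code:
# -n = dry run, -i = itemize; a 't' in the flags means times differ, 's' means size
sudo rsync -avni --delete /shared /backup | head -n 20
# if only timestamps differ, --modify-window=1 tolerates 1s of clock skew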
View 3 Replies
May 27, 2010
I have a newly installed Dell OptiPlex 755 with Ubuntu 10.04 (64-bit) and I am having serious issues with network copying. Whenever I start rsync or scp with larger amounts of data, my system becomes practically unresponsive (all types of apps grey out) until those processes complete. CPU stays below 10% and I have lots of free memory.
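From what I've read, lowering the copy's I/O priority or capping its bandwidth may keep the desktop responsive (an untested sketch; the paths are placeholders, and ionice -c3 needs the CFQ I/O scheduler):
Code:
# run the copy at "idle" I/O priority so interactive I/O wins
ionice -c3 rsync -av /source/ user@host:/dest/
# or throttle rsync directly (value is in KB/s)
rsync -av --bwlimit=20000 /source/ user@host:/dest/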
View 3 Replies
Jan 26, 2011
I just installed a new HD in my system, which already has multiple HDs. I have a drive with two versions of Ubuntu and would like to copy the complete drive, with all its contents and partitions, to the new drive.
1 - Could I partition the new drive and just copy the contents using rsync?
2 - If I copy all the contents over, could I just reinstall GRUB, edit fstab, and be good to go?
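The rough sequence I have in mind (an untested sketch; the device names and mount points are examples only):
Code:
# partition and mkfs the new drive, then copy each partition's contents
sudo mount /dev/sdb1 /mnt/old && sudo mount /dev/sdc1 /mnt/new
sudo rsync -aAXv /mnt/old/ /mnt/new/
# update the UUIDs in the copy's /etc/fstab (see blkid), then reinstall GRUB
sudo grub-install --root-directory=/mnt/new /dev/sdc   # newer GRUB2: --boot-directory=/mnt/new/boot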
View 3 Replies
Jun 1, 2011
How can I use rsync to copy ONLY my home folder (and nothing inside of it, just the folder name) to another machine? I've tried things like
Code:
rsync -av /path/to/src /path/to/dest/
or
Code:
rsync -av -f"+ */" -f"- *" /path/to/src /path/to/dest/
This last option recursively (through the -a switch) copies only folders, including all subfolders. If I try
Code:
rsync -v -f"+ */" -f"- *" /path/to/src /path/to/dest/
nothing is copied (not even my home folder).
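One variation I haven't tried yet: anchor the include to the top-level folder only, so nothing beneath it can match (a sketch):
Code:
# '+ /src/' matches only the root item; '- *' then excludes all of its contents
rsync -av -f'+ /src/' -f'- *' /path/to/src /path/to/dest/
# the same thing with long options
rsync -av --include='/src/' --exclude='*' /path/to/src /path/to/dest/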
View 9 Replies
Jan 14, 2011
Would this command copy the files to the local directory?
Code:
find /mnt/nas -type f -ctime 1 -iname '*.avi' -exec rsync -av {} /mnt/Mythbuntu \;
View 1 Replies
Apr 10, 2010
I have a WD World Book Edition 1TB NAS drive, and I just purchased an AcomData 1TB drive and connected it to the NAS via USB. If I recall, the WD NAS has an ext_ or some other type of Linux filesystem on it, and the AcomData has an NTFS filesystem on it.
What I want to do is copy certain directory trees from the NAS over to the USB-attached drive. I usually use MS SyncToy to sync folders from my Windows PC to the NAS drive, and MS RichCopy to make the initial transfer from PC to NAS. For this operation, though, since it takes place entirely on the NAS and its connected drive, I thought that rsync would be the best option, and it is available on my NAS drive.
Last night I entered rsync -avr /movies/* /usb1-1share1/ to copy the entire "movies" dir to the drive, which shows up as usb1-1share1 on the NAS. It copied most of the directory tree OK, but a lot of the folders were empty, so this morning I tried rsync -Carv --ignore-existing /movies/* /usb1-1share1/ to try to pick up the files it missed without recopying the 24GB that did make it across. This managed to copy a few more GB over, but not everything.
I am running the command from an ssh session on the NAS using PuTTY on my PC, logged in as user "admin", which should have all rights over these folders. There are a bunch of errors in the command window like this: rsync: failed to set times on "/shares/usb1-1share1/movies/classics/fulldvd/First Blood DVD/.VTS_01_2.VOB.RxdjWZ": Operation not permitted (1)
I want to start another session and get the files it missed, but I want to find out what I am doing wrong first. Should I be doing this as the root user? Am I missing some switches, or just plain doing it all wrong?
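One guess from reading around: NTFS simply can't store the Unix times/perms rsync is trying to set, so it may help to stop asking for them (an untested sketch):
Code:
# -rtv = recurse + preserve times, but no perms/owner/group (NTFS can't hold them)
# -O skips setting directory times; --modify-window tolerates NTFS timestamp rounding
rsync -rtvO --modify-window=1 --ignore-existing /movies/ /usb1-1share1/movies/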
View 3 Replies
Feb 9, 2010
I have recently purchased an external hard drive in order to back up my home partition. In my PC I have a "1.5T" drive with several partitions on it, containing OSes and the home partition. The home partition is 1.3T according to df; the external drive contains one partition that spans the entire disk, which df reports as 1.4T in size. Both partitions are ext3. When I use rsync to copy files from the home partition to the external partition, the external disk becomes full, despite the destination supposedly being larger than the source. I don't understand why copying files from one partition to a slightly bigger partition should need more space than on the source partition. Does anyone know what is happening?
Details: I created the partition on the external drive with gparted; gparted reported that it already had several gigabytes of used space immediately after the partition's creation. I thought at the time that this must be normal. The home partition contains many files of all sorts, including lots of big audio and video files. If you are wondering, for all my important files this external disk is only a secondary backup, as they are also backed up to the "internet".
These are the mount points :
/mnt/tmp/ : home partition, /dev/sdb6
/mnt/external/ : external partition, /dev/sdc1
I used rsync to copy the files, I know there are more efficient ways to do this, but I wanted to use the same command that I will subsequently run to sync the backup.
rsync -av --progress --stats --recursive --perms --links --delete /mnt/tmp/ /mnt/external/
Next I tried adding the --sparse switch, as I was wondering if the problem might come from sparse files. I don't know, however, whether rsync would go back and shrink already-copied sparse files just because the switch was added and the command rerun. I also added --one-file-system, for good measure. Here is what I ran next:
rsync -av --progress --stats --sparse --one-file-system --recursive --perms --links --delete /mnt/tmp/ /mnt/external/
I tried an fsck on the home partition :
fsck -f /dev/sdb6
This is the output from the last rsync :
rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
rsync: write failed on "abcd.avi": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(302) [receiver=3.0.6]
[code]....
Looking at the destination after a partial copy seems to indicate that the problem is not symbolic links being "expanded". I have not checked the source filesystem for sparse files, nor the destination to see if these files could be larger there, as this does not seem trivial.
Here is some additional info :
$ df /mnt/tmp/
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb6 1415342836 1414173740 369096 100% /mnt/tmp
[code]....
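One possibility I'd check: ext3 reserves 5% of blocks for root by default, which on a 1.4T partition is roughly 70GB that df counts as unavailable, and that alone could eat the difference. A sketch of checking and shrinking the reservation (device name as above):
Code:
# show the reserved block count on the external partition
sudo tune2fs -l /dev/sdc1 | grep -i 'reserved block count'
# drop the reservation to 0% (reasonable on a pure backup disk)
sudo tune2fs -m 0 /dev/sdc1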
View 2 Replies
Aug 20, 2010
On my Ubuntu box, I have a mounted Windows share connected via gvfs called graphics. I want to back up everything on a nightly basis from graphics to backupserver/graphics. If I use rsync, it will not copy files that have parent directories with funky characters in their names (but the directories themselves do get copied!). Everything else gets rsynced just fine.
graphics/test/macdir/picture.psd
...when rsynced over to ...
backupserver/graphics/
gives the error:
rsync: mkstemp "/home/administrator/.gvfs/drobo on x.x.x.x/linux_backups/graphics/test/macdir/.picture.psd" failed: Operation not supported (95)
The directory macdir gets created but there is nothing in it. This happens for all files underneath dirs with funky names. cp -Rf works perfectly! Directories and child files all get copied over no matter how strange the characters in the directory names get.
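The mkstemp error suggests gvfs is refusing rsync's hidden temp file (.picture.psd.XXXXXX), not the file itself. Two workarounds I've seen suggested (untested here):
Code:
# write files in place, so no hidden temp file is created on the gvfs mount
rsync -av --inplace graphics/ backupserver/graphics/
# or keep the temp files on a normal local filesystem
rsync -av --temp-dir=/tmp graphics/ backupserver/graphics/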
View 7 Replies
Jun 15, 2011
I am trying to create a simple bash script to rsync some folders within a directory structure. I am using wildcards in the rsync source path, but my command always fails. I believe it is the way I am using wildcards within my for loop. Here is my command:
Code:
for seq in `cat test.txt` ; do rsync -nvP /folder/folder/folder/folder/folder/**/$seq /folder/folder/folder/ ; done
This always fails, whereas if I do an ls with the same path, to test it, it always works.
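My current suspicion is the glob: ** only expands recursively in bash 4+ with globstar enabled (otherwise it behaves like *), and the unquoted $seq is fragile too. A sketch of the rewritten loop:
Code:
#!/bin/bash
shopt -s globstar            # make ** match recursively (bash 4+)
while read -r seq; do
    rsync -nvP /folder/folder/folder/folder/folder/**/"$seq" /folder/folder/folder/
done < test.txt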
View 5 Replies
May 14, 2011
When I installed 13.37, I created a local copy of the entire stable tree (source/ and all the rest) just to have all that stuff around to browse offline.
Now, to instruct myself, I'm trying to use rsync to keep this stuff up to date. But I seem either to have misread the rsync man page or ... well, I don't know. I am issuing the following command and getting the results seen below:
Code:
View 3 Replies
Apr 20, 2010
I've got quite a decent rsync script set up; however, I'd like to invoke it whenever there's a change to a file. My initial idea was to use find, but this has two major flaws: the first is that my particular Unix variant can't understand -print0, which means this doesn't work; the second is that I'm not 100% sure how to put variables into quotation marks so ls can understand the target:
Code:
for i in `find /shares/ -mtime -1 -print`; do ls -ltr $i;done
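A sketch that sidesteps both flaws: -exec hands each path straight to ls, so no -print0 is needed and filenames with spaces survive without manual quoting:
Code:
# batch form; if this find lacks '+' as well, use '\;' instead (one ls per file)
find /shares/ -mtime -1 -type f -exec ls -ltr {} +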
View 14 Replies
Mar 1, 2011
How do I copy a read-only file in Linux and make the copy writable, with a single cp command (Ubuntu 10.04)? The --no-preserve and --preserve options seemed to be good candidates, except that they "and" the mode flags, while what I am looking for is something that will "or" them (add the +w mode bit).
More details: I have to import a repository from Git to Perforce. I want all Perforce depot files to be read-only (that is how Perforce was designed), while all other files derived/copied from depot files are writable. Currently, if a Makefile copies a read-only file, the derived file is also read-only. This leads to build errors when cp tries to overwrite the read-only derived file a second time. Of course --force is a workaround here, but then the derived file is still read-only. Also, I do not want to mess with chmod after each cp command; I will do that only as a last resort.
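Two single-command possibilities I'm considering (a sketch; the file names are placeholders, and the cp behavior may vary by coreutils version):
Code:
# install copies and sets an explicit mode in one step
install -m 644 readonly-src.txt derived.txt
# cp --no-preserve=mode should give the destination default (umask-based) perms
cp --no-preserve=mode readonly-src.txt derived.txt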
View 1 Replies
Mar 23, 2011
I am running memtest due to memory issues. I am wondering if there is a log file that can be saved to the HDD with the memtest results; I am running memtest from the GRUB menu. I am running Ubuntu 10.10.
View 1 Replies
Jun 14, 2011
I want to run rsync on Server A to copy all files from Server B that are newer than 7 days (find . -mtime -7). I don't want to delete the files on Server B.
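The shape I have in mind (an untested sketch; the host and paths are placeholders): build the list on Server B, then let rsync fetch exactly those files, with no --delete anywhere:
Code:
# list files newer than 7 days on B, then pull just those
ssh serverB 'cd /data && find . -type f -mtime -7' > recent.txt
rsync -av --files-from=recent.txt serverB:/data/ /local/copy/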
View 2 Replies
Mar 16, 2010
I like the "Ubuntu" sudo philosophy and wanted to set up sudo the same way on my Debian system. I was happy when I found that I just have to do the following:
-create a group 'admin'
-adduser christian admin
-visudo
-add the line: %admin ALL=(ALL) ALL
Then I tried sudo rm -rf / to check if sudo works. All worked fine. No, seriously: I can move around files that belong to root and such, so sudo somehow works. But when creating new files with sudo, e.g. via
tar xzf myZippedTarball
these files belong to user 41034 and group users instead of root:root:
drwxr-xr-x 7 41034 users 4096 2009-11-01 01:07 libsvm-2.9
Certainly, this is not how I want it. The user 41034 doesn't even show up in /etc/passwd...
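From what I can tell the culprit is tar rather than sudo: run as root, GNU tar restores the numeric UIDs stored in the archive, and 41034 is whoever built this tarball. A sketch of the workaround:
Code:
# extract as root but ignore the archived owners; files end up root:root
sudo tar --no-same-owner -xzf myZippedTarball
# (extracting without sudo ignores the archived owners by default)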
View 2 Replies
Feb 27, 2011
I am writing a script which would just print the following kind of result into a text file (result.txt):
Code:
XYZ test Results
ID: <unique-id> Date: <date>
-------------------------------------------------
| Task | Result | Time |
-------------------------------------------------
| <task1> | <Result1> | <time1> |
| | | |
-------------------------------------------------
AD No: <adn> Generated Date: <tdate>
Above, all the <something> values are dynamic or predefined.
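A minimal bash sketch of how I might render that layout with printf (all the names and row values are placeholders):
Code:
#!/bin/bash
id=42; adn=7; today=$(date +%F)
sep=-------------------------------------------------
{
  echo "XYZ test Results"
  printf 'ID: %s    Date: %s\n' "$id" "$today"
  echo "$sep"
  printf '| %-10s | %-10s | %-8s |\n' Task Result Time
  echo "$sep"
  printf '| %-10s | %-10s | %-8s |\n' task1 PASS 00:01:23
  echo "$sep"
  printf 'AD No: %s Generated Date: %s\n' "$adn" "$today"
} > result.txt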
View 3 Replies
Jul 28, 2010
How do I output to a text file the result of the compound command:
Code:
find -type f -print0 | xargs -0 grep -l "desired text"
I have not been able to find the answer.
[code]...
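In case plain redirection is all it takes, a sketch (the output file name is mine):
Code:
# '>' captures grep's stdout, i.e. the matching file names
find . -type f -print0 | xargs -0 grep -l "desired text" > matches.txt
# add 2>/dev/null after the find if permission errors get in the way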
View 5 Replies
Aug 12, 2010
I'm trying to do a
find /photos/* -type f -mtime +365
to find all my pictures that are over a year old, but I keep getting "argument list too long". How can I view all the results, even if it just dumps them to a file that I have to open?
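The error is the shell expanding /photos/* into one huge argument list before find even runs; passing the directory itself avoids that entirely (a sketch):
Code:
# let find do the traversal instead of the shell glob
find /photos -type f -mtime +365 > old_photos.txt
less old_photos.txt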
View 12 Replies
Nov 2, 2010
I am trying to store the results of my code in a separate text file. But the problem is, as my results come from a loop, my text file shows only the last result, not all of them. If the loop runs 5 times, the text file shows the result of the 5th pass only, but I need to store all of them (1 to 5). Can I use awk to print the output field, store it in another file, and create a new line so that the next output field goes on a new line? (Just an idea, I don't know.)
#!/bin/bash
for (( i=1; i<=5; i++))
do
./file.exe > output.txt
done
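The fix I'd expect (a sketch): append with >>, or redirect the loop as a whole, since > truncates output.txt on every pass:
Code:
#!/bin/bash
for (( i=1; i<=5; i++ )); do
    ./file.exe >> output.txt    # >> appends instead of overwriting
done
# equivalent: for (( i=1; i<=5; i++ )); do ./file.exe; done > output.txt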
View 2 Replies
Jul 1, 2011
I have successfully backed up my files to a remote server using a script, with a log file as output. However, the log file is appended to each time; I wish to have a different log file each time, stamped with the date and time, and have yet to figure that part out.
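The usual trick, as a sketch (the log directory and rsync line are placeholders), is to stamp the file name once at the top of the script:
Code:
#!/bin/bash
# e.g. /var/log/backup_2011-07-01_1430.log -- a fresh file per run
LOGFILE="/var/log/backup_$(date +%F_%H%M).log"
rsync -av /source/ user@host:/dest/ > "$LOGFILE" 2>&1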
View 2 Replies
Nov 17, 2010
Thought I'd post it here because it's more server related than desktop... I have a script that does:
[Code]....
This is used to sync my local development snapshot with the live web server. There has to be a more compact way of doing this? Can I combine some of the rsyncs? Can I make rsync set or keep the user and group affiliations? Can I exclude .* yet include .htaccess?
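Without seeing the script, a hedged sketch of how those pieces can combine into one rsync. Ownership is kept by -a only when the receiving side runs as root, and the .htaccess exception works because the first matching filter rule wins:
Code:
# one rsync: keep ownership (-a), skip dotfiles except .htaccess
rsync -av --include='.htaccess' --exclude='.*' --delete \
    /local/dev/site/ user@liveserver:/var/www/site/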
View 6 Replies
Jan 7, 2011
When I run rsync --recursive --times --perms --links --delete --exclude-from='Documents/exclude.txt' ./ /media/myusb/
where Documents/exclude.txt is
- /Downloads/
- /Desktop/books/
the files in those directories are still copied onto my USB.
And...
I used fetchmail to download all my gmail emails. When I run rsync -ar --exclude-from='/home/xtheunknown0/Documents/exclude.txt' ./ /media/myusb/ I get the first image at url.
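One way to see whether the rules are even being read (a sketch): a verbose dry run, where rsync reports "hiding file ... because of pattern ..." for every exclusion it applies:
Code:
rsync -avn -vv --exclude-from="$HOME/Documents/exclude.txt" ./ /media/myusb/ 2>&1 | grep -Ei 'hiding|exclud'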
View 9 Replies
Apr 6, 2010
I'm timing how long it takes to run a command, foo. I'm looking to append the results from the time command to a file, and discard the output from the foo command. I tried the following, but it didn't do what I want:
$ time ./foo > /dev/null >> output_from_time_command.txt
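The catch is that time is a shell keyword reporting on stderr, so it has to be grouped before its output can be captured; the usual bash idiom (a sketch):
Code:
# foo's stdout is discarded; the group's stderr (time's report) is appended
{ time ./foo > /dev/null ; } 2>> output_from_time_command.txt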
View 1 Replies
Mar 24, 2010
I'm always getting this error when I run the rsync command:
Code:
when I try to navigate the file:
Code:
it gives
Code:
View 2 Replies
Mar 25, 2011
I have a backup .sh file that I have been using for a long time. It has always worked. Two days back I switched to a different PC, and now suddenly the script doesn't work. If I run it manually in the terminal it works, but when it executes via cron it doesn't copy any files to the backup destination. It starts, but doesn't copy anything. Can someone tell me why it works manually but not with cron?
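The usual suspect is cron's minimal environment (hardly any PATH, no HOME to speak of); a sketch of how I'd harden the entry and capture what actually happens (the paths are placeholders):
Code:
# explicit shell, absolute script path, and a log of stdout+stderr
0 2 * * * /bin/bash /home/user/backup.sh >> /tmp/backup_cron.log 2>&1
# inside the script, use absolute paths too: /usr/bin/rsync, /mnt/backup, ...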
View 9 Replies
May 16, 2011
I have an Ubuntu machine running an NFS4 server and a PlugApps (Arch Linux) machine connecting as the client. The plug box runs an rsync job to back up the home directory from Ubuntu to a local USB HDD.
All of the files in the destination have owner nobody and group nobody.
Ubuntu /etc/exports:
Code:
/home 192.168.2.1/24(rw,sync,no_subtree_check,no_root_squash)
plugbox /etc/fstab
[Code].....
How can I maintain the file owners? I have the UIDs and passwords synced between the two machines for both root and the user whose home dir is being backed up.
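From what I've read, nobody:nobody over NFSv4 usually means the id-mapper domains don't match; a sketch of what to align on both machines (the domain string is an example):
Code:
# /etc/idmapd.conf on BOTH the Ubuntu server and the plug client
[General]
Domain = home.lan
# then restart the mapper, e.g. on Ubuntu: sudo service idmapd restart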
View 9 Replies
Mar 10, 2010
I am trying to transfer a file from my live Linux machine to a remote Linux machine. It is a mail server, and a single .tar.gz file includes all the data, but during the transfer it stops working. How can I troubleshoot the matter? Is there any better way than this to transfer a huge 14GB file over a network/VPN/WAN link? The speed is 1Mbps; rerunning it copies the rest of the file.
rsync -avz --stats bkup_1.tar.gz root@10.1.1.22:/var/opt/bkup
[root@sa1 logs_os_backup]# less remote.log
Wed Mar 10 09:12:01 AST 2010
building file list ... done
bkup_1.tar.gz
deflate on token returned 0 (87164 bytes left)
rsync error: error in rsync protocol data stream (code 12) at token.c(274)
building file list ... done
code....
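Given the flaky 1Mbps link, the pattern I'd try (a sketch) is keeping partial transfers and retrying until it completes, so each rerun picks up roughly where the last one died:
Code:
# --partial keeps the half-sent file; --timeout abandons a stalled link
until rsync -avz --partial --timeout=120 --stats \
      bkup_1.tar.gz root@10.1.1.22:/var/opt/bkup; do
    sleep 30
done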
View 1 Replies
Oct 21, 2010
I have 2 different mounts. One points to a local Windows share (NTFS -> Samba) and the other points to a PPTP VPN connection share (I believe that is NTFS too). I use the "cifs" scheme in my fstab to mount these, and I use my Debian box to copy between the two mounts. I have started using rsync for that purpose, and I think it works fine for now. My main problem is that rsync seems unable to figure out whether the files in the source and target folders are the same when I use these mounts. Most of the time rsync copies the same files and folders over and over again, even though those files and folders are already on the target.
I am wondering if there is a way to make this scheme work? Being on a slow VPN connection to a Windows box, rsync could save me a lot of time if it could recognize the files and folders that are the same on both ends.
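Two hedged options I've seen for exactly this (a sketch; my mount points are placeholders): tolerate the timestamp rounding that CIFS introduces, or stop comparing times altogether:
Code:
# allow up to 2 seconds of timestamp slop between the two mounts
rsync -rtv --modify-window=2 /mnt/localshare/ /mnt/vpnshare/
# or compare by size only (fast, less precise)
rsync -rv --size-only /mnt/localshare/ /mnt/vpnshare/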
View 4 Replies