Ubuntu :: 2x1TB USB Drives For Rsync?
Jul 29, 2011
I have a Popcorn Hour that I'm going to put a 2TB internal HD into. I want to be able to synchronize the data from the Popcorn Hour to my desktop. My desktop has 2x1TB USB drives. Is there a way to get the 2x1TB drives to be seen as a single 2TB drive? My desktop is currently running Mythbuntu.
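One hedged way to do it (device names below are assumptions, and these commands wipe the drives, so verify with sudo fdisk -l and back everything up first) is to pool the two drives with LVM so they appear as a single 2 TB volume:

```shell
# DESTRUCTIVE sketch: pool two 1 TB partitions into one 2 TB volume.
# /dev/sdb1 and /dev/sdc1 are assumed names -- check yours first.
sudo pvcreate /dev/sdb1 /dev/sdc1              # mark both as LVM physical volumes
sudo vgcreate media_vg /dev/sdb1 /dev/sdc1     # one volume group spanning both
sudo lvcreate -l 100%FREE -n media_lv media_vg # one logical volume using it all
sudo mkfs.ext3 /dev/media_vg/media_lv          # any filesystem will do
sudo mount /dev/media_vg/media_lv /mnt/media
```

The trade-off: with no redundancy, losing either USB drive takes the whole 2 TB volume with it. An mdadm linear or RAID 0 array would behave the same way.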
View 1 Replies
Jan 27, 2010
I recently built a small Intel Atom 330 based server for my home. I'm using the Vortexbox Fedora-based OS to run the server (primarily used as a media server). So far everything is working great. In addition to my media server, I've got a D-Link DNS-321 NAS. I would like to set up a scheduled, incremental backup of my main server to the D-Link NAS. I understand rsync is an excellent option, and am willing to undertake the task of setting it up on my server, but I am uncertain how to make it all happen with the D-Link NAS. The NAS is very barebones, and I don't know if I can even install rsync on it. I don't even know if I can get to any kind of command line on the NAS box. 1. Can I mount the NAS drive on my main Linux server and then just run rsync on the server?
View 7 Replies
View Related
Nov 17, 2010
Thought I'd post it here because it's more server related than desktop... I have a script that does:
[Code]....
This is used to sync my local development snapshot with the live web server. There has to be a more compact way of doing this. Can I combine some of the rsyncs? Can I make rsync set or keep the user and group ownership? Can I exclude .* yet include .htaccess?
View 6 Replies
View Related
Jan 7, 2011
When I run rsync --recursive --times --perms --links --delete --exclude-from='Documents/exclude.txt' ./ /media/myusb/
where Documents/exclude.txt is
- /Downloads/
- /Desktop/books/
the files in those directories are still copied onto my USB.
And...
I used fetchmail to download all my gmail emails. When I run rsync -ar --exclude-from='/home/xtheunknown0/Documents/exclude.txt' ./ /media/myusb/ I get the first image at url.
View 9 Replies
View Related
Apr 12, 2011
I have a tiny shell script to rsync files between two servers and remove the source files.
This script works fine, when it has been initiated manually or even when the rsync command is executed on the command line.
But the same script doesn't work, when I try to automate it through crontab.
I am using the 'abc' user to execute this rsync, instead of root, since root logins are restricted on all of our servers.
As I mentioned earlier, manual execution works like charm!
When rsync.sh is initiated through crontab, it runs the first command (chown abc.abc ...) perfectly without any issues. But the second line is not executed at all, and there is no log entry I can find at /mnt/xyz/folder/rsync.log.
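One frequent cause, sketched here with hypothetical host and key paths: cron jobs run with a minimal environment (no ssh-agent, PATH usually just /usr/bin:/bin), so a command that works interactively can fail silently under cron. Using absolute paths for every binary and redirecting stderr into the log usually exposes the real error:

```shell
#!/bin/sh
# Hypothetical sketch of rsync.sh for cron: absolute binary paths, an
# explicit ssh key (cron has no agent), and stderr captured in the log.
LOG=/mnt/xyz/folder/rsync.log

/bin/chown abc:abc /mnt/xyz/folder/* >> "$LOG" 2>&1
/usr/bin/rsync -av --remove-source-files \
    -e "/usr/bin/ssh -i /home/abc/.ssh/id_rsa -o BatchMode=yes" \
    /mnt/xyz/folder/ abc@destserver:/mnt/xyz/folder/ >> "$LOG" 2>&1
```

--remove-source-files deletes each source file only after it transfers successfully, and BatchMode=yes makes ssh fail immediately instead of hanging on a hidden password prompt.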
View 6 Replies
View Related
Sep 18, 2009
I just tried to sync files from one server to another. After the sync process, I found the files are bigger than the originals.
I looked it up on the web and found someone mentioning the rsync daemon. Do I have to run the daemon on one server before I run rsync?
The command I used is rsync --partial --progress -r source destination
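No daemon is needed here, for what it's worth: rsyncd only comes into play for rsync://host/module paths, while plain local or ssh copies work without it. As for the size difference, one hedged guess is sparse files expanding during the copy; -a (instead of bare -r, which skips times and permissions) plus -S keeps them sparse:

```shell
# -a preserves times/perms/links (bare -r does not), and -S re-creates
# sparse files sparsely instead of filling their holes with zeros.
rsync -aS --partial --progress source/ destination/
```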
View 1 Replies
View Related
Jul 21, 2010
I use rsync to copy the files and directories under the /var/www/html/mydir directory, but these two files (/dir4/1.html and /dir4/2.html) can't be rsynced to the destination machine.
My rsync configuration file is below...
View 2 Replies
View Related
Dec 8, 2010
I'm using Ubuntu 10.04 LTS server and Postgresql 8.4. I have a .sh script that is run by cron every other hour. That works fine. The .sh script includes an rsync command that copies a postgresql dump .tar file to a remote archive location via ssh. That fails when run by cron; I think because it is (quietly) asking for the remote user's password (and not getting it). I set up the public/private ssh key arrangement. The script succeeds when run manually as the same user that the cron job uses, and does not ask for the password. I am able to ssh to the remote server from the source server (using the same username) and not get the password prompt (both directions), so why doesn't rsync work? I even put a .pgpass file in the root of that user's directory with that user's password, and the user/password are identical on both servers.
I think the problem is rsync is not able to use the ssh key correctly. I tried adding this to my script but it didn't help.
Code:
Here is the rsync command embedding in the .sh script.
Code:
Here is the cron entry:
Code:
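One hedged sketch of the usual fix (the key path, user, and host below are placeholders): when cron runs the job, there is no ssh-agent and HOME may not be what the interactive shell had, so rsync's ssh transport silently falls back to prompting. Naming the key explicitly sidesteps that:

```shell
# Hypothetical: point ssh at the exact private key and forbid prompts,
# so a missing or unreadable key fails loudly in the log instead of
# hanging on a password it can never receive.
/usr/bin/rsync -av \
    -e "/usr/bin/ssh -i /home/dbuser/.ssh/id_rsa -o BatchMode=yes" \
    /var/backups/pgdump.tar dbuser@archive-host:/archive/pg/ \
    >> /var/log/pgdump-rsync.log 2>&1
```

Note that .pgpass only affects PostgreSQL clients; it is never consulted by ssh or rsync.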
View 6 Replies
View Related
May 9, 2010
I upgraded from Karmic through Update Manager, and now none of my external drives, CD drive, or flash drives are picked up. I had to go back to Karmic and will remain there for a while.
View 9 Replies
View Related
Jan 18, 2010
I'm breaking into the OS drive side with RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1) and have been testing the fail-over and rebuild process. Works great physically failing out either drive. Great! My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: Fail one of the disks out of the RAID-1, then image it to a file, saved on an external disk, using the dd command (if memory serves, it would be something like "sudo dd if=/dev/sda of=backupfilename.img") Then, re-add the failed disk back into the array. In the event I needed to roll back to one of those snapshots, I would just use the "dd" command to dump the image back on to an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on-hand, and using something like a tape drive is also not an option.
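A hedged sketch of the imaging step described above (device and mount names are assumptions; double-check with fdisk -l before running anything this destructive):

```shell
# Image the member that was failed out of the array, compressing on the
# fly; conv=noerror,sync keeps going past unreadable sectors.
sudo dd if=/dev/sda bs=4M conv=noerror,sync | gzip > /mnt/external/os-raid1.img.gz

# Roll back later by writing the image onto a replacement disk:
# gunzip -c /mnt/external/os-raid1.img.gz | sudo dd of=/dev/sda bs=4M
```

The image is only consistent if nothing is writing to the disk while dd runs, which failing it out of the array conveniently guarantees.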
View 4 Replies
View Related
Jun 21, 2010
I recently had issues with the latest Linux kernels. I got that fixed, but ever since then none of my drives will mount and they aren't even recognized.
View 1 Replies
View Related
Jan 28, 2010
I have recently set up and installed Ubuntu 9.04 on a virtual drive using VMware 6.04, and installed the desktop GUI as well. I need to add other drives for data and logging, which I did on the VMware side. I can see the 2 drives in Ubuntu, but cannot access them; I get the "unable to mount location" error when I try. How can I resolve this? I need these two virtual drives to be used as data drives.
View 1 Replies
View Related
May 1, 2011
I've used it once before but got fed up with the boot menu asking me every time I turned my laptop on, because I wasn't using it enough. I have Windows 7 on drive C, and I want to keep it on drive C. I have several 1.5TB+ drives, and one of them is not being used. I want to dedicate it to Ubuntu, and be able to dual boot with my Windows 7 install. Is this possible? If it is, what about when this drive is not connected to my laptop? Will that mess up the boot process?
View 2 Replies
View Related
Jun 9, 2011
So I set up a RAID 10 system and I was wondering what the difference between the active and spare drives is. If I have 4 active drives, then the two stripes are mirrored, right?
root@wolfden:~# cat /proc/mdstat
Personalities : [raid0] [raid10]
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
[code]....
View 2 Replies
View Related
Mar 26, 2011
I am building a home server that will host a multitude of files: from mp3s to ebooks to FEA software and files. I don't know if RAID is the right thing for me. This server will have all the files that I have accumulated over the years, and if the drive fails then I will be S.O.L. I have seen discussions where someone has a RAID 1 setup but doesn't keep the drives internal (to the case); they bought 2 separate external hard drives with eSATA to minimize the risk of an electrical failure taking out the drives. (I guess this is a good idea.) I have also read about having one drive and then using a second to rsync the data every week. I planned on purchasing 2 enterprise hard drives of 500 GB to 1 TB, but I don't have any experience with how I should handle my data.
View 10 Replies
View Related
Oct 18, 2010
I suspect this is not new, but I just can't find where it was treated. Maybe someone can give me a good lead. I just want to prevent certain users from accessing CD/DVD drives and all external drives. They should be able to mount their home directories and move around within the OS, but they shouldn't be able to move data away from the PC. Any clues?
View 2 Replies
View Related
Jan 8, 2010
So, at the moment I have a 7TB LVM with one group and one logical volume. In all honesty I don't back up this information. It is filled with data that I can "afford" to lose, but... would rather not. How do LVMs fail? If I lose a 1.5TB drive that is part of the LVM, does that mean at most I could lose 1.5TB of data? Or can files span more than one drive? If so, would it be just one file that spans two drives, or could there be many files that span multiple drives? Essentially, I'm just curious, in a general, high-level sense, about LVM safety. What are the risks involved?
Edit: what happens if I boot up the computer with a drive missing from the lvm? Is there a first primary drive?
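To make the risk concrete: with plain linear (non-mirrored) logical volumes, files and filesystem metadata can land on any drive, so losing one 1.5TB PV usually costs more than that drive's share; often the whole filesystem won't mount. A sketch (needs root and an existing volume group) of how to see which drives actually back each LV:

```shell
sudo pvs                  # the physical drives in the volume group
sudo lvs -o +devices      # which PVs back each logical volume
sudo lvdisplay -m         # per-segment map of LV extents onto PVs
```

On the boot question: if a PV is missing, the volume group normally refuses to activate; vgchange -ay --partial can force partial activation for salvage attempts. There is no "first primary drive" -- LVM reads its metadata from whichever PVs it finds.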
View 10 Replies
View Related
Jul 5, 2011
I have Fedora 14 installed on my main internal drive. I have one Fedora 14 and one Fedora 15 installed on two separate USB drives. When I boot into any of these drives, I can't access any of the hard drives from the others; well, I can, but just the boot partitions. Is there any way of mounting the other partitions so I can access the information? I guess even an explanation of why I can't view them would be good too.
View 7 Replies
View Related
Aug 23, 2010
I have an email server that I think is about to have a hard drive fail. It is running an old install of Red Hat 9.0, I think. It has 2 120GB hard drives mirrored as a RAID 1. I want to copy those to a new pair of 500GB hard drives, again as a RAID 1 mirror. What tool would work for this, dd or partimage? Would it all be exactly the same and still boot up?
View 3 Replies
View Related
Apr 8, 2010
I have created a RAID on one of my servers. The RAID health is OK but it shows a warning, so what could be the problem? WARNING: 0:0:RAID-1:2 drives:153GB:Optimal Drives:2 (11528 Errors)
View 1 Replies
View Related
Aug 10, 2010
I have a Centos 5.5 system with 2* 250 gig sata physical drives, sda and sdb. Each drive has a linux raid boot partition and a Linux raid LVM partition. Both pairs of partitions are set up with raid 1 mirroring. I want to add more data capacity - and I propose to add a second pair of physical drives - this time 1.5 terabyte drives presumably sdc and sdd. I assume I can just plug in the new hardware - reboot the system and set up the new partitions, raid arrays and LVMs on the live system. My first question:
1) Is there any danger that adding these drives to arbitrary SATA ports on the motherboard will cause the re-enumeration of the "sdx" series in such a way that the system will get confused about where to find the existing raid components and/or the boot or root file-systems? If anyone can point me to a tutorial on how the enumeration of the "sdx" sequence works and how the system finds the raid arrays and root file-system at boot time, that would be great.
2) I intend to use the majority of the new raid array as an LVM "Data Volume" to isolate "data" from "system" files for backup and maintenance purposes. Is there any merit in creating "alternate" boot partitions and "alternate" root file-systems on the new drives so that the system can be backed up there periodically? The intent here is to boot from the newer partition in the event of a corruption or other failure of the current boot or root file-system. If this is a good idea - how would the system know where to find the root file-system if the original one gets corrupted. i.e. At boot time - how does the system know what root file-system to use and where to find it?
3) If I create new LVM /raid partitions on the new drives - should the new LVM be part of the same "volgroup" - or would it be better to make it a separate "volgroup"? What are the issues to consider in making that decision?
View 6 Replies
View Related
Jan 27, 2010
I want to use rsync in order to have a folder synced at startup with my FAT32 partition. I figured out how to mount the FAT32 partition automatically at startup, but I'm failing with the rsync command. I came across this script, but it works only the first time; when there is a new file to sync it fails.
View 3 Replies
View Related
Sep 20, 2010
I've bought a NAS (Western Digital My Book World Edition 1TB) which I want to use to make backups (preferably incremental) of some important files.
After much deliberation in finding suitable backup software it seems like good old rsync is the best thing for the job (backintime struggles to copy to remote locations?)
I have enabled SSH on the NAS config. I'm using the following command to do a test run on a small folder:
Code:
sudo rsync -azvv -e ssh /home/matt/Careers/ admin@192.168.1.100:/public/AutoBackup/
And I get the following error:
Code:
admin@192.168.1.100's password:
[Code].....
Anyone know where I'm going wrong here? I'm sure it's probably something simple but I can't crack it. I've tried variations on the destination folder such as admin@192.168.1.100:/AutoBackup/ without success.
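One hedged guess, since the ssh password prompt itself appears: the NAS accepts the login but then can't find an rsync binary on its side. Telling rsync where the remote binary lives often fixes exactly this (the path below is an assumption; `ssh admin@192.168.1.100 which rsync` will show whether and where it exists):

```shell
# --rsync-path names the rsync binary on the receiving machine, for
# firmware where it is installed outside the default PATH.
rsync -azvv -e ssh --rsync-path=/usr/bin/rsync \
    /home/matt/Careers/ admin@192.168.1.100:/public/AutoBackup/
```

If no rsync binary exists on the NAS at all, it would need to be installed there first (the My Book World Edition reportedly supports Optware packages) before rsync-over-ssh can work.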
View 4 Replies
View Related
Jan 18, 2010
I want to use rsync to synchronize some folders on my LAN. I have this working with two scripts; one runs at the beginning of my work session and gets the latest directory tree from the server, and the other runs at the end of the session to put any local changes back on the server. My "get" script looks something like this:
Code:
rsync -avuzb --backup-dir=/home/user/rsync_backup_dir --delete my_server:/home/bak/common_data/ /home/user/data
This works well, and with the "b" option any file that has been deleted from the master directory tree on the server will be deleted from the local machine and moved to the local backup directory. This is a safety measure to prevent the loss of files through a mistake (on my part).
The problem is the "put" script:
Code:
rsync -avuzb --backup-dir=/home/user/rsync_backup_dir --delete /home/user/data/ my_server:/home/bak/common_data
I want to run both scripts from the local machine, but the "put" script will not save deleted files to a backup directory. I tried using a remote backup directory like "my_server:/home/user/rsync_backup_dir" but this did not work. Is there a way to backup files deleted from a remote server from an rsync script run locally?
View 5 Replies
View Related
Mar 2, 2010
I saw a reference in a magazine to using rsync to keep identical copies of folders. This looks like something I could find useful, as I have a large number of items in need of safe backup.
I have the folders on an old system on a home network and would like to copy these over to a USB Hard Drive.
Currently the folders reside on SFTP xxx.xxx.xxx.xxx and I wish to sync them to a USB port on my laptop.
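A hedged one-liner sketch (user, address, and paths are placeholders): since the old system already serves SFTP it is running ssh, and rsync can ride the same transport, copying straight onto the mounted USB drive:

```shell
# Pull the folders from the old machine over ssh onto the USB drive;
# subsequent runs only transfer files that changed.
rsync -av --progress user@xxx.xxx.xxx.xxx:/path/to/folders/ /media/usbdrive/backup/
```

One caveat: rsync must be installed on both ends and the account needs shell access; an SFTP-only account would not be enough.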
View 9 Replies
View Related
May 15, 2010
I setup keyed ssh between two of my computers on my lan. It was working great. I used Ubuntu's Passwords and Encryption Keys tool to generate the key.
Yesterday I tried rsync'ing a grip of files to the ssh server and couldn't get it to work, I figured I was just messing the rsync command up. Today I just tried to log in using ssh and it just hangs there after I type the ssh command.
With verbosity on it stops at checking the blacklist file. Does this mean it decided my key is one of the badly generated ones? Why would it decide this now? I thought it was supposed to catch that when generating the key.
Also, I just checked, and the file it's trying to check doesn't even exist. Should it?
Code:
View 1 Replies
View Related
Feb 14, 2011
I'm currently trying to have crontab automatically back up files from a ramdisk. It works perfectly when I run it myself by simply cd'ing to the script's directory and typing ./save_world.sh.
The problem is, that crontab DOES (at least it looks like it) run that command every one minute. /var/log/syslog does show it executing that line every one minute without any errors. I'm currently very confused what I did wrong here. I have tried rebooting, fiddling with crontab line, tried sudo crontab -e but nothing seems to work.
My script is called save_world.sh and it is located in /home/phoe/minecraft/rpg/
Code:
My crontab -e has one line and it is following:
Code:
I haven't determined any specific time yet, because I'm just trying to get it work first.
Snippet from /var/log/syslog:
Code:
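A hedged sketch of the usual culprit (the log path is a placeholder): cron starts the job in the user's home directory with a minimal environment, so a script that relies on being launched from its own directory works interactively and dies under cron. Making the script location-independent, and capturing output in the crontab line, usually reveals the failure:

```shell
#!/bin/sh
# save_world.sh sketch: cd to the script's directory explicitly, since
# cron will not start there.
cd /home/phoe/minecraft/rpg || exit 1

# ...the actual backup commands from the original script go here...

# Matching crontab entry (crontab -e), with stderr captured:
# * * * * * /home/phoe/minecraft/rpg/save_world.sh >> /tmp/save_world.log 2>&1
```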
View 4 Replies
View Related
Aug 2, 2010
I have a samba share to a Windows 7 computer. I do not know if I will be able to use backintime or not, so I want to know how to have rsync do my backup. I read the man page but I'm not sure I understand it. I want to back up to a different hard drive on the same computer, running every hour from a script. Leanne is the Windows 7 share and backup is the other hard drive in the computer: rsync -arvRzEP /media/leanne /media/backup.
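For the hourly schedule, a hedged crontab sketch around that command (the log path is a placeholder). One caveat worth noting: options like -E and owner preservation have little effect when the source is a mounted Windows share, which can't carry Linux ownership anyway.

```shell
# Run at minute 0 of every hour; append output to a log for inspection.
# Add via: crontab -e
0 * * * * /usr/bin/rsync -arvRzEP /media/leanne /media/backup >> /tmp/leanne_backup.log 2>&1
```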
View 1 Replies
View Related
Mar 15, 2010
So I just used rsync to back up about 400GB of data to my NAS. It took just over a day to complete, which is what I figured. I decided I should run rsync again to see how it was going to handle comparing and only adding new files to the remote location. So I added a few new files and then ran the backup again. Well, rsync is trying to do a complete copy of all of my original data, even though the files have not changed.
Is there a way that I can tell rsync to compare the two directories and only add the new files and delete the ones that are no longer in the original location?
Here is command that I am running:
Code:
sudo rsync -azvv --progress --stats /media/sda1/multimedia/movies /home/codeblue/NAS_Share_Point
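rsync already does exactly that comparison by default (size plus modification time, with --delete to prune files gone from the source), so a hedged guess about the re-copying: if the NAS share's filesystem stores timestamps coarsely or the first run couldn't set them, the mtime check fails for every file. --modify-window relaxes the comparison:

```shell
# --modify-window=1 treats mtimes within 1 second as equal (FAT stores
# 2-second timestamps); --delete removes files gone from the source.
sudo rsync -av --delete --modify-window=1 --progress --stats \
    /media/sda1/multimedia/movies /home/codeblue/NAS_Share_Point
```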
View 9 Replies
View Related
Mar 23, 2010
I've been trying to get my headless Ubuntu 9.10 server to back up files from my Windows XP box and onto a 1 TB Seagate FreeAgent Go USB drive (which is connected to the Ubuntu server). I've tried two different methods, both of which are behaving strangely and not quite working. I'm using SSH to access my Ubuntu server.
The /etc/fstab line for the USB drive looks like this:
Code:
/dev/sdb1 /media/usb-backup-windows ntfs uid=34,gid=34,umask=022,dirsync,sync 0 0
I'm using NTFS because I'd like to share this drive between Windows and Linux. The drive gets mounted fine and I can read/write files after booting up. The way I'd like for this to work is to have my Ubuntu box mount the Windows drive using CIFS. So I mounted the C drive using the following command:
Code:
smbmount //windowsxp/C$ /mnt/windowsxp/C -o directio,iocharset=utf8,noperm,nounix,ro,credentials=/home/user/.smb/passwords.conf
The mount works fine. I can browse directories under /mnt/windowsxp/C, read files, copy files to Ubuntu, etc. So now I have my USB drive mounted and the C drive on my Windows box mounted. Should be good to go, right? Unfortunately, after several minutes (this varies; sometimes it can go an hour or so) of copying files using rsync --archive /mnt/windowsxp/C/ /media/usb-backup-windows/C/ (the actual command I use has more options - not sure if that's important), the server locks up.
The SSH session dies and I can no longer ping it. The server will eventually start responding after several minutes, only to lock up again a few minutes later, and so on and so on. When it locks up, the following messages end up in /var/log/kern.log:
Code:
CIFS VFS: Unexpected lookup error -26
CIFS VFS: No response to cmd 46 mid 59789
CIFS VFS: Send error in read = -11
CIFS VFS: server not responding
I did some Googling on these messages and came across a suggestion to set /proc/fs/cifs/OplockEnabled to 0. I gave that a shot but it didn't make a difference. I also tried plugging in a mouse and noticed that I could make the server respond immediately after a lock up by moving the mouse. I have to move the mouse though - just leaving it plugged in without movement doesn't help. I have to wait for the hang to occur and then move it. Once I do that, things progress for another few minutes.
This got me thinking that I had a lack of entropy and the mouse movement was kicking things into gear. So I tried moving /dev/random to /dev/random-chaos, and created a symlink /dev/random that just pointed to /dev/urandom. This didn't work - same exact behavior. So why in the world does moving the mouse bring the server back and cause it to start responding, if only for a few more minutes until the next hang?
I then gave up on this approach and tried connecting to an rsync daemon running on my Windows box (using Cygwin), instead of using the CIFS mount point. After getting the config file right and figuring out how to run it as a service on Windows, I started getting files copied once again. However, after what seems to be about the same length of time (several minutes to an hour or so), the rsync connection dies and I get the following message in the Windows rsync log file:
Code:
2010/03/22 13:01:01 [4024] rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Connection reset by peer (104)
2010/03/22 13:01:01 [4024] rsync error: error in rsync protocol data stream (code 12) at io.c(1539) [sender=3.0.7]
The Windows box is running rsync 3.0.7 and Ubuntu is running 3.0.6, but they are both protocol 30. The rsync error log on Ubuntu doesn't help much - it also says "Connection reset by peer". I've tried this at least a dozen times and it always fails with these messages. It's weird because it's always 4 bytes, never anything different. I also noticed that /var/log/kern.log had the following messages, although they do not line up with the times that rsync died:
Code:
usb 1-5: reset high speed USB device using ehci_hcd and address 2
I did some Googling on this message and tried some stuff that worked for other people. I added dirsync and sync to /etc/fstab. I tried setting /sys/block/sdb/device/max_sectors to 64. Neither one of those made a difference. Another suggestion was unloading the ehci_hcd module and dropping back to USB 1.0. However, 9.10 doesn't seem to have that module loaded, so I'm not exactly sure how to turn off USB 2.0 and just try 1.0. I'm not real enthused with that workaround anyway because I have at least 500 GB to copy.
I've kind of run out of ideas here. It's frustrating because the entire reason I bought the hardware and set up Ubuntu was to run backups. I'm not sure if my problem is a networking issue (CIFS VFS server not responding, connection reset by peer), a problem with running headless (wiggling the mouse temporarily prevents the hang), a USB device problem (reset high speed USB device messages), or something else entirely.
View 6 Replies
View Related