Server :: Sync File Server Data Into Backup Server Machine By Command- Rsync -avu?
Jun 21, 2011
I am trying to sync file server data to a backup server machine with the command rsync -avu /path/of/data ipaddress-of-backup-server:/path/where/to/save. After running, it asks for the root password, and done manually it succeeds, but I want to make it automatic. For that I also tried a cron job and generated an authentication key, but I have not managed to log in automatically. Does anybody know how to authenticate root to log in for storing data on the backup server?
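For reference, a minimal sketch of the usual key-based setup (the backup-server name and paths are placeholders). The key must have no passphrase, or cron will still hit a prompt:

    # on the file server, as the user the cron job runs under (root here)
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    # install the public key on the backup server
    ssh-copy-id root@backup-server
    # test once; it should now run without a password prompt
    rsync -avu -e ssh /path/of/data root@backup-server:/path/where/to/save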
When I use the rsync command to back up my image files, it shows the following error message:
bash: line 1: /usr/bin/rsync: Argument list too long
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
rsync error: remote command could not be run (code 126) at io.c(463) [receiver=2.6.8]
The command I used is rsync -avrl -e ssh cms@server:/data/cms/data/images/* /mnt/Backup/Intranet_cms_backup/images
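A likely cause: the * in the remote path is expanded by the remote shell into one argument per image, overflowing its command line. A sketch of the usual workaround, letting rsync itself enumerate the directory (note the trailing slash, which copies the directory's contents):

    rsync -avrl -e ssh cms@server:/data/cms/data/images/ /mnt/Backup/Intranet_cms_backup/images/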
I've got 4 identical 1 TB drives and would like to use them in a software RAID configuration on my home server. I'm running Debian Linux, using the 'mdadm' utility to manage the software RAID. I don't know how much of what I've read is fact, dated, or even false, so I decided I would ask here to get help from people who know more about this than I do. This is essentially just a file server machine to store all my data, so given that I've got four identical SATA hard drives, I was thinking about doing RAID level 5. I guess I'll start here and ask if that is the recommended level of RAID; I think RAID level 5 will be fine for my general server usage. My second issue is partitioning the four individual drives to get maximum performance/space from them. How would you recommend I partition the drives? I was thinking about doing three separate partitions per drive:
/dev/sda1 = 4 GB (swap)
/dev/sda2 = 1 GB (/boot)
/dev/sda3 = 995 GB (/)
Now from the partition scheme above, obviously all the types will be 'fd' for RAID, and the partition for /boot is going to be bootable. My confusion is that I read GRUB doesn't support booting from RAID 5, since GRUB can't handle the array assembly. If /dev/sdx2 (sda2, sdb2, sdc2, sdd2) are partitioned for /boot (bootable), how would you configure this RAID so the drives match up equally? I don't think I can do a RAID level 1 on 4 identical partitions, right?
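For what it's worth, a sketch of the layout usually suggested for this: a 4-way RAID 1 mirror for /boot, which legacy GRUB can boot because every member is readable on its own, and RAID 5 for the root array. Device names follow the scheme above but are otherwise assumptions:

    # /boot: 4-way mirror, bootable from any single disk
    mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
    # root filesystem: RAID 5 across the large partitions
    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3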
Is there a way/command to back up all data from a Red Hat Linux 4 server [including user profiles, data, group info, encrypted passwords] either to a Red Hat Linux 5.4 machine or as an image file or manageable resource?
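One minimal sketch, assuming the account data lives in the standard locations and that an archive (rather than a full disk image) is acceptable; the destination host is a placeholder:

    # archive users, groups, password hashes and home directories
    tar czpf /tmp/accounts-backup.tar.gz /etc/passwd /etc/shadow /etc/group /etc/gshadow /home
    scp /tmp/accounts-backup.tar.gz root@rhel54-host:/backup/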
I am working on a Linux server with the specifications below: Linux EDT 2008 i686 i686 i386 GNU/Linux. While checking the status of the server using the command 'opmnctl status', when the server is down the output does not get redirected to the file. I am using the command opmnctl status > abc.txt.
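A likely explanation is that when the server is down, opmnctl writes its message to stderr, which > alone does not capture. A sketch of redirecting both streams into the file:

    opmnctl status > abc.txt 2>&1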
I have installed a Linux server in my office to serve 16 machines. Its main use will be an internal mail server, but it will also be running websites.
I have installed Ubuntu 9.10 server x64 and have Apache running.
I am looking for the simplest, most robust solution for SMTP, POP3 and IMAP. I have only ever used qmail before and found it a pain to configure, and it's getting old, so I thought I should probably try something new. I don't have much experience with running POP3 or IMAP on Linux, so I would love a suggestion on that.
I am unable to use the ncftp command. I have defined all the variables used. I have to copy the data to another server over FTPS. When I execute this command it throws an error:
ncftp -u : option unknown
I am pasting the complete script that I execute on my server. Can someone please tell me whether there is any mistake in my use of the ncftp command, or suggest some other command to copy data to the remote server?
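Two hedged observations, since the script itself isn't shown: for scripted transfers the ncftp batch tools (ncftpput/ncftpget) are the usual route rather than the interactive client, and ncftp does not speak FTPS; lftp does. A sketch of each, with host, user and paths as placeholders:

    # scripted upload with the ncftp batch tool (plain FTP only)
    ncftpput -u myuser -p mypass remote-host /remote/dir /local/dir/file.dat
    # FTPS upload with lftp, forcing TLS
    lftp -u myuser,mypass -e "set ftp:ssl-force true; put /local/dir/file.dat -o /remote/dir/file.dat; bye" remote-host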
I switched last summer from Windows (which I had used since Windows 95) to Debian. I have been using Debian Jessie for a couple of months now and I am getting used to it a little.
There are problems here and there, but I can solve them with some reading on the web. Not really a big problem... till now.
I run Debian 8.2 on my PC (PC1). I bought an older PC (PC2) that I want to use as a backup server.
I'm using PC2 only for making backups; after each backup I switch it off again.
So I installed Debian 8.2 (net-install, without a DE and with SSH) on PC2 and tried to configure it to work as my backup location. I generated an SSH key pair and copied the public key to the root account (no problem) and to the user account (sensdeb), but for the user I got an "Access Denied" error.
I gave the user (sensdeb) sudo rights via the visudo file:
# User privilege specification
root    ALL=(ALL:ALL) ALL
sensdeb ALL=(ALL:ALL) ALL
I installed rsync.
The problem is that rsync only works when I use the root account.
I don't know how to give the user sensdeb the rights so that I can use that account for my backup tasks. Right now it is only possible to sync with the root account, but that should not be the way to do it, as I have read many times.
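A guess, offered since the destination path isn't shown: the backup directory on PC2 is owned by root, so sensdeb cannot write to it, and rsync reports the failure as an access problem. A sketch of handing the directory over (the path is a placeholder, PC2 stands for that machine's address):

    # on PC2: make the backup user own the backup tree
    chown -R sensdeb:sensdeb /srv/backup
    # then sync from PC1 as that user instead of root
    rsync -av /home/ sensdeb@PC2:/srv/backup/home/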
This should be a quick one. I'm trying to back up a single directory and its subdirectories on my Lucid server to a FreeNAS box across my network. This is what I'm using to do that: rsync -r -a -v -z * --delete freenas:DSIBackups. It almost works perfectly, except for one problem: when a file is deleted at the source, this command doesn't seem to delete it on the receiving end. I assumed that --delete would do that, but apparently not.
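One likely explanation: the shell expands * to the files that exist right now, so rsync is handed an explicit source list and never learns about names that have disappeared, which defeats --delete. Syncing the directory itself avoids that; a sketch:

    # sync the directory's contents; --delete can now see removals
    rsync -ravz --delete ./ freenas:DSIBackups/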
My rsync takes a backup of everything from the different Linux servers to my backup device, which is only 2 TB. Since it takes an almost full backup of each source, it consumes a lot of space on the backup device. So I want to keep only the latest month of backup files on the device; it should remove all files older than one month.
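A common approach, sketched on the assumption that file modification times reflect backup age and that the device is mounted at a placeholder path:

    # dry run: list backup files not modified in the last 30 days
    find /mnt/backupdevice -type f -mtime +30 -print
    # once the list looks right, delete them
    find /mnt/backupdevice -type f -mtime +30 -delete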
I am trying to use rsync and ssh to move a backup folder from some computers to a server. I found a command that is supposed to do this, but I am having issues getting it to work.
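Since the command itself isn't quoted, here is a minimal sketch of the usual form, with user, host and paths as placeholders:

    # push a local backup folder to the server over ssh
    rsync -avz -e ssh /path/to/backup/ user@server:/srv/backups/thismachine/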
I am trying to sync my MySQL Server database and HLDS data over the LAN; one machine is Windows Server 2008 and one is Ubuntu Linux 9.10. I have tried to use the remote address (192.168.0.4:3306) but cannot connect, and the error code is 10060. I have checked that the connection is normal and OK, and the account is allowed to connect from any address.
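Winsock error 10060 is a connection timeout, so a common culprit (a guess, since the config isn't shown) is MySQL on the Ubuntu box listening only on localhost. A sketch of opening it up; the database and user names are made up:

    # in /etc/mysql/my.cnf on the Ubuntu server:
    #     bind-address = 127.0.0.1   ->   bind-address = 0.0.0.0
    sudo /etc/init.d/mysql restart
    # allow the Windows machine's subnet to connect
    mysql -u root -p -e "GRANT ALL ON hlds.* TO 'hlds'@'192.168.0.%' IDENTIFIED BY 'secret';"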
Shed some light on what I am doing; I am wondering if I just have things back to front.
Server (MESH): Fedora 13. Firewall ports open: tcp 22 (ssh), tcp 873 (rsync). sshd service started.
I'm looking for the most secure practical solution for transferring data with rsync over the Internet between 2 Linux servers. I have 3 options: SSH, IPsec and Kerberos. Which one, in your opinion, is the most secure solution?
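Whatever wins the comparison on paper, the common deployment is rsync tunnelled over SSH with a dedicated key; a sketch, with the key path and hosts as placeholders:

    rsync -avz -e "ssh -i /root/.ssh/backup_key" /data/ backup@remote.example.com:/srv/data/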
I have an old PC currently running Ubuntu Server 9.10. It was configured during install to connect to the home wifi router through a PCI Ethernet card, which worked all well and good. However, at the moment I cannot connect to the router (I have moved the machine too far from it). I want to connect my desktop machine directly to the server so I can SSH into the box and back up some files. I need help creating a simple wired network connection between the two, as I have no clue where to start.
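A minimal sketch, assuming both machines have a free Ethernet port with auto-MDIX (so an ordinary patch cable works) and that the interface is eth0 on each:

    # on the server
    sudo ifconfig eth0 192.168.50.1 netmask 255.255.255.0 up
    # on the desktop
    sudo ifconfig eth0 192.168.50.2 netmask 255.255.255.0 up
    # then, from the desktop
    ssh username@192.168.50.1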
I already have an Ubuntu backup server at my location and need this one server to be backed up remotely in another state. The other location is a helpdesk, so there is a danger that they could gain access to confidential data. I'll be setting up this new server as an FTP server but need to restrict the FTP folder to allow access only to the backup server and me. Because the machine is on the helpdesk side, they will need some access to the file system but must be completely blocked from the FTP folder where all the data lives. How can I make sure I keep them away from my data and am still able to retrieve or copy files without permission issues between both servers?
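One simple sketch using plain Unix permissions (the group name, account name and path are made up): keep the FTP data tree readable only by a dedicated group that contains just the backup account.

    # lock the ftp data tree to a dedicated group
    sudo groupadd backupops
    sudo usermod -aG backupops backupuser
    sudo chown -R root:backupops /srv/ftp/backups
    sudo chmod -R 770 /srv/ftp/backups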
We've been trying to become a bit more serious about backups. It seems the better way to do MySQL backup is to use the binlog. However, that binlog is huge! We seem to produce something like 10 GB per month. I'd like to copy the backup somewhere off the server, as I don't feel there is much to be gained by just copying it somewhere else on the same server. I recently made a full backup which after compression amounted to 2.5 GB and took me 6.5 hours to copy to my own computer, so that solution doesn't seem practical for the binlog backup. Should we rent another server somewhere? Is it possible to find a server like that really cheap? Or is there some other solution? What are other people's MySQL backup practices?
I'm trying to rsync files and directories from Red Hat Linux hosts (v4.5 & 4.7) to a Windows Server 2003 R2 Standard Edition machine running cygwin. I'm executing the rsync command from the cygwin shell. The transfer involves rsync'ing approximately 1 TB of data from the Linux server to the Windows server. After about 280+ GB of data has transferred, the transfer just dies.
There seems to be no particular file or directory that the transfer stops at. I'm able to rsync GBs of data from other Linux hosts to this cygwin server with no problem; files and directories rsync fine. The network infrastructure is essentially the same regardless of the server being rsync'ed, in that it is GB Ethernet running through Cisco GB switches. There appear to be no glitches or hiccups along the network path.
I've asked the folks at rsync.samba.org if they know of any problems or issues. Their response has been neutral: if the version of rsync that cygwin has ported is within standards, then there is no rsync reason this problem should happen. I've asked the cygwin support site if they know of any issues, and they have yet to reply. So, my question is whether the version of rsync that is ported to cygwin is standard. If so, is there any reason cygwin & rsync keep failing like this?
I've asked the local rsync-on-Linux gurus and they can't see any reason this should fail from a Linux perspective. Apparently I am our company's cygwin knowledge base by default.
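Not a diagnosis, but a retry-friendly invocation is often suggested for huge transfers that die partway; a sketch with placeholder paths (--partial keeps interrupted files so the loop resumes rather than restarts):

    # retry until rsync exits cleanly; interrupted files are kept and resumed
    until rsync -av --partial --timeout=300 /data/ user@winhost:/cygdrive/d/backup/; do
        sleep 60
    done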
I want to run rsync on server A to copy all files from server B that are newer than 7 days (find . -mtime -7). I don't want to delete the files on server B.
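A sketch of one way to combine the two, assuming SSH access to server B and placeholder paths: build the list remotely, then let --files-from pull exactly those files (paths in the list are relative to the source directory).

    # on server A: list files on B modified in the last 7 days, then pull them
    ssh serverB 'cd /data && find . -type f -mtime -7' > /tmp/recent.txt
    rsync -av --files-from=/tmp/recent.txt serverB:/data/ /local/copy/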
Has anyone had any experience using SUA (Services for UNIX Applications) rsync to "pull" files down to a Win2k3 R2 server from a Linux rsync host? I was trying to use cygwin rsync before, until I found out from cygwin that the cygwin port of rsync was "flakey" and would fail intermittently for no apparent reason. cygwin suggested I use SUA or SFU for rsync services.
I've looked for / am looking for any experience using SUA rsync to copy files down from a Linux rsync host to the Windows host via rsync on the Windows host. Also, if you have done this successfully, do you have any pointers/caveats you can share on how you got it working? What I am basically looking to do is copy files and subdirectories of files from a Linux host, using rsync, to some static location on a Windows server on a scheduled basis, so that I can back up the Windows server to tape using Symantec's Backup Exec application.
I'm doing it this way to avoid deploying the Remote Agents for either Linux or Windows on the target hosts. As an alternative, I've seen reference to a product called DeltaCopy that uses a native Windows rsync port with the native Linux port of rsync to do what I need as well. I realize this is not a strictly Linux question but more of a hybrid, as I'm moving data to and from Windows and Linux hosts. So, if this is too Windows-y a question, please say so and I'll withdraw it.
I have installed an NFS server on my Red Hat machine. When I want to mount the shared data from the client (SUSE) machine, the following error occurs: "mount.nfs: mount to NFS server '10.3.31.146:/home/usbtest' failed: System Error: No route to host". Both machines ping each other successfully.
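When ping succeeds but mount.nfs reports "No route to host", the usual suspect is a firewall on the server blocking the NFS ports. A quick test, sketched for an iptables-based Red Hat setup (mountd may need extra ports pinned as well):

    # on the NFS server: stop the firewall briefly to confirm the diagnosis
    service iptables stop
    # if the mount now works, restart it and open the standard NFS ports instead
    service iptables start
    iptables -I INPUT -p tcp --dport 111 -j ACCEPT
    iptables -I INPUT -p udp --dport 111 -j ACCEPT
    iptables -I INPUT -p tcp --dport 2049 -j ACCEPT
    iptables -I INPUT -p udp --dport 2049 -j ACCEPT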
I set up the following script to run each night (12 am) as a cron job on the main server:
if mount | grep -q '/home'; then rsync -ranv --delete /home/ 138.73.56.12:/home;
[code]....
The main server is running on a Dell PowerEdge 2600. The script rsyncs to a virtualised duplicate running on an HP DL380. When I set this script up and began running it, data started going missing from the main server: if new files had been created by staff, they would go missing, and if data was added to existing files prior to activating the script, the new changes made to those files would be lost. I just can't understand why this happened. As soon as I turned the script off, after a few days it was all back to normal, but the data that had gone missing was still gone.
I just wanted to know if this could be a disk read/write issue. Was the script running too soon, not allowing data to be written before it was backed up? Could it be memory? I just don't know. Another development occurred after a few days of all this: one of the hard disks in the main server started misbehaving and flashing amber (attention).
I am looking to update the data on the new server from the old one. I am not that familiar with rsync, as I am normally a Windows admin. I used rsync one folder at a time from the old Ubuntu server to the new Ubuntu server. Is there a method by which I can use rsync to sync data from multiple folders over ssh, say overnight? Also, on my first data sync I had some permission errors when moving some files from users' home directories. As the root user, how may I move these files? I am not trying to view the files, just migrate them to the new server.
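A sketch covering both parts, with the host and folder names as placeholders: run rsync as root with -a so ownership and permissions survive the move, use -R to keep full paths while listing several folders in one command, and drive it from root's crontab for the overnight run.

    # one rsync, several source folders (run as root on the old server)
    rsync -aR -e ssh /home /var/www root@newserver:/
    # root crontab entry: repeat the sync at 01:00 every night
    0 1 * * * rsync -aR -e ssh /home /var/www root@newserver:/ >> /var/log/migrate.log 2>&1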
I'm using rsync in a backup script at the moment and want to keep all files. The files are always unique, so I want to rsync without deleting any file on the destination.
I've tried --no-delete and --max-delete=0, but nothing seems to work. Is there even a possibility to do this?
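Worth noting, hedged only because the script isn't quoted: rsync never deletes destination files unless some --delete option is passed, so the plain form already keeps everything; --ignore-existing additionally stops files already at the destination from being overwritten.

    # copies new and changed files, never removes anything at the destination
    rsync -av /source/ /backup/
    # never even touch files that already exist at the destination
    rsync -av --ignore-existing /source/ /backup/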
Don't ask me why, but I need to back up a website with its complete structure to a Windows machine (so no tar/gzip, just an identical copy). I'm experienced with rsync, so I thought to do it that way. However, in the process I'm bound to lose my ownership/permission settings for each file, and that will give problems when placing certain files back. Is there a way to either:
1. save those settings on a Windows machine?
2. have an easy way to save the file tree with the relevant information, plus a shell script to attach the info back when uploading the files again?
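For option 2, a minimal sketch using the standard Linux ACL tools, run from the website's root directory (the dump file path is a placeholder):

    # before copying: record owner, group and permission data for the whole tree
    getfacl -R . > /tmp/website-permissions.acl
    # after the files come back from the Windows copy: reapply everything
    setfacl --restore=/tmp/website-permissions.acl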