I'm running rsync and writing the log to a file using the --log-file option. I am also using the --link-dest option, which in turn adds a hard-link creation line to the log file for EVERY file. I want to log ONLY the actual file transfers (if any) that take place. From the 'log format' section of the rsyncd man page here I assume you can do this, I just can't make sense of the format codes.
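One hedged workaround that sidesteps the format codes entirely (paths here are assumptions): emit the itemize string per file and filter it, since received files start with '>f' while --link-dest hard links show up as 'hf' lines.
Code:
# Keep only real file transfers in the log; drop the hard-link lines.
rsync -a --link-dest=/backups/prev --out-format='%i %n' /src/ /backups/new/ \
    | grep '^>f' >> /var/log/rsync-transfers.log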
When I run rsync --recursive --times --perms --links --delete --exclude-from='Documents/exclude.txt' ./ /media/myusb/
where Documents/exclude.txt is
- /Downloads/
- /Desktop/books/
the files in those directories are still copied onto my USB.
And...
I used fetchmail to download all my gmail emails. When I run rsync -ar --exclude-from='/home/xtheunknown0/Documents/exclude.txt' ./ /media/myusb/ I get the first image at url.
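In both cases, a dry run shows whether the exclude file is actually matching before anything lands on the USB stick (a sketch using the first command's options; each pattern must be on its own line in exclude.txt):
Code:
# --dry-run with -v lists what would be copied without writing anything.
rsync -rtpl --delete --exclude-from='Documents/exclude.txt' --dry-run -v ./ /media/myusb/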
I am using rsync for incremental backups. I am backing up to a second hard drive on my computer. When I check the individual backup directories (backup.0 through backup.4) with du -hs they each show 12G; when I check the parent directory squeeze it shows 15G. Over 4 backups I have added 3G. I haven't made many changes to the directories I'm backing up, and I am using hard links. I have included some info below.
Quote:
Backup script:
#!/bin/bash
mount /mnt/backup
cd /mnt/backup/squeeze/
rm -rf backup.7
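Those numbers are most likely consistent rather than a sign of growth: du counts each hard-linked inode only once per invocation, so any single snapshot shows its full apparent size while the parent shows the true on-disk total. A quick way to see it:
Code:
du -hs backup.0                # ~12G: one snapshot's full apparent size
du -hs backup.0 backup.1       # only slightly more: shared inodes counted once
du -hs /mnt/backup/squeeze     # 15G: actual disk usage of all snapshots together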
Thought I'd post it here because it's more server related than desktop... I have a script that does:
[Code]....
This is used to sync my local development snapshot with the live web server. There has to be a more compact way of doing this. Can I combine some of the rsyncs? Can I make rsync set or keep the user and group affiliations? Can I exclude .* yet include .htaccess?
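On the last two points, a sketch (source and destination paths are assumptions): -a already preserves owner and group (given root on the receiving side), and because rsync uses the first matching filter rule, re-including .htaccess before excluding the rest of .* does what you want.
Code:
# One rsync instead of several; include/exclude order matters.
rsync -av --include='.htaccess' --exclude='.*' \
    ~/dev-site/ user@liveserver:/var/www/site/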
CentOS 5.2 64bit 2.6.18-92.el5xen. I use rsync with --link-dest for nightly backups and it works well. I was recently asked to start weekly backups to an external drive for off-site storage. The regular syncing works, but the hard linking seems to be ignored, so the backup takes a long time with no space-saving advantage. Here is an example of the command being run:
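For comparison, the pattern that does produce hard links on an external drive (the paths below are assumptions, not the original command): --link-dest must point at a directory on the same filesystem as the destination, since hard links cannot cross filesystems, and a relative --link-dest path is resolved against the destination directory, not the current one.
Code:
# Both the previous snapshot and the new one must live on the external drive.
rsync -a --delete --link-dest=/mnt/external/backup.1 /data/ /mnt/external/backup.0/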
We run Ubuntu 10.04 Server for our solutions, but I'm having a bizarre problem with init.d boot scripts. I have a script for the Sangoma wanpipe drivers that I modified to add the LSB information so that "update-rc.d wanrouter defaults" runs correctly, and the symbolic links from rcN.d to init.d get created. However, when I reboot the system, all the rcN.d links have disappeared and wanrouter isn't automatically started! I've never seen this kind of behaviour from a Unix-based system in 20+ years, so I'm baffled as to how to fix the problem.
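For reference, a minimal LSB header of the kind update-rc.d expects (the dependency values here are assumptions; a malformed header can cause the links to be dropped when the boot ordering is recalculated):
Code:
### BEGIN INIT INFO
# Provides:          wanrouter
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start the Sangoma wanpipe router
### END INIT INFO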
I know how to make symbolic links on the same or between two different partitions. But is it possible to make symbolic links between two different servers, that are on the same lan?
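Not directly: a symlink is resolved by the local kernel, so it can't point at another machine by itself. The usual approach is to mount the remote filesystem first and then link into the mount point. A sketch with sshfs (hostname and paths are assumptions):
Code:
# Mount the remote directory locally, then symlink into the mount.
sshfs user@otherserver:/srv/shared /mnt/otherserver
ln -s /mnt/otherserver/data /home/user/data
# An NFS export would work the same way:
# mount -t nfs otherserver:/srv/shared /mnt/otherserver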
I am trying to setup a virtual machine server as a web development environment. Install and setup is going correct. To avoid any accidents I have the apache alias set to www.example.dev instead of www.example.com. The URL will redirect no problem but I need to find a way to have every instance of a link (example.com) show up as (example.dev) so that whole site will function on the server without linking to the live external site. I'm using git as a version control system that will push certain commits to my live site and thus want to avoid changing any configuration files to get this desired effect on my virtual machine. How to do this server side, maybe via PHP, apache2.
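One server-side option is Apache's mod_substitute, which rewrites the response body on the way out, so nothing changes on disk or in git. A sketch (placement inside the .dev vhost is an assumption):
Code:
# a2enmod substitute && service apache2 reload
<Location />
    AddOutputFilterByType SUBSTITUTE text/html
    # rewrite every literal example.com to example.dev in outgoing HTML
    # (i = case-insensitive, n = treat pattern as a fixed string)
    Substitute "s|example.com|example.dev|in"
</Location>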
I set up an rsync job in cron to sync a master server with a backup server; however, not everything gets copied over or removed. I believe this to be a permissions issue on the backup server. I am using a user we created specifically for this, and the rsync is set up to preserve permissions and delete files that have been deleted on the master.
Of course the IP address, user and port number are something other than above. Is there a way to give the user that initiates the sync superuser rights like root? That way everything on the backup server can be updated as necessary. I don't want to use root; I'd like it to be a name I can keep relatively confidential.
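One common pattern (user name, host and port below are assumptions): keep the unprivileged login, but let it run only rsync as root on the receiver via sudo.
Code:
# On the backup server, a sudoers entry restricted to rsync:
#   backupuser ALL=(root) NOPASSWD: /usr/bin/rsync
# From the master, tell rsync to invoke its remote half through sudo:
rsync -a --delete -e 'ssh -p 2222' --rsync-path='sudo rsync' \
    /srv/data/ backupuser@backupserver:/srv/data/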
At my uni, we use a web-based login for our internet connections. It's based on Cisco, and every Wednesday night every computer on campus must re-enter its credentials to use the network.
Normally, on my several computers, I simply pull up the terminal and point links at google.com using
Code:
And enter my credentials when Cisco redirects to the login page.
Literally, the process is
Code:
Then ENTER to accept the redirect, down arrow to skip over the logo image, USERNAME, ENTER, PASSWORD, ENTER, ENTER.
Naturally, this is EXTREMELY time-consuming, as I have about 5 computers located around campus and must physically walk to each machine and log in every single week.
My question is: how would I formulate a program that does the following:
1) checks for connectivity (i.e. is able to reach/resolve to the greater part of the internet) and
2) automatically fills in the credentials on the links login page?
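A sketch of both steps in one cron-able script (the portal URL and form field names are assumptions; the real ones are visible in the login page's HTML source):
Code:
#!/bin/bash
# 1) connectivity check: can we fetch a known page unmodified?
if curl -s --max-time 5 http://detectportal.firefox.com/success.txt | grep -q success; then
    exit 0      # already online, nothing to do
fi
# 2) not online: POST the credentials the portal's login form expects
curl -s -d 'username=MYUSER' -d 'password=MYPASS' \
    'https://auth.campus.example/login.html' > /dev/null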
As far as I can tell, the server guides only explain a bit about what dynamic routing is, but not how to implement it.
My situation is this:
We require a server with 3 interfaces. One local, one to a vsat link and the other to a fibre link. The fibre will be the default route for Internet traffic but we want dynamic routing to automatically switch to the vsat link when the fibre link goes down (which happens fairly often in Zimbabwe!) and then switch back to the fibre link when it comes back up again.
The first option would be to handle dynamic routing on a Cisco router, but at the prices of Cisco devices here, it's not the most affordable option.
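Quagga (OSPF/BGP) is the textbook Linux answer, but for a plain two-uplink default-route failover a watchdog script is often all that's needed. A sketch, with made-up gateway addresses:
Code:
#!/bin/bash
FIBRE_GW=192.0.2.1      # assumption: fibre next hop
VSAT_GW=198.51.100.1    # assumption: vsat next hop
while sleep 30; do
    # pinging a far-side address instead of the gateway would also
    # catch upstream failures, not just dead links
    if ping -c 3 -W 2 "$FIBRE_GW" > /dev/null; then
        ip route replace default via "$FIBRE_GW"    # fibre up: prefer it
    else
        ip route replace default via "$VSAT_GW"     # fibre down: fail over
    fi
done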
I'm setting up an ftp server with lucid server. A lot of the folders that should be accessible via the ftp are in different directories (and can't be moved without a LOT of hassle) and I have to either symlink or mount-bind them to the ftp chroot dir. Now I'm wondering which one is the safer variant? My guess is mount bind, but I'm not that familiar with the internal workings of linux and vsftpd (plus, for symlinks I wouldn't have to change/create any scripts, just create them once).
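For what it's worth, a symlink whose target lies outside the chroot is unreachable from inside it anyway, so bind mounts are usually the only variant that works. A sketch with assumed paths:
Code:
# Make an outside directory appear inside the FTP chroot.
mount --bind /srv/projects /home/ftp/projects
# Persist it across reboots via /etc/fstab:
# /srv/projects  /home/ftp/projects  none  bind  0  0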
I have an 8.04 LTS server into which I have installed a new 1TB drive. The server is running great, but I am a bit confused regarding the ln -s command and drive mounting. I have backuppc installed on the server and I am running out of storage space. To resolve this I moved the cpool and pool directories to the 1TB drive and typed, from within the /var/lib/backuppc directory, ln -s cpool /store/1TB/cpool. This created the symbolic link to the new drive and everything works fine. I then rebooted the server and everything is running fine, but the drive does not show up in the df -h command; however, the directories appear to be mounted fine.
I thought the drive would not be mounted automatically until it was defined in fstab. Does the ln -s command force the system to automatically mount the directories but not the volume? This behaviour caused me to delete my backup data because I was sure the disk was not mounted, but it was!
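It doesn't: ln -s only records a path, it never mounts anything. If /store/1TB is absent from df -h, the link target is most likely just a directory sitting on the root filesystem. Two quick checks:
Code:
df -h /store/1TB        # which filesystem actually backs this path?
findmnt /store/1TB      # empty output means nothing is mounted there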
I have an Actiontec GT724WGR and I am having problems with my Ubuntu server. I set up a subdomain on freedns.afraid.org with my main computer's external IP. However, whenever I use the link that was made, it goes to my router configuration page instead of to my server. I have already set up a static IP for my server, enabled DMZ hosting, and under port forwarding applied every single rule that applies to servers.
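This is the classic symptom of a router without NAT loopback: from inside the LAN, the external IP lands on the router itself, while the port forward may already work fine from outside. A common workaround for testing from inside the network (hostname and LAN IP are assumptions) is a hosts entry on the client:
Code:
# /etc/hosts on the LAN client: point the dynamic DNS name at the
# server's internal address instead of the external one.
192.168.0.10    mysubdomain.example.afraid.org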
I am running a headless Ubuntu server accessed through Webmin. The server is running the 10.04.2 64-bit version. I have a number of cron jobs, including a simple back-up job: Code: rsync -av /media/server/ /media/backup/backup/ All of the other jobs run fine, but for some reason this job, which is scheduled to run each day at midnight, does not run. If I SSH into the server and run the job manually it works fine.
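Cron runs jobs with a minimal environment (notably a bare PATH), which is the usual reason a job works interactively but dies at midnight. A hedged first step: use an absolute rsync path and capture the job's output so any error is visible the next morning.
Code:
# crontab entry with absolute path and logged stdout/stderr:
0 0 * * * /usr/bin/rsync -av /media/server/ /media/backup/backup/ >> /var/log/backup-rsync.log 2>&1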
I have a personal wiki of notes, with now thousands of links in markdown format:
[link text](http://example.com)
but now that fckeditor is available for mediawiki (very beta), it has become much better to just stick with wikitext format. There are only a few conversions to do: tables, links, and bulleted lists. The lists are a fairly simple regex and fckeditor magically reformats the tables, so all I'm left with is the links. But I'm not a regex master. How do I reformat code...
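For the link conversion itself, a sketch (the file name is an assumption; MediaWiki external links are written [url text]):
Code:
# [link text](http://example.com)  ->  [http://example.com link text]
sed -i -E 's/\[([^]]+)\]\(([^)]+)\)/[\2 \1]/g' notes.txt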
I am trying to make my Apache server show symbolic links in a directory listing, but have so far been unsuccessful. In my latest attempt, I have placed the following code in .htaccess, in the directory with the symlinks that I want listing:
Code:
<Directory />
    Options All
</Directory>
In httpd-vhosts.conf, I have also placed the following code within the relevant <VirtualHost></VirtualHost>:
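Two hedged notes: <Directory> blocks are not valid inside .htaccess (the Options line goes there bare), and a .htaccess Options line is ignored unless the vhost grants AllowOverride Options. A sketch, with the path assumed:
Code:
# In .htaccess, just the bare directive:
#   Options +Indexes +FollowSymLinks
# In the vhost:
<Directory "/var/www/site">
    AllowOverride Options
</Directory>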
We are looking to centralize some code in a scalable environment, so some servers may be doing other jobs, but the key element is that when the server is booted, I want it to rsync some config files, scripts, etc. local to the box. I am having an issue, as things are a bit different from an RHEL server.
I have a test entry in rc.local, a simple touch testfile, and after that a restore.sh file. I reboot the box, wait a minute, then log in. I see the testfile timestamp updated, yet the rsync doesn't fire. I can then run it manually and it works fine, but I need to have this happen on boot.
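Two usual suspects on Ubuntu: rc.local runs before the network is fully usable, and it runs with a bare environment (no HOME, minimal PATH), which can break ssh-keyed rsync. A hedged rewrite of the rc.local body that waits for the network and logs the failure (host name and script path are assumptions):
Code:
#!/bin/sh -e
touch /tmp/testfile
{
    # wait until the config server answers before trying to sync
    until ping -c 1 -W 2 configserver > /dev/null 2>&1; do sleep 2; done
    /usr/local/sbin/restore.sh
} >> /var/log/rc.local-sync.log 2>&1
exit 0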
I have a Linux host acting as an iSCSI server for a Windows box. I want to keep an off-site backup, so I figure rsync will keep the iSCSI server synced with an off-site Linux host. I understand that rsync does block-level incremental transfer to conserve bandwidth. OK, awesome. The trick is that I also want an archival copy kept. Say I want to go back to a revision of a file from 10 days ago; I need to be able to do that.
I was planning on using Backup Exec, since we currently have a licensed copy: throw the archives from Backup Exec onto the iSCSI server as well, and have it keep a rotating 30-day backup, or something like that. The issue I see here is that this will be creating and deleting files as it does its daily backup rotation. I'm guessing rsync will see these as new files and likely retransmit everything on a daily basis. The question then becomes: is this assumption correct, or will it still know to do a block-level incremental transfer even when file names and such are changing?
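The assumption is essentially correct: rsync's delta algorithm matches on file path, so a renamed or rotated archive counts as a brand-new file. A hedged partial remedy is --fuzzy, which lets the receiver pick a similarly named file in the same directory as the delta basis:
Code:
# -y / --fuzzy: use a similar existing file as the basis when the exact
# name is missing (helps with rotated archive names); --partial keeps
# interrupted transfers for resumption.
rsync -a --fuzzy --partial /backups/ offsite:/backups/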
I support a small business which has an Ubuntu server running as a file server. The server is running Ubuntu 10.4. There is one hard drive which is mounted as /media/hdd. Each night this is backed up to an external USB hard drive mounted as /media/backup. The backup is carried out using the command:
Code: rsync -av /media/hdd/ /media/backup/
Is there a way to encrypt this back-up so that if the USB hard drive is plugged into another machine it cannot be read?
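One approach is to encrypt the drive itself with LUKS, so the nightly rsync stays exactly the same and the disk is unreadable on another machine without the passphrase. A sketch (the device name is an assumption, and luksFormat DESTROYS existing data):
Code:
cryptsetup luksFormat /dev/sdb1          # one-time: encrypt the USB partition
cryptsetup luksOpen /dev/sdb1 backup     # unlock (prompts for the passphrase)
mkfs.ext4 /dev/mapper/backup             # one-time: create a filesystem
mount /dev/mapper/backup /media/backup
rsync -av /media/hdd/ /media/backup/     # the nightly job, unchanged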
I have two servers. #1 is the main server and #2 is only used in case the first one is down. Both of them are on the same LAN. Both are Debian Lenny configured the same way. For instance, if #1 has a problem, I simply disconnect it, set #1's IPs on #2 and I have my system ready. The only matter here is that there are plenty of files that need to be synchronized between #1 and #2. I thought that rsync was the answer to this problem.
I wanted to create a bash script that runs every day (with cron) and syncs the files I need (.conf and other data). I used ssh-keygen to generate a pair of keys in order to log in over SSH without a password. The problem is that PermitRootLogin is set to no in my sshd_config on both servers, so I can't log in directly as root by ssh. But I need to log in as root to be able to rsync the files between the servers, because some of them are .conf files and aren't accessible to unprivileged users (only root).
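On Lenny's OpenSSH, a middle ground is to allow root in with keys only, which keeps password logins for root disabled but lets the cron job through (the key path and file set below are assumptions):
Code:
# /etc/ssh/sshd_config on both servers, then /etc/init.d/ssh reload:
#   PermitRootLogin without-password
# cron job on #1, using the key pair generated with ssh-keygen:
rsync -a -e 'ssh -i /root/.ssh/sync_key' /etc/apache2/ root@server2:/etc/apache2/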
I'm using FC10 and I want to create a symlink to my movies directory in my home folder:
This is what I did: in /var/www/html I ran ln -s /home/username/movies movies
Then in /etc/httpd/conf/httpd.conf:
DocumentRoot "/var/www/html"
<Directory />
    Options FollowSymLinks
    AllowOverride None
</Directory>
<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
<Directory "/home/username/movies">
    Options Indexes FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>
Restart apache and then the test page is working.
The directory /home/username/movies has the following permissions: drwxrwxrwx 2 apache apache 4096 2009-03-05 23:43 movies. When trying to access my webpage at localhost/movies I get the 403 Forbidden error. OK then, entering sudo -u apache ls /var/www/html works (it lists movies), but sudo -u apache ls /var/www/html/movies returns a permission-denied error, as does sudo -u apache ls /home/username/movies. Is the apache user chrooted by default? SELinux is in permissive mode. What can I do?
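The usual culprit: the movies directory itself is wide open, but apache cannot traverse /home/username on the way there, and every path component needs execute permission for the accessing user. Two hedged checks/fixes:
Code:
namei -m /home/username/movies   # show the mode of every path component
chmod o+x /home/username         # grant traversal through the home directory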
I'm going to make a nightly backup copy from one server to another using rsync. If I have a sufficiently large file, say 4+ GB or so, I'm not interested in copying the whole file if only a small change has been made. Can rsync detect small changes at block level and back up only those if needed?
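Yes, that is rsync's delta-transfer algorithm, though note it is only on by default over a network. For big single files, --inplace also avoids rebuilding the whole destination copy. A sketch (host and paths assumed):
Code:
# Over the network the delta algorithm is on by default; --inplace updates
# changed blocks in the destination file instead of writing a new copy.
rsync -a --inplace --partial /srv/bigfiles/ backuphost:/srv/bigfiles/
# Between two local paths rsync copies whole files unless told otherwise:
# rsync -a --no-whole-file /srv/bigfiles/ /mnt/backup/bigfiles/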
I have several copies of a file set with different organizational structures, but the same files. I.e., on client1 files can be found in ~/foo/bar/file1, ~/foo/bar/file2, ~/foo/tavern/file3, ~/foo/tavern/file4; on client2 files can be found in ~/foo/bar/file1, ~/foo/bar/file2, ~/foo/tavern/file5, ~/foo/tavern/file6; on client3 files can be found in ~/file1, ~/file3, ~/file5, ~/file7.
I have access to one client and the server where I'd like all the files to be synced. I'm not worried about conflicts, just about having a complete copy of all files[1-7]. Is there a way to make rsync drop the directory structure, so that I get something like:
client1% rsync files server:backup
client2% rsync files server:backup
etc., where at the destination all files are checked against the destination set regardless of the source directory structure?
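A hedged sketch: --files-from normally implies --relative (which preserves paths), and turning that off makes rsync drop the source directory structure, so everything lands flat in the destination:
Code:
# Flatten each client's tree into server:backup/ (names collide by design).
cd ~
find . -type f > /tmp/filelist
rsync -a --no-relative --files-from=/tmp/filelist . server:backup/
# If your rsync build won't combine those options, the slow-but-sure
# fallback is: find . -type f -exec rsync -a {} server:backup/ \;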
I'm reading a lot on how to rsync to an FTP server, but none of the guides tell me how to do it on servers that use normal authentication. For example, I want to keep /var/www in sync with a folder on an FTP server called /cdn/. I'd like to see all files and folders in sync, not just a compressed file etc.
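rsync itself cannot speak FTP; it needs a shell or an rsync daemon on the far side. Two hedged workarounds (host and credentials are assumptions):
Code:
# 1) Mount the FTP account as a local filesystem, then rsync normally:
curlftpfs ftp://user:password@ftphost/cdn/ /mnt/cdn
rsync -rv --delete /var/www/ /mnt/cdn/
# 2) Or skip rsync and let lftp mirror the tree over plain FTP:
#    lftp -u user,password -e 'mirror -R --delete /var/www /cdn; quit' ftphost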
I've 15 web servers (in a private network) running RHEL and Apache. I need to sync web files between them. Each server is accessible to the others via public key (with passphrases).
1) The main server is web1 (where devs upload files initially). So I can make all other servers accessible from web1 without a password/passphrase and run rsync periodically to update files between them. But security is an issue here, as all servers would become easily accessible.
2) Run an rsync daemon on all the other servers (except web1) on a designated port and run the rsync command from web1 to sync files. This would do the work, but running a daemon on all servers might increase overhead, and making sure the daemon is running all the time etc. are my concerns with this implementation. A daemon config sketch follows below.
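For option 2 the daemon overhead is small; a minimal per-server sketch (module name, path, uid and web1's address are assumptions):
Code:
# /etc/rsyncd.conf on each web server
port = 8730
[webfiles]
    path = /var/www
    read only = false
    uid = apache
    gid = apache
    hosts allow = 10.0.0.1    # web1 only
# push from web1:
#   rsync -a --port=8730 /var/www/ web2::webfiles/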
I have an Ubuntu server and 4 Windows clients; I use PuTTY or Webmin. I would like some folders, for example "My houses", to be backed up every night to the Ubuntu server. Can somebody give me an easy way of doing this with rsync and smb or cifs?
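One easy shape for this (hostname, share, credentials and paths are assumptions): mount the Windows share on the server over CIFS and let cron pull it nightly.
Code:
# mount the Windows share (put it in /etc/fstab or a script for reboots)
mount -t cifs //winpc/Users -o username=winuser,password=secret /mnt/winpc
# nightly crontab entry at 01:00:
#   0 1 * * * rsync -av "/mnt/winpc/My houses/" /srv/backup/myhouses/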
I've just noticed a small problem with my company file server. When making backups to an external NTFS drive weekly, I have noticed that file names with Thai characters are not getting backed up. I receive the error below:
rsync: recv_generator: failed to stat (to the file name...) Invalid or incomplete multibyte or wide character (84)
There are thousands of files on the server that contain Thai characters in their names, so how do I get around this problem so it will back up all files and not just the ones with English characters? I read somewhere that each file would need to be converted to a different character set, but this would take years as there are so many files.
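No renaming should be needed. That error usually means the NTFS volume was mounted without UTF-8 support, so the Thai names simply cannot be created on it. Two hedged fixes (device, mount point and source encoding are assumptions):
Code:
# 1) remount the drive with a UTF-8 aware ntfs-3g locale:
umount /mnt/backup
mount -t ntfs-3g -o locale=en_US.UTF-8 /dev/sdb1 /mnt/backup
# 2) if the server-side names are in a legacy Thai encoding instead,
#    rsync 3.x can transcode them in flight:
#    rsync -av --iconv=tis-620,utf-8 /srv/files/ /mnt/backup/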
I have setup Rsync as a daemon on a Ubuntu 10.04 box and the setup was successful. Here are my configs
Code:
root@hurricane:~# nano /etc/default/rsync
# defaults file for rsync daemon mode
# start rsync in daemon mode from init.d script?
# only allowed values are "true", "false", and "inetd"