Ubuntu Servers :: Best Way To Remotely Back Up A Mounted Server Disk?
Dec 20, 2010
Is it possible to back up a whole disk on a remote server while it is mounted? If so, how?
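One possible approach (a rough sketch, not a definitive answer) is to stream a raw image of the remote disk over SSH with dd. Note that imaging a disk whose filesystems are mounted read-write can produce an inconsistent copy, so it is safer to remount read-only or work from a snapshot first. The hostname, device and paths below are placeholders:
Code:
# pull a compressed raw image of the remote disk down to the local machine
ssh root@remote.example.com "dd if=/dev/sda bs=4M | gzip -c" > /backups/remote-sda.img.gz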
View 2 RepliesI'm looking for a free backup solution how work in client-server in both environments Linux(server) and Windows(client). in my case, i want to give a disk space quota in my Linux server for each remote windows client.
View 2 Replies View Related
I already have an Ubuntu backup server at my location and need this one server to be backed up remotely in another state. The other location is a helpdesk, so there's a danger that they could gain access to confidential data. I'll be setting up this new machine as an FTP server but need to restrict the FTP folder so that only the backup server and I can access it. Because it's remote on the helpdesk side, they'll need some access to the file system but must be completely blocked from the FTP folder where all the data lives. How can I keep them away from my data and still be able to retrieve or copy files over without permission issues between the two servers?
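One way to approach this (a sketch only; paths, account and group names are placeholders, and vsftpd is assumed as the FTP daemon) is to rely on plain filesystem permissions so that only a dedicated group can enter the backup tree, and to confine FTP logins to that tree:
Code:
sudo groupadd backupftp
sudo usermod -aG backupftp backupuser      # the account the backup server logs in with
sudo chown root:backupftp /srv/ftp/backups
sudo chmod 770 /srv/ftp/backups            # helpdesk accounts get no access at all
# in /etc/vsftpd.conf, keep FTP users inside the backup tree:
#   chroot_local_user=YES
#   local_root=/srv/ftp/backups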
View 9 Replies View Related
I'm running a cron job every night to dump a MySQL database to an external hard drive. It works, but when I check it the following morning the external drive is no longer mounted and the XFS log is corrupted. If I run
Code:
xfs_repair -L /dev/sdf1
It works, but then I get these issues:
Code:
XFS: Filesystem sdf1 has duplicate UUID - can't mount
I can reset the UUID, but it's difficult to have to do this every day.
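If the immediate blocker is the duplicate-UUID refusal, the UUID can be regenerated in one command rather than reset by hand, or the check can be bypassed for a single mount (a hedged suggestion; the device name is taken from the post, and the filesystem must be unmounted when xfs_admin runs):
Code:
# give the repaired filesystem a fresh UUID so it will mount again
sudo xfs_admin -U generate /dev/sdf1
# or force a one-off mount without the UUID check
sudo mount -o nouuid /dev/sdf1 /mnt/external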
I'm writing a script to rsync some directories to external hdd for backup.
My external hdd gets automatically mounted to /media/backup1
My script then backs up predefined directories to /media/backup1.
I have added this script to cron to run once every day.
The problem is that when the drive is not plugged in and the script runs, it backs up to my local hard drive instead; since that drive is already more than 70% full, duplicating that 70% onto itself fills it up.
I have taken the script further, to test whether /media/backup1 is mounted. If it is, the backup will run. If it is not, it will bail out.
I'm using the mountpoint program to test for mounts.
My script so far:
Code:
#!/bin/bash
# mountpoint -q is silent and reports via its exit status, which is what the test needs
if mountpoint -q /media/backup1; then
    echo "filesystem mounted"
    # The backup function (rsync calls) goes here. Commented out for testing.
else
    echo "/media/backup1 is not mounted; bailing out" >&2
    exit 1
fi
Is it possible with dd for the output file to be stored at a remote location?
I do not have free space on the LVM partition whose backup I want to take via dd.
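Since dd writes to stdout when no output file is given, the image can be piped straight to another host over SSH instead of being stored locally (a sketch; the volume path, user and host are placeholders):
Code:
sudo dd if=/dev/vg0/somelv bs=4M | gzip -c | ssh user@backuphost "cat > /backups/somelv.img.gz"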
I'm using Back In Time to back up my home directory to a second hdd mounted at /media/backup. The trouble is, I can do this using Back In Time (Root), but not using Back In Time without the root option. This is definitely a permissions issue, it can't write to the folder, but when I checked by right-clicking the backup directory and looking at the permissions tab, it said I was the owner.
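A quick way to see who can actually write there is to check the mount point itself, since the file manager may be showing the permissions of a subfolder rather than of /media/backup (a hedged check; the username is a placeholder):
Code:
ls -ld /media/backup
# if the mount point is owned by root, handing it to your own user may resolve it
sudo chown -R youruser:youruser /media/backup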
View 2 Replies View Related
I'm attempting to create a backup script to copy files from one file system to a remote file system.
When I try this I get:
Quote:
# tar -cf - /mnt/raid_md1 | gzip -c | ssh -i ~/.ssh/key -l user@192.168.1.1 "cat > /mnt/backup/fileserver.md1.tar.gz"
tar: Removing leading `/' from member names
Pseudo-terminal will not be allocated because stdin is not a terminal.
ssh: Could not resolve hostname cat > /mnt/backup/fileserver.md1.tar.gz: Name or service not known
[Code].....
I know that the remote file system dir is RW and the access is working fine. I am stumped...
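From the error text, ssh appears to be treating the remote command as a hostname: the -l option expects only a login name, so "user@192.168.1.1" is consumed as the login and the next word becomes the host. Dropping -l and passing user@host directly should fix it (a sketch using the same paths as the post):
Code:
tar -cf - /mnt/raid_md1 | gzip -c | ssh -i ~/.ssh/key user@192.168.1.1 "cat > /mnt/backup/fileserver.md1.tar.gz"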
Does anyone know of any decent enterprise level backup solutions for Linux? I need to backup a few servers and a bunch of desktops onto one backup server. Using rsync/tar.gz won't cut it. I need like bi-monthly full HDD backups, and things such as that, with a nice GUI interface to add/remove systems from the backup list. I need basically something similar to CommVault or Veritas. Veritas I've used before but it has its issues, such as leaving 30GB cache files. CommVault, I have no idea how much it is, and if it supports backing up to a hard drive rather than tape.
View 7 Replies View Related
I'm currently in the middle of developing an automatic system that can provision Linux VMs. Let's say I have a disk image with a Linux distro installed on it. How would I change the root password on that image without having to boot the OS? It would be nice if I could simply run passwd with some switch pointing at the /etc/shadow file on the (mounted) VM disk image.
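Two possibilities come to mind (both sketches; /mnt/vmroot and the password are placeholders, and the chroot only works if the image's architecture matches the host): run passwd inside a chroot of the mounted image, or generate a hash and splice it into the image's shadow file directly.
Code:
# option 1: chroot into the mounted image and change the password normally
sudo chroot /mnt/vmroot passwd root
# option 2: generate an MD5 crypt hash and write it into the image's /etc/shadow
NEWHASH=$(openssl passwd -1 'NewRootPassword')
sudo sed -i "s|^root:[^:]*:|root:${NEWHASH}:|" /mnt/vmroot/etc/shadow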
View 2 Replies View Related
I have two servers. One is an operational CentOS + cPanel server and the other is a blank CentOS server. I want to use cPanel's backup suite to back up our customers' accounts, but I want the blank server mounted on the cPanel server rather than using an FTP backup, if that makes sense. From previous experience I believe NFS is the way to go, but I'm not sure.
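A minimal NFS setup along those lines might look like this (a sketch; hostnames, paths and the client IP are placeholders, and no_root_squash is included because cPanel's backups run as root):
Code:
# on the blank storage server, in /etc/exports:
/srv/cpanel-backups   192.168.1.10(rw,sync,no_root_squash)
# then reload the exports and make sure the NFS service is running:
sudo exportfs -ra
sudo service nfs start

# on the cPanel server, mount it where the backup suite expects to write:
sudo mount -t nfs storage-server:/srv/cpanel-backups /backup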
View 3 Replies View Related
Here is a brief description of the hardware and software in my production environment: AMD Phenom X4 3.4GHz (overclocked to 4GHz), 8GB of memory, 1TB 7200rpm hard drive, running Ubuntu Server 10.10. The web production environment was pieced together 3 weeks ago. Here is my dilemma: I started out with fewer than 40 users and am now hitting 4,000 unique users per day. I'm thinking I need faster writes to disk and better backup of the data, so I'm considering putting together a RAID 5 array.
In preparation for this I have bought a new motherboard, an AMD Phenom X4 3.6, and 2 more 2TB 7200rpm drives (I currently have one 2TB 7200rpm drive that isn't used much). I've been digging around this forum for posts on RAID setup, but I'm still not sure how to seamlessly move the roughly 10GB of data from my current running production environment once I have RAID 5 installed on the new machine via the Ubuntu Server live CD.
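For moving that amount of data, a straightforward option once the new RAID 5 machine is installed and reachable over the network is a push with rsync over SSH (a sketch only; paths and the hostname are placeholders, and database files should only be copied with the database stopped or from a dump):
Code:
rsync -aAXH --progress /var/www/ root@new-server:/var/www/
rsync -aAXH --progress /var/lib/mysql/ root@new-server:/var/lib/mysql/   # with MySQL stopped on both ends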
This is not a regular backup. I only want to back up selected directories so personal files (photographs, documents, source code) will be kept safe in case of a total system meltdown. This'll be 15GB max. Basically the digital variant of a fire-resistant safe. I looked into duplicity, but that requires me to install gpg keys on the target machine, which I cannot do. I'd rather have a solution that relies on nothing more than a working shell account and disk space on the target server.
I thought of writing a simple script to do the following (a rough sketch appears after the requirements list below):
1. Mount remote server with sshfs
2. Mount encrypted container at remote server (LUKS, TrueCrypt?)
3. Loop over predefined directories on local machine and copy to encrypted container (rdiff-backup?)
Based on these requirements:
- Target server is "dumb": only ssh access + diskspace (i.e. no installing of gpg keys)
- Encrypted container should grow/shrink to fit contents
- Encrypted container should be easily decryptable on any OS if you have the password
- Once data leaves client server it should be encrypted: sysadmin on target server should never be able to see unencrypted data.
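A rough sketch of those three steps, with the caveats that a LUKS container file does not shrink as its contents do, that older cryptsetup versions need an explicit losetup step before formatting a plain file, and that root needs allow_root on the sshfs mount to reach it. Hostnames, paths and sizes are placeholders:
Code:
# one-time: create and format the container on the remote shell account
sshfs -o allow_root user@backuphost:/home/user /mnt/remote
truncate -s 15G /mnt/remote/backup.img      # sparse file, so it only consumes space as data is written
sudo cryptsetup luksFormat /mnt/remote/backup.img
sudo cryptsetup luksOpen /mnt/remote/backup.img backupvol
sudo mkfs.ext4 /dev/mapper/backupvol

# each run: open the container, back up the predefined directories, close everything again
sudo mount /dev/mapper/backupvol /mnt/container
rdiff-backup /home/user/Documents /mnt/container/Documents
rdiff-backup /home/user/Pictures /mnt/container/Pictures
sudo umount /mnt/container
sudo cryptsetup luksClose backupvol
fusermount -u /mnt/remote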
I have an external drive that I want to do backups to. Most times it goes great; other times the server gets really sluggish, and when I run 'df' I see I'm at 96% disk usage. What has happened is that the disk apparently failed to mount, so the backup went to my local disk at /media/backups/ instead.
I have /media/backups in my /etc/fstab pointing to /dev/sdc1, but I think the external disk will sleep when not in use for long periods.
How do I make sure /media/backups is REALLY on the external drive and not my local drive? Is there any way to test that BEFORE I write umpteen gigs to my local hard drive?
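A pre-flight check at the top of the backup script can refuse to run unless the path is a real mount point, and attempting the mount first also wakes a sleeping drive (a sketch; it assumes /media/backups has the /etc/fstab entry described in the post):
Code:
if ! mountpoint -q /media/backups; then
    mount /media/backups || { echo "external drive not mounted, aborting backup" >&2; exit 1; }
fi
df -h /media/backups    # the Filesystem column should show /dev/sdc1, not the root device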
I found this video and I really want to do the same. *newbie needs to learn [URL]... My question is: what needs to be installed, and how?
Is there any specific configuration needed to make it work?
And will it work if I want to connect from Ubuntu to Fedora?
I have installed Ubuntu 11.04 on an HP EliteBook 8540w notebook and would like to back up the entire disk using some popular backup tool.
I have searched the internet and found that the closest tool is PartImage. But the bad news is that it does not support the ext4 filesystem!
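One alternative sometimes suggested for this situation (an assumption about fit, not something from the thread) is fsarchiver, which does understand ext4; it is normally run from a live CD/USB with the target filesystem unmounted, and the device and destination below are placeholders:
Code:
sudo fsarchiver savefs /media/external/rootfs.fsa /dev/sda1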
We have some servers that run in very harsh environments (a research vessel) and need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc.), but we would like to be able to bring up a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAIDed devices (from a live CD):
Code:
dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img
then reverse the process on the new PC, finally using
Code:
mdadm --assemble
to re-create and re-build the array.
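The restore side would roughly mirror that, assuming the replacement box is booted from a live CD and its disk enumerates with the same device name (both assumptions):
Code:
# write the saved image back onto the replacement disk
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096
# then reassemble the RAID 1 array from the restored member
mdadm --assemble --scan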
This is a good way to back up my current system:
How do I back up my system so that I can restore the server in case the hard disk is damaged?
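One widely used approach for a full-system backup of a running server (a sketch, not the only way; the destination path and exclude list are assumptions to adapt) is a tar archive of the root filesystem that skips pseudo-filesystems and the archive itself:
Code:
sudo tar -cvpzf /media/backup/fullsystem.tar.gz \
    --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/run \
    --exclude=/mnt --exclude=/media --exclude=/media/backup/fullsystem.tar.gz /
# restore later (e.g. from a live CD onto a freshly formatted disk mounted at /target):
# sudo tar -xvpzf fullsystem.tar.gz -C /target --numeric-owner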
I have an issue with my Amanda backup server, which is connected to a Quantum Scalar i500 via FC. I got the error below 3 days ago.
These dumps were to tape 000289.
*** A TAPE ERROR OCCURRED: [No more writable valid tape found].
Normally I load the proper tapes and run amflush to push data from the holding disk to tape manually. This time, however, amflush did not help; Amanda immediately responded with an out-of-tape error again.
Meanwhile I got some errors from dmesg as well
st3: Error 18 (sugg. bt 0x0, driver bt 0x0, host bt 0x0).
scsi1 (0,3,0) : reservation conflict
I want to run a Postfix server as a backup MX, but does anybody know how I can collect the first server's mail with this one? This is a multipop-style setup, but how can I do it with Dovecot or any other POP collector?
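One commonly used POP collector is fetchmail (my suggestion, not from the thread); a minimal ~/.fetchmailrc along these lines would poll the primary server and hand the messages to the local Postfix for delivery. Hostname, account and password are placeholders:
Code:
poll mail.primary.example.com protocol pop3
    user "backupuser" password "secret"
    smtphost localhost
    keep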
View 1 Replies View Related
I support a small business which runs a headless Ubuntu Server (10.04, 32-bit) as a file server accessed by Windows machines. Although the company has its own backup procedure, they have decided to back up some (non-sensitive) files online. They have chosen FileFactory (http://www.filefactory.com/) as the host for this. FileFactory allows files to be uploaded to its server by FTP, but I do not know how to set this up on the server.
The idea, if it is possible, is to connect to FileFactory through FTP and then synchronise the data using an rsync command. I normally access the server through Webmin and it has vsftpd installed. I can access the company's server by FTP from inside and outside the network, so I know vsftpd is working for incoming connections, but I cannot work out how to configure it to connect to the FileFactory server.
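One thing worth noting: vsftpd only serves incoming FTP connections, so uploading to FileFactory needs an FTP client rather than any vsftpd configuration, and rsync itself cannot talk to a plain FTP server. A client such as lftp can give similar one-way synchronisation with its mirror mode (a hedged sketch; credentials, host and paths are placeholders):
Code:
lftp -u myuser,mypassword ftp.filefactory.com -e "mirror -R --only-newer /srv/share/public /uploads; quit"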
I've got a web server running great, but I need to back it up. For some reason it doesn't detect either of my USB external HDs. How can I make a tar archive (or something similar), burn it to DVD, and then be able to restore from that disc?
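If DVDs are the only available medium, one option is to build an archive and split it into DVD-sized pieces (a sketch; the directory list and sizes are assumptions to adapt, and each piece still has to be burned with your preferred tool, e.g. growisofs or Brasero):
Code:
sudo tar -czpf /tmp/webserver-backup.tar.gz /var/www /etc /home
split -b 4300m /tmp/webserver-backup.tar.gz /tmp/webserver-backup.part-
# to restore later, copy the parts back off the DVDs and then:
# cat webserver-backup.part-* | sudo tar -xzpf - -C /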
View 6 Replies View RelatedI am currently working on managing multiple linux servers in remote locations, servers particularly user for web hosting. I need to backup data to a backup server but rsync which i currently using doesnt helps is there any tool to backup every server with out modifying it bcos there are hundreds of servers so installing a tool in every server is time consuming process.
View 7 Replies View Related
I administer several web hosting (combined with mail relays and other services) production servers under Debian GNU/Linux. I began offering these public services two years ago with three boxes: the first is a gateway attached to a DSL modem, which controls traffic via iptables between a public subnet (the DMZ) and a local network connecting several workstations. In the DMZ subnet I maintain two Pentium III era boxes; they've grown in services since I set them up. Actually, I think I should buy new ones, but, you know, I want to save money and lengthen their life.
So they've grown in data hosted, but I've never implemented a resilient backup system. I've set up some rsync tasks scheduled via cron jobs to copy the entire UNIX file system on each of the DMZ boxes, but I'd like to be prepared before an unexpected "real" crash of some HDD, I mean, a problem that renders a disk unusable.
AFAIK, sysadmins sync entire HD backups that are capable of recovering a system by swapping the unusable unit for the backup unit. Maybe the best approach is to implement a RAID mirror of the unit, am I right? So, keeping my systems as they are, I mean, capable of using 4 parallel ATA units, what would you do? Use dump, rsync or some other method to keep an operational second unit with an exact copy on a bootable second drive, ready to be swapped in quickly if the main unit fails?
What comes to mind is to partition a second unit (making it bootable) and back up daily via rsync only those parts of the UNIX file system hierarchy that are necessary to boot the system properly. What do you think of this workaround?
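A daily rsync onto an already-partitioned spare drive is a workable pattern; the main catches are excluding pseudo-filesystems and remembering that the spare also needs its own bootloader and a matching /etc/fstab. A minimal sketch, assuming the spare is mounted at /mnt/spare and its device name is as shown:
Code:
rsync -aAXH --delete \
    --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/mnt --exclude=/media \
    / /mnt/spare/
# make the spare bootable (device name is an assumption):
grub-install --root-directory=/mnt/spare /dev/sdb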
Does anyone know the best and simplest way to do this? I'd like the NFS share to be mounted over the SSH tunnel at boot, with as little scripting as possible, and to be as secure as possible without exposing more than one port to the outside. I will be trying this method: [URL]... Once the tunnel is established and 'always on', NFS would take care of the file system mount, obviously. A lot of the information I have been reading seems out of date. Does anyone have any experience with this?
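The core of most NFS-over-SSH setups is just two commands: a persistent port forward and a mount that points at the local end of the tunnel. A minimal sketch, assuming NFSv4 (which only needs port 2049) and placeholder names:
Code:
# forward a local port to the NFS server's port 2049, in the background
ssh -f -N -L 3049:localhost:2049 user@nfsserver
# mount through the tunnel; only the SSH port is exposed to the outside
sudo mount -t nfs4 -o port=3049 localhost:/export/share /mnt/share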
View 1 Replies View Related
I'm trying to set up a server at home. It has some practical uses, but largely it is just to take a stab at it. I need the help of someone with more experience than I have to define exactly what I'm looking to do.
Here's what I have: an old PC running Gutsy server connected to the router, and several laptops at home connected to the router via wifi, all running either Windows or Ubuntu. Here's what I'm looking for: the server centralizes file storage for all clients. I would likely incorporate a RAID and some synchronised imaging of the files. I also want the server to create disk images of the clients' hard drives, regardless of client OS. There would also be some shares that would be publicly accessible (so that my friends across the country and I could access the same drive).
So I was thinking something like a corporate environment would be nice: you log into a profile that exists on the server, like a dumb client, with all data stored on the server. But I'm thinking that's more like a network boot and wouldn't work via wifi (or would it?). It also wouldn't lend itself well to laptops used on the road in areas without net access. Now I'm thinking each client would have its own locally installed OS and would just access networked shares. I could store sensitive files on the shares, but that wouldn't provide a complete backup solution for each client.
Without rambling on any more, does anyone care to throw out some ideas? I'm really just looking to see if I can do what I want. The focus is on centralizing files, securely backing up data and client OSes, and the ability to restore those images quickly.
I have a scheduled backup running on our server at work, and since 7/12/09 it has been producing 592KB files instead of 10MB files. In mysql-admin (the GUI tool) I have a stored connection for the user 'backup'; the user has SELECT and LOCK TABLES rights on the databases being backed up. I have a backup profile called 'backup_regular', and on the third tab it is scheduled to back up at 2 in the morning every weekday. If I look at one of the small backup files generated, I see the following:
Code:
-- MySQL Administrator dump 1.4
--
-- ------------------------------------------------------
-- Server version`
[code]....
It seems that MySQL can open and write to the file fine; it just can't dump the data itself.
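One way to narrow this down (a suggestion, not from the thread) is to run the same dump by hand with the same credentials the scheduled job uses and compare the result with the 592KB files; if the manual run is full-size, the problem is in the scheduled profile rather than in the grants:
Code:
mysqldump -u backup -p --databases yourdb > /tmp/test-dump.sql   # database name is a placeholder
ls -lh /tmp/test-dump.sql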
I am using Ubuntu 10.04 x86_64. I log in to the machine using an NFS-mounted home directory. Because of a problem with mounting my home directory, I had to copy all of its contents (including all configuration files) from a recent snapshot back onto itself. That is, I did something like,
Code:
cp -r /home/user/user /home/user
All of my recent data and program configurations were in /home/user/user. So after the copy operation, I logged out and logged back in again to see that all my configuration and data was restored to what I wanted. But the problem is that now on my desktop I see hundreds of mounted volumes. These are coming from an hourly/weekly snapshot program. The tech support guys for my lab have suggested copying all relevant data to a backup and then deleting the home directory altogether. But I don't want to configure all programs all over again. I think I should be able to get rid of the problem by editing/deleting one or more desktop configuration files. I just don't know which ones. I tried looking around the gconf-editor but was overwhelmed at the amount of information on there.
I have a compressed backup that I want to encrypt and upload to a remote server through ssh once in a while. The problem is the size: more than 4 GB. If the connection drops, how does scp know to resume? This should be an automated process.
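scp itself has no resume support, but rsync over SSH does the same job and can pick up where a dropped transfer left off when told to keep partial files (a sketch; file name, user and host are placeholders):
Code:
rsync --partial --progress -e ssh /backups/backup.tar.gz.gpg user@remotehost:/backups/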
View 8 Replies View RelatedI am running a local webserver mainly for development. I have everything set up. The issue I currently have is that I cannot shut down the machine from the command line. I can issue the command but the machine remains on. I also cannot get to the desktop via VNC.The reason for this, is that there is no monitor attached so Ubuntu says that it is trying to run in Low Graphics Mode. I found this out as I plugged the monitor in to my server.So question is, how do I get around this? How can I set up Ubuntu to get past this, or do I need to install Ubuntu Server?
View 3 Replies View Related