Ubuntu Servers :: Cron Job Backup - External HD No Longer Mounted
Dec 23, 2010
I'm running a cron job every night to dump a MySQL database to an external hard drive. It works; however, when I check on it the following morning, the external is no longer mounted and the XFS log is corrupted. If I run
Code:
xfs_repair -L /dev/sdf1
The repair works, but then when I try to mount I get this error:
Code:
XFS: Filesystem sdf1 has duplicate UUID - can't mount
I can reset the UUID, but it's tedious to have to do this every day.
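For reference, here is a hedged sketch (the mount point is hypothetical) of the two usual ways around that duplicate-UUID error after an xfs_repair -L: give the filesystem a fresh UUID once, or mount with the nouuid option so the check is skipped.
Code:
# give the filesystem a new random UUID, once
xfs_admin -U generate /dev/sdf1
# or skip the UUID check at mount time (nouuid also works as an fstab option)
mount -o nouuid /dev/sdf1 /mnt/backup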
View 2 Replies
Aug 28, 2010
I have a cron backup scheme in which I rsync, then tar, then copy files on my internal hard drives to an external (USB) drive. When it works, it works. But I often get a "Permission Denied" message for all of these tasks. I looked at how the external drive is auto-mounted, so I edited /etc/fstab so that the owner of the cron job is also the owner of the external drive (I think; unfortunately I'm not at that machine right now - it's at work - so I can't give the exact fstab line. I will post it as an update to this thread next time I am at the machine). BUT, I still get times when the cron backup runs fine and other times I get the Permission Denied. This is a shared machine that is dual-booted, so what I *think* is going on is that when the machine is rebooted to Fedora but nobody logs in, I get a Permission Denied for the cron backup. It seems like on days when someone has logged in as the main user and left without logging out, the cron backup runs fine.
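Since the exact fstab line isn't quoted above, the following is only a hypothetical example of the kind of entry described, assuming a vfat-formatted drive and a backup user with uid/gid 1000 (for an ext3/ext4 drive the uid=/gid= options are ignored and chown on the mount point is used instead):
Code:
# /etc/fstab - hypothetical entry giving the backup user ownership of a vfat drive
/dev/sdb1  /media/backup  vfat  uid=1000,gid=1000,umask=022  0  0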
View 8 Replies
Dec 20, 2010
Is it possible to back up a whole disk on a remote server while it is mounted? If yes, how?
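One hedged sketch (the hostname and paths are hypothetical): dd can stream a raw image of the whole remote disk over ssh, though imaging a mounted, in-use disk risks an inconsistent copy, so quiescing the system or using a snapshot first is safer.
Code:
# pull a raw image of the remote disk to a local file
ssh root@remote.example.com 'dd if=/dev/sda bs=1M' > /backup/remote-sda.img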
View 2 Replies
Mar 6, 2011
I'm writing a script to rsync some directories to external hdd for backup.
My external hdd gets automatically mounted to /media/backup1
My script then backs up predefined directories to /media/backup1.
I have added this script to cron to run once every day.
The problem is that in the case where the drive is not plugged in and the script runs, it backs up to my local hard drive, and since it is more than 70% full, it fills it up by duplicating that 70% onto itself.
I have taken the script further, to test whether /media/backup1 is mounted. If it is, the backup will run. If it is not, it will bail out.
I'm using the mountpoint program to test for mounts.
My script so far:
Code:
#!/bin/bash
# mountpoint prints a message even when the path is not a mount point,
# so testing its output with [[ ... ]] is nearly always true; test the
# exit status with -q instead
if mountpoint -q /media/backup1; then
    echo "filesystem mounted"
    # The backup function. Commented out for testing.
[Code]....
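Since the rest of the script is truncated above, here is only a hedged sketch of how the complete guard might look (the mount point is the poster's; the rsync flags and source directory are assumptions):
Code:
#!/bin/bash
# run the backup only when the external drive is really mounted
if mountpoint -q /media/backup1; then
    rsync -a --delete /home/ /media/backup1/home/   # assumed flags and paths
else
    echo "/media/backup1 not mounted, bailing out" >&2
    exit 1
fi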
View 9 Replies
Mar 7, 2010
I have a headless 8.04 server with 2 USB drives attached. I'm trying to move everything off the 1.5TB drive onto a few 300GB drives (so I can then use LVM on the 1.5TB drive and move everything back onto LVs). I check the drives using 'fdisk -l'. They show up as sdc1 and sdd1. I mount them and start a cp operation. When I run fdisk again, the drives are no longer sdc1 and sdd1. Now they are sde1 and sdf1 (and of course, they are no longer mounted).
What could be causing this and how do I fix it? I need to fix this ASAP because the 1.5TB drive seems to be going bad. Every few seconds I hear a big "click" as though the head arms are smacking against a stop. (This is the 2nd brand new 1.5TB drive that has started doing this!)
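A hedged aside: sdX names are assigned in detection order, so they shift whenever a USB device resets and re-enumerates; mounting by UUID sidesteps the renaming (the mount points and placeholder UUIDs below are hypothetical).
Code:
# print each partition's UUID
blkid /dev/sdc1 /dev/sdd1
# /etc/fstab - entries keyed on UUID survive device renumbering
UUID=<uuid-of-first-drive>   /mnt/usb300a  ext3  defaults  0  0
UUID=<uuid-of-second-drive>  /mnt/usb300b  ext3  defaults  0  0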
View 5 Replies
Jan 23, 2010
After a near miss with my 1.5TB RAID5 file server, I have decided that I need to back up my data to an external hard drive periodically. I have been looking at rsync, but the question I have is: do I format the external hard drive as EXT3 (the same as my file server) or NTFS? All my main machines are Windoze, but the file server is Ubuntu with a samba share. If my server ever went belly up, I would like to be able to access my data from the external hard drive. I guess if it's in EXT3 then Windows would be clueless... I would either need to fix the server pronto or access it with a live CD or something. What would I lose if I used NTFS instead of EXT3? I think I would lose permissions and possibly ownership information - are there any other issues?
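As a hedged illustration of the trade-off (paths are hypothetical): to an EXT3 drive rsync's archive mode preserves permissions and ownership, while to an NTFS drive those cannot be stored, so the usual habit is to copy recursively with timestamps only.
Code:
# EXT3-formatted drive: preserve permissions and ownership
rsync -a /srv/files/ /media/ext3drive/files/
# NTFS-formatted drive: recurse and keep modification times only
rsync -rt /srv/files/ /media/ntfsdrive/files/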
View 3 Replies
May 18, 2010
I am using Back In Time to back up my home directory to a second hdd that is mounted at /media/backup. The trouble is, I can do this using Back In Time (Root), but not using Back In Time without the root option. This is definitely a permissions issue - it can't write to the folder - but when I checked by right-clicking on the backup directory and looking at the permissions tab, it said I was the owner.
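A hedged command-line check (the mount point is the poster's; the username is a placeholder): ls -ld shows the owner and the write bits, which the GUI permissions tab can obscure, and chown/chmod put them right if needed.
Code:
ls -ld /media/backup                 # shows owner, group and mode
sudo chown youruser: /media/backup   # placeholder username
chmod u+w /media/backup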
View 2 Replies
Apr 29, 2011
Can anyone tell me how I change the default domain name for cron? Everything cron runs emails from and to user@com.com.
This leaves me with a massive list of failed mails in postfix. I have MAILTO on my main crontab but I can't do it on all of them.
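A hedged sketch of the two usual knobs (the addresses are hypothetical): a MAILTO line at the top of a crontab redirects that one crontab's mail, while on Debian/Ubuntu with postfix the machine-wide domain for locally generated mail comes from /etc/mailname.
Code:
# per-crontab: put this on the first line of 'crontab -e'
MAILTO=admin@example.com
# machine-wide (postfix on Debian/Ubuntu): set the real domain
echo example.com | sudo tee /etc/mailname
sudo /etc/init.d/postfix reload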
View 1 Replies
Sep 27, 2010
I have a secondary harddrive that is auto mounted through fstab at startup in the /mnt/media_drive folder. I have not had any problems out of it until today.
I was trying to get the computer to act as a PS3 media server. I installed pms-linux, couldn't get it working, removed that, installed ushare, didn't like that, uninstalled that and reinstalled pms-linux. Now the PS3 sees the folder, but nothing else does. When I ls -l the /mnt/media_drive folder, I get this:
Code:
derek@shop:~$ ls /mnt/media_drive -l
ls: cannot access /mnt/media_drive/audio books: Permission denied
ls: cannot access /mnt/media_drive/playlists: Permission denied
ls: cannot access /mnt/media_drive/Christmas: Permission denied
ls: cannot access /mnt/media_drive/DVD_temp: Permission denied
ls: cannot access /mnt/media_drive/OTR: Permission denied
[Code]...
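Only a hedged guess, since it's unclear what the media-server packages changed: if they tightened the modes on those directories, taking ownership back and re-opening the directory bits would restore access (the username derek comes from the prompt above).
Code:
sudo chown -R derek: /mnt/media_drive
sudo chmod -R u+rwX,go+rX /mnt/media_drive   # capital X sets x on directories, not plain files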
View 3 Replies
May 4, 2010
I am very sorry if this has been asked before... I'm sure it has... but I have searched all over the net looking for an answer and I still can't find it.
I have a really simple cron job script like this:
When I run this manually it works fine, but when I run it from my ROOT user in Plesk as a cron task it always creates a file that is just 45 bytes. Why doesn't it work? I am running it as a root user, so surely I must have permission to access the file?
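The script itself isn't shown above, so only a hedged guess: a 45-byte file is often just a captured error message, and the classic cron culprit is its minimal PATH, so absolute paths and a captured stderr usually reveal the cause (the crontab line below is hypothetical).
Code:
# hypothetical crontab entry: absolute paths, stderr kept for inspection
0 3 * * * /bin/tar czf /root/site-backup.tar.gz /var/www 2>/root/site-backup.err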
View 7 Replies
Jun 10, 2010
I'm trying to set up a simple backup script with cron.
In "crontab -e" (and sudo crontab-e - I tried both) I enter "0 22 * * * /home/USERNAME/.backup.sh", with the hope that it will run the script at 10pm each day. The srcipt work fine if I run in a terminal. why it won't work? It's bound to be something obvious....
View 5 Replies
Jun 8, 2009
Maybe this is a MySQL question, maybe not...
I've written a shell script to back up a database.
But when I run it, it prompts for password even though the script provides it. If I'm doing this manually, it's not a problem, but I want to make a cron job to do it...
Here's the script:
Code:
#!/bin/bash
set -xv
#First let's rotate the backup files...
/bin/mv /home/cabazio/someDB-3.tar.gz /home/some/someDB-4.tar.gz
/bin/mv /home/some/someDB-2.tar.gz /home/some/someDB-3.tar.gz
[Code]....
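The mysqldump line itself is truncated above, but a hedged note on the usual cause of this symptom: with a space after -p, mysqldump prompts for the password and takes the next word as a database name, so the password must be attached to the flag or, better, kept in a ~/.my.cnf readable only by the cron user.
Code:
# ~/.my.cnf (chmod 600) - keeps the password out of the script
[client]
user=backupuser
password=YourSecretHere
# the script can then run simply:
# mysqldump someDB > /home/some/someDB.sql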
View 2 Replies
Jan 5, 2011
I'm having a small issue where the backup jobs that I set to run in the crontab of the backup user do not appear to be running. Here's how I set it up (with crontab -e as the backup user):
run amanda every night (check at 2:45 and backup at 3)
[code]...
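Since the crontab lines are truncated above, here is only a hedged reconstruction matching the stated schedule (the configuration name DailySet1 is a placeholder; amcheck and amdump are Amanda's standard check and dump commands):
Code:
# m h dom mon dow command (backup user's crontab)
45 2 * * * /usr/sbin/amcheck -m DailySet1
0 3 * * * /usr/sbin/amdump DailySet1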
View 5 Replies
Apr 3, 2010
I am trying to write a bash script in order to have a backup done by cron on the webhosting server. I want all backup runs to have an incremental number in front of them. I came up with an idea to store the incremental number of a backup in a txt file on the server, so when the next one runs it can check the number in the file. However I am having terrible issues. My script:
[code]....
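The script is truncated above, so here is only a minimal sketch of the counter-file idea (all paths are hypothetical):
Code:
#!/bin/bash
# read the previous run number, defaulting to 0 on the first run
COUNTER=/home/user/backups/counter.txt
N=$(cat "$COUNTER" 2>/dev/null || echo 0)
N=$((N + 1))
echo "$N" > "$COUNTER"
# name the archive with the incremented number in front
tar czf "/home/user/backups/${N}-site.tar.gz" /home/user/public_html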
View 7 Replies
Jun 15, 2010
This is Kishore and I am new to Ubuntu and SVN. Please can someone help me in creating a cron job for my SVN backup every day at 10:30 pm. I already created a cron job which looks like "30 10 * * * svnadmin dump /home/administrator/svnrepository >svn1". When I run the command directly I get the whole backup and its size is 3.6 GB, but when I run it through the cron job the backup size is only 9 MB. So finally my requests are: 1. a cron job for taking a complete SVN backup at 10:30 pm daily, and 2. a cron job to copy the SVN backup onto my Windows system's D drive, which must run every day at 11:30 pm.
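A hedged sketch of both jobs (note that 10:30 pm is "30 22" in cron syntax, whereas the quoted "30 10" fires at 10:30 am; the share name, credentials file and mount point are hypothetical):
Code:
# m h dom mon dow command
30 22 * * * /usr/bin/svnadmin dump /home/administrator/svnrepository > /home/administrator/svn1 2> /home/administrator/svn1.err
30 23 * * * cp /home/administrator/svn1 /mnt/winbackup/
# the Windows D: drive could be mounted via an fstab entry such as:
# //winbox/d  /mnt/winbackup  cifs  credentials=/root/.smbcred  0  0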
View 1 Replies
Oct 24, 2009
What's a good cron script for backing up and zipping a directory of files, or multiple directories with files, to a backup directory on my server, on a daily basis? I found an easy to use mysql backup script; now I need to back up my site directory, but not all the directories in it. So I need a method in the script to omit certain directories from backing up, i.e. dirs that contain gigs worth of files. This seems like it should be one of the most common crons to set a server up with, but two pages deep in google (and here) I have yet to find anything remotely resembling a solution.
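A hedged sketch of the usual tar-based answer (all paths are hypothetical): --exclude flags skip the gigabyte-sized directories, and a dated filename keeps each day's archive separate.
Code:
#!/bin/bash
# daily site backup that omits the heavyweight directories
tar czf /backup/site-$(date +%F).tar.gz \
    --exclude=/var/www/site/uploads \
    --exclude=/var/www/site/cache \
    /var/www/site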
View 9 Replies
Jan 19, 2010
I have a scheduled backup to run on our server at work, and since 7/12/09 it has been making 592k files instead of 10Mb files. In mysql-admin (the GUI tool) I have a stored connection for the user 'backup'; the user has select and lock rights on the databases being backed up. I have a backup profile called 'backup_regular' and in the third tab along it's scheduled to back up at 2 in the morning every week day. If I look at one of the small backup files generated I see the following:
Code:
-- MySQL Administrator dump 1.4
--
-- ------------------------------------------------------
-- Server version
[code]....
It seems that MySQL can open and write to the file fine; it just can't dump the data.
View 3 Replies
Jan 20, 2011
I am using Ubuntu 10.04 x86_64. I log in to the machine using NFS. Because of a problem with mounting my home directory, I had to copy all the contents of my home directory (including all configuration files) from a recent snapshot onto itself. That is, I did something like,
Code:
cp -r /home/user/user /home/user
All of my recent data and program configurations were in /home/user/user. So after the copy operation, I logged out and logged back in again to see that all my configuration and data was restored to what I wanted. But the problem is that now on my desktop I see hundreds of mounted volumes. These are coming from an hourly/weekly snapshot program. The tech support guys for my lab have suggested copying all relevant data to a backup and then deleting the home directory altogether. But I don't want to configure all programs all over again. I think I should be able to get rid of the problem by editing/deleting one or more desktop configuration files. I just don't know which ones. I tried looking around the gconf-editor but was overwhelmed at the amount of information on there.
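A hedged pointer rather than a definitive fix: on GNOME 2 (Ubuntu 10.04) the desktop icons for mounted volumes are governed by a single Nautilus gconf key, so hiding them should not require deleting the home directory.
Code:
# stop Nautilus drawing an icon on the desktop for every mounted volume
gconftool-2 --type bool --set /apps/nautilus/desktop/volumes_visible false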
View 3 Replies
Oct 15, 2010
Due to a disk crash I've had to rebuild my Debian Lenny system. For some reason I can't get my cron-fired backup scripts to run. They will run manually.
It looks like crond is not running. If I try to start it, here's what I get:
Pancho:/home/lloyd# /etc/init.d/cron start
Starting periodic command scheduler: crond failed!
MORE INFO:
lloyd@Pancho:~$ /etc/init.d/cron start
Starting periodic command scheduler: crond/etc/init.d/cron: line 54: start-stop-daemon: command not found
failed!
[Code].....
Clearly the failure to find start-stop-daemon is not the problem, and I'm still in the dark.
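A hedged diagnostic, since a disk crash and rebuild can leave packages half-restored: on Debian, start-stop-daemon is shipped by the dpkg package, so checking for the binary and reinstalling its owner is a sensible first step.
Code:
ls -l /sbin/start-stop-daemon     # does the binary exist at all?
dpkg -S /sbin/start-stop-daemon   # should answer: dpkg
apt-get install --reinstall dpkg  # restores the file if it has gone missing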
View 2 Replies
Feb 9, 2010
We have a server that runs a backup (cron job) at 9:15 every night. When I log on in the morning I have a mail message that gives me a long list of all the files that were backed up the night before. For a couple of weeks now, the mail message gives me an empty list. Yet, when I run the same job manually from a # prompt, it runs. I am not able to run this job with cron in the daytime because too many users are in it. I wanted to browse the tape to see if the backup is really failing to copy the files, or if they are on the tape and the mail message is bogus.
Since the backup was done with cpio instead of tar, I'm not sure if I can browse the tape with restore -i anyway. What would be the best way to browse the tape on /dev/rmt/1 without actually restoring anything? This is an ancient DGUX system, not Linux, and I'm not a unix expert. I just inherited this server recently, but a lot of things are very similar to Linux and it looked like this might be a good place to ask.
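A hedged answer in cpio's own terms (the device path is the poster's; flag spellings may differ slightly on DG/UX): cpio's -t flag lists the archive's contents without extracting anything.
Code:
# list the tape's contents verbosely without restoring any files
cpio -itv < /dev/rmt/1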
View 3 Replies
Mar 2, 2011
I own a CentOS 5 VPS. I typed crontab -e, and then added the following line to have my server back up MySQL automatically:
0 * * * * mysqldump -u root -p password --all-databases | gzip > /home/dbbackup/database_`date '+%m-%d-%Y_%H'`.sql.gz
When I go in and look, it doesn't place any files in /home/dbbackup. When I run
mysqldump -u root -p password --all-databases | gzip > /home/dbbackup/database_`date '+%m-%d-%Y_%H'`.sql.gz
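Two hedged observations about that crontab line: cron treats an unescaped % as the end of the command (the rest becomes stdin), so each % in the date format needs a backslash, and "-p password" with a space makes mysqldump prompt for a password while treating the word "password" as a stray argument. A corrected sketch:
Code:
0 * * * * mysqldump -u root -ppassword --all-databases | gzip > /home/dbbackup/database_`date '+\%m-\%d-\%Y_\%H'`.sql.gz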
View 3 Replies
Jan 4, 2010
I installed a second HD and formatted it to ext4. I gave it the "/backup" label. I am trying to figure out how to mount it so that I can run cron to back up my home folder onto it once a week. This is what the fstab looks like now:
[code]...
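Since the fstab itself is truncated above, only a hypothetical sketch (assuming the filesystem label really is "backup" and a mount point of /backup):
Code:
sudo mkdir -p /backup
# /etc/fstab - mount the labeled ext4 partition at boot
LABEL=backup  /backup  ext4  defaults  0  2
# weekly crontab entry (Sundays at 01:00) rsyncing the home folder across
0 1 * * 0 rsync -a /home/youruser/ /backup/home/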
View 9 Replies
Aug 14, 2010
Simple Backup no longer works. (It says the backup is started in the background... then nothing.) What are some good comparable alternatives?
View 3 Replies
May 10, 2010
Does anyone know of any decent enterprise-level backup solutions for Linux? I need to back up a few servers and a bunch of desktops onto one backup server. Using rsync/tar.gz won't cut it. I need bi-monthly full HDD backups, and things such as that, with a nice GUI interface to add/remove systems from the backup list. I basically need something similar to CommVault or Veritas. Veritas I've used before, but it has its issues, such as leaving 30GB cache files. CommVault, I have no idea how much it costs, or whether it supports backing up to a hard drive rather than tape.
View 7 Replies
Aug 14, 2011
I would like to back up important files (totaling about 400GB) on my ext4 RAID 5 array to an ext4 external hard drive over USB (the external drive is mounted to /mnt). In the future I'd like to automate the process using rsync and cron, so for now I'm using rsync to transfer the files. My problem is that using the rsync command like this: # rsync -Pr "/dir1" "/dir2" "/dir3" "/dir4" /mnt
rsync shows me the checks and transfers for a while and then throws up an i/o error (wish I had a screenshot to show but I don't). When I ls /mnt I get a similar i/o error. I then check /dev for the drive and find that it no longer shows up. Originally the partition was /dev/sdc1. I tried unplugging the USB at this point, plugging it back in and mounting the drive back to /mnt; however, it has now been assigned to (you guessed it) /dev/sdd1. I get the drive mounted and try the original rsync command again, hoping the first error was a fluke or some kind of one-time drive fart. This time it makes it quite a bit further and then throws up the exact same problem. Am I doing something terribly wrong here? As I said, I'm very new to bash, so maybe I'm making some absolutely moronic newbie mistake.
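A vanishing /dev node mid-transfer points at the drive or its USB link resetting rather than at the rsync invocation, so a hedged diagnostic pass (the device name reflects the renumbered drive; smartctl is in the smartmontools package):
Code:
dmesg | tail -50             # look for USB resets or I/O errors
sudo smartctl -a /dev/sdd    # the drive's own health report
# some USB enclosures need the transport spelled out:
# sudo smartctl -a -d sat /dev/sdd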
View 9 Replies
Apr 7, 2010
I use dual booting, Vista and Ubuntu 9.10. I have just bought a new 1T external hard disk and I have used it on Windows to back up some files. Now I want to back up some documents in Ubuntu, but the hard disk is not visible; I can't see it. OK, I think the term in Linux is that it is not mounted. Is there something else one should do?
Disk /dev/sda: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf959a599
[Code]...
View 4 Replies
Apr 8, 2010
I thought that my computer could not mount the external hard disk. Here is my mail about that subject. I use dual booting, Vista and Ubuntu 9.10. I have just bought a new 1T external hard disk and I have used it on Windows to back up some files. Now I want to back up some documents in Ubuntu, but the hard disk is not visible; I can't see it. OK, I think the term in Linux is that it is not mounted.
Disk /dev/sda: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf959a599
Device Boot Start End Blocks Id System
/dev/sda1 1 1176 9437184 27 Unknown
/dev/sda2 * 1176 52188 409761792 7 HPFS/NTFS
/dev/sda3 52188 77826 205930496 7 HPFS/NTFS
View 1 Replies
Feb 23, 2010
I've got an external HDD from Iomega that I've been using for over a year and lately it's been doing this thing where it suddenly changes from /dev/sdg to /dev/sdh while the partition on it is still mounted, seemingly at random. I'm often (not always) able to look at the contents of the partition, but it gives me I/O errors followed by a list of files and directories.
I am using slackware64 13.0 running kernel 2.6.29.6. Below I've provided the relevant lines from dmesg (edited out irrelevant lines to keep post under max character limit.) I'm beginning to suspect NFS has something to do with this, but I can't even begin to imagine how.
When this happens I'm able to umount -l the partition and remount it using the new /dev/sdh1 but eventually the same thing just happens and the whole drive switches back to /dev/sdg. This problem persists between reboots. It's also happened with an ext3 filesystem, in fact I switched to ext2 because it kept telling me "Aborting journal" and I was afraid I would get a corrupted journal and perhaps a destroyed file system.
About 8 months ago I was doing some work at my brother's place, and while down on the floor fiddling with cables I yanked this whole drive off the desk and it smashed into the floor (while spinning, probably; I'm pretty sure this model is disk-based). I've been expecting to see some strange behavior since that happened, but it took a while, and I can't be certain this is related to that incident, although I'm rather convinced it is.
Code:
sd 5:0:0:0: [sdg] 976773168 512-byte hardware sectors: (500 GB/465 GiB)
sd 5:0:0:0: [sdg] Write Protect is off
sd 5:0:0:0: [sdg] Mode Sense: 34 00 00 00
sd 5:0:0:0: [sdg] Assuming drive cache: write through
[code]....
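A hedged workaround for the sdg/sdh shuffling (the id link below is a placeholder; the real one comes from the ls): the symlinks under /dev/disk/by-id track the device itself, so mounting through one survives renumbering even if the underlying resets continue.
Code:
ls -l /dev/disk/by-id/    # find the link for the Iomega partition
mount /dev/disk/by-id/usb-Iomega_ExtHD_XXXXXXXX-part1 /mnt/iomega   # placeholder id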
View 5 Replies
May 14, 2011
My external hard drive randomly unmounts in the middle of file transfers.
The system is Debian Wheezy with sda as single-partition ext4. I have an INEO I-NA212-J USB 2.0 enclosure with 2xHDD single-partition ext2 which are recognized as sdb & sdc.
When files are transferred from the external to sda, the system will unmount the external drive at random times. When that happens, "fdisk -l" no longer reports sdb & sdc, but does show the drives as sdd & sde and they can be mounted. I have to reboot the machine to have it see the drives as sdb & sdc again.
When transferring from sda to the external, the unmounting is less frequent but I get file corruption. For example, a large directory seemed to transfer successfully, but the result showed as a single executable file and the displayed file properties were just long strings of numbers. Deleting the file did not reclaim the space.
There seems to be no pattern to the failure. I have checked for file size, number of files, system uptime, transfer time, etc. I have so far not seen it happen while there is no activity. The problem is the same whether there are 1 or 2 drives in the enclosure. However, I have used USB thumb drives and microdrives (both vfat) without problems.
For mounting, I have tried both "mount" and "pmount", and for file management both Xfe and Midnight Commander.
View 3 Replies
Mar 3, 2011
How do you create a cron file that will regularly perform a level 0 backup once per month?
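A hedged sketch using a drop-in cron file (dump(8) and the paths are assumptions; any full-backup command can stand in): files under /etc/cron.d take an extra user field before the command.
Code:
# /etc/cron.d/monthly-level0 - full (level 0) backup at 02:30 on the 1st of each month
# m h dom mon dow user command
30 2 1 * * root /sbin/dump -0u -f /backup/root-level0.dump /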
View 2 Replies