I have recently upgraded to Bugzilla 3 and I wanted to restore my Bugzilla database from my backup, but when I attempt to tar -xvvzf file.tgz I get the error:
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error exit delayed from previous errors
My script that creates the backup is:
#!/bin/sh
datestr=`date +%m-%d-%Y`
bakdirpart="bugzilla.backup.$datestr"
bakdir="$HOME/$bakdirpart"
mkdir "$bakdir"
(cd /etc; tar cvzf $bakdir/mysql.conf.tgz mysql)
(cd /etc; tar cvzf $bakdir/apache2.conf.tgz apache2)
(cd /usr/share; tar cvzf $bakdir/bzreport.share.tgz bzreport)
(cd /usr/share; tar cvzf $bakdir/bugzilla.share.tgz bugzilla)
(cd /var/lib; tar cvzf $bakdir/mysql.hotdb.tgz mysql)
(cd /var; tar cvzf $bakdir/www.tgz www)
(cd "$HOME"; tar cvf "${bakdir}.tar" "$bakdirpart")
I have an issue with my Amanda backup server, which connects to a Quantum Scalar i500 via FC. I got the error below 3 days ago: "These dumps were to tape 000289. *** A TAPE ERROR OCCURRED: [No more writable valid tape found]."
Normally I load the proper tapes and run amflush to push the contents of the holding disk to tape manually. This time, however, amflush did not help; Amanda immediately responded with an out-of-tape error again.
Meanwhile I got some errors from dmesg as well: st3: Error 18 (sugg. bt 0x0, driver bt 0x0, host bt 0x0). scsi1 (0,3,0): reservation conflict
I'm running a CentOS 3 virtual machine and I am trying to extract a tar file, but I run out of disk space. I created the VM with 80 GB of disk space. When I look at the partitions (df command), I have /dev/sda2, a 70GB partition mounted on /home with < 1% used.
Here comes the n00b question: How do I use the 70GB of space on sda2? I thought working in the /home directory, where sda2 is mounted, would give me access to that disk space, but the tar files fill up the /boot partition.
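If the archive and the extraction both currently happen outside /home, one option is to extract directly into a directory on the big partition; a minimal sketch (the user name and paths are illustrative):
# pick a directory that lives on the 70GB /dev/sda2 partition
mkdir -p /home/youruser/extracted
# -C tells tar to change into that directory before extracting
tar -xzf /path/to/archive.tar.gz -C /home/youruser/extracted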
I have been trying to plan my backups. As I want to keep it simple, I am just going to tar.gz the files that are important. My question is the following:
- I have a 100GB hard disk with only 20GB free. I would like to back up the other 80GB to an external hard disk. I run my scripts, which end up saving a 75GB file (due to compression) to my external hard disk.
--> Then comes the time to look at the contents of my archive (just to make sure that I can recover what is inside the 75GB file). Do you know if tar.gz needs to decompress the 75GB file into some /tmp space on my hard disk just to show me its contents? In that case it would not be easy to ever look at what is inside it from my hard disk, as there is not 80GB of free space there (only 20GB).
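For what it's worth, listing a tar.gz streams the data through gzip and does not need scratch space on the internal disk; a quick sketch (the archive path is illustrative):
# list the archive's contents without extracting anything
tar -tzvf /media/external/backup.tar.gz
# pull a single file back out if needed, again streaming
tar -xzvf /media/external/backup.tar.gz path/inside/archive/file.txt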
I am trying to sync file server data to a backup server machine with the command: rsync -avu path/of/data ipaddress-of-backup-server:/path/where/to/save. After running it, it asks for the root password, and done manually it is successful, but I want to make it automatic. For that I also tried a cron job and generated an authentication key, but I have not been successful in logging in automatically. Does anybody know how to authenticate root to log in for storing data on the backup server?
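A common way to do this is key-based SSH authentication for the account rsync logs in as; a minimal sketch, assuming the job runs as root on the file server and the backup server permits key logins for root (the address and paths are the placeholders from the question):
# on the file server, create a key with no passphrase (one time only)
ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""
# copy the public key to the backup server (asks for the password one last time)
ssh-copy-id root@ipaddress-of-backup-server
# test: this should now log in without prompting
ssh root@ipaddress-of-backup-server true
# root's crontab entry (crontab -e) to run the sync nightly at 02:00
0 2 * * * rsync -avu path/of/data root@ipaddress-of-backup-server:/path/where/to/save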
I was doing a big system backup this morning (on Ubuntu 10.10), which was taking some time, when there was an urgent task I needed to see to, so I rashly stopped the backup with CTRL-C. Later I found that this left the destination directory with an input/output error, so I could not delete the truncated backup file (or any other file in the directory). I tried all sorts of chmod procedures etc., including using the usually very successful rescue CD from a USB pen, but nothing seemed to cure the input/output problem.
In the end, in desperation, I copied all the other files into a new directory and then deleted the original directory from Windows 7. (I use a multiple-boot system.) Windows obviously does not observe the same permissions as Linux, and it obeyed without demur! So all is now well.
Is there a way/command to back up all data from a Red Hat Linux 4 server (including user profiles, data, group info, and encrypted passwords) either to a Red Hat Linux 5.4 machine, or as an image file or other manageable resource?
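A low-tech approach that covers what is listed here is a permissions-preserving tar of the account database and the user data, copied across to the other machine; a minimal sketch rather than a full bare-metal image (the host name and target paths are illustrative):
# archive account info (passwd/shadow/group hold users, groups, and password hashes)
tar -czpf /tmp/accounts.tar.gz /etc/passwd /etc/shadow /etc/group /etc/gshadow
# archive user data, preserving ownership and permissions (-p)
tar -czpf /tmp/home.tar.gz /home
# copy both archives to the Red Hat 5.4 machine
scp /tmp/accounts.tar.gz /tmp/home.tar.gz root@rhel54-host:/backup/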
I just switched back to Ubuntu after running Windows again for about 6 months (new laptop, programs needed in Windows); either way, I'm back. What I had set up in Windows was a set of specific files that would automatically back up to my Samba file server when the home network was detected. I'm looking to do the same in Ubuntu now. Basically I'm thinking of writing a script to back up the files; the only thing I'm stuck on is how to tell the script to run when I connect to the network at home. Is there software already designed for this?
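One possibility, assuming the laptop uses NetworkManager, is a dispatcher script: NetworkManager runs everything in /etc/NetworkManager/dispatcher.d/ whenever an interface goes up or down, so a small script there can kick off the sync to the Samba server. A rough sketch; the SSID, mount point, and source path are all illustrative, and the SSID check assumes iwgetid from wireless-tools is available:
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/90-home-backup (must be executable)
IFACE="$1"
ACTION="$2"
[ "$ACTION" = "up" ] || exit 0
# only run on the home network; iwgetid -r prints the current SSID
[ "$(iwgetid -r)" = "MyHomeSSID" ] || exit 0
# sync the chosen files to the Samba share (assumed mounted at /mnt/fileserver)
rsync -av /home/user/Documents/ /mnt/fileserver/backup/Documents/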
Through the Black Friday shuffle of getting new hardware, I now have a 500GB external drive, a 1TB external drive, and an old computer I want to set up as a home server. My family has a lot of photos that are currently stored on many different computers and are not backed up. I want 500GB of space for photos, and for those photos to be backed up. That would leave the other half of the 1TB drive for assorted things like personal backups and general file storage. I know enough to set up Ubuntu Server edition on the computer, but the options for how I can set up the storage are stumping me.
To recap, I have 1.5TB of storage total, split 1TB/500GB. I want 500GB to be used as central storage for the 10+ computers in my house (mostly running Windows), and that 500GB would be automatically backed up. The 500GB that's left would be used for non-critical files and wouldn't be backed up.
What is the best way of backing up the files? (A script once a day that copies files, like the sketch after these questions? Some backup program?)
Would the 500GB drive be best to back up to (with the 1TB being where people put the pictures), or the other way around? Does it really matter?
Any tips on the cleanest way to have this work with Windows, Linux, and Mac? How well do photo programs (Picasa, Shotwell, iPhoto) like a setup like this? Is it possible to have different programs on different machines all reference the same file system without their automatic sorting (into folders, usually by date) messing each other up?
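On the "script once a day" option, a nightly rsync from the shared photo area to the other drive is a common minimal setup; a sketch assuming the two drives are mounted at the paths shown (both paths are illustrative):
#!/bin/sh
# /etc/cron.daily/photo-backup: mirror the shared photo storage onto the backup drive
rsync -a --delete /srv/photos/ /mnt/backup-drive/photos/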
I have used Linux (SUSE, Ubuntu, Red Hat) at different times for different things. My newest goal is a file server. Here are the specs; I have already built the box and am just choosing the OS. Here is what it needs to be able to do, in order from most important to least.
Specs: CPU: AMD Phenom II X4 945; RAM: 4GB DDR3 1600MHz; Mobo: Asus M4A79XTD EVO; Video: ATI 4650; PSU: Corsair 650W modular
I am administering a live web server and I want to keep a backup of the access log file without disturbing the server's performance. Can anyone guide me on how to do this? The size of the log file runs into GB, so I will need to take a daily backup.
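One common way to do this, assuming the server is Apache and logrotate is available, is a daily logrotate rule that rotates and compresses the log without stopping the server; a sketch (the file name, log path, and retention are illustrative):
# /etc/logrotate.d/access-log-backup (hypothetical file)
/var/log/apache2/access.log {
    daily
    rotate 30          # keep 30 days of rotated logs
    compress           # gzip old copies to save space
    delaycompress      # leave the most recent rotated copy uncompressed
    copytruncate       # copy then truncate in place, so Apache keeps writing
    missingok
    notifempty
}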
When I use the rsync command to back up my image files, it shows the following error message:
bash: line 1: /usr/bin/rsync: Argument list too long
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
rsync error: remote command could not be run (code 126) at io.c(463) [receiver=2.6.8]
The command which I used is: rsync -avrl -e ssh cms@server:/data/cms/data/images/* /mnt/Backup/Intranet_cms_backup/images
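The * at the end of the remote path is expanded into one argument per file by the remote shell, and with enough images that overflows the argument list. Letting rsync walk the directory itself avoids the expansion; a sketch of the same transfer without the wildcard (the trailing slash copies the directory's contents):
rsync -avrl -e ssh cms@server:/data/cms/data/images/ /mnt/Backup/Intranet_cms_backup/images/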
This script simply deletes files older than a certain age (in this case 7 days) from a certain location; I use it to purge old backups nightly, and it works as expected:
# delete backups older than 7 days
find /mnt/backup/* -mtime +7 -exec rm -Rf {} \;
The problem is, every morning I get an email with an error message something like this:
find: `/mnt/backup/subfolder': No such file or directory
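That message usually means find deleted a directory with rm -Rf and then still tried to descend into it. Restricting find to the top level of /mnt/backup sidesteps this; a possible variant of the same job:
# delete week-old backups without descending into what was just removed
find /mnt/backup -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -Rf {} \;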
I have a CentOS 5 server with a 1TB hard drive. There is only 80GB of data on that huge drive, and now I want to make a bare-metal recovery backup using Acronis. My question is: how can I estimate the amount of time the backup will take and the size of the image file? Is it based on the size of my drive or on the amount of data on the drive?
Does anyone know of any decent enterprise-level backup solutions for Linux? I need to back up a few servers and a bunch of desktops onto one backup server. Using rsync/tar.gz won't cut it. I need bi-monthly full HDD backups and things like that, with a nice GUI to add/remove systems from the backup list. I basically need something similar to CommVault or Veritas. Veritas I've used before, but it has its issues, such as leaving 30GB cache files. CommVault, I have no idea how much it costs, or whether it supports backing up to a hard drive rather than tape.
I have installed an application manager (monitoring application) on my Linux server. Now I need a backup schedule for my application. The application itself has an executable file to back up the database, but when I put this file in my crontab to schedule the backup it won't run:
50 09 * * * root /opt/ME/AppManager9/bin/BackupMysqlDB.sh
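One thing worth checking: the "root" field only belongs in the system-wide /etc/crontab (or files under /etc/cron.d). In a per-user crontab edited with crontab -e there is no user column, so "root" would be treated as the command to run and the real script would never start. Two possible forms, depending on where the line lives:
# in root's own crontab (crontab -e as root): no user field
50 09 * * * /opt/ME/AppManager9/bin/BackupMysqlDB.sh
# in /etc/crontab or /etc/cron.d/appmanager-backup: keep the user field
50 09 * * * root /opt/ME/AppManager9/bin/BackupMysqlDB.sh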
In the past (with Wheezy and before) I often used "Extract" via double-click on compressed folders, or "Compress" via right-click on folders (or files) in Nautilus. Since I installed Jessie this option has vanished. I have added several packages like "zip", "7z", "unzip" and so forth. Now I can do similar things on the command line, but I just can't find any option anywhere to re-enable compressing and decompressing in Nautilus. There seem to be no options for configuring such things in Nautilus.
I have the odd feeling my Jessie installation is broken, since many little things have been missing from the beginning. Should the old behaviour of Nautilus also be standard in Jessie?
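One guess: in GNOME those right-click entries come from the archive manager (file-roller) rather than from the command-line tools, so adding zip/7z/unzip alone gives Nautilus nothing to hook into. It may be worth checking whether that package is installed:
# check whether the GNOME archive manager is present
dpkg -l file-roller
# install it if it is missing, then restart Nautilus
sudo apt-get install file-roller
nautilus -q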
I tried to "compress" a PDF document by opening it in vim and holding down D for a few seconds. That worked, in the sense that the file got smaller, but now the document won't open anymore. How do I decompress it?
I need to recreate my initrd.img after having extracted its contents. Bash by itself, or pointers to similar threads in this forum and on Google, are useless to me and a waste of everyone's time, as all of that has failed; I need a working example. Apparently I am supposed to use this bash command: "zcat ../initrd.gz | cpio -i -d". That command is unintelligible to me, and I cannot compress the extracted files and folders back into an initrd.gz with a compression level of 9 so that I can rename it with a .img extension.
My understanding of recompressing the folders back into the initrd.img: Google and this forum all point to bash involving either zcat or cpio and then gzip with a compression level of 9. However, I require exact instructions for using these commands to compress the folders that were extracted from the initrd.img back into one single initrd.gz archive, so that the created initrd.gz can be renamed initrd.img.
Note: posting bash without an example is a waste of everyone's time, as I already found that on Google and it was useless; I lack the requisite computer science degree or years of Linux guru experience needed to figure out how to specify the arguments properly. What I need is a working example, not just the names of commands.
Note 2: To save time, the answer to why I need to edit the initrd.img is this: two different utilities (based upon the same parent system and kernel) use the same initrd and the same file paths. When they are installed on separate partitions and the one farthest from the MBR is selected for boot, it will begin to boot and then switch to the one closest to the MBR, which results in a failed boot. If one is removed, the other boots fine, so it's not a menu.lst or a lilo config problem.
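For what it's worth, the usual way to rebuild a gzip-compressed cpio initrd from an extracted tree is to pipe a file list into cpio in newc format and then gzip it; a sketch, assuming the extracted contents live in a directory called initrd-extracted (the directory name is illustrative):
# repack the extracted tree into a gzip -9 compressed cpio archive
cd initrd-extracted
find . | cpio -o -H newc | gzip -9 > ../initrd.gz
# rename it to the name the bootloader expects
mv ../initrd.gz ../initrd.img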
I have installed a Linux server in my office to serve 16 machines. Its main use will be as an internal mail server, but it will also be running websites.
I have installed Ubuntu 9.10 Server x64 and have Apache running.
I am looking for the simplest, most robust solution for SMTP, POP3 and IMAP. I have only ever used qmail before; I found it a pain to configure and it's getting old, so I thought I should probably try something new. I don't have much experience with running POP3 or IMAP on Linux, so I would love a suggestion on that.
I want to back up an entire Linux system to a 3TB external Western Digital USB3 drive.
I do not want to reformat it from what it is, apparently NTFS.
Is there a utility that can act like a file manager (like mc) and permit me to create an ever-expanding (to 320Gb) TAR file that will retain all the original file permissions? I have had nothing but disappointment with Linux backup utilities on a FAT32 external drive, and I am concerned that if I just try to tar the entire drive at once, with around 3 million files, I might run out of memory.
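For reference, tar streams the archive as it writes it, so it does not hold the whole file list in memory, and it records ownership and permissions inside the archive even when the archive file sits on an NTFS drive. A sketch of a whole-system archive written straight to the external drive (the mount point and exclusion list are illustrative):
# archive the root filesystem, preserving permissions (-p), onto the NTFS drive
tar -cpzf /mnt/wd3tb/system-backup.tar.gz \
    --one-file-system \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/mnt --exclude=/tmp \
    /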