Ubuntu Servers :: Recover From Backup Causes Error
May 3, 2010
I am running Ubuntu 8.04 server. Unfortunately, a couple of days ago I thought I should upgrade, as desktop upgrades usually go without a hitch and are very easy. I forgot that my server is live with a few websites and a RADIUS server set up just the way I need (it took a painfully long time to figure out). Needless to say, the upgrade caused many config file changes and many things stopped working. I panicked since this is a live server, so I went straight to the backups to recover my system. I booted from a live CD and copied the entire system over the top of the new one.
Everything that needs to work does work; however, I now get this message in my mail about every 10-20 minutes:
Subject: Cron <root@IMwebserver> [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -r -0 rm
Content-Type: text/plain; charset=ANSI_X3.4-1968
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=root>
xargs: xargs.c:443: main: Assertion `bc_ctl.arg_max <= (131072-2048)' failed.
Aborted
Googling it, I found that it's a problem with findutils. I tried to reinstall findutils, with no luck.
My backup script looks like so:
@daily /usr/bin/rdiff-backup --exclude /dev --exclude /tmp --exclude /var/run/cups/cups.sock --exclude /var/log --exclude /mnt --exclude /media --exclude /proc --exclude /sys --exclude /var/cache/apt / /media/removable/BACKUP/rdiff/
How can I fix my system so the above e-mail no longer occurs?
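For what it's worth, that xargs assertion usually points at a findutils binary that no longer matches the rest of the system, which fits a restore that copied old 8.04 binaries over a partially upgraded install. A minimal check sketch, assuming that is what happened here:
Code:
# which release the system believes it is, vs. the installed package and the binary on disk
lsb_release -a
dpkg -l findutils
xargs --version
# if they disagree, force a fresh copy of the package's files
sudo apt-get install --reinstall findutils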
I am using software RAID in Ubuntu Server Edition 9.10 to mirror (RAID1) two 1TB hard drives. These are used for data storage and websites. I also have an 80GB hard drive for the operating system. This drive has no backup or RAID at all. Should this drive crash and the system therefore become unbootable, will I be able to recover the data on the 1TB drives, or should I back up the 80GB drive as well?
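The mdadm metadata lives on the 1TB drives themselves, so the mirrored data can be reassembled on a rebuilt or different system even if the OS disk dies; an image of the 80GB drive just makes the rebuild much faster. A hedged sketch of imaging it onto the RAID1 data area while booted from a live CD (device names and paths are examples):
Code:
sudo dd if=/dev/sda bs=4M conv=noerror,sync | gzip -c > /mnt/data/os-drive.img.gz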
I rsynced the filesystem where I have my server to another HD. Now, when I try to boot, I'm dropped at initramfs with an error. It looks like it's still looking for the root on the previous HD even though I already changed /etc/fstab. It says it can't find the device with a certain UUID, and that UUID is from the previous HD.
Here are the full details: I'm running Ubuntu Server 10.04. It has 2 hard drives. Every night it backs up one to the other with the command
I moved the HD where I have the backup to another machine and rsynced them with the same command. I then changed /etc/fstab on the new machine. I also installed GRUB on it. When I boot the new machine I get an error about not finding root. It says that a device is not present. It gives the UUID of the device it is looking for, and it's the UUID of the first HD.
I thought I only had to change /etc/fstab, but it seems I was wrong.
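Besides /etc/fstab, the GRUB configuration and the initramfs can both still carry the old root UUID. A sketch of what usually has to be regenerated, assuming GRUB 2 on 10.04 (device names are examples), run on the new machine or from a chroot into the new disk:
Code:
blkid                          # note the new root partition's UUID
sudo grub-install /dev/sda     # reinstall the boot sector on the new disk
sudo update-grub               # regenerates grub.cfg with the new UUID
sudo update-initramfs -u       # the initramfs can also reference the old UUID (e.g. for resume)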
I have a scheduled backup that runs on our server at work, and since 7/12/09 it has been making 592 KB files instead of 10 MB files. In mysql-admin (the GUI tool) I have a stored connection for the user 'backup'; the user has SELECT and LOCK rights on the databases being backed up. I have a backup profile called 'backup_regular', and in the third tab it is scheduled to back up at 2 in the morning every week day. If I look at one of the small backup files generated, I see the following:
Code:
-- MySQL Administrator dump 1.4
--
-- ------------------------------------------------------
-- Server version
[code]....
It seems that MySQL can open and write to the file fine; it just can't finish the dump.
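One way to narrow this down is to run an equivalent dump by hand with the same restricted 'backup' user and compare the output with what the schedule produces (database name and paths below are examples):
Code:
# --single-transaction needs only SELECT rights on InnoDB tables
mysqldump -u backup -p --single-transaction --databases mydb > /tmp/backup_test.sql
ls -lh /tmp/backup_test.sql        # compare against the 592 KB files
tail -n 3 /tmp/backup_test.sql     # a complete dump ends with a "-- Dump completed" comment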
I installed Ubuntu 10.04 on a USB stick in persistent install mode, so I could boot my laptop or my desktop computer from the stick. Once, I needed the 8GB stick for another purpose, so I thought about copying it to my desktop by doing, from Mac OS X: dd if=/dev/disks3s of=/Users/jack/Desktop/usb_copy
Now I am trying to do the opposite, after having used the stick, which was formatted to NTFS, just doing
but although I can see that almost all of the files are there, I cannot boot again. It is also odd that the file permissions look strange, something like _user
Suppose I have a good backup of the / root filesystem. How do I recover the / root area? Suppose I have modified the root filesystem, perhaps I updated some of the packages and regret it, and I want to get back to the system as it was at the time of the backup. How do most Linux people recover the root area of a system from a backup?
1) I wondered if I might put a System Rescue CD in and boot off it?
2) And then NFS-mount the directory containing the backup? (In my case, I have made a good backup using rsync, to a directory elsewhere on the network.)
3) And then, still booted off the System Rescue CD, mount the partition that contains the / root area in question?
4) Would I then clear or empty or delete the contents of the / root partition?
5) And then copy across all the files from the backup into the / root partition?
I ask these questions because of the (very nice) way a Linux OS is built entirely from packages... Am I being too complicated? (By comparison, I can see it is easy to recover user data.) If, instead, I simply recovered the backup straight onto the updated root filesystem, I wonder what it would look like if I then tried to verify it with "rpm -Va", for example? Surely all the packages would fail the verification, because the package database would think it has the later version of each package from the update, but the actual files would have been overwritten by the earlier versions from the backup?
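That is essentially the standard approach. A minimal sketch of steps 1-5 from a rescue CD, assuming the rsync backup is reachable over NFS (device names and paths are examples); because the whole tree, including the package database, is restored together, a later package verification reflects the backed-up state rather than the regretted update:
Code:
mkdir -p /mnt/root /mnt/backup
mount /dev/sda1 /mnt/root                         # the partition holding /
mount -t nfs backuphost:/backups/rootfs /mnt/backup
rsync -aAXH --numeric-ids --delete /mnt/backup/ /mnt/root/   # --delete removes files added after the backup
# if /boot was part of the backup, reinstall the bootloader from a chroot afterwards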
I'm trying to recover a compressed MySQL backup. As the backup is extremely large, I don't want to decompress it before importing. How can I make a MySQL variable take effect before I load this compressed file into the database?
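A minimal sketch, assuming a gzip-compressed dump: prepend the variable assignments to the stream instead of editing the file. The particular settings shown are just examples of ones commonly toggled for large imports:
Code:
( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;";
  zcat backup.sql.gz;
  echo "COMMIT;" ) | mysql -u root -p mydatabase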
I need to make a scheduled backup of a Subversion repository in Ubuntu, e.g. back up the repository at 13:00 every Monday. Do I need to write some hook scripts to do that? I also have to be able to recover the backup of the repository. If possible, I want to back up the trunk of the repository. My repository is project1:
/project1
    /trunk
    /tags
    /branches
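No hook scripts are needed for a scheduled dump; a crontab entry is enough. Note that svnadmin dump always covers the whole repository (trunk, tags and branches together); paths can be filtered on restore with svndumpfilter if needed. A sketch, with paths as examples:
Code:
# crontab: dump the repository every Monday at 13:00 (% must be escaped in cron)
0 13 * * 1 svnadmin dump /srv/svn/project1 | gzip > /srv/backups/project1-$(date +\%Y\%m\%d).dump.gz

# restore into a fresh repository
svnadmin create /srv/svn/project1
gunzip -c /srv/backups/project1-20110103.dump.gz | svnadmin load /srv/svn/project1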
Does anyone know of any decent enterprise-level backup solutions for Linux? I need to back up a few servers and a bunch of desktops onto one backup server. Using rsync/tar.gz won't cut it. I need something like bi-monthly full HDD backups, and so on, with a nice GUI to add/remove systems from the backup list. I basically need something similar to CommVault or Veritas. Veritas I've used before, but it has its issues, such as leaving 30GB cache files. CommVault, I have no idea how much it costs, or whether it supports backing up to a hard drive rather than tape.
I currently have a group of 3 servers connected to a local network. One is a web server, one is a MySQL server, and the other is used for a specific function on my site (calculation of soccer matches!).
Anyway, I have been working on the site a lot lately but it is tedious connecting my USB hard drive to each computer and copying the files. This means I am not backing up as often as I should...
I have a laptop connected to this same network that I use for development, so I can SSH into the computers. Is there any software for Ubuntu that can take backups of files that I choose on multiple computers? I know I could rsync, but is there something with more of a GUI?
Then, every 2 days, I can just move the most recent backup from my laptop to the USB drive. Then I will have the backup stored in 2 places if things go kaboom somewhere.
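If a plain script is acceptable alongside whatever GUI tool gets chosen, a sketch of pulling selected paths from each machine over SSH into a dated folder on the laptop (hostnames and paths are examples):
Code:
#!/bin/sh
DEST=/home/me/server-backups/$(date +%F)
for host in webserver mysqlserver calcserver; do
    mkdir -p "$DEST/$host"
    rsync -az -e ssh "root@$host:/var/www/" "$DEST/$host/www/"
done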
I installed MediaWiki the other day and went with the default InnoDB option. However, a week later something went wrong, and since I have scripts that back up /var/ nightly, I just copied back the backup of /var/lib/mysql/wikidb/ (as I've done with MyISAM). Then, when I connect to the wikidb database, I can see the tables (via "show tables"), but when I do any query on them (check table X, select * from X) I get:
Code:
Table 'wikidb.X' doesn't exist

I've since read that you can't just copy the database directory like with MyISAM, and there appears to be no way that I can find to restore or fix InnoDB without a dump of the data. And I never got a chance to do a mysqldump of the data. So has anybody got any idea how I can at least view the "page" table from the files I've backed up in /var/lib/mysql/wikidb/?
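One hedged possibility: with the default settings, InnoDB keeps its data dictionary (and usually the row data) in the shared ibdata1 file, not in the per-database directory, so restoring only wikidb/ leaves MySQL with .frm files it cannot match. If the nightly /var/ backup also contains ibdata1 and the ib_logfiles from the same night, restoring them together may bring the tables back (paths are the Ubuntu defaults):
Code:
sudo service mysql stop
sudo cp -a /backup/var/lib/mysql/ibdata1 /backup/var/lib/mysql/ib_logfile* /var/lib/mysql/
sudo cp -a /backup/var/lib/mysql/wikidb /var/lib/mysql/
sudo chown -R mysql:mysql /var/lib/mysql
sudo service mysql start
If the nightly copy was taken while mysqld was running, the files may be internally inconsistent; setting innodb_force_recovery (starting at 1) in my.cnf is sometimes enough to get the "page" table dumped before rebuilding properly.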
I'm setting up Ubuntu 10.04 Server x64 on a Gateway DX4710. I installed on a 500GB SATA drive using encrypted LVM, added Webmin, and used ufw to configure iptables. All seemed fine. I then set up RAID1 on two 1TB SATAs. Using Webmin, I created Linux RAID partitions on sdb and sdc. I then ran ...
All still seemed fine. I could see /data in the Webmin filesystem list, and had ca. 1.4TB total local disk space. At that point, I decided that I really wanted an encrypted filesystem on /dev/md0. I also needed to tweak the fan setup. And so I shut down, without adding /dev/md0 to fstab. And it was probably still syncing. Now /dev/md0 is semi-missing. That is ...
sudo mdadm -D /dev/md0  => doesn't exist
sudo mdadm -E /dev/sdb1 => part of RAID1 with sdc1
sudo mdadm -E /dev/sdc1 => part of RAID1 with sdb1
What do I do now? Can I recover /dev/md0? Is it just that I didn't add it to fstab? Can I just do that now? Or do I need to delete sdb1 and sdc1, and start over?
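Since both members still carry RAID1 superblocks, the array can usually just be reassembled; leaving it out of fstab does not destroy it. A sketch (device names taken from the post):
Code:
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
cat /proc/mdstat                                   # check the mirror is back and (re)syncing
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # so it assembles at boot
sudo update-initramfs -u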
Ubuntu 10.04 on a remote server failed after updating the following packages: an update to grub-common from 1.98-1ubuntu8 to 1.98-1ubuntu9, and an update to grub-pc from 1.98-1ubuntu8 to 1.98-1ubuntu9.
The first updated OK and the second gave an error, "wrong name" or something like that. The update stopped the server, so I did a cold reboot, which seems to work, but I can't log in via root SSH, only by "rescue mode" SSH, and I am totally lost there.
I guess I need to finish the grub-pc or grub-common upgrade from the prior update process. I can use rescue mode to effectively boot my server into a live CD. I can SSH into the live CD OS, and my original OS is mounted for me to chroot into / back up / repair.
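A sketch of finishing the interrupted upgrade from the rescue system, assuming the original root is mounted at /mnt (the mount point and disk name are examples):
Code:
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt
dpkg --configure -a        # finish the half-installed grub-pc / grub-common
apt-get -f install
grub-install /dev/sda
update-grub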
My server crashed due to a power glitch. I am somehow able to get access to the /var folder, and I have copied all my web pages from the /var/www folder to a new system. I have installed a LAMP server, but I am still not able to access the database from phpMyAdmin.
I have copied the database to /var/lib/mysql. In phpMyAdmin it initially said there was no database when I replaced the mysql folder, but now, after a restart, it gives me "#2002 - cannot log in to the MySQL server", and even undoing the changes gives me the same error...
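Error #2002 means phpMyAdmin cannot reach mysqld at all, i.e. the server is most likely failing to start after the copy. A first-aid sketch (paths are the Ubuntu defaults):
Code:
sudo chown -R mysql:mysql /var/lib/mysql     # copied files often end up owned by root
sudo service mysql restart
sudo tail -n 50 /var/log/mysql/error.log     # shows why mysqld refuses to start, if it still does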
I have been hassling with this for several days now. I have 64-bit Ubuntu Server 10.04 running on an Acer Aspire EasyStore H340. I have Windows 7 running on a 64-bit desktop PC and on a laptop. I mainly wanted to use the Ubuntu server as a file server, so I installed Samba and created three shares. These do show up in Windows Explorer, and I can read and write to them. My Windows applications seem to be able to see the shares and open and save files.
My next step was to try and set up a backup of the Windows 7 PC to the Ubuntu server. Windows' integrated backup sees the shares and sub-directories within the shares, and the initial part of the backup seems to run OK, but when it tries to save an image of the "C:" drive it works for a long time and then ends with an error (cannot complete backup).
So, I looked for some free backup programs to try, but these do not allow me to select the shares as a destination (invalid destination). The dialogue sees the drives I have mapped the shares to in Windows, but does not show any sub-directories, and selecting the mapped drive letter is not accepted as a destination. If I try to browse down through "Network" in the destination dialogue, it selects "Network," but does not expand it or accept it as a destination.
So, I partitioned my 2nd 1TB SATA drive on the server, formatted it as ext3, and mounted it as "storage". I set this up as a share in Samba and gave everyone read-write access, but still no luck selecting it as a backup destination. After some Googling, I downloaded and installed Ext2Fsd-0.48 (a Windows 'driver' for Ext2/Ext3). This installed correctly, but when I open the program, neither "Network," the shares, nor the mapped drives show up anywhere.
Is it possible with dd to have the output file stored at some remote location? I do not have free space on the LVM partition whose backup I want to make via dd.
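Yes: dd writes to stdout by default, so the image can be piped straight over SSH instead of landing on the full local disk. Hostname, volume and path below are examples:
Code:
sudo dd if=/dev/vg0/lv_data bs=4M | gzip -c | ssh user@backuphost 'cat > /srv/backups/lv_data.img.gz'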
I have a file server running in an office that's mostly used for file sharing; a scanner also saves PDF files to the server. I'm running the latest LTS Ubuntu Server edition, and I really only have SSH and Samba installed. My question is that I've done so much to the server as far as permissions and configuration go, and I'd like to make a clone of it on another computer, but I'm not sure how I would do this.
I'm not sure if Clonezilla or something like it can perform this task? I basically just have a very old computer, and now I have another very old computer that I want to make into a spare just in case something happens to the original. Any recommendations on how I would accomplish this?
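Clonezilla can image the disk while the machine is offline; since this box is small and live, an rsync clone over SSH is another hedged option (the spare's hostname is an example):
Code:
sudo rsync -aAXH --numeric-ids \
    --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
    / root@spare-server:/
# afterwards adjust /etc/fstab on the spare (its UUIDs differ) and run grub-install there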
After I installed the "GNOME Software Development" group with yum, I couldn,t log into the Gnome desktop. An error message says "Oh no! Something has gone wrong. A problem has occurred and the system can't recover. log out and try again", and it repeated every time I tried to log in. I created a file "gnome-shell.desktop" in the ~/.config/autostart directory, the error message no longer appears, but I get only a wallpaper which means no top bar or else. The file's content is as follows:
I'm looking for way to automatically backup a few machines to my server. Does anyone know a good guide to set this up? I want it to pull the files from the machines at a certain time every week.
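rsync over SSH, driven from the backup server's crontab with a passwordless key authorized on each client, is the usual way to pull at a fixed time. Hostnames, paths and the schedule below are examples:
Code:
# m h dom mon dow  command  (every Monday at 02:30)
30 2 * * 1 rsync -az -e ssh root@client1:/home/ /srv/backups/client1/home/
30 2 * * 1 rsync -az -e ssh root@client2:/home/ /srv/backups/client2/home/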
I want to run a Postfix server as a backup MX, but does anybody know how I can collect the first server's mail with this one? This is a multi-POP kind of setup, but how can I do it with Dovecot or any other POP collector?
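A backup MX normally just queues and relays mail until the primary is reachable again; if the goal really is to pull mailboxes off the primary with a POP collector, fetchmail is the usual tool. A sketch of a fetchmail config, written as a shell here-document (hostname, account and password are examples):
Code:
sudo tee /etc/fetchmailrc > /dev/null <<'EOF'
set daemon 300
poll mail.primary.example proto pop3
    user "mailuser" pass "secret"
    smtphost localhost
    keep
EOF
sudo chmod 600 /etc/fetchmailrc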
I have a Linux host acting as an iSCSI server for a Windows box. I want to keep an off-site backup, so I figure rsync will keep the iSCSI server synced with an off-site Linux host. I understand that rsync does block-level incremental transfer to conserve bandwidth. OK, awesome. The trick is that I also want an archival copy kept. Say I want to go back to a revision of a file from 10 days ago; I need to be able to do that.
I was planning on using Backup Exec, since we currently have a licensed copy: throw the archives from Backup Exec onto the iSCSI server as well, and have it keep a rotating 30-day backup, or something like that. The issue I see here is that this will be creating and deleting files as it does its daily backup rotation. I'm guessing rsync will see these as new files and likely retransmit everything on a daily basis. The question then becomes: is this assumption correct, or will it still know to do a block-level incremental transfer even when file names and such are changing?
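The assumption is essentially correct: rsync's delta transfer works per file path, so files that are renamed or recreated under new names each day get resent in full. Keeping the history on the rsync side with hard-linked snapshots is one common way around it (paths are examples):
Code:
# each day's tree hard-links unchanged files against the previous day's copy
rsync -a --delete --link-dest=/srv/archive/day.1 user@iscsi-host:/data/ /srv/archive/day.0/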
I have two shares in total, and there are also two external hard drives. The server is used by two different organisations that are not supposed to have access to each other's data (at least not as normal users). The script I need should run in the background on the server, and when a drive is plugged in it should check which organisation the drive belongs to and, depending on that, back up the respective share. When the drive belongs to neither, it should just do nothing. Unfortunately, I have no clue about scripting, which makes writing a script like this, at least for me, impossible. So I wanted to know if somebody could name some good websites for learning to write such a script, or give some tips.
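As a starting point, a hedged sketch of a script that a udev rule (or a manual run) could call with the device node; it identifies the drive by its filesystem label. The labels, share paths and mount point are all examples:
Code:
#!/bin/sh
# Usage: orgbackup.sh /dev/sdX1 -- back up the share matching the drive's label
DEV="$1"
LABEL=$(blkid -s LABEL -o value "$DEV")
case "$LABEL" in
    ORG_A_BACKUP) SRC=/srv/share_org_a ;;
    ORG_B_BACKUP) SRC=/srv/share_org_b ;;
    *)            exit 0 ;;               # unknown drive: do nothing
esac
mkdir -p /mnt/orgbackup
mount "$DEV" /mnt/orgbackup || exit 1
rsync -a --delete "$SRC/" /mnt/orgbackup/
umount /mnt/orgbackup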
I support a small business which has an Ubuntu server running as a file server. The server is running Ubuntu 10.4. There is one hard drive which is mounted as /media/hdd. Each night this is backed up to an external USB hard drive mounted as /media/backup. The backup is carried out using the command:
Code: rsync -av /media/hdd/ /media/backup/
Is there a way to encrypt this back-up so that if the USB hard drive is plugged into another machine it cannot be read?
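One common approach is to make the external drive itself a LUKS volume; the existing rsync line stays the same, and without the passphrase the drive is unreadable on another machine. The device name is an example, and luksFormat erases the partition:
Code:
# one-time setup (destroys existing data on the partition)
sudo cryptsetup luksFormat /dev/sdb1
sudo cryptsetup luksOpen /dev/sdb1 backup
sudo mkfs.ext3 /dev/mapper/backup

# each backup run
sudo cryptsetup luksOpen /dev/sdb1 backup
sudo mount /dev/mapper/backup /media/backup
rsync -av /media/hdd/ /media/backup/
sudo umount /media/backup
sudo cryptsetup luksClose backup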
I want to make a daily backup of my websites from the Ubuntu server over FTP to another server I own. The backup schedule and process work; the problem is the backup restore. WinRAR says the file is corrupt, and 7-Zip crashes.
The backup archive looks OK (the same size as the original folder), and you can also extract it with WinRAR if you ignore the error. But the extracted folder only contains one or two subfolders and one file (usually an image), and that's all.
If I try to restore from Webmin it doesn't report any error, and it looks like the restore worked. But the restored files are nowhere to be found.
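A classic cause of exactly this symptom is the archive being transferred in ASCII mode, which silently mangles binary data. A sketch of forcing binary mode and verifying the archive before trusting it (hostname, credentials and paths are examples):
Code:
ftp -n backuphost <<'EOF'
user backupuser secretpassword
binary
put /var/backups/site-backup.tar.gz
bye
EOF

# on the receiving side, verify the archive is readable end to end
tar -tzf site-backup.tar.gz > /dev/null && echo "archive OK"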
I have a personal Ubuntu server that provides Apache, GlassFish, firewall, routing, email, CVS, MySQL, etc. This server has been running for a while with two hard drives configured as a RAID1 array. The array has two partitions, one for swap and one for the data. I currently back up the data with a removable hard drive: I use dd and create an image of one drive and the MBRs (partition tables) of each drive. In a disaster situation I can use this data to recreate one drive and then re-mirror it to the second, or just boot the backup. I like this solution because I can easily recover from bare metal, and the backup is transparent; I can browse it if needed, since it's an uncompressed image of the drive. The one drawback is that I need to reboot the system with a Linux CD to do the backup.
My hard drive space is almost at capacity, so what I want to do is add a third drive to the array and migrate it to RAID5. However, this will break my current backup method. How can I back up this RAID5 array? I need to back up the entire system, not just the data. I have made many tweaks to the system over the years it has been running that I can't lose if a restore is needed. I have seen a large thread here where people have been using tar. My concern with tar is: how do you use a tar archive to restore a system to a new array? I'm assuming that you would need to set up the array and then just restore the archive? Also, I don't have much faith in using tar on a running system. Doesn't this open you up to corrupted backups? My second idea is using rsync. While I consider myself experienced in Linux from 10 years of personal and professional use, I have not had much experience with this utility. Would rsync provide a more reliable way to back up a running system that would enable a bare-metal restore later? I once read something about people using rsync with hard links to create a backup that could store many incremental backups. My main concern with both rsync and tar is not being able to restore the OS to the state that it was in at the time of the backup.
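On the restore question: yes, the usual pattern is to recreate the array and filesystem first, unpack the archive into it, and then put the boot pieces back; tar or rsync on a running system is workable as long as databases and the like are dumped separately or quiesced first, and /boot is often kept on RAID1 or a plain partition rather than RAID5 so GRUB can read it. A sketch (device names and paths are examples):
Code:
# build the new array and filesystem
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt

# unpack a full-system archive made earlier with: tar -cpf backup.tar --one-file-system /
sudo tar -xpf /media/removable/backup.tar -C /mnt --numeric-owner

# make it bootable again
sudo mount --bind /dev /mnt/dev; sudo mount --bind /proc /mnt/proc; sudo mount --bind /sys /mnt/sys
sudo chroot /mnt sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf; update-initramfs -u; grub-install /dev/sda; update-grub'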
I'm going to make a nightly backup copy from one server to another, using rsync. If I have a sufficiently large file, say 4+ GB or so, I'm not interested in copying the whole file if only a small change has been made. Can rsync detect small changes at the block level and back up only those if needed?
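Yes. rsync's delta algorithm compares the file in rolling-checksum blocks and only sends the changed blocks, as long as the old copy already exists at the destination; for network transfers this is the default. Two flags matter for very large files (paths are examples):
Code:
# --inplace updates the existing file instead of rebuilding a temporary copy;
# --no-whole-file keeps the delta algorithm on even for local/NFS destinations
rsync -av --inplace --no-whole-file /srv/images/vm-disk.img backuphost:/srv/backup/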
I currently have an Ubuntu 10.04 Server with ten 2TB hard drives (hot-swappable). I discovered that having a software RAID over 16TB is not supported, so I split the drives into 2 sections and have 2 software RAID arrays storing my movies, audio, pictures, and other software. The total current usage is around 7TB. Since backing the files up to DVDs or even Blu-ray is laughable, I am going to back the system up to 2TB hard drives, probably 4 of them. The problem is that I can only hook one backup drive at a time into the system using a hot-swap tray. Now, I know that I can do this manually by copying the files one at a time to the drive until it is full, switching the drive out and repeating, but I am hoping for an automated solution: start backup, plug the first drive in, the system fills up the drive, swap and repeat. It would also be nice if the system remembered what had already been backed up, so that when I add files to the system, I only need to attach the last drive and not start the process over.
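One hedged way to script this: keep a list of what has already been copied to earlier drives, fill the currently inserted drive until it is nearly full, then ask for a swap and rerun. The source path, mount point and list file below are examples:
Code:
#!/bin/sh
# fill-drive.sh -- copy not-yet-backed-up files to the drive mounted at /media/backupdrive
DONE_LIST=/root/backup-done.list
SRC=/srv/media
DEST=/media/backupdrive
touch "$DONE_LIST"

find "$SRC" -type f | sort | while read -r f; do
    grep -qxF "$f" "$DONE_LIST" && continue                  # already on an earlier drive
    need_kb=$(du -k "$f" | cut -f1)
    free_kb=$(df -kP "$DEST" | awk 'NR==2 {print $4}')
    if [ "$need_kb" -ge "$free_kb" ]; then
        echo "Drive full -- swap in the next one and rerun"
        break
    fi
    rsync -a --relative "$f" "$DEST"/ && echo "$f" >> "$DONE_LIST"
done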