Debian :: File Integrity For Backups Of Home Data?
Nov 17, 2010
I'm currently backing up our home data (pictures, videos, and our CDs ripped to FLAC, which I spent a lot of time tagging accurately), totalling almost 300 GB, on 2 external USB drives, one of which is meant to stay at a friend's. I left the factory MS-DOS (FAT) filesystem as it was, thinking it could be useful to be able to connect the drives to a Windows machine with no problems. It's certainly useful to have «normal» data that I can take with me, e.g. when visiting my family.
I'm simply using rsync manually, checking for suspiciously changed or deleted files before committing the change. I do that every 2 weeks or so.
Now I want to add file integrity management to my backup scheme: I want to be able to check that new data I'll be committing has not been tampered with (an integrity check before updating tags on my main drive), and I want to be able to check that the backed-up data is still sane on my USB drives, especially if I need to recover from data corruption on my main drive.
Since I'm essentially mirroring the data, I thought run-of-the-mill integrity software would let me just rsync the integrity database, and I'd be done.
But after browsing through the docs of Tripwire, AFICK and the like, I fear they work only with absolute paths, so the database for my main drive wouldn't work for my USB drive, which is obviously mounted elsewhere when I plug it in.
So, I feel I'm missing something. It looks to me like I'm trying to solve a very common problem; how do people do it?
Did I miss a file integrity software that works with backups?
Is there a trick like using a symbolic link pointing to whatever file hierarchy I want to check, and having Tripwire/AFICK/... monitor that link?
Should I run a more elaborate backup system than plain rsync? Which one? (storeBackup for instance looks promising since it involves md5 sums, but it's targeting a completely different problem, and I'm not sure I can use it at all for what I need.)
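One common trick, sketched below under assumptions (directory names are placeholders): build the checksum manifest from *inside* the data tree, so every path is relative. The same manifest file then verifies the tree wherever it is mounted, and rsync carries it to the mirror along with the data.

```shell
#!/bin/sh
# Build a checksum manifest with RELATIVE paths so it verifies no matter
# where the tree is mounted (main drive, USB backup, friend's machine).
# Demo uses a throwaway tree; point DATA at your real data root instead.
set -e
DATA=$(mktemp -d)
mkdir -p "$DATA/music"
echo "flac bits" > "$DATA/music/song.flac"

# Create the manifest from inside the tree: entries read ./music/song.flac,
# not /media/usb/music/song.flac. Exclude the manifest itself.
( cd "$DATA" && find . -type f ! -name MANIFEST.md5 -exec md5sum {} + > MANIFEST.md5 )

# Verify the same way from any mount point: cd into the tree first.
( cd "$DATA" && md5sum -c --quiet MANIFEST.md5 ) && echo "tree OK"
```

Since the manifest sits at the root of the mirrored tree, a plain `rsync` of the tree keeps data and integrity database in sync, which is exactly the property Tripwire-style absolute-path databases lack.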
View 3 Replies
Nov 27, 2010
A while back I installed Dreamlinux 3.5 Gnome edition using ext2. When I attempted to use the email address books I imported from the Dreamlinux 3.5 XFCE edition, which had been ext3, I discovered that none of the email addresses could be mailed to. I had to manually type in the addresses.
When I reinstalled Dreamlinux 3.5 Gnome using ext3, the same backup files that did not work in ext2 now work just fine. The question is, was this a "broken data" problem caused by the switch to ext2 file system or something else? Has anyone else experienced this?
The mail program is Thunderbird.
View 2 Replies
Jan 6, 2010
I've been looking for a good data integrity test tool for Linux, but I'm having trouble finding one. Basically I'm looking for an application that will generate a heavy I/O load on a raw device and then perform some kind of data verification on the device. In my case the raw device will be an md RAID5 array.
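Tools like `badblocks -w` and `fio --verify` do this thoroughly; the core write-then-verify pass they perform can be sketched as below. TARGET here is a scratch file standing in for the raw device; pointing it at something like /dev/md0 is destructive, so do that only on disposable data.

```shell
#!/bin/sh
# Minimal write-then-verify load: write a known pseudo-random pattern to a
# target, read it back, and compare. TARGET is a scratch file stand-in for
# a raw device; writing to a real device destroys its contents.
set -e
TARGET=$(mktemp)
PATTERN=$(mktemp)

# 4 MiB of random reference data.
dd if=/dev/urandom of="$PATTERN" bs=1M count=4 2>/dev/null

dd if="$PATTERN" of="$TARGET" bs=1M 2>/dev/null        # write phase
sync
dd if="$TARGET" of="$PATTERN.read" bs=1M 2>/dev/null   # read-back phase

cmp "$PATTERN" "$PATTERN.read" && echo "verify OK"
```

In practice you would loop this over the whole device in chunks and with varying patterns, which is precisely what `badblocks -w` automates.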
View 1 Replies
Jul 18, 2009
Is there a way, a command, or a program that can be used to test the integrity and/or to check the data on an external hard drive?
View 5 Replies
Jul 1, 2010
So basically I have 6x 1TB drives in a Linux RAID5 array, 2x 1.5TB drives running standalone, and 1x 2TB drive. I want to back up data weekly from the RAID5 array to these independent disks. I configured 3 directories, one for each disk.
data1 => 2TB drive with 1.1TB of valuable data
data2 => First 1.5TB drive with 430GB of tech tools and Linux ISOs
data3 => Second 1.5TB drive with 46GB of misc items
[code]....
View 2 Replies
Jun 14, 2010
I installed Ubuntu 10.04 on the laptop and it looks pretty good. I currently run 9.10 on the main desktop and would like to upgrade to 10.04, by pressing "upgrade" in the update manager, but I have some questions before I do, namely about data loss.
If I upgrade, will stuff like Thunderbird keep my emails, will FF keep its profile (cookies, bookmarks, addons etc.), will Documents keep all my documents? I have an Apache server installed with a few websites - will they still be there after an upgrade? I also have a virtual machine with windoze on it; what about all the stuff in there and VMware itself?
Or will I need to back everything up onto an external hard drive (I'm not sure how to back up Thunderbird and FF), then reinstall everything and transfer all the documents, websites etc. back over again?
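For the "not sure how to back up Thunderbird and FF" part: both keep everything in dot-directories under $HOME (`.thunderbird` and `.mozilla`), so archiving those two trees before the upgrade covers mail, bookmarks, cookies and addons. The demo below uses a throwaway directory standing in for $HOME and the external drive.

```shell
#!/bin/sh
# Archive the dot-directories holding Thunderbird mail and the Firefox
# profile. FAKEHOME stands in for $HOME and BACKUP for the external
# drive's mount point -- substitute the real paths.
set -e
FAKEHOME=$(mktemp -d)
mkdir -p "$FAKEHOME/.thunderbird" "$FAKEHOME/.mozilla/firefox"
echo mail > "$FAKEHOME/.thunderbird/inbox"

BACKUP=$(mktemp -d)
tar -C "$FAKEHOME" -czf "$BACKUP/profiles.tar.gz" .thunderbird .mozilla

# Confirm the archive actually contains the mail store.
tar -tzf "$BACKUP/profiles.tar.gz" | grep -q 'thunderbird/inbox' && echo "archived"
```

Restoring after a reinstall is the reverse: `tar -xzf profiles.tar.gz -C "$HOME"` before first launching either program.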
View 3 Replies
Aug 12, 2011
I want to get a program to check (a) hard disk and (b) file integrity. I have had a file - an ODS spreadsheet accessed using LibreOffice - that appears corrupt. I have drilled down and down and eventually identified a corrupt image component inserted in a sheet.
The error thrown by LO is: "Error saving the document FILE: Write Error. The file could not be written." The thread can be read here: http://thread.gmane.org/gmane.comp.d...fice.user/9266.
What has confused me is that my machine cannot save the file once a sheet is deleted. It is specific to this particular file. The file is stored on an ext4 disk mounted by fstab. I also share the file using Samba. Regardless of how I access this file I get the error. If, however, I access the exact same file using OpenOffice 3.1.0 OOO310m11 (Build: 9399) from an old Windows Vista tablet, I can manipulate the file successfully - no error is encountered. I can also access the same file from another Ubuntu machine via NFS and Samba, and both work. I am confused, and beginning to think my disk is damaged or something. How do you check hard disks to verify that the disk surface is intact, and that the expected file checksum matches the value stored by the filesystem (the Linux equivalent of the FAT table)?
I am using Ubuntu 10.04 LTS and all hard disks are formatted with ext4. All formatting and management of the disks has been done using the 'Disk Utility' in the System > Administration menu. If I look in GParted, it will not check the disk the file is on, stating "eslabel: no such file or directory... couldn't find valid filesystem superblock...". The filesystem nevertheless mounts OK without any errors. Another hard disk on the same system, currently being used for backup, does not even have the menu option for checking the disk enabled.
What is the best way of verifying the integrity of the filesystem, the actual files, and the underlying physical media? I should point out that only this single file currently displays any issues. I either have a damaged disk, a corrupt file, or LibreOffice has a very peculiar bug that only appears in a unique combination of events on my machine. As part of my investigations I have been updating Ubuntu, have reinstalled LibreOffice from their website, and have checked that all dependencies are met.
View 3 Replies
Mar 25, 2011
I want to create one tar.gz file that contains my /home, /etc and /root directories.
a) The process ended with an 88GB file (which is OK) but with the following message: "tar: Exiting with failure status due to previous errors." I have searched a little but could not find what went wrong.
b) What are the limitations of tar and gz for backups? Of course I fully understand that they cannot be used for differential backups (if that is what it's called).
c) Let's say that my backup will be a file of 100GB and I want to see the contents of the .tar.gz. In KDE there is a program called Ark. Can Ark handle such big files? Does it use my hard disk (e.g. /tmp) to uncompress the file in order to show me its contents? It might be the case that the uncompressed data is much bigger than the space left on the hard disk.
d) How can I do an integrity check when my tar.gz file is created?
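For (d), two cheap checks exist out of the box: `gzip -t` validates the compressed stream, and `tar -tzf` forces tar to walk every member. Recording a sha256 at creation time additionally lets you re-verify the archive months later. A sketch with a throwaway archive:

```shell
#!/bin/sh
# Integrity checks for a freshly created .tar.gz archive.
set -e
WORK=$(mktemp -d)
mkdir -p "$WORK/tree"; echo x > "$WORK/tree/file"

tar -czf "$WORK/backup.tar.gz" -C "$WORK" tree

gzip -t "$WORK/backup.tar.gz"                 # compressed stream intact?
tar -tzf "$WORK/backup.tar.gz" > /dev/null    # every member listable?

# Record a checksum now; re-run the -c line any time later to detect bit rot.
( cd "$WORK" && sha256sum backup.tar.gz > backup.tar.gz.sha256 )
( cd "$WORK" && sha256sum -c --quiet backup.tar.gz.sha256 ) && echo "archive OK"
```

Note both checks stream the file, so even a 100GB archive needs no temporary disk space to verify, unlike unpacking it.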
View 11 Replies
Nov 11, 2009
I would like to replace my aging Freenas box with a CentOS based NAS. I would like it to have the following features:
- SAMBA with Web admin
- BIND with Web admin
- Nice to have would be native AFP support for my Mac
Question 1: Is SWAT still the current Web-based config tool for SAMBA or is there something newer?
Question 2: Is there a decent, reliable, web front end for BIND?
Question 3: Is there a current AFP guide for CentOS? Everything I am finding via Google is years old.
I would also like to build a second server that I would host off-site to sync my data to.
- I was thinking OpenVPN for the link between the servers. The "remote" NAS will have to be the one initiating the connection.
Question 4: Is OpenVPN the way to go or is there something better? (I need bidirectional communications)
Question 5: Is rsync still the way to go for the data sync or is there something newer which would be lighter and/or faster?
Just need some "current" advice overall - I think the last SAMBA box I built was 4-5 years ago so what I learned then may not be applicable today.
View 10 Replies
Feb 9, 2010
How do I do easy file integrity checking on Fedora 11? Just to make sure that the necessary core OS files are not corrupted, using rpm and yum.
View 2 Replies
Feb 24, 2010
Can anyone recommend any utilities that would check the hard drive and its contents (whether they are still good) following a power outage?
View 1 Replies
May 3, 2010
Dropbox will not start properly because my Lucid installation is on an SSD (/dev/sdc) but my data, including my Dropbox folder, is on an internal NTFS-formatted HD (/dev/sda); I also have another internal HD for backups (/dev/sdb).
For some reason I can get the backups HD to auto-mount on startup, but not the data HD. My fstab file looks like this:
View 3 Replies
Feb 9, 2011
I've been using Brasero to burn ISO images onto DVD-Rs, and although the burns are successful (I know by testing them), Brasero doesn't seem to perform its file/image integrity check from Tools > Check Integrity. The Brasero plugin "File checksum" or "Image checksum" is checked (I can't have both checked), and when I perform a manual check with an md5 file and its burned DVD, it always immediately gives me a success message: "the file integrity check was performed successfully" - even when I insert a different image with the previous md5 file! So: an immediate response (it doesn't seem to be checking anything; I know that md5-checking a 4GB DVD should take some time), and success whatever DVD image is used with a given md5 file. Something must be wrong, or the feature is broken.
View 4 Replies
Feb 3, 2011
I've been a DOS/Windows guy for 20 years, and recently became a SW test lab helper. My company uses CentOS for a lot, so I've become familiar with it, but obviously not as comfortable as I am with Windows.
Here's what I have planned:
machine: Core 2 Duo E8400, 8GB DDR2, 60GB SSD OS drive, ATI 4650 video card, other storage is flexible (I have 3 1TB drives and 4 750GB drives around that can be used in this machine.)
uses: HTPC, Network Storage, VMWare server host: SMTP, FTP server, and Web server virtual machines
I've figured out how to do much of this, but I haven't figured out how to do backups in Linux. I've been spoiled by Windows, with its built-in backup system that is so simple to use. I find myself overwhelmed by the array of backup software, and unable to determine which to use. None of them seem to do everything I need, but some come close, I think. I'm hoping someone here can help me figure out which program to use and how to use it.
Here is what I need the backup software to do:
1. scheduled unattended backups, with alerts if the backups fail
2. a weekly full backup with incremental every 12 hours
3. removing the old backups when the new full backup runs, I would prefer to keep 2 weeks of backups, but that's not necessary
4. a GUI would be preferable, since my arthritic fingers don't always do as I want them to do. I typo things a lot, and the label worn off my backspace can attest to that.
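Requirement 2 (weekly full plus 12-hourly incrementals) is something GNU tar can do by itself with `--listed-incremental` snapshot files, which most of the GUI backup frontends wrap. A minimal sketch with throwaway directories (cron, which would drive the weekly/12-hourly schedule, is not shown):

```shell
#!/bin/sh
# Full + incremental backups with GNU tar's snapshot file: the level-0
# run records what was saved; later runs with the same snapshot file
# archive only what changed since.
set -e
SRC=$(mktemp -d); BK=$(mktemp -d)
echo one > "$SRC/a"

# Weekly level-0 (full) backup.
tar -C "$SRC" -czf "$BK/full.tar.gz" -g "$BK/snapshot" .

echo two > "$SRC/b"     # data created after the full backup

# 12-hourly incremental: contains the new file, not the unchanged one.
tar -C "$SRC" -czf "$BK/incr.tar.gz" -g "$BK/snapshot" .
tar -tzf "$BK/incr.tar.gz" | grep -q '\./b' && echo "incremental captured the new file"
```

Rotating (requirement 3) then amounts to deleting the previous week's full+incremental set when a new full run succeeds; GUI frontends such as Back In Time or Deja Dup automate that rotation if scripting cron is unappealing.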
View 7 Replies
Nov 12, 2009
How do you get rsync to do incremental backups rather than full backups? At the moment I have a script that will create a backup folder (if it doesn't already exist) and then copy the source files into the backup directory with the command
rsync $VERBOSE --exclude=$TARGET/ $EXCLUDE --exclude '/Ls-wtgl1c8/**' -rt --delete $source/ $TARGET/$source/ >> $LOG_FILE
$TARGET is where the files will be backed up to, $source is the dir(s) to be backed up, $EXCLUDE is the list of files not to back up, and $LOG_FILE is where the output will be saved. At the moment it only does full backups, but I would like to do only incrementals; how would this be achieved? Am I missing an rsync option that is required?
View 9 Replies
Dec 1, 2009
As you know, wherever we create a new file and open it in an editor, a backup file with a ~ suffix is immediately created alongside it. I deleted a file, but I always have a problem with its backup: how can I delete the backups along with the file?
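Since the editor backups just have a `~` appended to the name, you can either delete a file and its backup in one `rm`, or periodically sweep a whole tree with `find`. A sketch with throwaway files:

```shell
#!/bin/sh
# Delete a file together with its editor backup, or sweep all stray
# backup files (*~) in a directory tree at once.
set -e
D=$(mktemp -d)
touch "$D/notes.txt" "$D/notes.txt~"

rm -f "$D/notes.txt" "$D/notes.txt~"   # file plus its backup in one go

touch "$D/a~" "$D/b~" "$D/keep.txt"
find "$D" -name '*~' -type f -delete   # sweep every backup, keep real files
```

The `find` variant is handy as an occasional cleanup even when you forget the backup at deletion time.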
View 7 Replies
Nov 25, 2010
I'm having some trouble finding a file system that allows me to back up my data to an external HD and then access this data on other Linux (and sometimes Windows and Mac) machines.
The file system that I'm looking for must:
- have no user permissions: anyone can do anything with the data;
- have support for large files: I've used FAT for a while but it just sucks;
- maybe support access in windows and or mac with additional drivers;
- have journaling (or something of the sorts) to reduce the risk of data loss.
View 8 Replies
Aug 24, 2010
Does Debian support hibernation? I have a home file server; I'd like it to hibernate (and use as little power as possible) when not in use. To access it remotely we'd use wake-on-LAN. I couldn't find any how-tos on the wiki.
View 2 Replies
Jun 28, 2010
I just want to count how many backups were successful/failed based on the file. My file looks like the following:
# Directory Backup was successful on server1
# Directory Backup was successful on server2
# Directory Backup was failed on server3
[code]....
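Given that line format, counting each outcome is a one-liner with `grep -c`. The demo writes the three lines from the post into a scratch file and counts them:

```shell
#!/bin/sh
# Count successful and failed backups from a status file whose lines
# follow the "# Directory Backup was <outcome> on <server>" format.
set -e
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
# Directory Backup was successful on server1
# Directory Backup was successful on server2
# Directory Backup was failed on server3
EOF

ok=$(grep -c 'was successful' "$LOG")
bad=$(grep -c 'was failed' "$LOG")
echo "successful: $ok  failed: $bad"
```

Tightening the patterns (e.g. anchoring on `^# Directory Backup`) guards against counting unrelated lines if the file grows other content.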
View 4 Replies
May 29, 2015
I have been using Wheezy for two years, and everything was OK, so I decided to upgrade to Jessie. In fact I decided to perform a clean install of Jessie, so I formatted the partition where Wheezy was installed, and the /boot partition, and I installed Jessie in those partitions.
As I had done with Wheezy, I installed jessie in an encrypted LVM, and the installation was ok (well, almost everything was OK, because grub and plymouth were not working, but I will open another topic about it).
The first thing I did after installing Jessie was editing sources.list in order to download a few programs (Plymouth, non-free Linux firmware, libdvdcss2, gufw, MenuLibre). I also installed a few programs from the Debian install DVD (flashplugin, VLC, Chromium, ClamAV). I did not perform a dist-upgrade because I was not at home, and where I was I did not have a WiFi connection, so I was using my mobile connection (USB tethering with an Android phone; BTW, I had just performed a hard reset and had not installed any apps after that, so the phone was "clean").
After that, I created a desktop user account and rebooted the laptop. When it rebooted, I started to tweak my user account: I edited dconf and the GNOME Shell theme, and I started my mobile connection to download three extensions for GNOME Shell (Window List, Simple Dock and Activities Configurator; I had used those extensions with Wheezy and had never had any problems). Ten minutes after I started the mobile connection, I received an SMS saying I had used over 300 MB. GNOME's system monitor showed that I had downloaded 300 MB, and Android's native data usage app showed the same. I did not download any video or music, nor watch any videos on YouTube, Dailymotion, etc., and I did not visit any suspicious websites.
I had a look at the apt logs and did not find anything significant (I was using a non-sudo user account, so I was not able to perform a dist-upgrade). I had a look at the download, video, music and picture folders and did not find anything. I tried to check the Iceweasel cache folder, but there were so many subfolders I could not check everything.
View 4 Replies
Jan 3, 2011
I have 2 Windows PCs in my home and an office computer that have my files strewn about. I wanted to have them all in one central location that keeps a backup copy, so I used an old machine to start building a file server. I installed Debian 5.0 on the machine, command line interface only. I have gotten SSH working so that I can do all my work on the box from one of my Windows PCs by logging in with PuTTY. My current problem is how to easily use the box's hard drive for storing my files in an easily accessible way. I'm still working on getting Samba to work so that I can map the /home directory to a drive letter on my two home PCs, but I'd also like to access files from my work PC. Before I do that, though, I wanted to know: is it safe and secure to map a drive on a remote machine through the internet? Are there any other security concerns I need to address by having this file server set up?
View 7 Replies
Jan 28, 2011
**Edit: path for mount was incorrect**
Distro:
Server: CentOS 5.5
Clients:
Fedora(latest)
OSX(latest)
Background: I am attempting to set up a server in my house, mostly for backups and file sharing (my first time doing this). It is important to me that file permissions are preserved, so it's my understanding that I must use idmapd in order for this to work. As of now I'm only working with the Linux distros; OSX will be dealt with once these two work together. portmapper is up and running, along with lockd, on both machines. Firewalls are also down on both machines for now. The server side was all set up using the GUI interface with no extra options selected. Problem: when attempting to "mount -t nfs4 10.0.0.2:/$sharedfolder /mnt" I get an "operation not permitted" error, with no error printed in /var/log/messages. If I use "mount -t nfs4 -o nolock 10.0.0.2:/$sharedfolder /mnt" it mounts just fine. I've checked both machines multiple times to make sure that lockd is up and running. In the idmapd.conf file I set the domain to "localdomain" on both machines, but I doubt that is right; like I stated above, this is my first attempt at a server. I'm assuming the problem is a whole missing step that involves some kind of ID-mapping service I need to set up.
View 5 Replies
Feb 28, 2010
I use Lenny, and was trying to mount a .iso image, supposedly a CD image.
[code]....
This is what I get from dmesg | tail:
debian:/home/zac/cscd# dmesg | tail
[ 1811.505199] floppy0: disk absent or changed during operation
[ 1811.505207] end_request: I/O error, dev fd0, sector 0
[code]....
I did a little research on the web and it seems that this file is not really a CD image, but simply data in a .img file. What do you think of that?
debian:/home/zac/cscd# file cscd3.iso
cscd3.iso: data
Some people recommend extracting the data via the dd command, but it didn't seem very safe to me to do that!
[URL]
Is it possible to extract the data into a directory (instead of onto a device) using dd? This file is supposed to be software; I wanted to run it in Wine by keeping it mounted on a mount point in my file system. Does it make any sense to try to do this if the file simply isn't a CD image?
View 6 Replies
May 16, 2011
I am in the process of writing an rsync script to run unattended backups of my entire file system to another system on my local network, using SSH and passwordless RSA keys.
I absolutely will not use passwordless keys with the root account, and this is the limitation preventing me from accomplishing my goal, because root is required by rsync to access the / tree and copy it to another location. I decided that if I compiled the script into a binary I wouldn't have a problem with the password being contained within the binary itself, but from what I've read there is no way to elevate to root and then drop back down to user level from within the script/binary.
I can create the script as the user and use chown to make it owned by root while retaining execute permission for the user, but the SSH login will still be as root, and will therefore require either that I am there to enter my password or the use of passwordless keys under the root account, which I reiterate I will NOT do. Currently the script is executed by the user on the machine containing the files to be backed up.
View 9 Replies
Apr 17, 2010
How can I prevent user2 from reading the file /home/user1/www/wp-config.php with an editor while the web server still can? I mean, how can I disallow access to one user's directories for other users, but allow just one of them (www-data in this case)?
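One approach without ACLs: group-own the file by the web server's group and strip world permissions, so any user not in that group (user2) is locked out while www-data keeps read access. The demo below uses a scratch file and skips the `chown user1:www-data` step, since changing group ownership to www-data needs root.

```shell
#!/bin/sh
# Lock a config file down to owner + one group. Real deployment, as root:
#   chown user1:www-data /home/user1/www/wp-config.php
# Here a scratch file demonstrates the permission bits themselves.
set -e
F=$(mktemp)
echo "secret" > "$F"

chmod 640 "$F"        # owner: read/write, group: read, others: nothing

stat -c '%a' "$F"     # shows 640
```

Note the directories above the file also need to deny "other" access (e.g. `chmod 750` with www-data group on /home/user1/www), or user2 could still traverse into them; `setfacl` is the finer-grained alternative if the filesystem has ACLs enabled.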
View 7 Replies
Feb 1, 2009
I tried to download the Knoppix 6.0 ISO, but it ran out of storage space: it was placing the file into /tmp. Is there a way I could have it placed in my /home directory, which is plenty big?
edhe@hebrews:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 935M 256M 632M 29% /
tmpfs 470M 0 470M 0% /lib/init/rw
udev 10M 96K 10M 1% /dev
tmpfs 470M 0 470M 0% /dev/shm
/dev/sda9 356G 1.5G 337G 1% /home
/dev/sda8 373M 11M 343M 3% /tmp
/dev/sda5 4.6G 4.0G 383M 92% /usr
/dev/sda6 2.8G 341M 2.3G 13% /var
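Two ways around the small /tmp, sketched under assumptions (the URL is a placeholder and the network fetch is left commented out): tell the download tool exactly where to write, or point TMPDIR at a roomier directory for programs that honor it.

```shell
#!/bin/sh
# Keep a big download out of the small /tmp partition.
set -e

# Option 1: tell the tool where to write (placeholder URL, not fetched here):
# wget -O "$HOME/knoppix-6.0.iso" "http://example.org/knoppix-6.0.iso"

# Option 2: redirect temp files for TMPDIR-aware programs to /home.
export TMPDIR="${HOME:-.}/tmp-stage"   # assumed staging dir; any roomy path works
mkdir -p "$TMPDIR"
T=$(mktemp)                            # now lands under $TMPDIR, not /tmp
echo "scratch file created at: $T"
```

Browsers that hardcode /tmp ignore TMPDIR, so for those Option 1 (or changing the browser's download directory setting) is the reliable fix.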
View 4 Replies
Apr 6, 2010
For a project I have had to migrate from FreeBSD to Linux, and I have decided on Debian because I have had good experiences with it. However, since my main development machines are completely offline, I have run into a little bit of trouble. Is there a way to specify a .deb package and get a tool to recursively list (and fetch) all dependency packages that are not included in the base install and put them in a folder?
I do not like to be tied to the internet (never a good idea), so this would save me a heap of trouble (and journeys to an online PC). Once I have these folders of packages, I can simply cd into one of them, dpkg -i the relevant .deb, and not have to rely on the DVDs or worry about connecting to an online repository. Life will be good! Suffice to say FreeBSD can do this well, so I strongly believe Debian will be able to as well, but since it is not normally done I haven't found much on Google about it.
View 8 Replies
Apr 5, 2011
I am backing up my Debian server with rsnapshot, which actually uses rsync to perform the backup. The backups are located on external storage of size 1.4T.
[code]....
I tried to understand what this error message means and found that error code 12 means "Error in rsync protocol data stream". I understand that when rsync finds that a file on the target has changed, it will send only the block(s) that contain the changes, and at the destination rsync will create a new file rather than update the old one in place (new inode...). I want to know if this error is due to a full disk, or perhaps some other factor.
View 2 Replies
Mar 13, 2010
I'm trying to find a secure way to back up files from my Prod Server to my Backup Server. It must be automated, so I will need to run a command with cron which will log in to the Prod Server from the Backup Server and back up the data.
1. Do you think it would be secure enough to do this by creating a passwordless RSA private key on the Backup Server and adding its public key to the authorized_keys file on the Prod Server? I can't think of a way to automate this without entering any passwords except a passwordless RSA key. Is there another, more secure way?
2. Should I create a special user for backup, which will only have read access to all files in the directory that I am backing up? If so, how can I check that this new backup user indeed has read access to ALL files in the folder that I intend to back up? How can I ensure the backup process will not skip files due to some permission problem?
3. I'm thinking of using the rsnapshot tool, which uses rsync.
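On point 1, a passwordless key can be made much safer by restricting what it is allowed to do on the Prod Server. A sketch of the authorized_keys entry (paths, user names and the rrsync script location are assumptions; Debian ships rrsync with the rsync package):

```shell
# On Prod Server, in ~backupuser/.ssh/authorized_keys -- this key can ONLY
# run rsync, read-only, confined to /srv/data: no shell, no forwarding.
# command="/usr/share/doc/rsync/scripts/rrsync -ro /srv/data",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... backup@backupserver

# On Backup Server, cron then runs (rrsync makes the remote path relative
# to the confined directory):
# rsync -a backupuser@prod:/ /backups/prod/
```

Even if the Backup Server is compromised, the stolen key only grants read-only rsync access to the one directory, which also answers point 2: a dedicated backup user plus a forced command bounds the damage far better than a full login key.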
View 10 Replies
Apr 5, 2010
I am trying to set up a home file and print server while learning about Linux at the same time. However, I have hit a wall... I am trying to install Samba, but I get the following messages:
Urbie:~# apt-get install samba
Reading package lists... Done
Building dependency tree
[code]....
View 2 Replies