Basically, what I want to do is copy my whole file system to a different hard drive, then reconfigure my partitions and copy it back. Then reconfigure GRUB.
The reason I want to do this is that when I set up the dual boot I gave it only 70GB of space, and now I want to add 300 more. Since the 300GB of space is in a primary partition and this one is a secondary, I can't extend or combine them.
So what I want to run is:
Code:
sudo cp -rP / /home/me/sshfs-folder
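Note that plain cp -rP will not preserve ownership or timestamps, and over sshfs ownership generally can't be preserved at all, so a tar archive is one way to keep that metadata through the round trip. A minimal sketch; the exclusion list is an assumption:
Code:
# archive the root filesystem onto the sshfs mount, skipping virtual
# filesystems and the sshfs folder itself
sudo tar -cpzf /home/me/sshfs-folder/rootfs.tar.gz \
    --exclude=/home/me/sshfs-folder \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/tmp --exclude=/mnt --exclude=/media /

# later, after repartitioning, with the new root mounted at /mnt (e.g. from a live CD):
sudo tar -xpzf /home/me/sshfs-folder/rootfs.tar.gz -C /mnt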
I want to copy a file system with dd. Most of the forums and how-tos warn that the file system must be unmounted. Otherwise, data could be changing and end up inconsistent or, at best, there may be open inodes, and the copied file system would need to be recovered the first time it is mounted. However, is it sufficient if the file system is mounted read-only? Or does a file opened read-only still result in an open inode?
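For reference, the procedure in question looks roughly like this (the device and destination path are assumptions); whether the read-only remount is a sufficient quiesce is exactly the open question:
Code:
# stop new writes on the source, flush, then image the partition
mount -o remount,ro /mnt/data
sync
dd if=/dev/sda1 of=/backup/sda1.img bs=4M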
I am having a bit of a problem with my Ubuntu Server 10.04 install. I think it might be a kernel problem. Basically, what happens is when I copy a large file (a 160GB disk image) to my drive (>60GB) the system consistently crashes after about 60GB of the file is transferred. It doesn't matter if I am sending the file using cifs, or over SSH. Checking syslog (paste dump here), it seems these flush errors always appear shortly before the crash occurs. The destination filesystem is a hardware RAID 10 array with 2TB of space. It is formatted as EXT4.
System76 laptop, 10.04, 320GB HDD, VMware with Win7 in one VM; I want to use Clonezilla, as I am already using it to make a bare-metal backup image of another, older, smaller dual-boot Ubuntu/XP machine. This System76 laptop is a work machine that I control; the Win7 VM only does a couple of things, but they're necessary for work and I don't want to lose the configuration. The reason for the bare-metal backup is so that if I have to, I can restore and get back to work - something I've had to do on some previous occasions back when I used Windows. Data is no problem - I back that up separately on an hourly basis.
My question is what FS to use on the backup drive. For instance, for the dual-boot XP/Linux work machine I'm currently backing up, I'm using a 30GB external HDD formatted as FAT32. That's OK because 30GB is below the 32GB cap Windows puts on formatting FAT32 volumes. But for the newer laptop I'll need a much bigger backup partition. I chose FAT32 for the old one because I know everything on the computer being backed up, Windows and Linux both, is compatible with it. But what FS should I use to back up the new laptop, considering that I'll be backing up the Win7 VM as well as the main Linux part of the machine? I plan to use a backup partition of about 160GB. Could I format it NTFS and have it work with Ubuntu 10.04? Or, conversely, if I format it as ext, will it back up the Win7 VM OK?
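For the NTFS option specifically, Ubuntu 10.04 ships ntfs-3g, so read/write access works out of the box; a sketch, with /dev/sdb1 standing in for the backup partition:
Code:
# quick-format the backup partition as NTFS (mkfs.ntfs comes from ntfsprogs) and mount it
sudo mkfs.ntfs -f -L backup /dev/sdb1
sudo mkdir -p /mnt/backup
sudo mount -t ntfs-3g /dev/sdb1 /mnt/backup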
OS: Fedora 12. I am a newbie in Linux. What I want to do is make a backup of my file system, because I am learning how to configure servers. So if I do something wrong, I want to be able to restore the default settings for my files instead of installing a new OS.
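If the aim is mainly to be able to roll back configuration mistakes while learning, a plain tar archive of /etc may be enough. A minimal sketch; the file names are just examples:
Code:
# back up the configuration tree
sudo tar czpf /root/etc-backup-$(date +%F).tar.gz /etc
# restore it later (overwrites the current files under /etc)
sudo tar xzpf /root/etc-backup-2010-01-01.tar.gz -C /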
Having trouble running backup software to network storage. (Ubuntu 10.04 Beta)
I mounted my network drive (a Netgear Stora) with curlftpfs (I tried several other ways, but this is the only one that works). It shows up like a regular HDD which I can copy files to in the file manager, but when I try running backup software only the folders are copied.
I've tried several backup programs, some based on rsync, as well as rsync itself, but they all had problems. Déjà Dup kind of worked and was able to copy files (archives), but it kept dropping the Ethernet connection.
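One thing that sometimes helps on FTP-backed mounts like curlftpfs is telling rsync not to attempt chmod/chown or temp-file renames, since the FTP layer usually rejects them. A sketch, with the paths as assumptions:
Code:
# skip permission/ownership changes and write files in place on the curlftpfs mount
rsync -rtv --no-perms --no-owner --no-group --inplace \
    /home/me/ /media/stora/backup/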
I'm setting up a backup & media server, which will be running Debian. I will set up a small HD or SSD/CF card for the OS, and an MD RAID for the data drives. The total size of the RAID will be either 3 or 4TB, depending. Now I need to figure out what filesystem to use on top of this RAID. My criteria are as follows:
1. Support for large files. I can't imagine anything larger than about 1.2TB, but the 4GB file-size limit of, say, FAT32 just isn't enough.
2. Robust. I don't want it falling apart on me; nothing too unstable.
3. (and this is most important): Good undelete support. I got burned recently when a software glitch managed to rm -rf my ext4 drive. All the file data is still there, but all the metadata is gone. I *DO NOT* want that happening with this. I want to be able to do an "rm -rf /", immediately unmount it, and then recover *all* of the deleted data. Obviously, when data is overwritten it's overwritten, but I don't want to lose all my metadata if an "rm -rf" happens. FAT32 is the model I'm looking at: you can usually recover deleted files if anything happens to the drive.
So, what are my options? EXT2 looks like a possibility. EXT4 is clearly out, unless there's some nice utility/mode that keeps a backup of all deleted metadata, etc.
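On the undelete point, one possibility worth weighing (not a guarantee, and it depends on the journal still holding the old metadata) is that deletions on ext3/ext4 can sometimes be recovered with extundelete if the partition is unmounted immediately. A sketch, with /dev/md0 assumed to be the MD array:
Code:
# unmount first, then attempt journal-based recovery
# (recovered files are written to ./RECOVERED_FILES)
umount /dev/md0
extundelete /dev/md0 --restore-all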
I would like to create a full system backup to an ISO/IMG file. I've been searching and found mondorescue.org, but something is wrong with the package for Debian 6.
rsnapshot is a tool written in Perl for making backups of local and remote file systems. The well-proven rsync is behind this utility. rsnapshot does not need root-user intervention to restore a normal user's data, it does not take much space on your backup server, and it can easily be automated (scheduled) to make life easier: a set-it-up-once-and-forget-it configuration. Basically, it takes snapshots of a file system (or part of one) at regular intervals such as hourly, daily, weekly and monthly.
This is configured through a simple text-based configuration file, and the whole task can be set up in a few easy steps in a few minutes. The two major tasks are configuring rsnapshot and setting up automatic OpenSSH login. To run the backup automatically, we need to automate the remote login in a secure way, which can be done with the OpenSSH tools. This scenario depicts backing up a desktop's data (assuming its IP address is 192.168.0.100) to a backup server. My desktop runs Ubuntu 10.04 and the backup server runs Debian Squeeze. [URL]
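As a concrete illustration (the user name, paths and intervals here are examples, not a prescription), the key pieces on the Debian Squeeze backup server would look roughly like this; note that fields in rsnapshot.conf must be separated by tabs, not spaces:
Code:
# one-time password-less login from the backup server to the desktop
ssh-keygen -t rsa
ssh-copy-id user@192.168.0.100

# relevant lines in /etc/rsnapshot.conf (TAB-separated fields)
snapshot_root   /backup/snapshots/
cmd_ssh         /usr/bin/ssh
interval        hourly  6
interval        daily   7
interval        weekly  4
backup          user@192.168.0.100:/home/user/     desktop/

# cron entries on the backup server
0 */4 * * *   /usr/bin/rsnapshot hourly
30 23 * * *   /usr/bin/rsnapshot daily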
Attempting to create a backup script to copy files from one file system to a remote file system.
When I try this I get:
Quote:
# tar -cf - /mnt/raid_md1 | gzip -c | ssh -i ~/.ssh/key -l user@192.168.1.1 "cat > /mnt/backup/fileserver.md1.tar.gz"
tar: Removing leading `/' from member names
Pseudo-terminal will not be allocated because stdin is not a terminal.
ssh: Could not resolve hostname cat > /mnt/backup/fileserver.md1.tar.gz: Name or service not known
I know that the remote file system dir is RW and the access is working fine. I am stumped...
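For what it's worth, the -l option expects only a login name; given "-l user@192.168.1.1", ssh has no hostname left and tries to resolve the quoted remote command as one, which matches the "Could not resolve hostname cat > ..." message. A sketch of the corrected pipeline, assuming everything else about the setup stays the same:
Code:
tar -cf - /mnt/raid_md1 | gzip -c | \
    ssh -i ~/.ssh/key user@192.168.1.1 "cat > /mnt/backup/fileserver.md1.tar.gz"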
Does the dump command back up entire file-systems or is it capable of backing up subsets of a file-system? And is tar capable of taking device names (for file systems) as input to be archived?
This script simply deletes files older than a certain age (in this case 7 days) from a certain location; I use it to purge old backups nightly, and it works as expected:
# delete backups older than 7 days
find /mnt/backup/* -mtime +7 -exec rm -Rf {} \;
The problem is, every morning I get an email with an error message something like this:
find: `/mnt/backup/subfolder': No such file or directory
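For what it's worth, that message usually just means find is trying to descend into a directory that rm has already deleted. Restricting find to the top level of the backup directory avoids it; a sketch using the same path and age as above:
Code:
# delete backups older than 7 days without descending into removed directories
find /mnt/backup -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -Rf {} \;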
When I try to copy PDF files from one folder to another, it gives me this error: "Error while copying "2004-SNUG-Europe-paper_...log_DPI_with_SystemC.pdf". There was an error copying the file into /media/CCDCE66BDCE64F70/Backup Master/Heterogeneous_cosimulation/Documentation. Error splicing file: Input/output error". What is the reason for this error, and how can it be fixed?
I have a 7.2 GB file (a VMware virtual machine image) that I am trying to copy from its original location to another folder, or to an external hard drive. Each time I try, I get the following error after the copy reaches exactly 1.4 GB:
Error reading from file: Input/output error
and I have to either Cancel or Skip.
I've tried splitting the file into smaller pieces, but that didn't help: I still get the same error whenever I try to compress, split or do any other operation on this file. How can I copy it?
I am having problems with scp during a backup operation. I added a ps -ef before and after the scp operation used during the backup. The backup is a script to back up a Zimbra server. I am including the code segment that I am having problems with:
Code:
# DRCP Section. To scp newly created archives to a remote system
if [ "$DRCP" = "yes" ]
Assuming I have two files, one large file and one small file, I want to write the smaller file into the large file without overwriting the remaining part of the larger file.
Both are binary files, and the large file can become very large, so I want to avoid copying the whole file, as that will take some time. Is there any standard Linux console utility to do this, or do I need to write it myself?
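A sketch of how this can be done with plain dd (the file names are placeholders): conv=notrunc tells dd not to truncate the output file, so only the bytes it writes are replaced.
Code:
# overwrite the start of big.bin with the contents of small.bin,
# leaving the rest of big.bin untouched
dd if=small.bin of=big.bin conv=notrunc
# or write it at a byte offset (here 4096); seek counts blocks of bs bytes
dd if=small.bin of=big.bin bs=1 seek=4096 conv=notrunc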
I am trying to restore my system to Ubuntu 10.10, using a system backup made with Remastersys. When I reboot, I get the message "GRUB Error 15". I found many threads discussing this issue, most notably here: [URL]
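One thing that often helps with this class of GRUB errors after a restore is reinstalling the boot loader from a live CD. A minimal sketch, assuming the restored root is /dev/sda1 and the boot drive is /dev/sda (adjust to the actual layout):
Code:
sudo mount /dev/sda1 /mnt
sudo grub-install --root-directory=/mnt /dev/sda
sudo umount /mnt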
I have installed an application manager (a monitoring application) on my Linux server. Now I need a backup schedule for my application. The application itself ships an executable file to back up its database, but when I put this file in my crontab to schedule the backup it won't run:
50 09 * * * root /opt/ME/AppManager9/bin/BackupMysqlDB.sh
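One common cause, in case it applies here: the six-field format with a user name is only valid in /etc/crontab and /etc/cron.d/*; a personal crontab edited with crontab -e must omit the user field. A sketch of both forms, using the same path as above:
Code:
# in /etc/crontab or /etc/cron.d/* (user field present):
50 09 * * * root /opt/ME/AppManager9/bin/BackupMysqlDB.sh
# in a personal crontab (crontab -e), no user field:
50 09 * * * /opt/ME/AppManager9/bin/BackupMysqlDB.sh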
I ssh into an Ubuntu box with the username "ubuntu", and I can become root without entering a password via "sudo su". How can I scp files onto this box using the ubuntu@ username? It does not allow me to do so using root@. The error is:
scp: /etc/: Permission denied
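Two common workarounds, sketched with placeholder names (myfile.conf and host are assumptions): copy to a location the ubuntu user can write and then move it into place with sudo, or have rsync run as root on the remote end, which works here because sudo needs no password.
Code:
# copy somewhere writable, then move into place as root
scp myfile.conf ubuntu@host:/tmp/
ssh -t ubuntu@host "sudo mv /tmp/myfile.conf /etc/"

# or let the remote rsync run under sudo
rsync -av --rsync-path="sudo rsync" myfile.conf ubuntu@host:/etc/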
I am running 8.10 desktop on an MSI Wind desktop. Everything is on the single 500GB hard drive. I also have a 4GB CompactFlash card in the system that has a working version of 8.04 desktop on it. I would like to remove 8.04 from the CF card and copy/clone the currently configured 8.10 onto it as a backup, just in case I accidentally trash the 8.10 installation on the HDD some time. I'd also like to be able to update the CF backup easily and periodically to keep it current with the setup running off the HDD.
The HDD is partitioned as follows.
Code:
ken@pinot:~$ df
Filesystem           1K-blocks    Used Available Use% Mounted on
/dev/sdb2              9843308  800448   8542840   9% /
tmpfs                  1032220       0   1032220   0% /lib/init/rw
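A possible approach for the clone-and-refresh part, sketched under the assumption that the CF card's root partition shows up as /dev/sdc1 (check with df or fdisk first): rsync the running root onto it, excluding virtual filesystems, and put GRUB on the card. Re-running the same rsync later keeps the copy current; /etc/fstab and the GRUB menu on the card would still need to point at the card's own partition.
Code:
sudo mkdir -p /mnt/cf
sudo mount /dev/sdc1 /mnt/cf
sudo rsync -aAXv --delete \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/tmp --exclude=/mnt --exclude=/media \
    / /mnt/cf/
sudo grub-install --root-directory=/mnt/cf /dev/sdc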
Recently I backed up my entire /home folder on my laptop with Grsync through SSH. Since I already had a simple text file listing all installed programs, I figured that when I moved the backup onto my desktop, all my program settings would be restored if I used that text file to install all those programs first. So I installed all the programs, then copied the backup to /home on the desktop, and logged out and back in. I started up a random program (I tried Thunderbird and Filezilla) and no settings were to be found. In retrospect, Docky did start, but it didn't have all the launchers I had on my laptop, so that could have been a clue.
Why does the system lag so much while working with a lot of files or with big files (copying them, erasing them)? How can it be solved? I am running Fedora 12 x86_64 with an ext4 filesystem.
At work we have a Linux-based IP phone system. This system was buggy for a bit, but we now have it working perfectly. As it is, we have cloned this hard drive, and when something goes wrong I clone the backup back onto it. This works, but if for whatever reason I'm not in the office, the restore can't go through, because apparently no one can read the VERY simple instructions. What I would like to do is burn a CD with a copy of the hard drive (this configuration will change once every 6-12 months at maximum) as well as a VERY minimal Linux install.
The goal is that they stick the CD in and reboot. The system loads the super-minimal Linux, then runs a script that clones the portion of the CD holding the hard drive image onto the system, then asks the nice moron... err... person... to remove the CD and reboot. All I need is a bootable system that will run a bash script live from the CD. I can write the bash script that puts a nice ASCII graphic on the screen telling the lucky sap it's running and to come back in a little bit to make sure it's done.
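A rough sketch of what that restore script might look like; the image path, target disk and gzip compression are all assumptions:
Code:
#!/bin/bash
# runs from the minimal live CD; restores the stored image onto the phone system
IMAGE=/cdrom/images/phone-system.img.gz
TARGET=/dev/sda

echo "Restoring the phone system image - come back in a little while..."
gunzip -c "$IMAGE" | dd of="$TARGET" bs=4M
sync
echo "Done. Please remove the CD and reboot."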
I want to move my currently installed Debian, with all its settings, to a USB flash drive, and I am wondering what methods are available. I looked into Remastersys, but it failed on my system, so is there another method available?
I am trying to copy a file of about 4.5 GB from a network resource on my Local Area Network. I copy this file with GNOME's file manager by going to Places --> Network, selecting the Windows share on another computer on my network, opening it, and copying the file to my FAT32 drive with Ctrl+C and Ctrl+V. It copies fine up to 4 GB and then it hangs.
After trying it almost half a dozen times I got really annoyed, left it hanging and went to bed. The next morning when I checked, a message box saying "file too large to write" had appeared.
I am very annoyed. I desperately need that file. It's an ISO image and it is not damaged at all; it copies fine to any Windows system. Also, I have sufficient space on the drive to which I am trying to copy the file.
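In case the destination has to stay FAT32 (which tops out at 4 GiB per file), one workaround is to split the image into pieces under that limit and rejoin them later; a sketch with a placeholder file name:
Code:
# split into 2 GB pieces that fit on FAT32
split -b 2G image.iso image.iso.part_
# rejoin later on Linux
cat image.iso.part_* > image.iso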
I am trying to copy a 7.3 GB .iso file to an 8 GB USB stick, and I get the following error when it hits 4.0 GB:
Error while copying "xxxxxx.iso". There was an error copying the file into /media/6262-FDBB. Error splicing file: File too large
The file is to be used by a Windows user, and I'm just trying to do a simple copy, not a burn to USB or anything fancy. I'm using 10.04.1 LTS, AMD dual core, all the latest patches.
I was trying to install openSUSE 11.4 using the Live CD, but when it started copying the root filesystem, it hung at 15% and waited forever. The CD stopped spinning too.
I have a 7.9 GB .mkv file of a movie, and when I try to copy it to my external drive, after some time it shows an error saying "Error splicing file: file too long". How can I copy it?
My external HD's file format is VFAT, and I am using Ubuntu 10.04.