I made a backup of the SSD on my Eee PC using the following command from a Linux Mint live USB key:
dd if=/dev/sda1 of=/media/disk/eeepc_save/SYSTEM/system.bck
(/media/disk is an external USB disk)
I deleted the ext2 partition using GParted on the live USB key and recreated it. I rebooted into Linux Mint and restored the filesystem with the reverse command:
dd if=/media/disk/eeepc_save/SYSTEM/system.bck of=/dev/sda1
I mounted /dev/sda1, and when I ls the root directory I get several "NFS stale file handle" messages for directories (/dev and others). I tried e2fsck -y and got a pile of corrections that ended in those directories being deleted. I don't use NFS. I did the same for the user filesystem (an ext3 partition) and had no problem. The two filesystems are the ones that came with the original Xandros installed on my Eee PC and were mounted with unionfs.
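In case it matters, this is roughly the procedure I think I should have used; the bs and conv options and the checksum step are additions I am guessing at, not what I actually ran:
Code:
umount /dev/sda1                                                  # image the partition while it is not mounted
dd if=/dev/sda1 of=/media/disk/eeepc_save/SYSTEM/system.bck bs=4M conv=fsync
sha256sum /dev/sda1 /media/disk/eeepc_save/SYSTEM/system.bck      # the two sums should match if the copy is exact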
We have a Linux server in our environment for application development. On this server we have mounted many NFS shares from storage. For the past few days we have been getting this error in syslog: kernel: nfs_statfs64: statfs error = 116. Some users have also hit the error "Stale NFS file handle".
Server info: OS = Red Hat, kernel = Linux hostname 2.4.21-47.ELsmp #1 SMP Wed Jul 5 20:38:41 EDT 2006 i686 i686 i386 GNU/Linux
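Is the usual fix simply to force-unmount and remount each stale share? Something like this is what I have in mind (the mount point and server path here are made up, not our real ones):
Code:
umount -f /mnt/share01
mount -t nfs storage01:/vol/share01 /mnt/share01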
I get this error on /etc/resolv.conf, which means I can't visit websites. I can't rm, cp, mv, or vi the file. How do I regain the ability to browse the internet? Is there a way I can create an /etc/resolv.conf2 and have my system use that instead?
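What I was imagining is something like the following, though I have no idea whether a bind mount will even work over a stale file (the nameserver address is only an example):
Code:
echo "nameserver 192.168.1.1" > /etc/resolv.conf2
mount --bind /etc/resolv.conf2 /etc/resolv.conf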
I currently have a home network set up so that my main machine shares its external hard drive via NFS. This has been working perfectly for months; however, I just got a new laptop, installed openSUSE 11.3 x64 and set everything up. Now there are two folders on the external network mount that won't let me do anything and always just return Networking: Stale NFS File Handle. The system still works fine on my old openSUSE 11.2 x86 laptop. I have tried unmounting the drive from the laptop, restarting the NFS client, and restarting the NFS server on the main machine. None of these has made a difference.
It is only these two folders that are affected. Everything else works just fine.
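Is it worth re-exporting on the server and then remounting on the laptop? Roughly this (the mount point is just a placeholder for my real one):
Code:
exportfs -ra                       # on the main machine, re-export all shares
umount -f /mnt/external            # on the new laptop
mount /mnt/external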
I've got something showing up in my /mnt directory that I can't figure out how to get rid of. If I try to delete it, I get "ERROR: Stale NFS file handle". I've tried googling it, but the only solution I can find is "remount and then unmount your NFS server". The trouble is that I don't have any NFS servers; it was a mount point for a squashfs file. Trying to remount the squashfs file just gives the same error message. My best guess for how it was created is that maybe the file was deleted while mounted. Surely there is a way I can get rid of it? It stuffs up my system by, for example, preventing find from working. I'm running Puppy Linux 4.1.1, although I suspect that is irrelevant.
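Would a lazy unmount be enough to clear the stale mount point? Something like this, where the path is a stand-in for the entry I actually see under /mnt:
Code:
umount -l /mnt/stale_entry         # detach it even though a normal umount refuses
rmdir /mnt/stale_entry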
When using rsync to copy from one NFS-mounted filesystem to another NFS mount, rsync displays "stale NFS file handle" while attempting to issue a chmod after each file is copied. The files appear to be written successfully, but the owner and group show up as "99". The source filesystem is a USB-mounted ext3 drive and the target is a Buffalo TeraStation TS-XL/R5.
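A workaround I'm considering is telling rsync not to try to set ownership and permissions at all, since the TeraStation seems to reject the chmod anyway; this is only a sketch with placeholder paths:
Code:
rsync -rtv --no-perms --no-owner --no-group /mnt/usbdrive/ /mnt/terastation/backup/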
I have a 20TB filesystem, xfs formatted. The filesystem has been mounted with the inode64 option, and I now need to NFS export it. NFS doesn't like the inode64 option at all. The NFS clients cannot access any of the directories with inode numbers exceeding the 32bit limit. They get the "Stale NFS file handle" message. I have tried to attach the filesystem to a RHEL5.3 system, and after turning on the no_subtree_check option in /etc/exports on the server, it all works fine. No changes were needed on the clients.
The problem is that I need to get this to work on a RHEL4.4 system. Unfortunately I cannot do any tests on that system yet, so I did a quick test on a RHEL4.3 system... and it didn't work. Even the no_subtree_check option was of no help. I am afraid this will not work on the RHEL4.4 system either. How do I get the inode64 xfs filesystem exported over NFS on a 2.6.9 kernel?
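For reference, the export line I tested with on both the RHEL5.3 and RHEL4.3 boxes looked roughly like this (the path and client spec are changed, and the exact options are from memory):
Code:
/export/bigfs    *(rw,sync,no_subtree_check)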
I've got a problem accessing the /etc/sysconfig/network-scripts/ifcfg-wlan0 file: the CLI throws "Stale File Handle" and there's no access to the file. The problem is the same regardless of whether the wlan0 interface is up or down.
When I run lilo (/sbin/lilo), it messes up my /boot partition. The next time I try to mount it after running lilo, I get an error: "mount: Stale NFS file handle" (I pass -t ext2). My /boot partition is ext2, mounted locally, not NFS. Then I run fsck /dev/sda1 and get several messages like: Free blocks count wrong for group #0 (7665, counted=5063). Fix<y>? I say yes to all and it works normally afterwards. This happens only after I run lilo. Lilo is installed in the MBR.
Here is the relevant configuration:
Code:
root@darwin:/home/cabrilo# cat /etc/lilo.conf
append=" vt.default_utf8=0"
boot = /dev/sda
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104422 83 Linux
/dev/sda2 14 144 1052257+ 82 Linux swap
/dev/sda3 145 3432 26410860 83 Linux
/dev/sda4 3433 19457 128720812+ 83 Linux
I was following the relatively simple instructions here for setting up a LAMP system. After installing the Apache2-related packages, I ran:
# a2enmod expires
# /etc/init.d/apache2 restart
That worked fine. Then, a little later, after setting up a virtual host for my project website, installing PHP and MySQL, and setting up a mail server with Exim, I rebooted and started getting errors when trying to start Apache:
Starting web server: apache2
apache2: Syntax error on line 185 of /etc/apache2/apache2.conf: Could not open configuration file /etc/apache2/mods-enabled/expires.load: Stale NFS file handle
Now it seems as if there is nothing I can do with that file:
# rm -f /etc/apache2/mods-enabled/expires.load
rm: cannot remove '/etc/apache2/mods-enabled/expires.load': Stale NFS file handle
# cd /etc/apache2/mods-enabled/
[code]....
Is there anything I can do to refresh the NFS index so that it finds or removes this file? I'd be happy to just get rid of it. At the moment, I can't start Apache or anything because of it.
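Would disabling and re-enabling the module get Apache past this? That only helps if a2dismod can actually remove the stale symlink, which I doubt given that rm can't, but for the record this is what I mean:
Code:
a2dismod expires                   # should drop the symlink from mods-enabled
a2enmod expires                    # recreate it
/etc/init.d/apache2 restart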
I have Puppy installed on an old laptop, and one way or another I ended up with a file inside /usr/bin that gives the stale NFS file error. I've tried to look around for a way to fix it, but everywhere I've looked only covers the situation where the file is on a drive you can unmount. At least I think so; I certainly don't know what I'm doing well enough to be sure. Obviously I've tried restarting the computer, as well as attempting to unmount things, but I can't unmount the drive that the running system itself is on.
I'm trying to set up a small network between my old and new laptops to transfer my personal data. They are now linked with a crossover cable and they see each other. The old one dual-boots WinXP and Ubuntu 9.10; the new one, Win7 and Ubuntu 9.10. I tried Samba, but it was very slow even using Windows on both computers: maximum transfer rates were about 1.5 MiB/sec. I tried SSH using Ubuntu on both PCs and it is reliable and much faster, 5 MiB/sec. But I wanted more, so I installed the NFS server on the old one and exported the NTFS partition where my data resides with the sync and ro options.
I installed the NFS client on the new one and I'm able to mount the remote partition. Now, when I transfer my files I get very high speed, more than 10 MiB/sec, but after a while I get a "Stale NFS file handle" error, even though I really didn't touch any file on the old PC and the connection stays up. Searching the web I found that NFS had some trouble exporting NTFS partitions in the past, but it should be fully compatible with them in recent versions of Ubuntu.
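One thing I haven't tried yet is pinning an fsid on the export, which I've read helps with filesystems NFS can't identify reliably on its own (NTFS apparently being one of them); my exports line would then look something like this, with a made-up mount point and subnet:
Code:
/media/ntfs_data    192.168.0.0/24(ro,sync,fsid=1,no_subtree_check)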
This script simply deletes files older than a certain age (in this case 7 days) from a certain location; I use it to purge old backups nightly, and it works as expected:
# delete backups older than 7 days
find /mnt/backup/* -mtime +7 -exec rm -Rf {} \;
The problem is, every morning I get an email with an error message something like this:
find: `/mnt/backup/subfolder': No such file or directory
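My guess is that find removes a directory via rm -Rf and then still tries to descend into it. A restructured version I'm thinking of trying only looks at the top-level entries under /mnt/backup, so it never recurses into something it has already deleted (same location, just reorganised options):
Code:
find /mnt/backup/ -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -Rf {} \;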
I am new to Linux. I am using tar to back up my emails to a server. I would like to automate this process so my emails are backed up periodically; however, I keep running into a problem. I start in the directory where I want to create the tar file (directory size = 240MB) and enter the following command:
tar cf bup.mail.llc.tar "/Users/d/Library/Mail/INBOX.mbox/Messages"
(resulting tar file size = 234MB)
When I want to back up my emails into the previously created tar file, I use the following command:
tar uf bup.mail.llc.tar "/Users/d/Library/Mail/INBOX.mbox/Messages"
(file size = 462MB)
The update command works, except that the tar file grows to around twice its original size. When I extract the updated tar file (462MB), the unarchived directory is 240MB, the same size as the original directory.
Why does the size of the tar file keep growing each time I run 'tar uf'? I don't understand this.
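From what I've read, tar -u only ever appends: any file whose timestamp is newer than the copy already in the archive gets added again, and the old copy stays, which would explain the doubling. If that's right, what I probably want instead is GNU tar's listed-incremental mode, roughly like this (the archive and snapshot names are placeholders):
Code:
tar cf bup.mail.llc.0.tar -g bup.mail.llc.snar "/Users/d/Library/Mail/INBOX.mbox/Messages"   # full backup plus snapshot file
tar cf bup.mail.llc.1.tar -g bup.mail.llc.snar "/Users/d/Library/Mail/INBOX.mbox/Messages"   # later runs store only what changed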
OS: Fedora 12. I am a newbie in Linux. What I want to do is make a backup of my file system, because I am learning how to configure servers. If I make a mistake, I want to be able to restore the default settings for my files instead of installing a new OS.
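Would something as simple as a tar of /etc be enough for what I'm describing? For example (the archive name and the file I restore are only examples):
Code:
tar czpf /root/etc-backup-$(date +%F).tar.gz /etc
# restore a single file later:
cd / && tar xzpf /root/etc-backup-2010-05-01.tar.gz etc/httpd/conf/httpd.conf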
I'm new to PHP and need some pointers to worthwhile documentation, because I'm getting nowhere. I want to make a simple HTML form that lets me submit a CSV file so that I can work on it. The problem seems to be that if the file is not in the root of the (web) application, it won't work. The form doesn't seem to send the path with it, so I am unable to (1) know where the file is, since I just get the name of the file, and (2) access the file anyway, as it's outside of the Apache environment. Is there a way to upload the file into memory? How would you do this?
I work for a school consulting company. We helped a school deploy about 1500 computers. The computers run Windows XP, but we have been using G4L for the restore partition on the drives. So far the software works great. We did, however, run into a problem: many of the computers we deployed are missing the restore partition. The reason they are missing is long and convoluted and not really important. What I have been charged with is trying to fix the restore partition problem. One solution I had, which I'm not even sure will work, was to back up the recovery file that G4L created to DVD and write a basic script to recreate the partition and then copy the file over. This process would need to be as automated as possible, since the disc will be inserted by the end users (the students). The backup file that G4L created is 5.9GB, so it won't fit on just one disc, and dual-layer discs are too expensive for this project, so the file will either need to be compressed again (not sure if that's a good idea or not) or split across two DVDs.
I have searched the forums here and was not able to find anything to fix this problem. I did find some info on splitting files across two discs, but I'm not sure how to use that to solve my problem.
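The splitting part at least seems straightforward with split and cat; this is the kind of thing I mean, with made-up file names and a roughly DVD-sized chunk size:
Code:
split -b 4000m /backup/recovery.img.gz recovery.img.gz.part_               # produces ..._aa and ..._ab
cat recovery.img.gz.part_aa recovery.img.gz.part_ab > recovery.img.gz      # rejoin before handing the file back to G4L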
It's always good to back up a configuration file like sources.list before you edit it. To do so, issue the following command:
sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup
Where does it back up to, and how do I access it? I want to put the backup on a removable disk or upload it.
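If I understand cp correctly, the copy simply lands at the second path, i.e. /etc/apt/sources.list.backup in the same directory, so I assume I can point the destination at the removable disk instead, for example (the mount point of my USB stick is a guess):
Code:
sudo cp /etc/apt/sources.list /media/usbstick/sources.list.backup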
I am an IT manager. Currently I am responsible for a mostly IBM shop, running a mix of AIX, OS/400, VMware, and Windows.
My P-series machines running AIX and Oracle and my AS/400 both back up to tape drives daily. But my VMware system is not being backed up; the virtual machines are, but my tape drive just broke down.
I just ordered a server-grade machine with an i5 CPU and three 2TB SATA drives in RAID 5; this is going to be my new backup server.
I am hoping I can put Linux on it, along with some sort of open-source file-replication software that backs up my Windows servers and my Windows desktops in real time... does any such animal exist?
Has anyone else experienced issues with this option? I'm using the tweak tool from Malcolm's repo. If I set it, it works in the current session, but after logout I can't log in again: at login the desktop appears briefly, then it drops back to the login screen.
I am administering a live web server and want to keep a backup of the access log file without disturbing the server's performance. Can anyone guide me on how to do this? The size of the log file runs into gigabytes, so I will need to take a daily backup.
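What I had in mind is a nightly cron job that copies the current log to a backup location and compresses the copy, leaving the live file alone; a rough sketch with guessed paths:
Code:
0 2 * * * cp /var/log/httpd/access_log /backup/logs/access_log.$(date +\%F) && gzip /backup/logs/access_log.$(date +\%F)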
Does the dump command back up entire file-systems or is it capable of backing up subsets of a file-system? And is tar capable of taking device names (for file systems) as input to be archived?
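To make the question concrete, this is the sort of usage I have in mind; the device name and paths are only examples:
Code:
dump -0u -f /backup/home.dump /dev/sda3        # dump is pointed at a block device, i.e. a whole filesystem
tar czf /backup/home.tar.gz /home              # tar is pointed at the mounted directory tree, not the device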
I have installed an application manager (a monitoring application) on my Linux server. Now I need a backup schedule for the application. The application itself ships an executable to back up its database, but when I put it in my crontab to schedule the backup it won't run:
50 09 * * * root /opt/ME/AppManager9/bin/BackupMysqlDB.sh
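For comparison, my understanding is that the user field ("root") only belongs in /etc/crontab; in a personal crontab edited with crontab -e the line would look more like this, with output redirected somewhere so I can see why it fails (the log path is just an example):
Code:
50 09 * * * /opt/ME/AppManager9/bin/BackupMysqlDB.sh >> /tmp/appmanager_backup.log 2>&1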
I am having problems gaining access to the BackupPC admin web page. The error:
Quote:
[Tue Feb 22 16:43:59 2011] [error] [client 192.168.0.2] (13)Permission denied: Could not open password file: /etc/BackupPC/htpasswd
[Tue Feb 22 16:43:59 2011] [error] [client 82.30.227.113] access to /backuppc failed, reason: verification of user id 'myhtaccessuser' not configured
Why is this occurring, and is there any way of getting around it? I just can't remember how I originally set this up on my old system.
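If the password file is simply missing or unreadable by Apache, I assume recreating it and fixing its permissions would be something like this (the group name is a guess at whatever user Apache runs as on my system):
Code:
htpasswd -c /etc/BackupPC/htpasswd myhtaccessuser     # recreate the file and set the user's password
chown root:www-data /etc/BackupPC/htpasswd            # group is a guess
chmod 640 /etc/BackupPC/htpasswd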