I have a lot of data on an NFS store external to an Ubuntu machine. I've had problems with the storage becoming full and have spent a very long time deleting files, getting absolutely nowhere! Over 25GB of deletions on the 80GB disk have only cleared 1.5GB of space...
You can see below the hidden .snapshot directory. The disk usage for the nfs-store comes out at 97GB when the disk is only 80GB in size, and the two directories I want on the disk are about 22GB. The .snapshot directory appears to have been made on Friday. I'd like to know if I can:
1) find out what command ran to create/update it
2) re-run the command to update the directory or remove the directory
Code:
ideasadmin@ideasadmin-desktop:/nfs-store$ ls -la
total 20
drwxr-xr-x 5 500 500 4096 2010-07-16 17:35 .
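A hidden `.snapshot` directory on an NFS mount is usually a filer-side snapshot tree (e.g. on a NetApp appliance): it is managed by the storage server, cannot be deleted from the client, and keeps referencing old blocks, which is why deletions free almost nothing. A quick way to see real usage is to measure with the snapshot tree excluded; a sketch, assuming the mount point `/nfs-store` from the post:

```shell
# Usage of the live data only, skipping the filer-managed snapshot tree.
du -sh --exclude='.snapshot' /nfs-store

# Compare with the total including snapshots:
du -sh /nfs-store
```

If the difference accounts for the missing ~75GB, the fix is on the storage server (deleting or shrinking the snapshot schedule there), not on the client.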
For a while my root partition has been filling up for no apparent reason. I have been deleting things, only to find it fills up again in a matter of days. To make it more 'interesting', there is a disparity between what I get from df and what the du command is telling me. After unmounting the other file systems and turning off applications, this is what I get:
du claims that I'm using 29G on that partition, which sounds about right (this is my OS and basic /home partition; everything else is elsewhere). df, on the other hand, is telling me that out of 69G, 64G are in use with only 54M left.
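A df/du gap like this is classically caused by files that were deleted while a process still holds them open: du no longer sees the directory entry, but df keeps counting the blocks until the last file descriptor closes. A sketch for finding the culprits:

```shell
# Files that are unlinked but still open (their blocks still count in df).
# Run as root to see every process.
lsof +L1 | grep -i deleted

# The same information straight from /proc: fd symlinks whose target
# was unlinked are shown with a "(deleted)" suffix.
ls -l /proc/[0-9]*/fd 2>/dev/null | grep deleted
```

Restarting the offending daemon (often a log writer) releases the space immediately.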
I noticed that one of my partitions on openSUSE 11.1, mounted as /usr, suddenly filled up with over 20 GB of something. I tracked it down to /usr/bin, in which the X11 directory contains 2353 items and another X11 directory, nested recursively: I've expanded at least 11 levels without reaching the last X11 directory. The newest files in each directory are dated 11/24/10.
Any idea of what's happening? How to stop it before it fills the disk?
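One likely explanation (an assumption, worth checking before anything else): on many distributions /usr/bin/X11 is a symlink pointing back at /usr/bin itself, so any tool that follows symlinks sees an endlessly nested X11/X11/X11 tree that consumes no disk space at all. A quick check:

```shell
# If X11 is a symlink to "." (or to /usr/bin), the nesting is an illusion.
ls -ld /usr/bin/X11
readlink /usr/bin/X11

# du does not follow symlinks by default, so this shows the real usage:
du -sh /usr/bin
```

If the symlink is the culprit, the real 20 GB is somewhere else on the partition; `du -x /usr | sort -n | tail` will point at it.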
I'm running an openSUSE 11.2 box at home which updates a MySQL database 4 times a day, then posts 34MB to my website. It's exactly the same amount of data each time, and my script TRUNCATEs then rewrites the database with the latest data - so the database size remains the same.
There's a problem (I think with the script), however, which means that every time the script runs, approximately 34MB of space on my hard disk is mysteriously taken up. I'll have to get that script fixed...
I can't, however, find the files which are eating up my disk space at the rate of 140MB per day. I've done various searches (mainly with Dolphin), including hidden files, immediately after running my flaky script, looking for any files created/modified in the previous few minutes when the 34MB has disappeared.
There are LOTS of files in /proc (which I don't think is actually on my HD right?) and also in /var. There's nothing much in /tmp (on a separate partition) or any log files that I can see. The box has been running this script daily for the last 6 weeks so I'm hoping there's a load of files somewhere I can get rid of, then fix my script.
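Instead of a GUI search, a root-level find run right after the script finishes should expose where the 34MB lands; `-xdev` keeps it on one filesystem and so skips /proc. One plausible suspect (an assumption, not confirmed by the post) is MySQL's binary log under /var/lib/mysql, which grows on every write even when the tables are truncated back to the same size:

```shell
# Files on the root filesystem modified in the last 10 minutes and
# larger than 1 MB; run as root right after the flaky script runs.
find / -xdev -type f -mmin -10 -size +1M -exec ls -lh {} \; 2>/dev/null

# Check whether MySQL's data directory (default Linux path) is growing:
du -sh /var/lib/mysql
```

If binary logs are the cause, they can be purged from within MySQL and disabled in my.cnf if replication isn't needed.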
I have a directory, /var/log/data, that is about 80 GB and filling up quite rapidly. I don't have much space left in the system, so I will be attaching an external HDD. My question is that I need to mount /var/log/data on the new HDD, so that I keep the old data plus the new data coming in. I don't want to copy the data out of /var/log/data and then mount the new HDD over it - you know what I am talking about - is there a simpler way, like linking or anything else?
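There is no way around one copy: mounting the new disk on /var/log/data hides whatever was there before, so the existing data has to be moved onto the new disk first. A sketch of the usual procedure, assuming the new disk appears as /dev/sdb1 (check with `fdisk -l`); this is an admin/config fragment with device-specific names:

```shell
mkfs.ext4 /dev/sdb1                     # format the new disk
mkdir -p /mnt/newdisk
mount /dev/sdb1 /mnt/newdisk
rsync -a /var/log/data/ /mnt/newdisk/   # one-time copy of the old data
umount /mnt/newdisk
mv /var/log/data /var/log/data.old      # keep the original until verified
mkdir /var/log/data
mount /dev/sdb1 /var/log/data           # new writes now land on the new disk
# Make it permanent across reboots:
echo '/dev/sdb1 /var/log/data ext4 defaults 0 2' >> /etc/fstab
```

Once you have verified the data, /var/log/data.old can be removed to reclaim the space on the old disk.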
I have automated backups running on my Ubuntu box using rsnapshot (rsync), basically following this tutorial. My concern is how to restore if everything is lost, since it does not seem that I can back up the entire drive: I have to choose individual folders, and some, like /proc, cause problems.
Currently in my rsnapshot.conf I have (see below). Is there a way to just clone the entire drive? Or should I not do this? Questions:
1, Can I backup in a way such that it is a clone of the drive so that it can be swapped with the current drive?
2, If cloning isn't possible and I had a total drive failure, would I install a basic Ubuntu and then replace all the files with the backups?
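The problems with /proc and friends go away if those pseudo and volatile filesystems are excluded; everything else can be copied and the result is restorable (after reinstalling the bootloader on the new disk). A sketch, assuming the backup drive is mounted at /mnt/backup:

```shell
# Copy the whole running system except pseudo/volatile filesystems.
# -aAXH preserves permissions, ACLs, xattrs and hard links.
rsync -aAXH --numeric-ids \
  --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
  --exclude='/run/*'  --exclude='/tmp/*' --exclude='/mnt/*' \
  --exclude='/media/*' --exclude='/lost+found' \
  / /mnt/backup/
```

The same exclude list works as `exclude` lines in rsnapshot.conf, so rsnapshot itself can back up `/` rather than individual folders.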
I have a directory containing a lot of result log files (directory size ~ 4GB), and a set of processes running that keep on writing to these files. To be able to correctly analyze the results at a later time, I want to copy the whole directory to an archive destination, and I cannot stop the processes.
I want a copy of the directory as it was at a particular point in time. As the directory is huge (which means it takes about 40 seconds to copy) and some of the files are being written to, a normal cp -r does NOT give me a snapshot at a particular point in time, but rather a snapshot of files spread over some 40 seconds. This is not good enough.
Is there a way to get an exclusive lock on the directory and all its components while copying?
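POSIX has no directory-wide lock that would block other writers, but if the directory lives on an LVM logical volume, a snapshot gives an atomic point-in-time image that can then be copied at leisure. A sketch with assumed volume names (volume group `vg0`, logical volume `data`); this needs root and an LVM setup, so treat it as an admin fragment:

```shell
# Atomic point-in-time snapshot of the volume holding the logs.
lvcreate --size 1G --snapshot --name logsnap /dev/vg0/data
mkdir -p /mnt/logsnap
mount -o ro /dev/vg0/logsnap /mnt/logsnap
cp -r /mnt/logsnap/results /archive/    # consistent copy, writers undisturbed
umount /mnt/logsnap
lvremove -f /dev/vg0/logsnap            # drop the snapshot when done
```

The 1G snapshot size only needs to hold the changes written during the copy, not the whole 4GB.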
I have been using VMware Player for some time to host Fedora VMware images on Windows XP. I have been using Fedora 11 and 12 (both 32 and 64 bit) and recently started to use Fedora 13.
I use as a base the images provided by thoughtpolice. http://www.thoughtpolice.co.uk/
I usually install VMware tools and also keep the images updated (yum update) which sometimes changes the kernel.
I have recently had problems with the snapshots not having a network when I restore them. So far I don't have the problem with Fedora 11 and do have it with Fedora 12 (but used not to). I do have it with Fedora 13.
In each case the problem goes away when I uninstall the VMware tools and comes back when I install them again.
One of the symptoms is that SElinux complains about not being able to do something with /var/run/vmware-active-nics.
It looks to me that something is incorrect in the actions being taken when the snapshot is being restored. It does not happen every time and sometimes the network restores itself.
The network can be restored by rebooting the image.
I work for a company that makes portable devices running Linux, and I was recently asked to make the underlying file system read-only for "security" purposes. Since the distribution is based on LinuxFromScratch, I know that very little writing happens at run time. So, even if the device runs on a USB flash device, I doubt that putting the root file system RO will be that beneficial. I am actually more concerned about a process breaking because it cannot open a file in RW mode than about a process going rogue and filling the root file system with log files, etc. I'd really like to hear what advantages and disadvantages there really are with read-only file systems.
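A common pattern on embedded devices is to mount root read-only and give the few paths that must be writable small tmpfs mounts, so a misbehaving process can at worst fill RAM-backed scratch space, never the flash. A sketch of such an /etc/fstab (paths and sizes are illustrative, not from the post):

```
# Root read-only; writable scratch areas on tmpfs.
/dev/root  /         ext4   ro,noatime          0 1
tmpfs      /var/log  tmpfs  size=16m,mode=0755  0 0
tmpfs      /tmp      tmpfs  size=16m            0 0
tmpfs      /run      tmpfs  size=4m             0 0
```

The main disadvantage is exactly the one raised above: any process that opens a file RW outside the tmpfs areas fails, so each such path has to be found (strace helps) and redirected. The gains are flash wear reduction and crash-safe power-off, arguably more than "security".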
Just lost my hard drive. Bought a new one, installing Ubuntu 10.10. I would like to install a couple of programs after my installation, wanting to use the machine to develop programs - so I am going to install Eclipse, Glassfish, Tomcat and such.
When I am done with that, I would like to take some kind of snapshot and burn it to a disc. So if a crash comes visiting me again, I am able to reinstall my basics. This would save me from those boring moments in life.
Can I take a 'snapshot' of my installation? Could I then restore it in some easy way?
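One simple, tool-free way is a tar archive of the whole installed system, restorable from any live CD; dedicated imaging tools such as Clonezilla do the same at the partition level. A sketch, assuming an external drive mounted at /media/external (path is an assumption):

```shell
# Archive the installed system, skipping pseudo/volatile filesystems.
# -p preserves permissions; run as root.
tar -czpf /media/external/system-snapshot.tar.gz \
    --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp \
    --exclude=/media --exclude=/mnt /

# To restore: boot a live CD, mount the new root at /mnt/target, then:
# tar -xzpf system-snapshot.tar.gz -C /mnt/target
# and reinstall grub on the new disk.
```

The resulting archive of a fresh development setup typically fits on a DVD.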
I use RH Linux 5 and it contains some tools/software like SWIG, Python, etc., but the versions of these tools are lower than what I want. So I am going to install higher versions of the tools on it. Before I do this I will make a snapshot first, so if something goes wrong I can restore the system. I've heard the easiest way to do this is to make a snapshot. Does someone know how to do this?
I started using Debian Live about three months ago. I can't get snapshot to work correctly. Each time the system boots or shuts down, it overwrites the persistent file. I currently have to use the live-snapshot command to make a snapshot, make a copy of the file for a backup, then boot my system, make a copy of the backup, and rename the file correctly. Then I turn off my laptop (not shutdown) and restart it for persistence to work. There must be an easier way. I have three partitions on an 8 GB USB flash drive: debian-live (/dev/sdb1), live-sn (/dev/sdb2), and home-rw (/dev/sdb3).
I am trying to use Perl to run a program using the eval command, but the program runs infinitely. I just need it to run for basically one second, stop, then give me the output. I tried using fork but it does not really work: the child process is not being killed.
my $pid = fork; if ($pid == 0) { exec 'ngrep etc...'; }   # child replaces itself with ngrep
sleep 1; kill 'TERM', $pid; waitpid $pid, 0;              # parent stops it after one second
I have hit a bug in Debian squeeze with kernel 2.6.32-5-xen-amd64 and Xen 4.0.1. I have tried two different environments, but I get the same result. I don't see this bug when I use just the kernel 2.6.32-5-xen-amd64 without the Xen 4.0.1 hypervisor, nor on Debian lenny with kernel 2.6.26-2-xen-amd64 + Xen 3.2.1. When I run a script that creates a snapshot of an LV, I get this bug error, just after the "lvcreate -s -n Snap -L 1G /dev/data/svsqueeze" in the script.
Recently, I've tried to create a snapshot of my /root folder in 10.04 (I remember having done so a couple of times without trouble). I used the command Code: lvcreate -L10G -s -n rootsnapshot /dev/server/root Without a hint of HDD activity, Gnome (?) hangs up - I can move the mouse, and the clock keeps moving, but I can't click on anything. No HDD activity, and this goes on forever (after a while, the whole system locks up). This happens every time I try to create a snapshot.
After I upgraded to Goddard I am experiencing a strange thing. Whenever my computer shuts down abruptly due to power failure or some other reason, it shows a snapshot of the last session for a second or so, just after the splash screen, when I start it again.
I want to experiment with pacemaker, and for that I'd like to start KVM virtual machines with the snapshot option, so that as soon as I stop the VM, all changes are gone and I can start over. Since I couldn't figure out how to set up networking (vm - vm and vm - public) with a KVM command line without disturbing the Network Manager, I used the Virtual Machine Manager (VMM) for this.
It works now, but I can't see how to use the snapshot option with the VMM. On the other hand, I can't see how to start the resulting VM from the command line. When I look at the process list, I see the command with these network options:
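One workable approach (a sketch, not VMM's own feature): copy the network options from the process listing into a manual `kvm` invocation and add `-snapshot`, which writes all disk changes to a temporary overlay that is discarded when the VM exits. Image path and network options below are placeholders for the ones VMM generated:

```shell
# Start the VMM-created image by hand with -snapshot; disk writes are
# discarded at exit, so every start is a clean slate.
kvm -m 1024 -snapshot \
    -hda /var/lib/libvirt/images/node1.img \
    -net nic,macaddr=52:54:00:12:34:56 -net tap
```

This is an admin fragment requiring KVM hardware; the libvirt XML behind VMM has no direct equivalent of `-snapshot`, which is why it doesn't appear in the GUI.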
I can cause the kernel to panic immediately with the following command. lvcreate --snapshot --name Snap --extents 100%FREE VolGroup00/LogVol00 The last line of the panic message is "<0>Kernel panic - not syncing: Fatal exception" If I create a snapshot of any other volume it works just fine. It only panics on LogVol00 which is my root fs.
I'm running 5.4 after update from 5.3. It didn't work with 5.3 either. This is a 32-bit guest running in VMWare Server 2.0.1 which is running on FC10 x86_64. I've tried the guest in both UP and SMP (2 cores) modes and observed no difference.
I'm booting to Kali 2.0 live from USB and wanted to add persistence, but I can't get OpenVAS setup. The setup script runs and eventually fails due to no more disk space. Here's my df -h output:
Here's gparted:
When the setup runs it fills up root (/), which is only 872 MB. This is a 16 GB USB stick, so I'm wondering if there's a way to allocate some of the 11 GB of unallocated space to root? I couldn't tell how to do this with gparted - would I need to build a custom Kali ISO or something with different partitioning?
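No custom ISO should be needed: Kali live persistence works by adding a partition labeled `persistence` in the unallocated space and booting with the persistence menu entry, so writes overlay the read-only root. A sketch, assuming the stick is /dev/sdb and the new partition (created first with gparted or fdisk) becomes /dev/sdb3; this is a device-specific admin fragment:

```shell
mkfs.ext3 -L persistence /dev/sdb3       # label must be "persistence"
mkdir -p /mnt/usb
mount /dev/sdb3 /mnt/usb
echo '/ union' > /mnt/usb/persistence.conf
umount /mnt/usb
# Reboot and choose the "Live USB Persistence" boot entry.
```

The OpenVAS data then lands on the 11 GB persistence partition instead of the 872 MB live root.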
I have a pre-printed form that I need to fill in. Is it possible to scan it, fill it in on screen, and then put the original form in the printer and get things to print out on that original form? I know that I can scan the form, fill it in on screen, and print out on a blank piece of paper, but I need to use the original form.
My /tmp directory is being filled up with root-tmp.####. I suspect they are being created by bastille-tmpdir-defense.sh, but they do not seem to get removed.
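If the per-session directories are simply never reaped, a cron job can prune the stale ones; the name pattern matches the post, but the 7-day age threshold is an assumption to tune:

```shell
# Remove root-tmp.* entries under /tmp that haven't been touched in 7 days.
find /tmp -maxdepth 1 -name 'root-tmp.*' -mtime +7 -exec rm -rf {} +
```

Dropped into /etc/cron.daily, this keeps /tmp bounded while leaving directories from live sessions alone.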
Does anyone know of software to take a snapshot of part of the screen and save it as a PDF file? I have KSnapshot installed, but this only allows saving image files.
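One option is to keep KSnapshot for the capture and convert the saved image with ImageMagick; `import` from the same package can even capture a screen region straight to PDF. A sketch (filenames are placeholders):

```shell
# Convert a KSnapshot-saved PNG to PDF (ImageMagick):
convert snapshot.png snapshot.pdf

# Or capture a screen region directly to PDF with ImageMagick's import:
# drag-select the area; the output format follows the extension.
import region.pdf
```

Note that some distributions restrict ImageMagick's PDF handling via its policy.xml, in which case the PNG-then-convert route may need that policy relaxed.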
I am relatively new to Ubuntu/Linux and although I have made some good progress on my server I'm struggling with a few points. I am sure what I'm about to ask has been covered in some other thread/guide, but I just can't pick out the missing piece, hence my direct question: I have 2 computers, a server and a mediacenter. On the server I have installed NFS:
Code: sudo apt-get install nfs-kernel-server nfs-common portmap
then
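After the packages are installed, the usual next step is to declare the shared directory in /etc/exports and activate it; the path and subnet below are assumptions to adapt to your network:

```shell
# Export /srv/media to the mediacenter's subnet (adjust path and network):
echo '/srv/media 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra            # re-read /etc/exports
showmount -e localhost  # verify the export list
```

On the mediacenter the share is then mounted with `mount server:/srv/media /mnt/media` (or a matching fstab line).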
Just today I started getting notices about lack of disk space on my system. After much digging I found that .xsession-errors and .xsession-errors.old were taking nearly 70GB of space combined. The primary message I'm getting over and over again is: SSL_Write: I/O Error. I have been unable to figure out what's causing this error.
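To reclaim the space immediately, truncate the file in place rather than deleting it: the running X session keeps .xsession-errors open, so an `rm` would leave the blocks allocated (and invisible to du) until logout. A sketch:

```shell
# Empty the live log without breaking the open file handle:
truncate -s 0 ~/.xsession-errors
# The .old copy is not held open and can simply be removed:
rm -f ~/.xsession-errors.old
```

Finding which program emits the SSL_Write errors (e.g. by watching `tail -f ~/.xsession-errors` while closing applications one by one) is still needed to stop the regrowth.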