I ran Scalpel to retrieve some files from an old HDD; what I didn't realise is how much larger Scalpel's output can be than the original disk's contents.
At about 5% of files carved (pass 2/2), my disk filled up and the program aborted.
Then when I rebooted, Ubuntu wouldn't let me log in except via Recovery Mode.
I think this is because there is no disk space left. The error I get is about Gnome Power Manager not being configured; I had never seen this error before.
So I went to a CLI prompt and ran 'find -maxdepth 2 | grep mpg', and I can see a tonne of mpg directories called ./~/mpg-1-6, ./~/mpg-1-7, etc., and they all have lots of files.
Now I have no way of deleting these; I tried 'sudo rmdir */*mpg' and it cannot find the directory.
When I go to my home folder and run 'sudo ls -al', these aren't listed.
The only way I can see them is using the 'find' command.
Can I chain the find command with a delete command somehow?
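A minimal sketch of chaining find with a delete, assuming the carved directories really are named mpg-1-* under the directory you start from (print first to preview, then delete):

Code:
# Preview what would match before deleting anything
find . -maxdepth 2 -type d -name 'mpg-*' -print
# Then remove the matched directories and everything in them
find . -maxdepth 2 -type d -name 'mpg-*' -exec rm -rf {} +

GNU find also has a -delete action (find . -name '*.mpg' -delete), but it refuses to remove non-empty directories, so -exec rm -rf is the usual approach for whole carve directories.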
I created a directory somewhere with permissions rwxrwxr-x so that other users in my group can create files and directories in it.
I do need to be able to delete the contents of this "public" directory, but it seems that while I am able to remove any files in this directory, I cannot remove any subdirectories under it.
Is there a way to remove such subdirectories owned by others under a directory owned by me?
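The usual explanation, offered as a sketch: removing a directory means first removing everything inside it, and that requires write permission on the subdirectory itself, not just on your parent directory. Root can always do it, and the setgid bit plus a group-friendly umask avoids the problem for future subdirectories (paths below are examples):

Code:
# Remove a subdirectory owned by someone else, as root
sudo rm -rf /path/to/public/subdir
# Make future files and dirs in the shared area inherit your group
chmod g+s /path/to/public

Note that setgid only controls group ownership; group write on the subdirectories themselves still depends on each creator using a umask like 002.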
How can I recover my deleted files in Ubuntu? What's the difference between "foremost" and "scalpel"? And is there any other program (or package?) for this purpose in Ubuntu? I am running Ubuntu 9.10.
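Both are file carvers that scan a disk for known header/footer signatures; a minimal sketch of basic usage, assuming the files were on /dev/sda1 (substitute your own partition) and that the output directory lives on a different disk:

Code:
# foremost: pick file types on the command line with -t
sudo foremost -t jpg,pdf,doc -i /dev/sda1 -o /media/otherdisk/recovered
# scalpel: first uncomment the wanted types in /etc/scalpel/scalpel.conf
sudo scalpel /dev/sda1 -o /media/otherdisk/recovered

PhotoRec (in the testdisk package) is another carver worth knowing about for the same job.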
As it says in the title, I need to use the scalpel file recovery tool, or something similar, to recover a lost MySQL storage folder. The system crashed, and I really need these files as fast as possible, so I would love any help I could get. I have been searching different search engines (including this forum) for an answer to my question, but I can't seem to find it. How can I configure scalpel, or any other similar application, to recover my MySQL /var/lib/mysql storage directory? I really need these files... And, I know, I should have taken a backup.
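Scalpel carves by header/footer signatures only, so it can't restore a directory as such, but you can teach it new file types via /etc/scalpel/scalpel.conf, whose entries have the form 'extension case-sensitive max-size header [footer]'. A sketch under assumptions: the tables were MyISAM, the partition is /dev/sda2, and \xfe\x01 is the .frm magic (verify against an intact .frm with 'xxd -l 2 some_table.frm'):

Code:
# custom /etc/scalpel/scalpel.conf entry for MySQL .frm files
frm    y    1000000    \xfe\x01
# then carve to a directory on another disk:
# sudo scalpel /dev/sda2 -o /media/otherdisk/mysql-carve

Carved files lose their names and paths; if the filesystem was ext3, a tool like extundelete can sometimes restore whole directories instead (extundelete /dev/sda2 --restore-directory var/lib/mysql).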
I am trying to delete a network I created on my computer (Create a new wireless network). I did so, but it is still there, visible to anyone out there in my neighborhood. I have Ubuntu Linux 8.04 LTS, desktop edition.
I am trying to use an old box as a backup server. I have tried a couple of possibilities along the lines of:
Code: rsync -a --delete --progress --log-file=/home/$USER/info.txt -e ssh /home /etc root@192.168.0.106:/mnt/back
The problem is it does not delete files that have been removed from my local system. I run the command as root on the local system.
(I realize I should probably not ssh into the server as the server's root, but I'm having trouble with the permissions and I want to make sure everything else works before messing around with it.)
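A variant worth testing, as a sketch: give each source its own destination with trailing slashes, and preview with --dry-run so rsync lists exactly what it would delete:

Code:
# -n (--dry-run) with -v shows pending deletions without doing them
rsync -avn --delete -e ssh /home/ root@192.168.0.106:/mnt/back/home/
# then for real, one source per destination directory
rsync -a --delete --progress --log-file=/home/$USER/info.txt -e ssh /home/ root@192.168.0.106:/mnt/back/home/
rsync -a --delete --progress -e ssh /etc/ root@192.168.0.106:/mnt/back/etc/

With several sources on one command line, --delete still only prunes inside the transferred directories, so the dry run is the quickest way to see whether rsync even considers the removed files.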
I just can't stand knowing that there's a slight problem with my PC. I have roughly 12.5 gigs of files, mostly multiple clones of a particular movie (which was an entirely different problem altogether), and I CANNOT DELETE THESE THINGS! There has to be a simple way to do it from the terminal; problem is, I can't seem to find the trash directory in the terminal.
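On recent Ubuntu releases the GNOME trash lives under ~/.local/share/Trash (older releases used ~/.Trash); a minimal sketch for emptying it from the terminal:

Code:
# deleted file contents live in files/, their metadata in info/
rm -rf ~/.local/share/Trash/files/* ~/.local/share/Trash/info/*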
We know that /etc/passwd- acts as a backup in case of any damage done to the /etc/passwd file. I have observed a strange thing in RHEL 5.4. For example, if /etc/passwd has 100 accounts, then /etc/passwd- has only 99. When I add a 101st account with "useradd", /etc/passwd has 101 accounts while /etc/passwd- has only the first 100. So when I delete /etc/passwd and recover it from /etc/passwd- in runlevel 1, the last-created user has no account after the recovery. What is the solution? The same happens with /etc/shadow and /etc/shadow-.
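That lag is by design: useradd and friends snapshot the file to /etc/passwd- immediately before modifying it, so the minus file is the previous version, not a mirror. A sketch for checking the difference and keeping your own current copy (the /root paths are just examples):

Code:
# show exactly what the backup is missing (typically the newest user)
diff /etc/passwd- /etc/passwd
# keep an up-to-date copy yourself after making account changes
cp -p /etc/passwd /root/passwd.bak
cp -p /etc/shadow /root/shadow.bak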
I tried using scalpel to recover some files that I had deleted. It went through the first pass, but on the next pass there was an error message saying the disk was full. The next time I tried to boot Ubuntu, I got a strange screen in a very large font and a message in the top corner: "The configuration defaults for gnome-power-manager have not been installed correctly". I do not want to reinstall Karmic Koala, as I may lose my email messages and other data.
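The gnome-power-manager message is a common symptom of a full root filesystem rather than a broken install, so freeing the space scalpel consumed usually fixes the boot. A sketch from a recovery-mode root shell (the carve directory's location below is a guess; use du to find yours):

Code:
df -h                      # confirm which filesystem is at 100%
du -sh /home/*/*scalpel*   # locate the carve output (path is a guess)
rm -rf /home/youruser/scalpel-output   # then reboot normally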
I have been trying to recover data off an external hard drive using scalpel. I got the file types set up and the output folder created, then I go to run the program and I get an error message: "ERROR: Couldn't open input file: /dev/sdb -- Permission denied". I tried resetting the permissions for the external drive by right-clicking it and going through the permissions menu, but every time the permissions reset themselves.
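The permissions dialog in the file manager changes the mounted filesystem, not the raw device node, and /dev/sdb is readable only by root (or the 'disk' group). The usual fix is simply to run the carver as root; a minimal sketch:

Code:
sudo scalpel /dev/sdb -o ~/recovered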
I can't figure out where to see when files and folders were created on the system. All I can see in GNOME and the terminal is the modification time. I also want to see the last access time.
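For the access time the standard tools already have options; creation time is trickier, since ext3 stores no creation timestamp at all, only access (atime), modification (mtime) and inode-change (ctime) times. A sketch:

Code:
stat somefile     # shows Access, Modify and Change times together
ls -lu somefile   # -u lists the access time instead of mtime
ls -lc somefile   # -c lists the inode change time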
Every time I open a drive, an icon shows up on the desktop. I hate that! I want a clean desktop free of icons so I can put pretty widgets and all that other junk. How do I stop this from happening?
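Assuming GNOME 2's Nautilus is drawing your desktop, mounted-volume icons are controlled by a gconf key; a minimal sketch:

Code:
gconftool-2 --type bool --set /apps/nautilus/desktop/volumes_visible false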
The permissions for my home directory were accidentally changed from 'access files' to 'create and delete files', and I changed them back, but ever since then I am not able to change any preferences/settings at all: power management, themes, panels, Emerald, anything. My user account is supposed to be the administrator, and all the user privileges are checked. How do I get control of my computer back?
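One likely cause, offered as an assumption: the configuration files under your home directory (such as ~/.gconf) ended up owned by root or unwritable during the permissions shuffle, so GNOME silently fails to save any setting. A sketch for taking ownership back:

Code:
# re-take ownership of your whole home directory
sudo chown -R $USER:$USER /home/$USER
# make sure your own config files are writable again
chmod -R u+rwX /home/$USER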
I am trying to set the default permissions on files created by www-data to 774 (a umask of 003).
I go to
Code: /etc/apache2/envvars
and have set these parameters. NOTE: The only thing I actually changed was adding the umask 003 at the end.
Code: # envvars - default environment variables for apache2ctl
# Since there is no sane way to get the parsed apache2 config in scripts, some
# settings are defined via environment variables and then used in apache2ctl,
# /etc/init.d/apache2, /etc/logrotate.d/apache2, etc.
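A sketch of the usual setup, with one caveat: a umask can only remove bits, and ordinary files are created from a base mode of 666, so umask 003 yields 664 files and 774 directories; no umask can make plain files come out 774:

Code:
# at the end of /etc/apache2/envvars
umask 003

# restart Apache so apache2ctl re-reads envvars
sudo /etc/init.d/apache2 restart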
I used Deja Dup to back up my Ubuntu. Eventually I reinstalled Ubuntu and went to restore from the files that Deja Dup created, but when it said it was done, there was no change. I can go into the difftar.gz files and see my folders and files, and I even tried extracting them from there, but when I open them, all the information isn't there.
I haven't restored Ubuntu until now, so I don't know if I'm doing it wrong or not, but it's REALLY important that I get some of these files. Am I missing a step, or what?
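Deja Dup is a front-end for duplicity, and the difftar volumes aren't meant to be unpacked by hand: a single file can be split across several volumes, which would explain the truncated contents you're seeing. A sketch of driving duplicity directly, assuming the backup lives in /media/backup (an assumption):

Code:
# restore the whole backup into a scratch directory
duplicity restore file:///media/backup /home/youruser/restored
# or pull a single path out of the backup
duplicity restore --file-to-restore Documents/notes.odt \
    file:///media/backup /home/youruser/notes.odt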
I'm working on a ~1 TB disk that was loaded with all kinds of images and documents and that lost its HFS+ partition table. The person for whom I'm doing the favor of running scalpel says there's likely 90 GB of stuff. Somehow, the disk got relabeled/the MBR changed to some FAT variant that spans the whole terabyte.
Attempts to recover the partition info failed. My first try with scalpel found more than 90 GB of image file headers alone, which blows through all of my storage. Of the headers found and recovered as images, a simple test shows most of the image files are broken. The cluster size option does not work if I use it by itself; it errors out before it gets going. I want to speed things up and skip the countless broken image files.
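One lever worth trying, as a sketch: the third field of each scalpel.conf entry is the maximum number of bytes carved per hit, so shrinking it caps how much disk a run can consume (exact header bytes below vary by scalpel version; check your own scalpel.conf):

Code:
# /etc/scalpel/scalpel.conf
# original stock entry, commented out:
#jpg  y  200000000  \xff\xd8\xff\xe0\x00\x10  \xff\xd9
# capped at 20 MB per carve so 90 GB of hits can't fill the disk:
jpg   y  20000000   \xff\xd8\xff\xe0\x00\x10  \xff\xd9

It may also pay to run TestDisk first to rediscover the original HFS+ partition boundaries; carving a correctly aligned partition instead of the raw disk tends to produce far fewer broken images.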
I'm a newbie despite using Ubuntu most of the time for nearly 3 years. There are some files which are created automatically on one of my NTFS partitions. The files are khq, khp, kht, an autorun.inf file, and others. They seem to have been created while I was using Ubuntu, and even though I delete them, they appear again later. I have googled and found some information saying the files are malware. I would like to know if there is a known issue and solution. This is the first time I'm posting a thread; I hope I have posted it in the right place.
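Files of that autorun.inf pattern target Windows, and they're usually planted by an infected Windows machine that mounts the partition rather than by Ubuntu itself, so check any Windows systems that share the disk. A sketch for scanning the partition from Ubuntu with ClamAV (the mount point is an assumption):

Code:
sudo apt-get install clamav
sudo freshclam                     # update the virus signatures
clamscan -r -i /media/ntfs-part    # -i prints only infected files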
I created a little bash script for renaming files in a folder. Every time, I have to put that bash script file (rename.sh) in the folder. Is there any way I can call rename.sh from the terminal without moving rename.sh into the folder? One more question: whenever I run any .sh file, a .sh~ file is automatically created. Is that my programming mistake, or is it normal?
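A minimal sketch: install the script once into a directory on your PATH and run it from wherever you are, letting it act on the current directory. The .sh~ files are backup copies left by your text editor (gedit does this by default), not something your script creates:

Code:
mkdir -p ~/bin
cp rename.sh ~/bin/ && chmod +x ~/bin/rename.sh
# Ubuntu's default ~/.profile adds ~/bin to PATH at next login;
# for the current shell:
export PATH="$HOME/bin:$PATH"
# now run it from inside any folder:
cd /some/folder && rename.sh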
Is there a simple command to copy files that have been created within the past 2 hours? I've been looking through the man pages for unison, rsync, find, and cp, and I can't find what I'm looking for. All I need is a simple command: copy folder a to b if created less than 2 hours ago.
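Linux filesystems generally don't record a creation time, so modification time is the usual stand-in; a sketch with GNU find, where -mmin -120 means 'modified within the last 120 minutes' (assumes b already exists next to a):

Code:
# copy files modified in the last 2 hours from a/ to b/,
# recreating their relative paths under b/
cd a && find . -type f -mmin -120 -exec cp --parents {} ../b/ \;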
Currently I have access to a VPS where we are running a small game server on Ubuntu. The problem is that it is a multi-user environment, so when one person restarts the server process, all files it creates are owned by that user's name and group. I have created a group called 'game' and added both users to it, but I need to know how to make all files in the game server's directory r/w/x for the group 'game'. Currently, I have a script that chowns and chmods all files recursively on startup, but I'd prefer not having to do this.
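A sketch that makes the group stick by itself, assuming the server lives in /srv/game (an assumption): the setgid bit on directories makes new files inherit the 'game' group, and each user's umask decides whether those files are group-writable:

Code:
sudo chgrp -R game /srv/game
sudo chmod -R g+rwX /srv/game        # X: execute only on dirs/executables
sudo find /srv/game -type d -exec chmod g+s {} +   # inherit group
# each user also needs 'umask 002' (e.g. in ~/.profile) so the files
# they create are group-writable

Plain POSIX permissions can't force a mode onto files other users create; if umask discipline isn't enough, a default ACL (setfacl -d -m g:game:rwX /srv/game, on a filesystem mounted with acl support) can.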
Using Ubuntu 10.10 (installed via Mythbuntu), I'm unable to read or see files/directories created under Ubuntu. I think it started happening after a reboot into Windows. Some of the directories created under Ubuntu have disappeared completely, and some of them produce the following error:
/media/storage/videos/Kids Videos$ ls
ls: cannot access Justin Bieber: Input/output error
ls: cannot access Octonauts: Input/output error
(the rest of the directory is seen fine...)
Same on some files:
$ ls -l
ls: cannot access Dirk Gently.mp4: Input/output error
ls: cannot access Dirk Gently.nfo: Input/output error
ls: cannot access Dirk Gently.srt: Input/output error
ls: cannot access Dirk Gently.tbn: Input/output error
ls: cannot access Human Planet: Input/output error
ls: cannot access Russell Howard's Good News: Input/output error
ls: cannot access The Planets: Input/output error
ls: cannot access Lost Land of the Tiger: Input/output error
total 300160 .....
Just to make it worse, I copied more data onto the disk from Windows, so I may have lost some of it completely. Is there any way I can repair this? When trying to check under Windows, it says it can't. Some of the missing files can be reloaded, but others can't. I ran chkdsk /f under Windows XP; some files have reappeared, but a lot of files were lost unrecoverably. Conclusion: Ubuntu 10.10 is badly broken for writing to NTFS. As I would like to share the external disk between Windows and Ubuntu, I'm not sure what to do at this stage.
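For whatever remains, a sketch of the usual check from the Linux side (ntfsfix, from the ntfs-3g tools, only repairs a few common inconsistencies and then schedules a full chkdsk for the next Windows boot; the device name is an assumption):

Code:
sudo umount /dev/sdb1    # the partition must not be mounted
sudo ntfsfix /dev/sdb1   # fix basic errors, schedule chkdsk

An unclean unmount (hard power-off, or unplugging before writes are flushed) may be the real culprit rather than the driver itself, so always use 'Safely Remove' before detaching the disk.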
I'm dipping my toes into bash scripting and was wondering if there is a way to delete a file based not on how old it is, but rather on how many other files are currently in the folder... or something to that effect.
What I'm doing is creating a script to back up a folder nightly. I'd like to keep a maximum of 3 backups. However, in case the script for some reason fails to run one night (computer turned off, possibly), I don't want the deletion condition to be the date.
I know that if I run:
Code: find /path/to/files* -mtime +3 -exec rm {} \;
then it will delete everything older than three days. -atime and -ctime don't seem to be what I'm looking for... is there another command I can use to achieve what I'm trying to do?
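A count-based sketch instead, assuming all the backups sit in one directory and their mtimes reflect creation order: sort newest-first, keep the first 3, delete the rest:

Code:
# list newest first, skip the 3 newest, remove everything after them
cd /path/to/backups && ls -1t | tail -n +4 | xargs -r -d '\n' rm --

This breaks on filenames containing newlines, which machine-generated backup names normally don't have.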
I have hundreds of MTS and AVI files going back to 2000 and would like to rename them in the following manner based on the date created: DD-MMM-YYYY HH.MM.SS_X, where X begins at 1 and increments by 1 if there are duplicate date/time-stamped videos.
Ex: 19-Nov-2002 08.12.30.avi, 19-Nov-2002 08.13.30_1.avi and 19-Nov-2002 08.13.30_2.avi
Someone previously wrote the following script for me, and it works great for photos. It uses exiv2 to get the image's date-created info. I have tried to understand the script but am struggling. The video files can use the date modified, since I have not modified them since I filmed them.
#!/usr/bin/env python
import os
import stat
import pyexiv2
import time

directory = '/home/david/Desktop/test'
[Code].....
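Since modification time is good enough here, a bash sketch that skips EXIF entirely, renames to the DD-MMM-YYYY HH.MM.SS pattern, and appends _1, _2, ... on clashes (the directory path is the one from the Python script; adjust the extensions to taste):

Code:
#!/bin/bash
cd /home/david/Desktop/test || exit 1
for f in *.MTS *.mts *.AVI *.avi; do
    [ -e "$f" ] || continue                       # skip unmatched globs
    ext=${f##*.}
    base=$(date -r "$f" +'%d-%b-%Y %H.%M.%S')     # file's mtime
    new="$base.$ext"
    n=0
    while [ -e "$new" ]; do                       # handle duplicates
        n=$((n + 1))
        new="${base}_$n.$ext"
    done
    mv -n -- "$f" "$new"
done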
I guess in most cases when extracting a tar archive, we get a directory with the same name as the archive file minus the suffix. But in some unlucky cases, as I met today, after extracting a tarball I found lots of files spread across the working directory, which is a real nuisance. So what I want to learn from you is how to move those newly created files. I know there should be some fancy "find plus rm" approach, but I don't know exactly how.
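tar itself knows exactly what it wrote, so a sketch that moves just those top-level entries, assuming the archive is foo.tar (an assumption) and it used relative paths:

Code:
mkdir extracted
# list archive contents, reduce to unique top-level names, move each
tar tf foo.tar | cut -d/ -f1 | sort -u | xargs -d '\n' -I{} mv -- {} extracted/

Next time, 'mkdir newdir && tar xf foo.tar -C newdir' extracts into a fresh directory, and 'tar tf foo.tar | head' previews whether a tarball is going to spray files before you extract it.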
With the find command it is easy to find files that have been modified or accessed within a given period. When a file is created, the access time is the same as the modify time, but as soon as it is accessed (read), the access time changes while the modify time does not. I need to find files that have been accessed at all, i.e. files whose access time is newer than their modify time. How do I do that?
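find has no built-in test comparing a file's own atime to its mtime, but GNU find can print both for awk to compare; a minimal sketch (note that mount options like relatime or noatime limit when atime is updated at all):

Code:
# %A@ = access time, %T@ = modification time, seconds since the epoch;
# awk keeps lines where the file was read after it was last modified
find . -type f -printf '%A@ %T@ %p\n' | awk '$1 > $2'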
I've created a script to FTP some files from a RHEL box to an EMC NAS, and the script works: if I run it manually, it runs fine and transfers the files to the NAS. But when I schedule it in the crontab, the script runs without transferring anything, and when I pipe the output of the cron job I see a message about 'passive mode refused' and 'lib: not a plain file', 'sys: not a plain file', etc. I did a vi ~/.netrc, which contains the single line "machine xxx.xxx.xxx.xxx login ftpuser password xxx".
My script looks like:
Code:
/usr/bin/ftp -i xxx.xxx.xxx.xxx <<EOF
cd ftpdir
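Two things worth trying, offered as assumptions: force passive mode explicitly with the client's -p flag if yours supports it (cron's environment differs from a login shell; the 'passive' command inside the session is the fallback), and make the upload glob match only regular files, since the 'lib: not a plain file' lines suggest a wildcard is picking up directories like lib and sys. A sketch (paths and pattern are examples):

Code:
#!/bin/bash
cd /path/to/outgoing || exit 1     # directory holding the files to send
/usr/bin/ftp -i -p xxx.xxx.xxx.xxx <<EOF
cd ftpdir
mput *.txt
bye
EOF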
Kernel 2.6.21.5, GNU (Slackware 12.0). Bash 3.1.17.
I want to search an entire subtree of / in the file system for all files with the extension html created on the hard disk. In addition, these have to be the last five created. I think I could split the problem into two parts: (a) forget about the last condition; then this is a job for the find command. (b) Sort the output of find using the date as the key, then use 'head' to print the desired output. But even two such simple steps are enough to justify writing a shell script. And here lies my weakness.
My script-writing knowledge is rudimentary. What's the final purpose? Well, I recently saved four or five LQ pages to disk containing information I consider valuable, but I don't exactly remember where on the disk. So either the problem posed is really of a very simple nature or it is not, in the latter case a script being mandatory. One drawback of the algorithm described above is that find may run for a long time. My machine's resources are scarce (RAM and CPU speed are low), and there are possibly a large number of HTML files on the disk.
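A one-pipeline sketch of exactly that split (ext3 keeps no creation time, so modification time stands in for "created"; -xdev stops find from descending into other mounted filesystems, which helps on a slow machine):

Code:
# print mtime and path for every .html file, newest first, keep five
find / -xdev -type f -name '*.html' -printf '%T@ %p\n' 2>/dev/null \
    | sort -rn | head -n 5 | cut -d' ' -f2-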