My site was recently hacked and a line of <JAVASCRIPT> was inserted into all my PHP files. Is there a way to pull just that one line out of all the PHP files on the server? I was thinking of using a grep -iR <CODE> *.php and then piping through sed.
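A minimal sketch of that approach, assuming the injected line contains a distinctive marker such as evil.example.com (a hypothetical string; substitute part of the actual injected code) and that you have a backup of the site first:
Code:
grep -rlF 'evil.example.com' --include='*.php' /var/www | while IFS= read -r f; do
    cp "$f" "$f.bak"                      # keep a per-file backup
    sed -i '/evil\.example\.com/d' "$f"   # delete the injected line in place
done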
I'm working with a rather large file of data taken from a tracking program on my phone, and I'm trying to pull only the longitude and latitude from it. Any given line in the data looks more or less like this:
Which is a lot nicer, but I would prefer not to remove the non-number characters by hand, since there are thousands of data points. What could I do to get it down to just longitude and latitude in 'number number' format?
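A minimal sketch, assuming each line embeds the coordinates as key=value pairs such as lat=41.8781, lon=-87.6298 (a hypothetical format; adjust the regex to the real layout of the lines):
Code:
# Pull out the two numbers and print them as "latitude longitude".
sed -E 's/.*lat=(-?[0-9.]+).*lon=(-?[0-9.]+).*/\1 \2/' tracking.dat > coords.txt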
I have to delete a certain line of text from a text file via Ubuntu's shell scripting. I have done research, and it seems that most people advocate using sed's /d option. By default sed does not edit the text file in place, so most of the options I found involved the use of a temporary variable/text file and then overwriting the old file with the new one. Is there any way I can bypass the use of temporary storage containers? I hope there is some magical combination of commands to edit the file directly.
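One way around handling the temporary file yourself is GNU sed's -i (in-place) option; a minimal sketch, with the pattern and line number as placeholders:
Code:
sed -i '/pattern to delete/d' file.txt   # delete every line matching the pattern
sed -i '42d' file.txt                    # or delete a specific line by number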
I would like to append text to a file, so in bash I wrote: echo text >> file.conf. However, it doesn't leave a new line, so I can only do this once. How do I add a new line?
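For what it's worth, plain echo already ends its output with a newline, so repeated appends should land on separate lines; if the first append sticks to an existing line, the file probably just lacked a trailing newline. A minimal sketch with printf, which spells the newline out:
Code:
printf '%s\n' "text" >> file.conf   # append "text" followed by an explicit newline
echo "more text" >> file.conf       # echo adds its own trailing newline as well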
I am looking at how to add particular text to a file in bash. Here is what I am trying to do: in the /etc/grub.conf file, I am trying to add "audit=1" (without the quotes) to the end of the kernel line, such as: kernel /vmlinuz-2.6.18-194.el5 ro root=LABEL=/1 rhgb quiet audit=1
As there are a few different lines in this file, I am only looking to add "audit=1" to the above line, via a bash script.
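A minimal sketch with GNU sed, assuming the target entry can be picked out by its kernel version string; the inner /audit=1/! guard keeps the script from appending the option twice:
Code:
sed -i '/^kernel \/vmlinuz-2\.6\.18-194\.el5 /{/audit=1/! s/$/ audit=1/}' /etc/grub.conf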
I have a lot of folders, each named by a number, and in each of these folders I have a specific file (stddev.dat) containing a single line (of numbers). I need to end up with a single file where each line is the contents of one of the stddev.dat files (no matter if it is sorted or not), and I also need to add at the beginning of each line the number of the folder it comes from.
I'm no bash expert, and the "add at the beginning of the line" part is a bit of a problem for me. Here is what I've come up with so far, just to put everything in one file (and if you know a better/more elegant way to do the same thing I've done, I'm listening):
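For reference, a minimal sketch that also prepends the folder number, assuming the numbered folders all sit in the current directory:
Code:
for d in */; do
    dir=${d%/}                            # folder name without the trailing slash
    [ -f "$dir/stddev.dat" ] || continue  # skip folders that lack the file
    printf '%s %s\n' "$dir" "$(cat "$dir/stddev.dat")"
done > all_stddev.dat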
How can I do the following with grep? I want to extract 2 lines from a text file. The fixed, known part, if it exists, will be static text, and the text line after it will change.
A sample file . . textline1
[code]....
If the fixed part does not exist, how can I return error code 1?
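A minimal sketch, with FIXED KNOWN TEXT as a placeholder for the static part: grep -A1 prints the matching line plus the line after it, and grep itself already exits with status 1 when the pattern is not found.
Code:
grep -A1 'FIXED KNOWN TEXT' sample.txt
exit $?   # 0 if the fixed part (and the line after it) was printed, 1 if not found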
Is there a sed command to add text before a given line number in a text file? I have a text file with 500 lines, and I want to add 3 more lines of text after line 300, or before line 302; either is fine.
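A minimal sketch with GNU sed, appending three placeholder lines after line 300 (equivalently, before line 301):
Code:
sed -i '300a\
new line one\
new line two\
new line three' file.txt

# Or keep the new text in its own file and read it in after line 300:
sed -i '300r extra.txt' file.txt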
I have two txt files containing x and y coordinates: xcoord.txt & ycoord.txt. I need to open them, read them line by line to get each coordinate, and then each time update the Xs and Ys parameters inside another file called "dc.in" with the grabbed values.
Finally, each time I need to run two executables (dc_2002 and st_vac) and produce the corresponding output for each Xs and Ys (dc.in is an input file for these executables).
I have written the following code but it does not work:
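For comparison, a minimal sketch of one way the loop could be structured, assuming dc.in contains lines such as "Xs = 1.0" and "Ys = 2.0" (hypothetical parameter names) and that both executables read dc.in from the current directory:
Code:
paste xcoord.txt ycoord.txt | while read -r x y; do
    sed -i -e "s/^Xs = .*/Xs = $x/" -e "s/^Ys = .*/Ys = $y/" dc.in
    ./dc_2002    # produces its output for this X,Y pair
    ./st_vac
done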
I was wondering if any Perl gurus could help me with a quick log file adjustment. I have a text file that looks like so (tabs and newlines are revealed so you can see what separates the data):
There are maybe 100 lines of text in this file at any given time. I need to delete all duplicate lines, looking only at the first bit of text prior to the first tab. It doesn't matter which one gets deleted, as long as no two lines begin with the same text before the first tab. So in this example, either the first line "1234" or the last line "1234" would need to be deleted. I already have code in my script that opens the files - I just need the code that reads the text into an array, finds matches based on the above criteria, and makes the deletions.
If it would be easier, I can even do a system call and use SED (v4.1.5) and/or AWK (3.1.5) instead.
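Since a system call is acceptable, a minimal awk sketch that keeps only the first line seen for each value of the first tab-separated field (the question allows either duplicate to go):
Code:
awk -F'\t' '!seen[$1]++' logfile.txt > logfile.deduped.txt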
I have a folder with many files, e.g. HTML, docs, Excel sheets, scripts, etc. Now I want to find (using grep) a certain word in that folder/directory and delete it from all the files and scripts that contain it.
For example, I want to delete the word /testing (with the slash) in all files in a directory.
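A minimal sketch: grep lists the files that contain the word, and sed removes every occurrence in place (the | delimiter means the slash in /testing needs no escaping). Back the directory up first.
Code:
grep -rlF '/testing' . | while IFS= read -r f; do
    sed -i 's|/testing||g' "$f"
done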
I am looking for a way to keep a log and use if/then statements depending on whether a line exists in the log. I am also looking for a way to make a simple loop, like a goto to a line number, and I am also wondering how to add/remove bits of text from a text file (the plugins line in server.properties).
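A minimal sketch of all three pieces, with the log file name, marker text, and plugin values as hypothetical placeholders:
Code:
# If/then on whether a line already exists in the log:
if grep -qF 'some marker line' my.log; then
    echo "already logged"
else
    echo 'some marker line' >> my.log
fi

# Bash has no goto; a while/read loop walks the file line by line instead:
while IFS= read -r line; do
    echo "processing: $line"
done < my.log

# Rewrite the plugins line in server.properties in place:
sed -i 's/^plugins=.*/plugins=essentials,worldedit/' server.properties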
bash 3.1.17(2): I'm trying to write a shell script which must operate on each line of an ASCII text file. So, all the code must be inside a loop, and inside the loop the first thing should be to read the next line from the file. I have the bash read command, but it reads from stdin. Is there any way to make it read from a file?
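A minimal sketch: redirect the file into the loop, so read takes its lines from the file instead of stdin.
Code:
while IFS= read -r line; do
    echo "got: $line"
done < input.txt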
I'm a bit new to Python programming and hoped that someone might be able to help with a problem I'm having. What I essentially want to do is combine two text files line for line. I know how to do this in a bash script, so to give you a better idea here's the code for that:
Code:
This is basically for adding on values to the end of a CSV file that uses ';' as the delimiter. So say file1 said:
And file2 said:
Then running this command would create merged_file1_and_file2 which would be:
The code I'm using at the moment is:
Code:
As I'm sure any experienced Python programmer will see, this prints out the first line of the file "csvraw", then all of the lines of "stamps", and then the remainder of "csvraw".
What I'd like to do is something like this (pseudo code, I know it's not Python ;-)):
Code:
Is this possible? I've tried Googling, and my Python Pocket Reference hasn't been much help. I've looked at pickling, but that doesn't seem appropriate.
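As a cross-check of the expected output, the line-for-line merge itself (with ';' as the glue) can also be written as a single shell command, using the file1/file2 names from the example:
Code:
paste -d';' file1 file2 > merged_file1_and_file2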
I have huge text files with two fields: the first is a string, the second an integer. The files are sorted by the first field. What I'd like to get in the output is one line per unique string, with the sum of the numbers for identical strings. Some strings appear only once, while others appear multiple times. Given the sample data below, for the string glehnia I'd like to get 10+22=32 in the result. How can I do this, either with GnuWin32 command line tools or in a Linux shell?
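A minimal awk sketch (gawk is available in GnuWin32 as well as on Linux) that streams through the sorted input and prints each string once with its summed total:
Code:
awk 'NR > 1 && $1 != prev { print prev, total; total = 0 }
     { prev = $1; total += $2 }
     END { if (NR) print prev, total }' input.txt > sums.txt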
How can I replace one instance of a word in a text file with a piece of text that spans several lines? I know sed or awk is the way to go, but I don't know how to introduce new paragraphs using these tools.
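A minimal sketch with GNU sed, where PLACEHOLDER stands for the word to replace: \n in the replacement inserts literal newlines, and the 0,/…/ address limits the change to the first occurrence in the file.
Code:
sed -i '0,/PLACEHOLDER/ s//first new line\nsecond new line\nthird new line/' file.txt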
My web page's PHP is below. I would like to enter "Hello" in the main input box field below "You are editing: textfile.txt" and click "SAVE" directly from the command line.
Sort of: wput_php "hello" into the main input box field, click save, and there it is - the text would be uploaded.
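A form like that can often be driven from the command line with curl; a minimal sketch, where the URL and the field names content and save are hypothetical and have to be taken from the form's actual HTML:
Code:
curl -d 'content=Hello' \
     -d 'save=SAVE' \
     'http://example.com/edit.php?file=textfile.txt'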
I am basically trying to remove duplicate words in my <title></title> tags after I got hit by Google Panda. I have around 750 .html files, and it will be difficult for me to remove them one by one. I am looking for a way to remove the duplicates only from within <title></title>.
Example of a duplicate title I have:
Code:
<title>Pasta, Pasta Recipe and Pasta Guide</title>
I don't want to replace those words anywhere else in the file, only within the <title>.
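A minimal sketch with GNU sed, assuming each <title> element sits on its own line: the /<title>/ address limits the edit to those lines, and the substitution collapses an immediately repeated word ("Pasta, Pasta Recipe" becomes "Pasta Recipe"). Back the files up before running it across all 750.
Code:
for f in *.html; do
    sed -i '/<title>/ s/\b\([[:alnum:]]\+\),\? \1\b/\1/g' "$f"
done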
I have large text files with space-delimited strings (2-5 per line). The strings can contain "'" or "-". I'd like to replace, say, the second space with a pipe. What's the best way to go? Using sed, I was thinking of this:
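A minimal sed sketch that anchors on the first two space-separated fields and so replaces only the second space on each line:
Code:
# Capture "field1 field2", then swap the space that follows for a pipe.
sed -i 's/^\([^ ]* [^ ]*\) /\1|/' file.txt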