General :: Deleting A Specific Line Which Contains 2 Patterns?
Apr 7, 2011
I have a problem with deleting a line from a text file when the line contains two specific patterns. I am using "sed -i "/$name/ d" peop.txt", but I must also use one more variable, which is the surname.
And this is the content of the text file. The second question: when I use "/$name/ d" it deletes not only the names that match $name exactly but also every word that merely contains $name. How can I fix these problems?
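A minimal sketch of one fix for both problems, assuming GNU sed (\b anchors a word boundary, so only whole-word matches count) and that the name precedes the surname on the line:

Code:
# Delete only lines where both $name and $surname appear as whole words:
sed -i "/\b$name\b.*\b$surname\b/d" peop.txt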
I just got an email from Google saying my site contains malware. It has a line in it: "<script src='http://whitepix.info/3'></script>". I've noticed it's in all my .html and .txt files across the website. Can I write a Linux script that will go through all my .html and .txt files recursively and delete that line from them? I don't know how it got into all of them.
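One possible approach, assuming GNU sed for in-place editing and with /path/to/site as a placeholder for the document root; the \| delimiter avoids having to escape the slashes in the URL (test on a backup copy first):

Code:
# Delete every line containing the injected script tag from all .html/.txt files:
find /path/to/site -type f \( -name '*.html' -o -name '*.txt' \) \
    -exec sed -i "\|<script src='http://whitepix.info/3'></script>|d" {} +

Note this removes the whole line; if the tag was appended to lines that also hold real content, substitute it away with s|...||g instead of d.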
I'm trying to use sed to search for a certain 'primary' pattern that may exist on several lines, with each primary pattern followed by an unknown number of 'secondary' patterns. The lines containing the pattern start with: test(header_name). On that same line is an arbitrary number of strings that come after it. I want to move those strings onto their own lines so that each is preceded by its own test(header_name). E.g., original file (mytest.txt):
apples test("Type1", "hat", "cat", "dog", "house"); bananas
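A possible sketch, assuming GNU awk (gawk) for the three-argument match(), treating the first quoted argument as the header and giving each remaining argument its own test(...) line (note it drops surrounding text such as "apples"/"bananas"):

Code:
# Split each multi-argument test(...) call into one call per secondary string:
gawk 'match($0, /test\(([^)]*)\)/, m) {
    n = split(m[1], a, /, */)          # a[1] = header, a[2..n] = strings
    for (i = 2; i <= n; i++)
        print "test(" a[1] ", " a[i] ");"
    next
} 1' mytest.txt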
I have two files, file1.traj and file2.traj. Both files contain identical data, arranged in the same format. The first line of both files is a comment.
At line 7843 of both files there is a Cartesian coordinate, X, Y and Z (three values). At line 15685 there is another set of three values. There are 7841 lines between two consecutive coordinate lines, and there are a few hundred thousand lines in a file.
What I need to do is copy the X Y Z coordinate (three values) from file1.traj at line 7843 and paste it into file2.traj at the same line number. The next one will be line 15685 from file1.traj, replacing line 15685 of file2.traj. I don't want the other lines (data) in file2.traj to get altered. This sequence continues until the end of the file; that is, copy the selected lines from file1.traj and substitute them into file2.traj.
I tried to use the paste command but I can't apply it to specified lines alone.
Here I have shown the data format in the file; I used the line numbers for clarity.
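A possible awk sketch, assuming the coordinate lines repeat every 7842 lines starting at line 7843 (7843, 15685, ...) as described above; check file2.new before replacing file2.traj:

Code:
# Pass 1 stores the coordinate lines from file1.traj; pass 2 copies file2.traj
# with only those line numbers replaced:
awk 'NR == FNR { if (FNR >= 7843 && (FNR - 7843) % 7842 == 0) coord[FNR] = $0; next }
     FNR in coord { print coord[FNR]; next }
     { print }' file1.traj file2.traj > file2.new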
I am trying to take a list of patterns from one file, grep them against another file, and print out only the unique patterns. Unfortunately these files are so large that the job has yet to run to completion. Here's the command that I used:
Code:
grep -L -f file_one.txt file_two.txt > output.output
Here's some example data:
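Note that -L lists file names that contain no match, so it can never print the patterns themselves. A possibly faster sketch for "patterns from file_one.txt that never occur in file_two.txt", assuming the patterns are fixed strings, one per line:

Code:
# Extract each pattern occurrence once, then keep the patterns never matched:
comm -13 <(grep -oF -f file_one.txt file_two.txt | sort -u) \
         <(sort -u file_one.txt) > output.output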
I am doing molecular dynamics, where I have to edit files. I have looked at tutorials for grep and sed but can't find a solution. The files produced in my simulations look something like this:
ATOM 1825 NE2 GLN 112 113.646 27.895 14.456
ATOM 1826 HE21 GLN 112 114.020 26.957 14.490
ATOM 1827 HE22 GLN 112 112.649 28.039 14.388
I am starting to learn shell programming, but I couldn't accomplish a simple thing that can be done with sed or awk. It's simple: I am dynamically getting an IP number and a rule name, and I need to save them in a single file, each on a specific line number. I found how to enter a value on a specific line, but couldn't work out how to enter the second value.
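A minimal sketch: sed accepts several -e expressions, so both lines can be rewritten in one pass. The line numbers, variable names, and file name here are hypothetical, and the values must not contain "/" or "&" without escaping:

Code:
# Overwrite line 12 with the IP and line 24 with the rule name, in place:
sed -i -e "12s/.*/$ip/" -e "24s/.*/$rule/" rules.conf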
This is a continuation of the post in [URL].... What has happened is that I have a huge log of 600 MB+ and do not know how far back the data in it goes.
So I need to know if there is a way to get just the first five or ten entries, as one can do with head, tail, etc. I did try just using head but it was unsuccessful. The second thing would be deleting some days', weeks', or months' data from the existing sqlite log. I did see URL..., but in my case I am deleting old entries from the sqlite log, and I also do not know whether sqlite makes backups of the data or not.
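head and tail don't work here because an SQLite database is a binary file, not a text log; the sqlite3 CLI is the usual way in. A hedged sketch, where the database, table, and timestamp column names are all assumptions; SQLite keeps no automatic backups, so copy the file before deleting:

Code:
# Oldest ten entries (assumes a table "log" with a timestamp column "ts"):
sqlite3 mylog.db "SELECT * FROM log ORDER BY ts ASC LIMIT 10;"
# Delete everything older than 30 days, then reclaim the freed space:
sqlite3 mylog.db "DELETE FROM log WHERE ts < datetime('now', '-30 days');"
sqlite3 mylog.db "VACUUM;"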
I have a log file that contains information like this:
----------------------------
r11141 | prasath-palani | 2010-12-23 16:21:24 +0530 (Thu, 23 Dec 2010) | 1 line
Changed paths:
   M /projects/
   M /projects/
[code]....
What I need is to copy the data given between the "---" separators to separate files; e.g., the first set of data between the "---" lines should go in one file and the next set in another file.
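A possible awk sketch: start a new numbered output file at every dashed separator line (the log file name and the entry_N.txt naming are just illustrations):

Code:
# Each block between dashed separator lines goes to entry_1.txt, entry_2.txt, ...
awk '/^-+$/ { n++; next } { print > ("entry_" n ".txt") }' svn.log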
I'm new to shell scripting. Can anyone help in creating a shell script that matches the content of a specific variable against a file? It should remove a line from the file if the line contains exactly the same value as the variable, and keep the other content as it is. I used grep -v to accomplish this, but grep -v removes every line that merely contains the pattern. For example, assume a file "test" contains the lines "aa" and "ff". If I use grep -v with the pattern "a" on this file, it removes the "aa" line. I want only a line that is exactly "a" to be removed from the file, if it exists; otherwise it should print an alert that the content does not exist.
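A minimal sketch: grep's -x flag matches whole lines only, and -F treats the variable as a fixed string rather than a regex, which covers both requirements:

Code:
# Remove only lines that are exactly $var; alert if no such line exists:
if grep -qxF -- "$var" test; then
    grep -vxF -- "$var" test > test.tmp && mv test.tmp test
else
    echo "content does not exist: $var"
fi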
I'm trying to add text to a file for a specific group of users. I'll need to use examples, as I can't think of an easy way of explaining it; my file is like this:
Code:
users{
user1
user2
[code]....
At present my code lists all the available groups. How would I add a user to a specified group, e.g. add "members user3" at the end of group 1? So the code ends up like this:
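A hedged sed sketch, assuming GNU sed, that each block ends with a lone "}", and that the file is called groups.conf (an assumption): insert the new line just before the closing brace of the users block:

Code:
# Append "members user3" at the end of the users{ ... } block:
sed -i '/^users{/,/^}/ s/^}/members user3\n}/' groups.conf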
I upgraded to 2.6.33 and my computer will only run in low-graphics mode, so I scroll down one level back to 2.6.32-22 and all is fine. How can I erase the 2.6.33 line (and the associated recovery line) from the GRUB list so I don't have to deal with it until the bugs are fixed later on? My GRUB header says it is version 1.98-1ubuntu6.
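One common approach on Ubuntu (GRUB 1.98 is GRUB 2): remove the 2.6.33 kernel package and regenerate the menu. The package name below is an assumption, so check the real one with dpkg -l 'linux-image*' first:

Code:
# Remove the unwanted kernel, then rebuild the GRUB menu entries:
sudo apt-get remove linux-image-2.6.33-020633-generic
sudo update-grub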
What I want to do is: when records have an identical $3, i.e. the same gene:blabla value, I want to put them in a file named $3.out (along with the lines below each record). I tried grepping out $3 separately into a file first, and then taking each line in that file as a pattern and pulling out records using awk. Somehow I had problems writing the output to $3.out.
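A possible awk sketch for the grouping itself, assuming each record is a single line (the "lines below it" part would need a rule for how many lines belong to each record):

Code:
# Append every record to a file named after its third field, e.g. "gene:blabla.out";
# close() after each write avoids the open-file limit when there are many genes:
awk '{ f = $3 ".out"; print >> f; close(f) }' input.txt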
I want to traverse a directory and get a list of files that contain a set of patterns. I assumed I could use grep for this, but I'm having trouble getting grep to return only the files that match ALL patterns. Here's what I've come up with so far:
However, this gives me a list of files that match ANY of the patterns in the searchpatterns.txt file. I want to match ALL of the patterns. I've looked through the man page but can't find anything that changes the "OR" to "AND" for multiple patterns.
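grep has no AND mode across patterns, but you can chain one grep -l per pattern so that only files matching every pattern survive. A sketch assuming bash, GNU grep and xargs, and one pattern per line in searchpatterns.txt:

Code:
# Start with files matching the first pattern, then filter by each remaining one:
files=$(grep -rl -- "$(head -n 1 searchpatterns.txt)" .)
while IFS= read -r pat; do
    [ -n "$files" ] || break                   # no candidates left
    files=$(printf '%s\n' "$files" | xargs -d '\n' grep -l -- "$pat")
done < <(tail -n +2 searchpatterns.txt)
printf '%s\n' "$files"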
I need to grep the lines between pattern 1 and pattern 2, and not the lines following pattern 2. I cannot use grep -A(num), as there is a varying number of lines following pattern 1. I also tried awk one-liners, but the results were erroneous.
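A possible awk sketch, with PAT1 and PAT2 as placeholders for the real patterns: it prints only the lines strictly between the two, and nothing from PAT2 onward, regardless of how many lines each range holds:

Code:
# f turns on after a PAT1 line and off again from the PAT2 line onward:
awk '/PAT1/ { f = 1; next } /PAT2/ { f = 0 } f' file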
I used to use CCleaner so I could keep specific cookies from being deleted while I deleted all the others. Is there any way I can do this on Ubuntu? Firefox doesn't seem to allow it, other than manual deletion, which is not as fast and automated as CCleaner made the task.
I have a file with wildcard patterns: ./include/*, ./src/*, etc. From the current directory I would like to recursively get the list of files that do not match these patterns.
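A possible bash sketch: build one "! -path" test per pattern and hand them all to find; it assumes the pattern file is called patterns.txt and holds find-style globs such as ./include/*:

Code:
# Collect "! -path <pattern>" pairs, then list every file matching none of them:
args=()
while IFS= read -r pat; do
    args+=( ! -path "$pat" )
done < patterns.txt
find . -type f "${args[@]}"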
In Midnight Commander, is it possible to exclude some directories/patterns/... when doing a search (M-?)? I'm specifically interested in skipping the .hg subdirectory.
Is there a package I can download for Ubuntu that would allow me to type in, for example, cd [Tab key], and then it would go through the recent cd commands I've typed in?
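One common approach needs no extra package: bash's readline can already search history by prefix. A sketch that binds the arrow keys via ~/.inputrc (takes effect in new shells):

Code:
# After this, typing "cd " and pressing Up cycles through earlier cd commands:
cat >> ~/.inputrc <<'EOF'
"\e[A": history-search-backward
"\e[B": history-search-forward
EOF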