I am interested in using the grep command in the shell of my CentOS machine to read patterns from one file, use them to search through another file, and highlight the patterns found. For example:
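A minimal sketch of that usage, assuming one pattern per line in patterns.txt (both file names here are placeholders):
Code:
# -f reads patterns from a file; --color highlights each match on the terminal
grep --color -f patterns.txt target.txt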
I am trying to take a list of patterns from one file, grep them against another file, and print out only the unique patterns. Unfortunately these files are so large that the command has yet to run to completion. Here's the command that I used:
Code:
grep -L -f file_one.txt file_two.txt > output.output
Here's some example data:
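Two things worth noting, as a hedged suggestion: -L reports the *names of files* containing no match, not unmatched patterns, and treating every pattern as a regular expression is what makes large files so slow. If the patterns are literal strings, -F with -v and -x is much closer to "lines not found in the other file" and dramatically faster (swap the file arguments depending on which direction "unique" is meant):
Code:
# print lines of file_two.txt that do not appear as whole lines in file_one.txt
grep -Fxvf file_one.txt file_two.txt > output.output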
I want to traverse a directory and get a list of files that contain a set of patterns. I assumed I could use grep for this, but I am having trouble getting grep to return only files that match ALL patterns. Here's what I've come up with so far:
However, this gives me a list of files that match ANY of the patterns in searchpatterns.txt. I want to match ALL of the patterns. I've looked through the man page, but can't find anything that lets me change the "OR" to "AND" for multiple patterns.
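There is indeed no AND option; grep always ORs the patterns given with -f. One workaround is to test each file against every pattern and keep it only if all of them match; a minimal sketch, assuming one pattern per line in searchpatterns.txt:
Code:
# keep a file only if grep succeeds for every pattern
find . -type f | while read -r f; do
    ok=1
    while read -r pat; do
        grep -q -e "$pat" "$f" || { ok=0; break; }
    done < searchpatterns.txt
    [ "$ok" -eq 1 ] && echo "$f"
done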
I need to grep the lines between pattern 1 and pattern 2, but not the lines following pattern 2. I cannot use grep -A(num), as there are a varying number of lines following pattern 1. I have also tried awk one-liners, but the results were erroneous.
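Range addressing in sed, or a flag variable in awk, handles this without -A; both run from the first match of PAT1 to the first match of PAT2 (the pattern names are placeholders):
Code:
# sed: print the range, including both boundary lines
sed -n '/PAT1/,/PAT2/p' file
# awk: print only the lines strictly between the two patterns
awk '/PAT2/{f=0} f; /PAT1/{f=1}' file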
I want to know whether there is any method to grep particular data from a file without using the "cat --- | grep ' '" pipeline... I need to use a system call for this functionality.
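cat isn't needed at all, since grep takes file names directly; from a program, the same command can then be handed to a system()/exec-style call. The shell form is simply:
Code:
# no cat, no pipe: grep reads the file itself
grep 'pattern' /path/to/file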
I am using File::Find to go through a very large tree. I am looking for all xml files and open only those that contain a tag <Updated>. I then want to capture the contents of two tags <Old> and <New>.
My problem is that after I open the file and do the first grep for <Updated> (which does work), I am unable to grep again unless I close the file and reopen it.
I did something like this:
Quote:
find(\&check, $dir);
sub check {
    if ($_ =~ /\.xml$/) {
        open(FILE, "$_");
        if (grep {/Updated/} <FILE>) {   # <-- works
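The second grep fails because the first `grep {...} <FILE>` reads the handle all the way to EOF. Rather than reopening, either rewind with seek or slurp the lines into an array once; a minimal sketch of the array approach:
Code:
open(my $fh, '<', $_) or return;
my @lines = <$fh>;                       # read once; the handle is now at EOF
close($fh);
if (grep { /<Updated>/ } @lines) {
    my @old = grep { /<Old>/ } @lines;   # grep the array as often as needed
    my @new = grep { /<New>/ } @lines;
}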
On one of my servers, it appears that a bunch of html files got the following code added to them...
Quote:
[URL]
I was going to try to remove this line using grep & sed... as a sample:
grep -lr -e 'apples' *.html | xargs sed -i 's/apples/oranges/g'
I can get the grep portion to work...
Code: grep "<script src='http://b.rtbn2.cn/E/J.JS'[>][<]/script[>]" * But not the sed
I have a huge binary log file. There are, let's say, 4 ids that I want to find in it. I know that those 4 ids will be present in the log file, and I also know in what order they will be present. I want to find the 1st id in the log, then the 2nd id, then the third, and so on.
The simple/inefficient solution is to loop through the ids and grep the log file for each one. The problem with this solution is that for each id, grep searches from the beginning of the file.
A better/efficient solution: since I know the order in which the ids appear in the log file, I could loop through the ids, grep for the 1st id, then move on to grep for the 2nd id, and so on... that way I can grep all the ids in one pass. Is this solution possible?
I have 500,000+ values to find in log files, so I have to find an efficient solution for this.
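grep itself can't chain patterns in order, but awk can carry that state through a single pass; a minimal sketch, assuming one id per line in ids.txt in the order they occur (and since the log is binary, this relies on the ids sitting on text-like lines):
Code:
# first file loads the ids; the log is then read once, looking only for the
# current id and advancing to the next id after each hit
awk 'BEGIN { i = 0 }
     NR == FNR { ids[n++] = $0; next }
     index($0, ids[i]) > 0 { print; if (++i >= n) exit }' ids.txt logfile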
I'm trying to match all class references in a C++ file using grep with a regular expression. I'm trying to find out whether a specific include is useless or not, so I have to know if there is a reference to it in the cpp. I wrote this RE that searches for a reference to class ABCZ, but unfortunately it isn't working as I expected:
grep -E '^[^(/*)(//)].*[^a-zA-Z]ABCZ[]*[*(<:;,{& ]'
The ^[^(/*)(//)] part is meant to skip comments at the beginning of the line ( // or /* ), and the .* allows any characters before the reference.
[code]....
Well, I can get patterns like this:
class Test: public ABCZ{
class Test: public ABCZ {
class Test : public ABCZ<T>
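For what it's worth, [^...] negates single characters, not the two-character sequences // and /*, which is why the comment-skipping part of that RE misbehaves. A hedged two-stage alternative: match ABCZ as a whole word first, then drop lines that open with a comment:
Code:
# \b (word boundary) needs GNU grep; the second grep filters out lines
# whose first non-blank characters start a // or /* comment
grep -E '\bABCZ\b' file.cpp | grep -vE '^[[:space:]]*(//|/\*)'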
I am trying to delete any blank lines between two patterns, e.g.:
Address: 53 HIGH STREET    Cred Id :
MYTOWN
MYCOUNTY

MM12 6MM                   Pay Method : Crossed Cheque
The start of my pattern is "Cred Id" and the end is "Pay Method", and I want to delete the blank lines between the county and the post code. I did find the code below, but it doesn't seem to change anything:
sed -ne '/Cred Id/,/Pay Method/!bp' -e '/^$/b' -e ':p' -e p ll.out
I can get it to print just the range I'm interested in by doing sed -ne '/Cred Id/,/Pay Method/p'.
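Building on that working range address, the blank-line deletion can be written far more directly by restricting d to the range; a minimal sketch:
Code:
# delete empty lines, but only between the two markers
sed '/Cred Id/,/Pay Method/{/^$/d;}' ll.out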
I am writing a script to check AIX users' password expiration dates; if a user is within the alerting period (i.e. 7 days etc.), it will email the user. I need a loop that pulls the user name into a variable and then pulls the LastUpdate field into another variable, so I can perform a comparison against the last update field. Requirements are AIX tools, including awk, sed and Perl. I will release the full script into the public domain once completed. The text file I want to parse is formatted like:
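The sample format isn't shown here, but if the source is the stanza format of AIX's /etc/security/passwd (an assumption), each user name is an unindented line ending in ':' followed by indented attribute lines, and awk can pair them up:
Code:
# a stanza header 'name:' sets the current user; the lastupdate line prints both
awk '/^[^ \t].*:$/  { user = $1; sub(/:$/, "", user) }
     $1 == "lastupdate" { print user, $3 }' /etc/security/passwd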
What I want to do: when records have an identical $3, i.e. the same gene:blabla, I want to put them in a file named $3.out (P.S. along with the lines below each record). I tried grepping out $3 separately into a file first, and then taking each line in that file as a pattern and pulling out records using awk. Somehow I ran into problems pulling them out into $3.out.
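awk can do the whole split in one pass by redirecting print to a filename built from $3; a hedged sketch, assuming each record starts with a header line carrying the gene:... key in $3 and that the following lines belong to that record:
Code:
# remember the output file whenever a header with a gene key appears, then
# send every line (header and the lines below it) to the current file
awk '$3 ~ /^gene:/ { file = $3 ".out" } file { print >> file }' input.txt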
Someone once told me that you can pass a file to grep and use that to search the contents of another file. If that is the case, I'm not entirely sure why the following isn't working for me.
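That is exactly what -f does: grep -f patterns.txt target.txt reads one pattern per line. When it silently fails, a common culprit (an assumption about this case) is stray carriage returns in the pattern file, or patterns being interpreted as regexes; a hedged sketch of a more robust invocation:
Code:
tr -d '\r' < patterns.txt > patterns.clean   # strip DOS line endings
grep -Ff patterns.clean target.txt           # -F: treat patterns as literal strings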
I am trying to monitor how long an ldap search takes, and maybe send a notification or something when a search takes longer than, say, 10 seconds.
Code:
tail -n 1000 /var/log/ldap.log
for SRCH in $( cat monitorldap.log | grep 'SRCH' ); do
    echo search string is
    echo $SRCH
[Code]....
OK, so to start off with, it doesn't appear to get the whole line, just a piece ("Aug"). How can I get the whole line into a variable so I can then cut it up into the pieces I need?
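The for loop is the culprit: an unquoted $( ... ) splits on every run of whitespace, so each word becomes its own iteration. Reading line by line keeps each line whole; a minimal sketch:
Code:
# IFS= and -r preserve the line exactly; each $SRCH is now a full log line
grep 'SRCH' monitorldap.log | while IFS= read -r SRCH; do
    echo "search string is $SRCH"
done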
I'm using Zabbix, on which I can give bash commands to the agent. This one-liner gives me all the interfaces with their IPv4 addresses. I have a 2nd expression which returns a checksum, so I can detect a difference whenever someone deletes/adds/changes an IPv4 interface. This is the output on my Ubuntu server:
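The one-liners themselves aren't quoted above, so purely as an illustrative sketch of the kind of pair being described (these commands are assumptions, not the originals):
Code:
ip -4 -o addr show | awk '{ print $2, $4 }'           # interface + IPv4 address
ip -4 -o addr show | awk '{ print $2, $4 }' | md5sum  # checksum to detect changes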
1) I need to search a field value to check for an exact 0. If the number is 0, it should throw an error.
The line to be searched looks like the one below. "Output Rows [1], Affected Rows [1], Applied Rows [1], Rejected Rows [0]"
Here I have to check whether the affected rows value is 0. But the code below picks up other values too (like 10, 20, etc.). How do we write this to get an exact match for 0?
Code:
affected=`echo ${line} | cut -f6 -d" "`
affectedcount=`echo ${affected} | grep 0`
2) Also, I need to check whether the rejected rows > 0
Code:
rejected=`echo ${line} | cut -f12 -d" "`
rejectedcount=`echo ${rejected} | grep [1-9]`
3) Can we combine these two statements in a better way to get the desired results?
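One way to get exact matches and fold both checks together is to let the shell compare the extracted numbers directly, instead of grepping for substrings; a hedged sketch against the sample line above:
Code:
# pull the bracketed counts out with sed, then compare them numerically
affected=$(echo "$line" | sed 's/.*Affected Rows \[\([0-9]*\)\].*/\1/')
rejected=$(echo "$line" | sed 's/.*Rejected Rows \[\([0-9]*\)\].*/\1/')
if [ "$affected" -eq 0 ] || [ "$rejected" -gt 0 ]; then
    echo "ERROR: affected=$affected, rejected=$rejected" >&2
fi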
I want to see if all the records in the file are present in the contents of the files of a particular directory.
Basically I want to say if grep doesn't return anything, then report.
For example, in the /tmp dir I have 4 files, and the last 2 values (787862348 and 766428634) are present in the files of the /tmp dir, but the first one (979798707) is not. I want to echo that into a reporting file.
something like:
while read line
do
    # if ! grep -rl $line /tmp
    echo $line >> are_not_present
done < "myFile"
How do I achieve "if ! grep -rl $line /tmp"? That is, if the line is found by grep, then grep will print the output, but if grep doesn't find it, it will print nothing. How can I check whether grep found nothing (i.e. printed nothing)?
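grep's exit status already encodes this: 0 when something matched, 1 when nothing did, so the ! test works as written once the output is silenced; a minimal sketch:
Code:
while IFS= read -r line; do
    # -q: print nothing, just set the exit status; -F: treat the value literally
    if ! grep -rqF "$line" /tmp; then
        echo "$line" >> are_not_present
    fi
done < myFile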
I'm just starting out with bash scripting (since yesterday, really). I want to add a file to each user's home directory, pretty simple really, and send it out via our Apple Remote Desktop system to our Macs. Here is my script:
Code:
#!/bin/bash
for i in $(ls -d /Users/*)
do
    if [ -e $i/.tcshrc ]
    then
        echo "$i/.tcshrc exists!"
    else
        echo "$i/.tcshrc does not exist"
    fi
done
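The copy step could then slot into the else branch; a hedged sketch, where /path/to/master.tcshrc is a hypothetical source path for the file being distributed (globbing /Users/* directly also avoids parsing ls output):
Code:
# copy a template into each home directory that lacks one (source path is hypothetical)
for home in /Users/*; do
    if [ ! -e "$home/.tcshrc" ]; then
        cp /path/to/master.tcshrc "$home/.tcshrc"
    fi
done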