General :: Grep -- Finding Files That Contain Two Separate Strings?
Dec 9, 2010
I've got a quick grep question. I'm trying to work out a command I can use to locate all of the files in a directory that have SQL database connection details. I want to do it by looking for the strings "localhost" and the name of the database. This is what I have so far:

Code:
    find . -type f -exec grep -l -E '^(localhost|DATABASE_NAME)' {} \;
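A minimal sketch of one common fix (DATABASE_NAME stands in for the real database name): the -E pattern above is an OR, so it lists files containing either string; chaining two greps makes it an AND, because the second grep only sees files that matched the first:

Code:
    grep -rlZ 'localhost' . | xargs -0 grep -l 'DATABASE_NAME'

(grep -Z and xargs -0 pass the file names NUL-separated, so names with spaces survive the pipe.)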
I have two text files in the form:

1 ItemA [value]
2 ItemB [value]
3 ItemC [value]

Some of these items are common to both files, while others are missing from one or the other. I want to compare the values for each common item in the two lists, but don't know how. I have a vague idea that grep might be useful, but I don't know how to use it for this purpose. So, to sum it up, what I would like to do is to take two text files containing lists and merge their common items into a third file of the same form.
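A hedged sketch using join, assuming the item name is the second field; list1 and list2 are placeholder names:

Code:
    join -j 2 -o 1.2,1.3,2.3 <(sort -k2,2 list1) <(sort -k2,2 list2) > merged

(join -j 2 pairs lines whose second fields match; -o prints the item followed by the value from each file, which makes comparing the two values easy.)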
I want to search and replace strings in a file with strings from other files. I need to do it with big strings (string1 is big) and I want to use a txt file for this, but the code I have is not working.
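A minimal bash sketch under assumed names (old.txt holds the text to find, new.txt the replacement, target.txt the file to edit); shell parameter expansion treats the quoted pattern literally, which avoids regex trouble with big multi-character strings:

Code:
    old=$(<old.txt)
    new=$(<new.txt)
    content=$(<target.txt)
    printf '%s\n' "${content//"$old"/$new}" > target.txt

(This loads the whole file into memory and strips trailing newlines from the pieces read with $(<...), which is usually acceptable for text files.)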
I have a problem in finding blocks of identical strings. I solved the problem of finding consecutive identical words, and now I want to expand the code in order to find and remove consecutive identical blocks of strings, building on the awk code I already have that removes consecutive identical words.
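For the block case, a minimal awk sketch that treats blank-line-separated paragraphs as the blocks (an assumption; adjust RS if the blocks are delimited differently): in paragraph mode each record is a whole block, so skipping a record equal to the previous one removes consecutive duplicates:

Code:
    awk 'BEGIN { RS = ""; ORS = "\n\n" } $0 != prev { print } { prev = $0 }' file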
I am trying to search for a pattern in a file. I can have any character in the pattern; I am pretty sure I will have $, ", ', ^, ` etc. The problem I am facing is that if I use "" (double quotes) to enclose the pattern, special meaning is given to $, ^ and " within the string. I have no control over the pattern input; I am getting it from some other file. On the other hand, if I use '' (single quotes) to enclose the pattern, the ' (apostrophe) within the string terminates the pattern prematurely. How do I disable the special meaning these characters have? For example, in Perl I could enclose the pattern within \Q and \E. Is there an equivalent in grep pattern expressions? I could not find one in the man page of grep. Is there a solution to this problem?
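A hedged answer sketch: grep -F treats the entire pattern as a fixed string (no regex metacharacters at all), and reading the pattern from a file with -f sidesteps shell quoting entirely, since the pattern never passes through the shell:

Code:
    grep -F -f patternfile target_file

(patternfile and target_file are placeholder names; -f expects one pattern per line.)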
I know how to use grep to output a line that matches a string. But what if I also want to output one line above every line containing a matching string, how do I do that?
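A minimal sketch with grep's context options: -B N prints N lines before each match (-A N prints lines after, -C N prints both):

Code:
    grep -B 1 'pattern' file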
I would like to find all the files that contain the strings I'm searching for.
For example (it's just an example), I would like to search all the files in "/etc" that contain "eth0" and "us", wherever those 2 strings are located; the important thing is that both strings are in the files listed.
It would be something like a "grep -lr 'eth0' *" and a "grep -lr 'us' *" but in a single command, so that I don't have to compare the 2 lists of files resulting from the 2 "grep" commands given above.
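A minimal sketch chaining the two searches, so the second grep only inspects files that already matched the first:

Code:
    grep -rl 'eth0' /etc 2>/dev/null | xargs grep -l 'us'

(The stderr redirect hides permission errors from unreadable files under /etc; it assumes file names without spaces, which holds for /etc in practice.)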
Long story short, I got a folder with nearly 800,000 php files. I would like to search each file for a string and, if it exists in that file, have the file copied to another directory. Is this possible from the terminal? So far I got:

Code:
    grep -i -n -r 'ppr-1792' * | cp $1 move_to_here
But this obviously doesn't work. $1 needs to be the file name that contains matching text.
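A minimal sketch, assuming the destination directory move_to_here already exists: -l makes grep print just the matching file names, and xargs hands them to cp:

Code:
    grep -rilZ 'ppr-1792' . | xargs -0 cp -t move_to_here/

(GNU cp's -t flag names the target directory first, so xargs can append many source files per invocation, which matters with 800,000 files; -Z and -0 keep odd file names intact.)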
What command should be used to read strings from other files into a script, compare strings from two different files to check whether the strings match, and then return the result to another script?
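A hedged sketch, assuming one string per line in each file: comm prints the lines two sorted files have in common, and the output can be captured or tested by the calling script:

Code:
    comm -12 <(sort file_a) <(sort file_b)

(file_a and file_b are placeholder names; -12 suppresses the lines unique to each file, leaving only the matches.)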
I want to traverse a directory and get a list of files that contain a set of patterns. I assumed I could use grep for this, but I'm having trouble getting grep to only return files that match ALL patterns. What I've come up with so far gives me a list of files that match ANY of the patterns in the searchpatterns.txt file. I want to match ALL of the patterns. I've looked through the man page, but can't find anything that allows me to change the "OR" to "AND" for multiple patterns.
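A minimal sketch, assuming one pattern per line in searchpatterns.txt: there is no AND flag, but narrowing the file list with one grep pass per pattern has the same effect:

Code:
    files=$(find . -type f)
    while IFS= read -r pat; do
        files=$(printf '%s\n' "$files" | xargs -r grep -l -- "$pat")
    done < searchpatterns.txt
    printf '%s\n' "$files"

(Plain xargs here assumes file names without spaces or quotes; NUL-separated handling would harden it.)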
I am looking for all the files that contain the text string 'moo.sql'. I ran the following:
Code:
    find . -name '*.php' | grep -lir 'moo.sql' *
Unfortunately it seems to return non-php files in addition to php files. I thought the find portion of this would filter the file names so grep would only search php files.
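A minimal sketch of the usual fix: in the pipeline above, grep never reads the file list from find; the '*' argument plus -r make it search the whole tree on its own, which is why non-php files show up. Letting find run grep restricts the search properly:

Code:
    find . -name '*.php' -exec grep -l 'moo\.sql' {} +

(The dot is escaped so it matches a literal '.' rather than any character.)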
I want to find files containing the "$" char (ASCII 0x24). 'grep -irl $ *' would output the names of every file in path *, of course, because $ means end of line (EOL). So giving grep the string "$" won't do. So I tried 'grep -irl \$ *'. But this doesn't work either and I do not understand why. Am I not escaping the dollar sign? grep should interpret it literally. Nor will 'grep -irl "\$" *' work. Fortunately, there's LQ, besides grep's man page.
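A hedged explanation and sketch: in both failing attempts the shell removes the backslash before grep runs (\$ unquoted and "\$" in double quotes both reach grep as a bare $, the end-of-line anchor again). Single quotes preserve the backslash, and -F avoids regex interpretation altogether:

Code:
    grep -irl '\$' *
    grep -irlF '$' *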
If I type 'grep alias .bashrc' a whole load of stuff comes up. However, if I type 'grep alias *' nothing comes up. Is there some switch for including 'hidden' files - like the -a switch for ls?
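A minimal sketch: grep has no hidden-files switch; which files get searched is decided by the shell's globbing, and * simply skips dot files, so they have to be named explicitly:

Code:
    grep alias .* *

(.* also expands to the directories . and .., which grep merely complains about; the stricter glob .[!.]* avoids them. A recursive 'grep -r alias .' includes hidden files too.)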
I am new to Linux as well as awk, grep and sed. I need a find-and-replace one-liner or script that loops through an input file (file1), finds each of its entries in file2, and adds "!" in front of the found string.
Example:

input file: file1
g+h=o+p
a+b=c+d

file2 (the file that needs to be searched):
a+b=c+d1e105
x+y=z+s5e105
g+h=o+pabcdefg
t+r=w+qxvyderf

Output file (file3 should look like this):
!a+b=c+d1e105
x+y=z+s5e105
!g+h=o+pabcdefg
t+r=w+qxvyderf
I have tried many awk and sed methods of find and replace, but they did not work the way I wanted, mainly due to my lack of experience with awk and sed. The program should loop through file1, find the match in file2 and write the output to file3 for the first set (g+h=o+p), then repeat the same process for set 2 (a+b=c+d).
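A minimal sed sketch, assuming the file1 entries contain only characters that are literal in basic regular expressions (+ and = are; characters like . or * would need escaping first):

Code:
    cp file2 file3
    while IFS= read -r pat; do
        sed -i "s/^${pat}/!${pat}/" file3
    done < file1

(The ^ anchors each pattern to the start of the line, matching the example; GNU sed's -i edits file3 in place on each pass.)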
This has to also show the line count. I can get it to show the files but not the line count. What is the single command used to identify the matching count of lines within files under the /etc directory that contain the word "HOST"? List only the files with matches and suppress any error messages.
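A minimal sketch: grep -c prints filename:count for every file, filtering out the ':0' entries keeps only the files with matches, and redirecting stderr hides permission errors:

Code:
    grep -rc HOST /etc 2>/dev/null | grep -v ':0$'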
I would like to know how to use the grep command to filter, from a bunch of logs covering the whole day under different headings, the entries created between 3:00 PM and 4:30 PM. These files resemble sar files in Linux.
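grep is awkward for ranges; awk can compare timestamps as strings. A minimal sketch, assuming each line starts with a zero-padded 24-hour HH:MM:SS timestamp (sar-style 12-hour AM/PM fields would need converting first):

Code:
    awk '$1 >= "15:00:00" && $1 <= "16:30:00"' logfile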
I am trying to grep a particular string from files on 2 different servers without copying them over, and to calculate the total count of its occurrences across both files. The file structure is the same on both servers.
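A hedged sketch over ssh (the host names and path are placeholders): run grep -c on each host and add the counts locally:

Code:
    c1=$(ssh server1 grep -c 'PATTERN' /path/to/file)
    c2=$(ssh server2 grep -c 'PATTERN' /path/to/file)
    echo $((c1 + c2))

(grep -c counts matching lines; to count every occurrence, including several per line, use grep -o 'PATTERN' | wc -l instead.)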
I am trying to develop a method of reading files generated by other programs. I am trying to find the most versatile approach. I have been trying bash, and have been making good progress with sed; however, I was wondering if there is a "standard" approach to this sort of thing. The main features I would like to implement concern finding strings based on various forms of context and storing them in variables and/or arrays. Here are the most general tasks:
a) Read the first word (or floating-point number) that comes after a given string (solved in another thread)
b) Read the nth line after a given string
c) Read all text between two given strings
d) Save the output of task a), b) or c) (above) into an array if the "given string(s)" is/are not unique
e) Read text between two non-unique strings, i.e. text between the nth occurrence of string1 and the mth occurrence of string2
As far as I can tell, those five scripts should be able to parse just about any text pattern. I am by no means fluent in these languages. But I could use a starting point. My main concern is speed. I intend to use these scripts in a program that reads and writes hundreds of input and output files--each with a different value of some parameter(s).
The files will most likely be no more than a few dozen lines, but I can think of some applications that could generate a few hundred lines. I have the input file generator down pretty well. Parsing the output is quite a bit trickier. And, of course, the option for parallelization will be very desirable for many practical applications.
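For a starting point, tasks b) and c) map onto standard awk/sed idioms. A minimal sketch (pattern, START, END and file are placeholders; the offset 3 is an example value):

Code:
    # b) print the 3rd line after each line matching "pattern"
    awk '/pattern/ { target = NR + 3 } NR == target' file

    # c) print everything between START and END, inclusive
    sed -n '/START/,/END/p' file

(Both tools stream the file once, which should keep runs over hundreds of small files fast.)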
I have a situation where a directory has about 1.5 million files in it. On an hourly basis, I want to be able to find any files that have changed in the last hour, compress them, encrypt them and then copy them to both a local backup machine and an off site backup.
Is there any kind of utility or kernel module that creates some type of log of modified files? I know I can use find, but the search for -mtime in this directory takes quite a while and will not suffice for an hourly backup.
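One option is the kernel's inotify interface, exposed by the inotify-tools package; it reports changes as they happen instead of rescanning 1.5 million files every hour. A minimal sketch (the watched path and log file are placeholders):

Code:
    inotifywait -m -r -e close_write,create,move --format '%w%f' /data/files >> /var/log/changed-files.log

(Caveats: recursively watching that many files requires raising the fs.inotify.max_user_watches sysctl well above its default, and setting up the watches at startup takes noticeable time.)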
I am a newbie in Linux. I am writing a bash script to identify the files which are exactly 7 days (a week) old. I tried this command:

Code:
    find /var/backup -mtime +7 -exec ls -d {} \;

but this gives me even the files which are older than 7 days.
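A minimal sketch: -mtime +7 means strictly more than 7 days old, while a bare number matches exactly that many 24-hour periods:

Code:
    find /var/backup -type f -mtime 7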
I am looking for this `struct messages_sdd_t` and I need to search through a lot of *.c files to find it. However, I can't seem to find a match, because I don't want hits on the word 'struct' or the word 'messages_sdd_t' alone; I want to search only for the exact phrase 'struct messages_sdd_t'. The reason for this is that struct is used many times and I keep getting pages of search results. The directory I am searching in has other directories, so the search will have to be recursive. I have been doing this without success:

Code:
    find . -type f -name '*.c' | xargs grep 'struct messages_sdd_t'

and this

Code:
    find . -type f -name '*.c' | xargs egrep -w 'struct|messages_sdd_t'
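A minimal sketch with GNU grep, which handles both the recursion and the file filter itself; quoting keeps the space as part of one pattern:

Code:
    grep -rn --include='*.c' 'struct messages_sdd_t' .

(The egrep attempt fails because | means OR there, matching either word on its own; the first pipeline is close and should work whenever the phrase appears with a single space between the words.)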
I'm wondering if you can share some tips in regards to finding .conf files for programs installed using package managers. I'm scratching my head over the fact that when you install a program through yum/apt-get, I don't know what is being installed or where the software is going. In Windows, I know that when it installs an application, it goes into the Program Files directory; it's that simple. I know Linux has predefined directories for applications, but sometimes it installs configuration files in /etc or some other locations in /usr which I have a tough time sifting through.
Is there a way to trace which .conf files, or any files for that matter, relate to which software needs them? It's just hard for me to understand what file relates to what application at the moment. As much as I would like to learn more about Linux, this process takes up a lot of my time. I hope you can help me out on this one.
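A minimal sketch of the standard package-manager queries (packagename and the file path are placeholders):

Code:
    dpkg -L packagename      # Debian/Ubuntu: list every file the package installed
    dpkg -S /etc/foo.conf    # Debian/Ubuntu: find which package owns a file
    rpm -ql packagename      # Fedora/RHEL: list the package's files
    rpm -qf /etc/foo.conf    # Fedora/RHEL: find which package owns a file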
I know find can do what I am looking for, but I am wondering if there is an alternative way to find files on the filesystem either created before/after a certain point, or at a certain time.
Typically I rely on updatedb & locate for most of my file-searching needs. The issues with those tools, though, are that the database only holds directory and file names, and it only covers local directories, not anything mounted via CIFS|NFS or via -o loop (e.g., .iso images).
So if I need to find files created after yesterday across the entire system (local and remote filesystems), I am currently needing to use find.
What other tools, if any, would accomplish this in a similar fashion?
I have tried ls and grep, but that requires (in my attempts so far) multiple searches:
Code:
    ls -lR | grep Aug | grep 10
    ls -lR | grep Aug | grep 11
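A minimal sketch with GNU tools (the dates shown are examples): ls can print machine-sortable timestamps, which collapses the double grep into one, and find's -newermt tests against an arbitrary timestamp directly:

Code:
    ls -lR --time-style=+%Y-%m-%d | grep '2010-08-10'
    find / -newermt '2010-08-10' ! -newermt '2010-08-11'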