General :: Grep Command - Show The Files But Not The Line Count
Oct 31, 2010
This has to also show the line count. I can get it to show the files but not the line count. What is the single command used to identify the matching count of all lines within files under the /etc directory that contain the word "HOST"? List only the files with matches and suppress any error messages.
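A sketch of one way to get there (the word and path come from the question; 2>/dev/null hides permission errors):
    # count matching lines per file under /etc, keeping only files with at least one match
    grep -rc "HOST" /etc 2>/dev/null | awk -F: '$NF > 0'
    # or, to list just the matching filenames without counts
    grep -rl "HOST" /etc 2>/dev/null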
I am trying to do a find/grep/wc command to find matching files, print the filename and then the word count of a specific pattern per file. Here is my best (non-working) attempt so far:
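For reference, a hedged sketch of a find/grep combination that is often used for this kind of per-file match count (the pattern and the *.log glob are placeholders, not the attempt in question):
    # print each matching file as "name:count", keeping only files with at least one match
    find . -type f -name '*.log' -exec grep -Hc 'PATTERN' {} + 2>/dev/null | awk -F: '$NF > 0'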
Write a script that will take a list of filenames as arguments and output a count of how many of them are regular files, and how many of them are scripts (if the file is executable, it will be assumed to be a script file)
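A minimal bash sketch of such a script, using the -f (regular file) and -x (executable) tests; treating every executable as a script follows the assumption stated in the exercise:
    #!/bin/bash
    # count regular files and executable ("script") files among the arguments
    regular=0
    scripts=0
    for f in "$@"; do
        [ -f "$f" ] && regular=$((regular + 1))
        [ -x "$f" ] && scripts=$((scripts + 1))
    done
    echo "regular files: $regular"
    echo "scripts:       $scripts"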
I want to (from the command line) be able to count lines in a bunch of files of a specific type in a folder and all its sub-folders. How would I do this?
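One common combination for this, shown here with *.txt as an example file type:
    # count lines in every .txt file under the current directory; wc also prints a total
    find . -type f -name '*.txt' -exec wc -l {} +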
I am trying to grep a particular string from files on 2 different servers without copying them, and calculate the total count of its occurrences in both files. The file structure is the same on both servers and, for reference, is as follows:
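A sketch of combining the two counts over ssh (hostnames, path and pattern are placeholders; grep -c counts matching lines, so repeated occurrences on one line would need grep -o piped to wc -l instead):
    # count on each server without copying the files, then add the counts
    c1=$(ssh user@server1 "grep -c 'PATTERN' /path/to/file")
    c2=$(ssh user@server2 "grep -c 'PATTERN' /path/to/file")
    echo "total: $((c1 + c2))"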
I have many folders with many subfolders. All I want is to get the folder name and the first subfolder. I tried using ls -R but this gives me more than I want. Let's say I have:
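find can be told how deep to descend, which keeps the output to the folder and its first level of subfolders (a sketch; -maxdepth is a GNU find option):
    # list only directories, at most two levels down
    find . -mindepth 1 -maxdepth 2 -type d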
I would like to know how to use the grep command to filter the log entries created between 3:00 PM and 4:30 PM out of a bunch of logs covering the whole day under different headings. These files resemble a sar file in Linux.
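Because a time range is a comparison rather than a fixed pattern, awk tends to fit better than grep here; a sketch assuming the first field of each entry is an HH:MM:SS timestamp, as in sar-style output:
    # keep entries stamped between 3:00 PM and 4:30 PM
    awk '$1 >= "15:00:00" && $1 <= "16:30:00"' daylog.txt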
I have several files with many lines, something like this:
I'm trying to write a script that will count the number of characters per line for lines that don't contain a ">" symbol and give me an average of those values. I have most of the script together but I can't figure out how to connect some of the steps.
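awk can do the filtering, counting and averaging in one pass; a sketch (the input filename is a placeholder):
    # average line length over lines that do not contain ">"
    awk '!/>/ { total += length($0); n++ } END { if (n) print total / n }' input.txt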
I have a dataset (see example below) that I would like to go through and copy all lines containing a certain string ("LGIG"), plus the line immediately following each of those lines, to a new file. I have no problem grepping lines containing the string LGIG, but I'm lost as to how to translate that into line numbers and shift by one line for each instance of that string.
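grep's after-context option avoids the line-number bookkeeping entirely; a sketch (the -A option is standard in GNU grep, and the filenames are placeholders):
    # keep each line containing LGIG plus the line after it; drop the "--" group separators
    grep -A1 'LGIG' dataset.txt | grep -v '^--$' > newfile.txt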
I forgot a lot of my command line. I am doing cat file | grep "error" and I would like it to show everything to the right of G:/, including G:/ if possible. I figure it's an awk command but I don't know which. I tried awk '{print $8+}' but + does not work like I hoped and guessed.
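grep's -o option (print only the matched part of the line) may be simpler than awk here; a sketch:
    # for each line containing "error", print from "G:/" to the end of the line
    grep 'error' file | grep -o 'G:/.*'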
I'm looking for a solution for the following simple problem. I have two files, fileA and fileB. Each file contains only one word per line, and they contain exactly the same number of lines. I would like to create a new file called fileAB, where the i-th line contains the i-th line of fileA, a Tab separator character, and then the i-th line of fileB. I know how to do it in Python or other scripting languages, but it would be nice to have a bash one-liner for that. Is it possible to do this in bash or any other Unix shell, using the tools that are usually available on the command line (e.g., sed, awk and such)?
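The paste utility does exactly this, and its default separator is already a tab:
    # i-th line of fileA, a tab, then the i-th line of fileB
    paste fileA fileB > fileAB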
How do I search for those files which contain the word "AM_COLLECTION=22"? I need to know all the files with this string. (I know the grep command can do it but either
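A sketch of the usual recursive, names-only search (the directory is a placeholder):
    # list every file under /path that contains the string, filenames only
    grep -rl 'AM_COLLECTION=22' /path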
I have huge text files with two fields; the first is a string, the second is an integer. The files are sorted by the first field. What I'd like to get in the output is one line per unique string and the sum of the numbers for the identical strings. Some strings appear only once while others appear multiple times. Given the sample data below, for the string glehnia I'd like to get 10+22=32 in the result. How can I do this, either with GnuWin32 command-line tools or in a Linux shell?
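awk does this in one pass; because the files are already sorted by the first field, a streaming version keeps the output in input order without holding everything in memory (a sketch, with data.txt as a placeholder):
    # sum the second field over runs of identical first fields
    awk 'NR > 1 && $1 != prev { print prev, sum; sum = 0 }
         { prev = $1; sum += $2 }
         END { if (NR) print prev, sum }' data.txt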
I have a large number of log files that I need to remove sensitive data from. The sensitive data is provided to me in a text file and is prone to change. I had hoped to do the equivalent of this:
[Code]....
The commented out egrep works fine, the sed doesn't. Am I right to use sed for this? Or is there a more apt route to take?
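sed is a reasonable choice for substitution driven by a list of strings; a hedged sketch (sensitive.txt, the *.log glob and the REDACTED marker are placeholders, and strings containing / or regex metacharacters would need escaping first):
    # replace every string listed in sensitive.txt with REDACTED, editing the logs in place
    while IFS= read -r s; do
        sed -i "s/$s/REDACTED/g" *.log
    done < sensitive.txt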
At the Linux command line, I'd like to compress all .pdf files in a directory, any of its subdirectories and so on - but only .pdf files. I'm struggling to figure out the syntax.
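A sketch using find with gzip (another compressor could be swapped in):
    # compress every .pdf under the current directory and all of its subdirectories
    find . -type f -name '*.pdf' -exec gzip {} +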
How do I access files with spaces from the command line? For example, I want to go to a file called "New File", and let's say it is in Downloads/Books/ (and here is the file); how do I input the space, since the command line doesn't recognize it?
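Either quoting the path or escaping the space works; both lines below refer to the same file (less is just an example command):
    # quote the whole path
    less "Downloads/Books/New File"
    # or escape the space with a backslash
    less Downloads/Books/New\ File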
I have a utility that works with files. The utility is crashing after about 120 files. The input to the utility is a file containing a filelist. I want to cut the file with the file names in it into separate files containing about one hundred names each. My thought was to determine the number of lines divided by 100 and then use head and delete to create temporary files, so I can run the utility multiple times and prevent the crash. When I tried to create a variable using the wc -l command, the output gives me the total number of lines, but it also includes the filename of the input file (873 Filename.txt). I cannot figure out how to remove the Filename.txt from the variable.
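Two small sketches: reading from stdin makes wc print only the number, and split can produce the ~100-line pieces directly (filelist.txt and the chunk_ prefix are placeholders):
    # no filename in the output when wc reads from stdin
    lines=$(wc -l < filelist.txt)
    # or let split create 100-line pieces named chunk_aa, chunk_ab, ...
    split -l 100 filelist.txt chunk_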
Let's say I have 20 files named FOOXX, where XX is the number of the file, e.g. 01, 02, etc. At the moment, if I want to delete all files lower than the number 10, this is easy and I just use a wildcard, e.g. rm FOO0*. However, if I want to delete specific files in a range, e.g. 13-15, this becomes more difficult. rm FOO[13-15] does not work, and asks me if I wish to delete all files. Likewise, rm FOO1[3-5] wants to delete all files that begin with FOO1. So, what is the best way to delete ranges of files like this? I have tried with both bash and zsh, and I don't think they differ much for such a basic task.
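Brace expansion, which both bash and zsh support, generates the exact names instead of pattern-matching them; a sketch:
    # expands to: rm FOO13 FOO14 FOO15
    rm FOO{13..15}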
I am using Ubuntu and MySQL. I have a list of many .sql files, like 1.sql, 2.sql, 3.sql ... 100000.sql. I need to insert them into the database. mysql mydb < *.sql gives me: -bash: *.sql: ambiguous redirect
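Redirection takes a single file, so the glob has to be expanded some other way; two common sketches (with that many files, the loop avoids a too-long argument list):
    # concatenate all the .sql files into one stream for mysql
    cat *.sql | mysql mydb
    # or feed them one at a time
    for f in *.sql; do mysql mydb < "$f"; done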