How do I find a string in the files in a directory whose names begin with the letter a? I also want to get the number of occurrences of this string from the grep I run. I tried this: cat * | grep -c string, but that searches all files; I just want to search files whose names begin with the letter a.
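A minimal sketch of one way to do this, restricting the shell glob to names that start with a (here "string" stands in for the actual search text). Note that grep -c counts matching lines, not total occurrences; -o with wc -l counts every hit.
Code:
# Count matching lines across all files whose names start with "a"
cat a* | grep -c string
# Count every occurrence, even several per line (GNU grep's -o option)
grep -o string a* | wc -l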
I just want to grep for an n-digit number followed by the letter M. For a three-digit number I can use grep [0-9][0-9][0-9]M, but as the number of digits grows it becomes tedious to write them all out.
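A sketch using extended regular expressions (grep -E), which support interval quantifiers; the 3 below is just a placeholder for n, and "file" is a placeholder file name.
Code:
grep -E '[0-9]{3}M' file       # exactly three digits followed by M
grep -E '[0-9]{3,}M' file      # three or more digits followed by M
grep -E '[0-9]+M' file         # any number of digits followed by M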
1) I need to check a field value for an exact 0. If the number is 0, the script should raise an error.
The line to be searched looks like the one below.
"Output Rows [1], Affected Rows [1], Applied Rows [1], Rejected Rows [0]"
Here I have to check whether the affected rows value is 0, but the code below also picks up other values that contain a 0 (like 10, 20, etc.). How do I write this to get an exact match on 0?
Code:
affected=`echo ${line} | cut -f6 -d" "`
affectedcount=`echo ${affected} | grep 0`
2) I also need to check whether the rejected rows value is > 0.
Code:
rejected=`echo ${line} | cut -f12 -d" "`
rejectedcount=`echo ${rejected} | grep [1-9]`
3) Can we combine these two checks in a better way to get the desired result?
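A minimal sketch of one way to do both checks at once, assuming the line always has the layout shown above (fields 6 and 12 when splitting on spaces). Instead of grep, the bracketed numbers are stripped clean and compared with shell arithmetic tests, which gives an exact match on 0.
Code:
# Extract the numbers inside the brackets from fields 6 and 12
affected=$(echo "$line" | cut -f6 -d" " | tr -d '[],')
rejected=$(echo "$line" | cut -f12 -d" " | tr -d '[],')

# Exact numeric comparison instead of a substring grep
if [ "$affected" -eq 0 ] || [ "$rejected" -gt 0 ]; then
    echo "ERROR: affected=$affected rejected=$rejected" >&2
    exit 1
fi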
Someone once told me that you can pass a file of patterns to grep and use it to search the contents of another file. If that is the case, I'm not entirely sure why the following isn't working for me.
Let me *try* to explain what I'm trying to do, and keep in mind that, aside from a little command-line experience, I'm a beginner at everything I'm asking about.
The idea is that whatever was captured in the () in the first part of the statement would be used as the 1 in the back part of the statement, for every n.chatlog that might be in any of the /webserver directories at that time.
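For the first question, grep's -f option reads one pattern per line from a file. A minimal sketch (patterns.txt and target.txt are placeholder names):
Code:
# Search target.txt for every pattern listed (one per line) in patterns.txt
grep -f patterns.txt target.txt
# Treat the patterns as fixed strings rather than regular expressions
grep -F -f patterns.txt target.txt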
I'm looking for a way to insert the number of lines in a file at the start of the aforementioned file. This should be simple, but as I am not used to scripting in Linux, I am finding it tough going. I can find the number of lines in a file easily enough via
filesize=$(awk 'END {print NR}' $1)
but as for inserting this as the first line, I'm failing to do so. I've tried some of the other approaches on these forums, but none so far has worked.
I've tried:
sed '1i$filesize' $1
but sed's i command takes the text literally, and the single quotes keep $filesize from being expanded, so that's a no-go. I've also tried:
but again with no luck, as cat seems to need an input stream. Just to recap: I want to insert a line at the start of a given file that holds the number of lines the original file has.
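A minimal sketch of one way to do this, assuming GNU sed (for -i) and that the script receives the file name as $1; double quotes let the variable expand inside the sed expression.
Code:
#!/bin/bash
# Count the lines, then insert that count as a new first line of the same file
filesize=$(awk 'END {print NR}' "$1")
sed -i "1i ${filesize}" "$1"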
I'm storing a list of strings in a file and would like to read the file and pipe each line to grep, which in turn searches a directory for files containing that string. However, this is not returning any output.
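A sketch of two common ways to do this, assuming the list is in strings.txt and the directory is dir/ (both placeholder names):
Code:
# Let grep read the patterns itself and list the files that match any of them
grep -r -l -f strings.txt dir/

# Or loop over the list explicitly, one grep per string
while IFS= read -r s; do
    grep -r -l "$s" dir/
done < strings.txt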
I ran into a bit of trouble writing a bash script. (Desktop is a directory, and I'm trying to get its modification date.)
Code:
lamp:~# cmd='ls -l Desktop | grep -o "....-..-.. ..:.."'
lamp:~# $cmd
ls: cannot access |: No such file or directory
ls: cannot access grep: No such file or directory
When I type the command in directly, without using an in-between variable, it works fine.
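This happens because the pipe stored inside the variable is not re-parsed when $cmd is expanded: the | and grep become literal arguments to ls. A sketch of two common workarounds, a shell function or eval:
Code:
# Option 1: use a function instead of a variable
get_mtime() { ls -l Desktop | grep -o "....-..-.. ..:.."; }
get_mtime

# Option 2: force the stored string back through the shell parser
cmd='ls -l Desktop | grep -o "....-..-.. ..:.."'
eval "$cmd"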
I would like to write a newline-delimited rules file using PCREs for use with the grep command. Grep has the -f option to obtain the search patterns from a file, and the -P option to search using PCREs. However, these two options do not work together for me; the -f option only seems to work with fixed-string rules. A friend previously helped me get around this limitation somehow, but I can't remember how he did it. I would also like the ability to add comments at the end of each rule in the file.
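One workaround, if your grep build really won't combine -P and -f, is to feed the rules to grep one at a time from a loop. The sketch below also drops anything after a # so end-of-line comments are possible; the file names rules.pcre and target.txt, and the # comment convention, are assumptions.
Code:
# Read rules.pcre line by line, strip trailing comments, skip blank lines,
# and run each remaining rule as a separate PCRE search of target.txt
while IFS= read -r rule; do
    rule=${rule%%#*}                  # drop a trailing "# comment", if any
    [ -z "${rule// }" ] && continue   # skip blank/whitespace-only lines
    grep -P -- "$rule" target.txt
done < rules.pcre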
I have a number of names, e.g. Foo and BlahhFoo. I only want to be able to grep for names in a file that contain Foo and not BlahhFoo, but I am not able to pull out only those. How can this be done? My grep/zgrep knowledge only goes this far at this point; I'm still learning, but I'm stuck on how to make my arguments more precise:
zgrep 'Foo' SomeFileIMade.gz > /home/user/FOOFILE
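A sketch of two ways to exclude the BlahhFoo matches, assuming the names sit one per line in the compressed file:
Code:
# Match Foo but drop any line that also contains BlahhFoo
zgrep 'Foo' SomeFileIMade.gz | grep -v 'BlahhFoo' > /home/user/FOOFILE

# Or anchor the match so only lines that are exactly "Foo" survive
zgrep -x 'Foo' SomeFileIMade.gz > /home/user/FOOFILE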
I have a list of words that I want to grep for in many files, to see which ones have each word and which ones don't. In a text file I have all the words listed line by line, e.g. list.txt:
check
try this
word1
word2
open space
list
..
I want to grep for each line, one by one, like this:
grep "check" *.log
grep "try this" *.log
grep "word1" *.log
.. etc. How can I do this?
I would like to grep all values other than the encrypted password from the /etc/shadow file. Each line consists of fields separated by ":", and the only thing I do not want printed is the content between the first ":" and the second ":" (the encrypted password).
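grep on its own can't easily drop a field, but cut or awk can. A sketch of two options; the first assumes GNU cut, which provides --complement.
Code:
# Print every field except the second (the encrypted password)
cut -d: --complement -f2 /etc/shadow

# awk alternative: blank out field 2 but keep the colon layout
awk -F: -v OFS=: '{ $2 = "" } 1' /etc/shadow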
How do I take a series of values from the outputs of multiple grep commands and append them as a single row of a CSV file? I work in a Linux environment. The values from the grep output will be numeric.
The output should look like:
1,3,4,5,7,0,5
Each of these values will be obtained from a grep command piped into wc -l. Is it possible to write them into a single row of a CSV file? If so, what command should I use to redirect the output into the CSV file?
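A minimal sketch with three example counts; the patterns, the log file app.log, and results.csv are all placeholders. The counts are collected into variables and joined with commas before being appended as one row.
Code:
c1=$(grep 'ERROR'   app.log | wc -l)
c2=$(grep 'WARNING' app.log | wc -l)
c3=$(grep 'INFO'    app.log | wc -l)

# Append the three counts as one comma-separated row
printf '%s,%s,%s\n' "$c1" "$c2" "$c3" >> results.csv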
I want to remove lines starting with a specific word, using grep. Here is what I found:
grep -v '^cc$' data.txt
Here I remove all lines that contain only 'cc'. But I want the result written back to data.txt.
I tried several ways:
grep -v '^cc$' data.txt > output.txt       # works, but writes to another file
echo `grep -v '^cc$' data.txt` > data.txt  # didn't work: the line breaks are gone, everything becomes one line
grep -v '^cc$' data.txt > data.txt         # data.txt is empty after running this
How can I save the result of grep to the input file?
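The last attempt empties the file because the shell truncates data.txt before grep ever reads it. A common sketch is to write to a temporary file and move it back, or to let GNU sed do the same internally with -i:
Code:
# Safe two-step: write to a temp file, then replace the original
grep -v '^cc$' data.txt > data.txt.tmp && mv data.txt.tmp data.txt

# Equivalent with GNU sed editing "in place"
sed -i '/^cc$/d' data.txt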
To search for a string pattern in all files in a directory and its subdirectories, I am using:
Code:
grep -R "myclass::my-func(" mydirectory/
Now I want grep to search only in specific file types, say *.cc. I have read the grep manual but could not deduce any hint.
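GNU grep's --include option restricts a recursive search by file-name glob:
Code:
# Recursive search, but only inside files whose names match *.cc
grep -R --include='*.cc' "myclass::my-func(" mydirectory/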
I'm trying to manipulate a large text file full of records (metadata - one complete record per line). I need to delete every line on which certain words appear - there are five different words, all pretty simple all-caps strings with occasional whitespace. I tried using grep -v, which worked a treat, but only string-by-string. Ideally I'd like to run this as grep -v -f, where the file targeted by the -f contains the strings I need to match in order to delete the lines they're in.
i.e. grep -v -f filecontainingSTRINGS.txt targetfile.txt > outputfile.txt
When I try this, however, I don't get any matches - or more specifically, no changes are made in the output file. It works fine if there's only one string in filecontainingSTRINGS, but it doesn't work if there's more than one (I'm using newline as the delimiter). (Also my machine doesn't recognise /usr/xpg4/bin/grep - no idea what that's all about!)
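A common cause when a pattern file "only works with one line" is Windows line endings: the trailing \r becomes part of each pattern and nothing matches. A sketch of how to check and fix that, plus the -F variant since the strings are fixed text:
Code:
# Show hidden carriage returns in the pattern file (they appear as ^M before the $)
cat -A filecontainingSTRINGS.txt

# Strip them, then rerun the filter; -F treats each line as a fixed string
tr -d '\r' < filecontainingSTRINGS.txt > patterns.clean
grep -v -F -f patterns.clean targetfile.txt > outputfile.txt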
I need to get the maximum number from the file names.
Format of the file names: [a-zA-Z]*_[0-9][0-9]_[A-Z][A-Z].log, with underscore '_' as the delimiter.
The known part in the above file names will be GA.log. A given directory may or may not contain files in the above format, and may contain files in other formats; if so, those should be ignored.
cmd=> ls *[0-9][0-9]_GA.log 2> /dev/null | awk -F_ '{ print $2}' | sort -nr | head -n1 | awk 'BEGIN { if ($1 >0 ) x=0; else x=1 } END {printf "%02.0f ", $1+1}'
output=> 01
If there are no files, the output should be 01; if file(s) are found, the output should be the next highest number plus 1 (in the example above that would be 06). My command is a bit lengthy. How can I reduce it so it fits on one line?
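A shorter sketch of the same logic, keeping the original glob and field split. The 10# prefix keeps bash from treating a leading zero (e.g. 08) as octal, and ${n:-0} covers the no-files case; the two lines can be joined with ';' if it must be a single line.
Code:
# Highest NN from *_NN_GA.log plus one, zero-padded; prints 01 when nothing matches
n=$(ls *[0-9][0-9]_GA.log 2>/dev/null | awk -F_ '{print $2}' | sort -nr | head -n1)
printf '%02d\n' $(( 10#${n:-0} + 1 ))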
I'm currently trying to design a small, simple shell program for area codes. I have a list of area codes in a database, and I am trying to write a program that has the user input an area code and then prints out the information that immediately follows that area code in my database. I assume I need to use a find or locate command, but I'm not sure whether I should be searching for a string or for the number itself. The number could possibly occur at some other point in the file, though the way I have the file set up it only occurs once, at the start of a new line.
What function should I use, and how should I go about it? As it is, I only have the absolute bare-bones beginning: an echo for the prompt to input an area code, and a read once it's entered. Without the search I'm not sure how much farther I can get. Also, would it make it easier if I added some character, such as a !, to the end of the number on its line to make it easier to search for? With a macro that would be easy enough to do.
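A minimal sketch of the lookup, assuming the database is a plain text file (areacodes.txt here, a placeholder name) with each record starting with the area code. Anchoring the grep to the start of the line means the code won't match if it appears elsewhere in a record, so no extra marker character is needed.
Code:
#!/bin/bash
# Prompt for an area code and print the matching record(s)
echo -n "Enter an area code: "
read code
grep "^${code}" areacodes.txt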
When I was first trying out Linux, I installed it to a partition. I did it right, but it used 200GB of space, so I decided I didn't want to run it from a partition and wanted to use Wubi instead. I didn't know the correct way to uninstall it, so I went into Windows' partition manager and manually deleted the partition with the OS in it. The trouble is that it turned into 200GB of unallocated space, and I couldn't give it back to my C: drive, so it's just sitting there. Now, a month later (present day), I want to install Mint 10 KDE, and here is the big problem: I can't assign Linux Mint 10 to the unallocated space, only to the rest of the HDD. How do I assign it to the "free" space? I tried "Specify Partitions Manually", but nothing showed up. What would happen if I assigned negative 19% for Linux? Would it cause negative effects?
Can we inspect the inode of a particular file using its inode number?
The reason is that I want to know how many blocks are occupied by a specific file.
Suppose the block size is 1K and the file is 100 bytes. In that case, when the file is stored on disk, will it occupy 100 bytes or 1K (since we have chosen the block size to be 1K)?
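Standard tools already expose this: stat prints a file's inode number, size, and blocks used, and find can locate a file by inode number. On typical filesystems a small file still consumes at least one whole data block, so a 100-byte file on a 1K-block filesystem occupies 1K on disk. A sketch (sample.txt and the inode number 1234567 are placeholders):
Code:
stat sample.txt                     # shows Size, Blocks (512-byte units), IO Block, and Inode
du -k sample.txt                    # disk usage rounded up to whole 1K blocks
find / -inum 1234567 2>/dev/null    # locate the file that has inode number 1234567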