[Syenite]
RegionUUID = 8fc56fdd-0afd-4074-9432-0ae8f42b799f
Location = 9992,10007
InternalAddress = 0.0.0.0
InternalPort = 9000
AllowAlternatePorts = False
ExternalHostName = 71.171.21.9
What I need to do is find out what the IP address is after "ExternalHostName =".
After that I will need to compare that IP to whatismyip and, if it's different, replace it; that part is easy to do with sed. I just can't figure out this simple hurdle.
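A minimal sketch of the lookup-and-compare, assuming the section lives in a file I'll call Regions.ini (the filename and the ifconfig.me lookup are my assumptions, not from the original post):
Code:
# Grab whatever follows "ExternalHostName = " in the config
current_ip=$(sed -n 's/^ExternalHostName[[:space:]]*=[[:space:]]*//p' Regions.ini)
# Ask an external service for the real public IP (any whatismyip-style service works)
real_ip=$(curl -s https://ifconfig.me)
# Rewrite the line only if the two differ
if [ "$current_ip" != "$real_ip" ]; then
    sed -i "s/^ExternalHostName = .*/ExternalHostName = $real_ip/" Regions.ini
fi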
How do I find a string in files in a directory, where the file names begin with the letter a? I also want to get the number of occurrences of this string from the grep I run. I tried this: cat * | grep -c string, but it searches all files. I just want to search files that begin with the letter a.
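A sketch using a shell glob so only files starting with "a" are read:
Code:
grep -c string a*              # matching-line count per file beginning with "a"
cat a* | grep -c string        # one combined count of matching lines
grep -o string a* | wc -l      # total occurrences, counting several per line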
I want to match a filename in some text, but I have no control over the filenames, so "[" and "]" can appear in them. Do I always have to use sed to add slashes to these variables before I grep for them? And what other characters have I missed besides "[", "]", and "."?
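If the filename is meant literally rather than as a pattern, grep -F avoids the escaping question entirely; otherwise the characters worth escaping in a basic regex are roughly . [ ] * ^ $ and \. A sketch (the variable name is just an example):
Code:
grep -F "$filename" textfile    # -F treats the string as fixed text, so [ ] . are literal
# If a real regex is needed, escape the metacharacters first:
escaped=$(printf '%s' "$filename" | sed 's/[][\.*^$]/\\&/g')
grep "$escaped" textfile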
I'm trying to make a tool in Visual C++ which will take an input string through a text box, then compare that string with a text file containing data and display the matched results in a list box.
Q: Is there any way to use grep and sed with a string variable rather than with a file?
The problem: I'm running through a LARGE (about 10,000 lines) XHTML file and need to replace every instance of lines beginning with <p>~
The following code works but takes a long time, mainly because an in/out operation needs to be carried out on each line. If I could read from a string rather than a file it would take much less time!
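Yes: both grep and sed read standard input, so a variable can be piped or here-string'd straight in. A sketch (the variable name and the replacement text are placeholders, since the post doesn't say what the <p>~ lines should become):
Code:
matches=$(grep '^<p>~' <<< "$xhtml")                  # grep a variable (bash here-string)
fixed=$(sed 's/^<p>~/<p>/' <<< "$xhtml")              # sed on the same variable
fixed=$(printf '%s\n' "$xhtml" | sed 's/^<p>~/<p>/')  # portable form without <<<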
What's the easiest way to search for a string in a text file in GNOME or on the console? I used to do this in kfindfile back on KDE. I'd like to avoid downloading something like desktop search if at all possible, because I'm away for the holidays and stuck on a dial-up connection.
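On the console, plain grep already does this; a sketch (the file and directory names are placeholders):
Code:
grep -n "search term" somefile.txt     # -n prints the line number of each hit
grep -rn "search term" ~/Documents     # recurse through a whole directory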
I have done a bunch of searches on this but the terms seem to get tangled in the more popular search of "colouring the output of grep / awk". I am trying to find a way to grep/awk through the output of a command to find text of a specific colour. The command's output has a range of colours signifying too many different things to specify using text, with colour being the only form of grouping.
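Since terminal colour is just an ANSI escape sequence embedded in the text, one option is to grep for the escape code itself. A hedged sketch, assuming GNU grep and that red (SGR code 31) is the colour of interest; note many programs stop emitting colour when piped unless forced with something like --color=always:
Code:
somecommand | grep -P '\e\[31m'    # keep only the lines coloured red (ESC[31m)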
I have a mail.log file from which I want to redirect only two search strings to a file: the sender, e.g. from=<example.sender@exampledomain.com>, and the size, e.g. size=4537.
In every case the sender string starts with from=< and the size string starts with size=.
What would be the grep command to redirect only the two search strings to a .txt file?
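A sketch assuming GNU grep, which can print just the matched text with -o:
Code:
grep -oE 'from=<[^>]*>|size=[0-9]+' mail.log > results.txt
# -o writes only the matching part of each line, -E enables the | alternation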
How do I search for the files which contain the word "AM_COLLECTION=22"? I need to know all the files with this string. (I know the grep command can do it, but either
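A sketch, assuming the search should start from the current directory:
Code:
grep -rl 'AM_COLLECTION=22' .   # -l prints only the names of files containing the string, -r recurses
grep -l 'AM_COLLECTION=22' *    # same idea, current directory only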
I'd like to search the entire server by content (text files). When I try grep -rl "text here", it freezes. How would you do it? And how long does it usually take?
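It is more likely crawling than frozen; pseudo-filesystems and device nodes under /proc, /sys and /dev can also stall it. A hedged sketch assuming GNU grep:
Code:
grep -rIl --exclude-dir={proc,sys,dev,run} "text here" / 2>/dev/null
# -I skips binary files; excluding the pseudo-filesystems avoids hanging on device nodes
# How long it takes depends on disk size and speed -- minutes to hours on a full server is normal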
To search for a string pattern in all files in a directory and its subdirectories, I am using:
Code: grep -R "myclass::my-func(" mydirectory/
Now I want grep to search only specific file types, say *.cc. Please help me. I have read the grep manual but could not deduce any hint. Best regards.
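With GNU grep, --include restricts a recursive search to given file names; a sketch:
Code:
grep -R --include='*.cc' "myclass::my-func(" mydirectory/
# or, without relying on --include:
find mydirectory/ -name '*.cc' -exec grep -H "myclass::my-func(" {} +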
I'm trying to search all .log files in ~/.irssi/irclogs/ and its subdirectories for the string 'irssi', and I had thought the command I'd used for something similar before was. How should I edit the command, and is it possible to output every line found containing the string to a file?
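A sketch assuming GNU grep; the output file name is just an example:
Code:
grep -r --include='*.log' 'irssi' ~/.irssi/irclogs/ > irssi_hits.txt
# every matching line, prefixed with the file it came from, ends up in irssi_hits.txt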
I have a file (.tmpfile) and inside it is a string which I only know part of, the rest being a random group of characters... I would like to know how to pull the whole string out of the file and into a variable.
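A sketch, assuming the known part is a prefix and the random remainder is alphanumeric; 'knownpart' is a placeholder for the fragment you do know:
Code:
value=$(grep -o 'knownpart[[:alnum:]]*' .tmpfile)   # -o returns just the matched string
echo "$value"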
I have a file that contains 5 fields and another one with two. I want to take a value from the user and search file1; if the value exists, then write in file2 to $2 of the line where $1=value.
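My reading of this (which may be off, since the description is terse) is: read a value, check whether it appears in field 1 of file1, and if so write it into field 2 of the matching line in file2. A sketch assuming whitespace-separated fields:
Code:
read -p "Value: " value
# does the value appear in field 1 of file1?
if awk -v v="$value" '$1 == v { found=1 } END { exit !found }' file1; then
    # put the value into field 2 of the line in file2 whose field 1 matches it
    awk -v v="$value" '$1 == v { $2 = v } 1' file2 > file2.tmp && mv file2.tmp file2
fi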
I have a list of words that I want to grep in many files to see which ones have it and which ones don't. In the text file I have all the words listed line by line, e.g. list.txt:
check
try this
word1
word2
open space
list
..
I want to grep each line one by one, like this:
grep "check" *.log
grep "try this" *.log
grep "word1" *.log
.. etc. How can I do this?
I need to count the number of lines and put the output into a variable. I used wc -l filename but I couldn't find an option to put the output into a variable. For example, if the number of lines is 5, I need the output of echo $x to be 5.
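Command substitution does this; redirecting the file into wc keeps the file name out of the output. A sketch:
Code:
x=$(wc -l < filename)   # just the number, e.g. 5
echo "$x"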
I have a line in a text file that has 40 random characters within a tag, and I want to change the characters to a new set of 40 random characters (alphanumeric, a-z 0-9, etc.).
The line in the text file looks like this:
Quote:
How would I go about doing that?
Also, a second question, same as the above: how would I remove them instead of replacing them?
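Since the quoted line didn't come through, here is a hedged sketch that assumes the 40 characters sit between tags I'm calling <key> and </key>; GNU sed and /dev/urandom are also assumed. The last line answers the second question by replacing the characters with nothing:
Code:
new=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 40)                   # 40 fresh random characters
sed -i "s/<key>[[:alnum:]]\{40\}<\/key>/<key>$new<\/key>/" file.txt  # swap in the new set
sed -i "s/<key>[[:alnum:]]\{40\}<\/key>/<key><\/key>/" file.txt      # or strip them out entirely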
If I have a word in a text file and I need to replace it with another word (for example, I need to replace abc with fff), what command can I type?
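sed does this; a sketch (file.txt is a placeholder):
Code:
sed -i 's/abc/fff/g' file.txt   # -i edits in place (GNU sed); drop -i to preview the result first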
I'm trying to manipulate a large text file full of records (metadata - one complete record per line). I need to delete every line on which certain words appear - there are five different words, all pretty simple all-caps strings with occasional whitespace. I tried using grep -v, which worked a treat, but only string-by-string. Ideally I'd like to run this as grep -v -f, where the file targeted by the -f contains the strings I need to match in order to delete the lines they're in.
i.e. grep -v -f filecontainingSTRINGS.txt targetfile.txt > outputfile.txt
When I try this, however, I don't get any matches - or more specifically, no changes are made in the output file. It works fine if there's only one string in filecontainingSTRINGS, but it doesn't work if there's more than one (I'm using newline as the delimiter). (Also my machine doesn't recognise /usr/xpg4/bin/grep - no idea what that's all about!)
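A frequent cause of exactly this symptom is Windows-style line endings in the pattern file: every pattern then ends in an invisible carriage return and nothing matches. A hedged check and fix:
Code:
cat -A filecontainingSTRINGS.txt                        # lines ending in "^M$" carry a stray CR
tr -d '\r' < filecontainingSTRINGS.txt > patterns.txt   # strip the carriage returns
grep -v -f patterns.txt targetfile.txt > outputfile.txt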
I'm trying to isolate a number from a text file using sed. The text file looks like this:
-GARBAGE-GARBAGE-GARBAGE- Number of frames: 183933 frames Codec -GARBAGE-GARBAGE-GARBAGE-
I tried the following:
Code: sed "s/^.*Number of frames: //g; s/ frames Codec.*$//g" "info.txt" > "frames.txt"
Strangely, it only seems to be stripping off the end, but not the beginning, like so:
-GARBAGE-GARBAGE-GARBAGE- Number of frames: 183933
I'm obviously not using the command correctly, so what am I doing wrong?
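Whatever is tripping up the first substitution (a stray carriage return or the text spanning several lines are common culprits), it is often simpler to capture just the number instead of deleting everything around it; a sketch:
Code:
sed -n 's/.*Number of frames: \([0-9][0-9]*\).*/\1/p' info.txt > frames.txt
grep -oP 'Number of frames: \K[0-9]+' info.txt > frames.txt   # GNU grep alternative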
I've got a big text file in which I know have probably made some typos (LaTeX). Sometimes I rewrite sentences several times and then end up with double pieces like "the the" or "is is" without noticing it. Most spell checkers that I can use in LaTeX are very basic so they do not notice these grammar errors. Is there a way that I can search for these repetitions by hand using sed or awk or something along these lines? Is there an app for that?
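GNU grep can find doubled words with a back-reference; a sketch (it won't catch a repetition that is split across a line break):
Code:
grep -nE '(\<[a-zA-Z]+\>)[[:space:]]+\1\>' *.tex
# \1 repeats the first captured word, \< \> are word boundaries, -n shows line numbers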
I have a question about the prompt. It is very easy to tune it to be colored and display the path where you are, etc. But my problem is that when the path is too long, I would prefer the command line to be on the following line...
Ex:
11:00 me@host a/short/path > ls -ltr ./stuff
11:00 me@host a/very/very/very/long/path > ls -ltr ./stuff
And to be honest, as I am very new to Linux, I don't know how to do this...
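Putting a literal \n inside PS1 does this: everything before it stays on the info line and the command is typed on the next. A minimal sketch for ~/.bashrc (the time/user/path segments mirror the example above; colours left out for brevity):
Code:
PS1='\A \u@\h \w\n> '   # \A = HH:MM, \u@\h = user@host, \w = current path, then a newline and "> "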