I'm trying to split a text file into several parts. Everything in between "123" and "break" (including linebreaks) goes into a split file.
e.g. using this text file:
This should split into 4 files. However, I'm only getting 2 files: one for the line "123break" and one for "123 blah break". The two occurrences that contain linebreaks are being ignored. The .* part of my match should capture linebreaks, since I'm using the /s modifier, shouldn't it? Even when I use the match /(123 break)/gs it still doesn't capture the first occurrence. I'm using Perl v5.12.3 (from ActiveState) on Windows XP. The text file is also in Windows format.
Code listed below.
The above code generates two files, Output_1.txt and Output_2.txt, which contain "123break" and "123 blah break" respectively. I want it to generate four files.
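Here is a minimal sketch of one way such a split could work, assuming the whole file is slurped into a single string so that /s can actually see the linebreaks (matching line by line never sees a "123 ... break" pair that spans lines). The input.txt name is a placeholder; the Output_N.txt names follow the question.

    use strict;
    use warnings;

    # Slurp the whole file so the /s modifier can match across linebreaks.
    open my $in, '<', 'input.txt' or die "input.txt: $!";
    my $text = do { local $/; <$in> };
    close $in;

    my $n = 0;
    # Non-greedy .*? keeps each chunk from swallowing later 123/break pairs.
    while ( $text =~ /(123.*?break)/sg ) {
        $n++;
        open my $out, '>', "Output_$n.txt" or die "Output_$n.txt: $!";
        print $out $1;
        close $out;
    }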
I want to traverse a directory and get a list of files that contain a set of patterns. I assumed I could use grep for this, but I'm having trouble getting grep to return only files that match ALL patterns. Here's what I've come up with so far:
However, this gives me a list of files that match ANY of the patterns in the searchpatterns.txt file. I want to match ALL of the patterns. I've looked through the man page, but can't find anything that allows me to change the "OR" to "AND" for multiple patterns.
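A minimal sketch of one way to get the AND behaviour, assuming the patterns sit one per line in searchpatterns.txt and the search starts from the current directory: a file is kept only if every pattern is found in it.

    find . -type f | while read -r f; do
        ok=1
        while read -r pat; do
            grep -q -- "$pat" "$f" || { ok=0; break; }   # any miss disqualifies the file
        done < searchpatterns.txt
        [ "$ok" -eq 1 ] && printf '%s\n' "$f"
    done

Each file is opened once per pattern, so it is not fast on huge trees, but it needs nothing beyond POSIX find and grep.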
I have a file called test. It has the following contents:

    hey
    there you

I want the output to be:

    replaced you

I am trying to use the sed command to replace every occurrence of "hey newline there" with "replaced". I tried the following naive approach:

    sed 's/heythere/replace/' test

This gives a result containing the same data as the test file.
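A minimal sketch of the usual fix: sed normally works on one line at a time, so the newline has to be pulled into the pattern space before the substitution can see it.

    # N appends the next line to the pattern space, so "hey\nthere" becomes matchable
    sed 'N; s/hey\nthere/replaced/' test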
I have a large file in which each line has three or more blank-delimited words. I'd like to code a grep to keep only those lines whose last word contains the letter M. The M (if present) will be the first character of the last word.
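A minimal sketch ("file" is a placeholder name): the last word starts with M exactly when an M follows the final run of blanks on the line.

    grep '[[:blank:]]M[^[:blank:]]*$' file
    # awk states the same condition more directly: the last field begins with M
    awk '$NF ~ /^M/' file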
I need to grep the lines between pattern 1 and pattern 2, and not the lines following pattern 2. I cannot use grep -A(num), as there is a varying number of lines following pattern 1. I have also tried awk one-liners, but the results are erroneous.
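A minimal sketch, with PATTERN1/PATTERN2 standing in for the real patterns: an address range prints from the first marker through the second and nothing after it.

    sed -n '/PATTERN1/,/PATTERN2/p' file            # inclusive of both marker lines
    awk '/PATTERN1/{f=1} f; /PATTERN2/{f=0}' file   # the same idea in awk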
I have some big log files that contain errors printed by an app. They are relevant most of the time, but most of them are similar. So I figured I could check what happened within a time interval with a search.
I'm using this one:
And I get an output similar to this one.
Is there a way to condense the output lines so I get only one or two, indicating the first and last occurrence of a block? Or do I need to write a program to do that?
Because right now I get thousands of similar lines, and when I'm scrolling through them I sometimes miss relevant information that I would otherwise have noticed if it weren't all so spammy.
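A sketch of one way to collapse runs of identical messages, assuming each log line looks like "TIMESTAMP MESSAGE" with the timestamp in the first two fields (adjust the stripped prefix to the real format, and app.log is a placeholder): print the first line of every run and, if the run is longer than one line, its last line as well.

    awk '{ msg = $0; sub(/^[^ ]+ [^ ]+ /, "", msg) }       # drop the timestamp for comparison
         msg != prev { if (last != "" && last != first) print last
                       print; first = $0; prev = msg }
         { last = $0 }
         END { if (last != "" && last != first) print last }' app.log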
How do I search for multiple words across multiple lines, inside a directory including its subdirectories? Please give an easy example. I want to find the files (in the /xx folder and all subfolders) that include header.h and use the x() function. I tried:

    $ grep -r "header.h" | grep -r "x(" /Folder/subfolder/ > search.log
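A minimal sketch of the usual two-pass approach (it assumes filenames without whitespace; /xx is the directory from the question): first list the files that mention header.h, then keep only those that also call x().

    grep -rl 'header\.h' /xx | xargs grep -l 'x(' > search.log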
I'm trying to find exact matches for some users in the /etc/passwd file using "grep -w", but it doesn't always work. For example, I have a list of users, and let's say I want to search for the user "stewart" (which doesn't exist).
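A sketch of two more precise alternatives ("stewart" is the example name from the question): anchor the name to the start of the line and the field separator so only the username field can match, or ask the name service directly.

    grep '^stewart:' /etc/passwd
    getent passwd stewart    # exits non-zero when the user does not exist (Linux)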
I have been experiencing a problem where the screen loads and, after the first few lines, breaks up into multiple repetitions of lines. Reloading helps but has to be repeated when paging down. Mail is no problem; it is supplied by my network provider. The OS is openSUSE 11.2, which I update when advised. Below is a sample from the error console:
What is the best way, in sed, awk or perl, to merge lines that occur between certain strings? I'm new to sed scripting and I have been working on this for some time now. I have a large file (sample below) that I need to edit.
What I need looks something like this.
I'm working with a very large file so simply merging all the lines then adding a new line character before ">contig" and after "translated" won't work, at least not with sed.
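A sketch of a streaming approach, assuming each record starts at a line containing ">contig" and ends at a line containing "translated" (the markers come from the question; joining with no separator is an assumption): lines are buffered per record, so the whole file never has to be merged at once.

    awk '/>contig/    { if (buf != "") print buf; buf = $0; next }   # a new record starts
                      { buf = buf $0 }                               # join wrapped lines onto it
         /translated/ { print buf; buf = "" }                        # the record ends here
         END          { if (buf != "") print buf }' bigfile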
I need to substitute a particular string (StringA) with another string (StringB). However, there may be more than one occurrence of StringA within the file, and only one instance needs to be changed, which is why I'm trying to be sure of its position relative to something I know will be unique in the file and will always be the same distance from the string to be replaced. So I intend to have sed match a string (StringC) above the string to be substituted, then go down to StringA below and replace it with StringB.
So far, I have had some success with the following:
... but I can't help thinking that there *has* to be a cleaner way of doing it.
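A sketch of a cleaner form of the same idea (StringA/StringB/StringC are the placeholders from the question, and StringC is assumed to be unique): restrict the substitution to the range that starts at StringC and ends at the first StringA after it, so only that one occurrence is touched.

    sed '/StringC/,/StringA/ s/StringA/StringB/' file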
How do I take a series of values from the outputs of multiple grep commands and append them as a single row of a CSV file? I work in a Linux environment. The values from the grep output will be numeric.
Output should look like:
Each of these values will be obtained from multiple grep commands piped to wc -l. Is it possible to update a single row of a CSV file? If so, please help me with the command to be used to redirect the output into the CSV file.
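A minimal sketch, assuming three counts are wanted (the patterns, app.log and counts.csv are placeholders): capture each count, then print them as one comma-separated row; grep -c gives the same number as grep piped to wc -l.

    a=$(grep -c 'PATTERN1' app.log)
    b=$(grep -c 'PATTERN2' app.log)
    c=$(grep -c 'PATTERN3' app.log)
    printf '%s,%s,%s\n' "$a" "$b" "$c" >> counts.csv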
Right now I have some code to catch the inputs, using a variable "z":
I'm almost positive that the problem is in the bolded line above (for one thing, it always leaves off the initial "-e"). So basically I want a string that gives me "-e input" and concatenates as many times as necessary.
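A sketch of one way to accumulate repeated "-e input" arguments, assuming bash (the original code isn't shown, so "z" is reused only as the variable name and the final grep line is illustrative): an array side-steps the word-splitting and quoting problems of building one long string.

    z=()
    for input in "$@"; do
        z+=(-e "$input")       # each input contributes its own -e flag
    done
    grep "${z[@]}" somefile    # somefile is a placeholder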
In GUI-style editors, you can generally select multiple lines and press Tab a few times to move all the lines across (or Shift-Tab to go back). I have no idea how to do this in Vim. I googled around and couldn't find any straight answer, so I came here.
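A sketch of the usual Vim answer (the key sequences below are standard Vim, no plugin needed):

    V     start visual-line mode, then move with j/k to take in more lines
    >     indent the selected lines one shiftwidth (< goes the other way)
    gv    reselect the same lines if they need shifting again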
But I want to grep / cut it in such a way that it only displays
Now the thing is that these 3 lines are not static; there can be N lines there. The only thing is that I want the command's output NOT to display the first line, but the rest of the N lines.
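A minimal sketch (some_command stands in for whatever produces the listing): start printing at line 2, so the first line is dropped no matter how many lines follow.

    some_command | tail -n +2
    some_command | sed '1d'     # same result with sed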