Is there a convenient method to find a text pattern that extends over several lines? In this case:
an empty line, followed by a line consisting of a single word.
Preferably it should return the line number where the pattern occurs, so that I can find the first such occurrence after a known line number; in other words, so that I can extract a block of text from within a file.
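Something along the lines of this awk sketch is what I'm imagining (the file name and the starting line 100 are placeholders):

Code:
# report the line number of each single-word line that follows an empty line,
# but only after a known starting line (here 100)
awk -v start=100 'NR > start && prev == "" && NF == 1 { print NR ": " $0 } { prev = $0 }' file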
I want to go through a log file and find pattern1, and then a pattern2 only after pattern1. So, for example, I want to know what howManyRecords was at 13:30. I figured I'd grep for "start time for the job" and then, only after that (and before the next occurrence of it), grep for "howManyRecords". Is this a sane way?
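A sketch of what I have in mind, with the log file name made up:

Code:
# remember the most recent job-start line, print it alongside each record count
awk '/start time for the job/ { start = $0 }
     /howManyRecords/         { print start; print $0 }' job.log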
I have 8 files, and each contains around 2000 lines. I want to search for a particular word in these files between line numbers 1500 and 2500.
The output should look like:
sample_1.txt : 1510:declare var testing
sample_2.txt : 1610:declare var testing
sample_7.txt : 1610:declare var testing
sample_10.txt : 1710:declare var testing
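A minimal awk sketch of what I'm after (the word "testing" and the file names are placeholders):

Code:
# FNR is the per-file line number, so the range restarts with each file
awk 'FNR >= 1500 && FNR <= 2500 && /testing/ { print FILENAME " : " FNR ":" $0 }' sample_*.txt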
I have a file called test. It has the following contents:

Code:
hey
there
you

I want the output to be:

Code:
replaced
you

I am trying to use the sed command to replace every occurrence of "hey newline there" with "replaced". I tried the following naive approach:

Code:
sed 's/heythere/replace/' test

This gives a result containing the same data as the test file.
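From what I gather, sed works on one line at a time, so the newline between "hey" and "there" never appears in the pattern space; slurping the whole file first seems to be the standard workaround (GNU sed):

Code:
# :a;N;$!ba reads the entire file into the pattern space, then the
# substitution can match across the embedded newline
sed ':a; N; $!ba; s/hey\nthere/replaced/g' test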
A function named abc is called in many files. I want to copy all the lines with the function call to an output file. A simple grep on the function name doesn't help me, as the function call spans multiple lines, as follows:
abc(parameter1,
    parameter2,
    parameter3);
So I want to copy all three lines (up to the semicolon) to the output file. The problem is that there are more than 200 calls to the same function, so I cannot do it manually.
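Something like this awk sketch is the direction I'm thinking of (*.c stands in for the actual files):

Code:
# print from each line containing "abc(" through the line with the closing ";"
awk '/abc\(/ { grab = 1 }
     grab    { print }
     grab && /;/ { grab = 0 }' *.c > calls.txt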
I need to grep for a pattern which can be present on one line or split across two lines. Normal grep won't work in this case. Can anyone please help with this? There are hundreds of files in which I need to search for this pattern, so time is also a constraint.
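If pcregrep happens to be installed, its multiline mode looks like the quickest route; a sketch where first and second stand for the two halves of the real pattern:

Code:
# -M lets the match cross a line boundary; \s+ matches spaces or the newline
pcregrep -M -n 'first\s+second' *.log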
I'm trying to write a bash script to find all lines containing either of two strings in many files. I don't have access to egrep, so I want to use sed for this purpose.
The files will look like this:

FileX
------
Info:18
Data:76
Contact:me@home.com
Start:1500
I want to generate a new file from these files with only the rows containing Data and Start. Something like this:

for y in `ls /file*.db`; do
    sed '/Data|Start/p' $y > newfile
done
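As far as I can tell, my attempt needs -n (so sed prints only what I ask for), a pattern plain sed actually understands, and >> so the loop appends instead of overwriting; a corrected sketch:

Code:
for y in /file*.db; do
    # two match commands avoid relying on GNU-only alternation in basic regexes
    sed -n '/Data/p; /Start/p' "$y" >> newfile
done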
I have some big log files that contain errors printed by an app. They are relevant most of the time, but most of them are similar, so I figured I could check what happened in a given time interval with a search.
I'm using this one:
Code:
And I get an output similar to this one.
Code:
Is there a way to condense the output lines to get only one or two, indicating the first and last occurrence of a block? Or do I need to write a program to do so?
Because right now I get thousands of similar lines, and when I'm scrolling through them I sometimes miss relevant information that I would otherwise have noticed if it wasn't all that spammy.
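The best I've come up with so far is an awk sketch that collapses each run of identical messages into one line carrying the first and last timestamp (it assumes lines look like "TIMESTAMP message...", and app.log is a placeholder):

Code:
awk '{ ts = $1; msg = $0; sub(/^[^ ]+ +/, "", msg) }              # split off the timestamp
     msg != prev && NR > 1 { print first " .. " last "  " prev }  # message changed: flush the run
     msg != prev { first = ts }                                   # start a new run
     { last = ts; prev = msg }
     END { if (NR) print first " .. " last "  " prev }' app.log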
I want to search a file for a particular pattern and, if the pattern is found, replace the line with new text. I am using awk 'match($0,"pattern") != 0 {print $0}' filename to check whether the pattern exists. How do I get the line number of the pattern, so that I can delete that line and replace it with my new text?
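A sketch of what I'm picturing, with the pattern and replacement made up (gawk understands /dev/stderr as a file name):

Code:
# report the line number on stderr, emit the replacement instead of the line
awk '/pattern/ { print "match at line " NR > "/dev/stderr"; print "my new text"; next }
     { print }' filename > filename.new && mv filename.new filename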
I have to enhance the behaviour of a backup script written in perl. I don't need to change it; what I need to do is create a bash script that does some checks (file name and file size), executes the backup script, then checks whether the backup files match the original files. Here's how I plan to do it:
- read the files from the original files folder
- store them in an array
- search the array for the files that have a specific file extension
- store the file names that match the search pattern (I know the backup script skips some files, so I can hardcode the search pattern)
- run the backup script
- read the files from the backup folder
- store them in an array
- compare the original file names and sizes stored in the array with those from the backup folder
- send a report email
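A rough sketch of how I picture the script, where the folder names, the extension, and ./backup.pl are all placeholders (uses bash 4's mapfile and GNU stat):

Code:
#!/bin/bash
ORIG=/data/original        # placeholder paths
BACKUP=/data/backup
EXT=dat                    # the extension the backup script actually copies

# collect the original files with the wanted extension into an array
mapfile -t originals < <(find "$ORIG" -maxdepth 1 -type f -name "*.$EXT")

./backup.pl                # run the existing perl backup script

# compare names and sizes (stat -c %s prints the size in bytes)
report=""
for f in "${originals[@]}"; do
    b="$BACKUP/$(basename "$f")"
    if [[ ! -f "$b" ]]; then
        report+="missing: $b"$'\n'
    elif [[ "$(stat -c %s "$f")" -ne "$(stat -c %s "$b")" ]]; then
        report+="size mismatch: $f"$'\n'
    fi
done

# send the report email (assumes a working mail command)
printf '%s\n' "${report:-all files match}" | mail -s "backup check" admin@example.com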
I'm planning to partition a new hard drive to dual-boot Mint and Mepis. I've read partitioning tutorials and posts, and want to check my understanding; I'd appreciate input from an experienced person. For a 500GB hard drive, dual-booting Mint and Mepis:
--Mint: a / (root) partition for the OS; a /home partition for ease of upgrading
--Mepis: same as Mint
= four partitions

And:
--a swap partition shared between Mint and Mepis
--a /shared partition for shared data
= two partitions
Total = six partitions
Since only four primary partitions are allowed, I should use three primary partitions and one extended partition containing three logical partitions. Is that correct? If so, what should go where? I assume there's an optimal strategy. Should each / of Mint and Mepis go in a primary? What should go in the other primary and in the three logicals? Or maybe I don't need three primaries, and should use two primaries and four logicals?
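To make the question concrete, here's the two-primaries variant as I picture it (device names are illustrative, and the ordering is exactly what I'm unsure about):

Code:
/dev/sda1  primary   Mint /
/dev/sda2  primary   Mepis /
/dev/sda3  extended  (container; takes a third primary slot)
/dev/sda5  logical   Mint /home
/dev/sda6  logical   Mepis /home
/dev/sda7  logical   swap (shared)
/dev/sda8  logical   shared data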
I've come across an unusual requirement for a service on my Ubuntu system. Simply put, I need to find a way to search for all instances of a term in a file, delete the lines containing that term, and delete the four lines below each instance of that term. Either that, or copy the entirety of the file to a new file, skipping all lines containing the term plus the four below each. This sounds kinda weird, I know. Without going too far into detail: I either have to change the logfile format for a server I'm running, which is a huge pain in the butt, or I can just run a script to edit an HTML report generated from said logs. (Said report is really just for managers to peruse, and I like my log format, so I'm pursuing option 2.)
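GNU sed's addr,+N range looks like it covers this directly; a sketch, with the term and file name made up:

Code:
# delete every line containing the term plus the four lines after it (GNU sed)
sed '/myterm/,+4d' report.html > report.clean.html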
I have several VHDL files containing a pattern with newline characters that I need to replace with another pattern that also contains newline characters.
I start with something like:
Code:
I want to replace it by something like:
Code:
(I need to paste some lines)
As I need to do this (very) often, I want to use a shell script.
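Since my snippets didn't survive the post, here's the general shape with made-up patterns: perl can slurp each file so the substitution sees the newlines (-0777 reads the whole file, -i edits in place):

Code:
# replace one multi-line block with another in every .vhd file (patterns are placeholders)
perl -0777 -pi -e 's/old_line_1\nold_line_2/new_line_1\ninserted_line\nnew_line_2/g' *.vhd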
I have a bunch of text files, all of them with a .txt extension. They are all located in subfolders of the /MyTextFiles folder (but could be anywhere within it, at any depth). If any line in any of the text files contains the word "hello", I want to delete that entire line. I know sed and awk are made for this problem, but I can't seem to get the syntax right.
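The combination I keep circling back to is find plus GNU sed's in-place edit; a sketch:

Code:
# delete every line containing "hello" in all .txt files under /MyTextFiles
find /MyTextFiles -type f -name '*.txt' -exec sed -i '/hello/d' {} +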
I need to find a string in a file, then delete the line it is on, as well as the next 6 lines. Or, alternatively, delete the line the string is on and all subsequent lines until the search finds the character "[".
example:
filename = test.txt
contents:

[foo]
test>test
test>test
test>test
[bar]
...
So, in this example, I'd like to search the file for the string 'foo' and delete all lines from that line until [bar] (not deleting the line with [bar]).
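Both variants look doable in sed; sketches (the ,+N address is GNU sed):

Code:
# variant 1: delete the matching line plus the next 6 lines (GNU sed)
sed '/foo/,+6d' test.txt

# variant 2: delete from the matching line up to, but not including, [bar]
sed '/foo/,/\[bar\]/{/\[bar\]/!d;}' test.txt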