Programming :: Sed/awk: Three Consecutive Blank Lines In A File - How To Delete Two Of Them
Jun 16, 2010
I have a file with three consecutive blank lines. I want to delete two and keep one. Also, if anyone could direct me towards a guide on regular expressions, particularly as they apply to sed, I would be grateful. I am having a hell of a time figuring out the syntax.
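One common answer is to squeeze every run of blank lines down to a single one; cat -s does it directly, and the classic sed one-liner below does the same (a sketch against the file from the post):
Code:
# squeeze runs of consecutive blank lines down to one
cat -s file

# sed equivalent: on a blank line, pull in the next line; if both are
# blank, delete the first and retry, so only one blank line survives
sed '/^$/{N;/^\n$/D}' file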
I am trying to delete any blank lines within two patterns e.g.
Address: 53 HIGH STREET
Cred Id :
MYTOWN
MYCOUNTY

MM12 6MM
Pay Method : Crossed Cheque
The start of my pattern is "Cred Id" and the end is "Pay Method", and I want to delete the blank lines between the county and the post code. I did find the code below, but it doesn't seem to change anything:
sed -ne '/Cred Id/,/Pay Method/!bp' -e '/^$/b' -e ':p' -e 'p' ll.out
I can get it to print just the range I'm interested in by doing sed -ne '/Cred Id/,/Pay Method/p'.
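A simpler way to express the same thing is to restrict the blank-line delete to that range (a sketch; if the "blank" lines actually contain spaces or tabs, use /^[[:space:]]*$/ instead of /^$/, which is a common reason such commands appear to change nothing):
Code:
# delete blank lines only between "Cred Id" and "Pay Method" lines
sed '/Cred Id/,/Pay Method/{/^$/d}' ll.out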
Each line represents a portion of a data matrix. I want to convert the numbers after the "=" to the range of that partition in the matrix such that the output file looks like this:
The order of these lines is random, so I cannot just delete line #19, for example. And you can see that the top four lines I want to delete are pairs, so there might be some clever way to detect the lines: if a line has both "1.9" and "1.11", then delete the line. I am new to the Perl language. The following is the code I have now. I think I just need to write some code inside the while loop checking whether I want to delete the line $dotline before I write to a NEW file.
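The detection step is just two matches against the same line; a minimal sketch as an awk filter (file names are placeholders, and in the poster's Perl while loop the same condition would be next if /1\.9/ && /1\.11/;):
Code:
# keep every line except those containing both "1.9" and "1.11"
# (the dots are escaped so "1.9" does not also match e.g. "129")
awk '!(/1\.9/ && /1\.11/)' input.txt > new.txt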
All I want is a command that reads one data file with several columns and prints it to another one. However, whenever the value in one specific column changes, it should print one empty line in the new file. For example, consider the file
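A sketch in awk, assuming the column to watch is column 2 (adjust $2 and the file names to taste):
Code:
# copy the file, inserting a blank line each time column 2 changes
awk 'NR > 1 && $2 != prev { print "" } { print; prev = $2 }' infile > outfile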
I want to grep lines which do not start with # or a blank space, like
bla bla bla bla
How do I do this? I tried grep --invert-match '^#', which gives lines not starting with #, but it gives me blank lines too. I tried grep --invert-match '^#|^ ', which gives lines not starting with # OR not starting with a blank (which means any line, including ones starting with #).
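Two things go wrong there: plain grep treats | as a literal character (alternation needs -E), and inverting an OR of negations matches everything. It is simpler to state the condition positively; a sketch:
Code:
# print only lines whose first character is neither '#' nor whitespace;
# empty lines have no first character, so they are dropped as well
grep '^[^#[:space:]]' file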
My problem is like this: I have to delete all lines between two pattern matches. For example, suppose below is the content of the file; then I have to delete all lines between text1 and text2.
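A sketch with GNU sed, assuming the two marker lines themselves should stay and only the lines between them go:
Code:
# delete the lines strictly between text1 and text2, keeping both markers
# (the empty // reuses the range's own regex, so marker lines escape the d)
sed '/text1/,/text2/{//!d}' file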
Is there a way to add line spaces when asking for user interaction in a script? For example:
Code:
SPACE
Hello what is your name?
SPACE
SPACE
So this is asking a question but has a space/empty line at the top of the screen and 2 spaces/empty lines below. I've seen it done in a bash script using one echo for each line/space needed.
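A single printf avoids one echo per blank line; a minimal sketch:
Code:
#!/bin/bash
# one blank line above the prompt, two below, then read the answer
printf '\nHello what is your name?\n\n\n'
read -r name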
I have a bunch of text files, all of which have a .txt extension. They are all located in subfolders of the /MyTextFiles folder (but could be anywhere, at no particular depth). If any line in any of the text files has the word "hello", I want to delete that entire line. I know sed and awk are made for this problem, but I can't seem to get the syntax correct.
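A sketch using find together with GNU sed's in-place flag (-i edits the files directly, so test on copies first):
Code:
# delete every line containing "hello" in all .txt files under /MyTextFiles
find /MyTextFiles -type f -name '*.txt' -exec sed -i '/hello/d' {} +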
I have a file that contains 100 lines. I need to write a script that reads 70 lines and redirects those 70 lines to another file, and those 70 lines then have to be erased from the first file.
When I write the command head -70 somefile.txt > test.txt or
sed -n '1,70p' somefile.txt > test.txt
I have those 70 lines in the test.txt file, but those 70 lines also have to be deleted from the first file, somefile.txt.
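The deletion is the mirror image of the copy; a sketch using GNU sed's in-place editing:
Code:
# copy the first 70 lines out, then delete them from the original
head -70 somefile.txt > test.txt
sed -i '1,70d' somefile.txt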
I've got a file with sorted words, one on each line. How could it be possible to delete those lines that have words of length 1 or 2 (1-2 letters)? I guess a good way would be with awk and its function length(), but I don't know how to delete those very lines.
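In awk, "deleting" a line just means not printing it; a sketch (file names are placeholders):
Code:
# keep only lines whose word is at least 3 characters long
awk 'length($0) >= 3' words.txt > filtered.txt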
I need to chop off the top 30-ish lines of several log files, up to a line starting with "Initialization completed." The trouble is that it's not always the same number of lines that need to be deleted, and they don't always contain the same information, which is why I need to delete everything prior to the line starting with "Initialization completed." Right now I have a little script I wrote based on looping each file through several "grep -v" commands with each known pattern of lines I want to ignore, but it is tedious, and I have to inspect each file afterwards to make sure nothing is left from above "Initialization completed".
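sed can anchor on the marker line instead of enumerating the junk above it; a sketch (assumes the marker line begins with that exact text; file names are placeholders):
Code:
# print from the first "Initialization completed" line through end of file
sed -n '/^Initialization completed/,$p' logfile > logfile.trimmed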
I am currently using a command like this to remove blank lines and lines which contain (not necessarily begin with) a #. Is there a better/simpler command?
cat /etc/apache2/default-server.conf | sed '/^$/d' | grep -v '#'
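Both filters fold into a single command, and the cat is unnecessary; a sketch:
Code:
# drop blank lines and any line containing '#' in one grep
grep -v -e '^$' -e '#' /etc/apache2/default-server.conf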
I need to find a string in a file ... then delete the line it is on, as well as the next 6 lines. Or, delete the line the string is on and all subsequent lines until the search finds the character "["
example:
filename = test.txt
contents:
[foo]
test>test
test>test
test>test
[Code]....
So, in this example, I'd like to search the file for the string 'foo' and delete all lines from that line until [bar] (not deleting the line with [bar]).
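A sketch of both variants in GNU sed (the ,+N address form is a GNU extension):
Code:
# variant 1: delete the matching line plus the 6 lines after it
sed '/foo/,+6d' test.txt

# variant 2: delete from the matching line up to, but not including,
# the next [bar] line
sed '/foo/,/^\[bar\]/{/^\[bar\]/!d}' test.txt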
I've come across an unusual requirement for a service on my Ubuntu system. Simply put, I need to find a way to search for all instances of a term in a file, delete the lines containing that term, and delete the four lines below each instance of that term. Either that, or copy the entirety of a file to a new file, skipping over all lines containing the term plus the four below it. This sounds kinda weird, I know. Without going too far into detail, I either have to change the logfile format for a server I'm running, which is a huge pain in the butt, or I can just run a script to edit an HTML report generated from said logs. (Said report is really just for managers to peruse, and I like my log format, so I'm pursuing option 2.)
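GNU sed's relative address covers the "line plus four below" case directly; a sketch (TERM and the file names are placeholders):
Code:
# delete each line matching TERM together with the 4 lines that follow it
sed '/TERM/,+4d' report.html > report.clean.html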
Basically, I am provided with a file "temp.dat" with 30 high temperatures (integers) in it. The program is supposed to read them in and compute/print the average. Then it is supposed to print the temperature of each day and, in addition, display a + by each day that is over the average, but only if it is above the average high for three or more consecutive days. This is the part I am stuck on. I'd appreciate any tips that would point me in the right direction. Full disclosure: this is a school project. Code:
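The poster's code was not included in the digest, and the post doesn't say which language the class uses; as a sketch of just the consecutive-run logic, here it is in awk (assuming temp.dat holds one integer per line):
Code:
awk '{ t[NR] = $1; sum += $1 }
END {
    n = NR; avg = sum / n
    printf "Average: %.2f\n", avg
    # run[i] = length of the above-average streak ending at day i
    for (i = 1; i <= n; i++)
        run[i] = (t[i] > avg) ? run[i-1] + 1 : 0
    # a day is marked if it belongs to any streak of 3 or more days;
    # scanning backward propagates membership to the start of a streak
    for (i = n; i >= 1; i--)
        mark[i] = (run[i] >= 3) || (mark[i+1] && run[i] > 0)
    for (i = 1; i <= n; i++)
        printf "Day %2d: %4d%s\n", i, t[i], (mark[i] ? " +" : "")
}' temp.dat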
I have two files with thousands of lines each. I am trying to combine these two files, but I want to insert each line of one file into the other file after a certain number of lines. I am using awk with the following command, but it does not work:
Code:
cat file1 | awk '{ print $0; if (NR%3004==0) {print "file2"}}' > outputfile
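print "file2" emits the literal string file2, not a line from that file; getline can pull in the real lines. A sketch, keeping the every-3004-lines interval from the post:
Code:
# after every 3004th line of file1, append the next line of file2
awk '{ print }
     NR % 3004 == 0 { if ((getline line < "file2") > 0) print line }' file1 > outputfile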
For example, I have a file called "file" like this one:
type=strongsubj len=1 word=absolve pos=verb stemmed=y priorpolarity=positive
type=strongsubj len=1 word=unique pos=adj stemmed=n priorpolarity=neutral
type=strongsubj len=1 word=absolutely pos=adj stemmed=n priorpolarity=neutral
type=weaksubj len=1 word=taking pos=verb stemmed=y priorpolarity=positive
type=weaksubj len=1 word=friend pos=noun stemmed=n priorpolarity=positive
type=weaksubj len=1 word=usually pos=adverb stemmed=n priorpolarity=positive
type=strongsubj len=1 word=purecolor pos=anypos stemmed=n priorpolarity=negative
type=strongsubj len=1 word=accusingly pos=anypos stemmed=n priorpolarity=negative
I want to add the plural for each noun. For example, if it finds the line type=weaksubj len=1 word=friend pos=noun stemmed=n priorpolarity=positive, it should add one more line, type=weaksubj len=1 word=friends pos=noun stemmed=n priorpolarity=positive, where we add an "s" to the word friend. I tried to do it like this:
Code:
cat file | while read line ; do
    set -- ${line}
    if [[ "${4#pos=}" == "noun" ]]; then
        # I tried this line but it doesn't work properly:
        v3=$(echo $line | sed 's/$3/$3s/')
        # I want to find the third word "word=friend" in that line and
        # add "s" after it, but I don't know what command adds this
        # new line "$v3" to the file ???
    fi
done
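The whole loop collapses into one awk pass, which also sidesteps the quoting trouble; a sketch (assumes the pos= field is always the fourth and a plain "s" plural is wanted):
Code:
# after each noun record, emit a copy with "s" appended to the word= field
awk '{ print }
     $4 == "pos=noun" { $3 = $3 "s"; print }' file > newfile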
Every now and then I have to indent the lines in my script by 4 space characters. I generally do it line by line. Is there an automated command in vi with which I can indent a set of lines by the desired number of spaces in one go?
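Two vi/vim possibilities, shown for lines 10-20 as a placeholder range:
Code:
" insert 4 spaces at the start of lines 10 through 20
:10,20s/^/    /
" or use the shift command; with shiftwidth=4 and expandtab set,
" each > shifts the range right by 4 spaces
:set shiftwidth=4 expandtab
:10,20>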
I have a plain text file with 360 lines of varying-length text. How do I add a comma or other symbol to the end of each line so that I can convert the file to CSV format and open it in a spreadsheet (45 rows, 8 columns)? That is, each 8 lines of text should form one row of 8 columns, giving 45 rows.
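paste can both join every 8 lines and put the commas in, in one step; a sketch (file names are placeholders):
Code:
# join every 8 consecutive lines into one comma-separated row
# (8 dashes = 8 stdin lines consumed per output row; -d, joins with commas)
paste -d, - - - - - - - - < input.txt > output.csv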
Using awk I pull the first field of a random line from my data file:
myvar1=`awk -F" " 'NR=='$randline' {printf "%s", $1}' myfile`
This works fine. The problem is there will be empty lines at the end of the file. Rather than using awk to filter out the blank lines, I would like to figure this out first. So I test $myvar1 for a blank string after setting $randline to one that I know is blank:
test -z "$myvar1" && echo "true" || echo "false"
But this returns "false". So the string is not zero length. Why? It's a tab-separated file. Is awk storing the tab with the $1 field or something? This is where I get a headache. I try to echo my variable to see what it looks like.
echo "$myvar1" outputs: nothing echo "My variable is [$myvar1]" outputs: [y variable is [
Why is the closing bracket at the beginning? What character could be stored in $myvar1 that would do such a thing and how did it get there?
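A ] printed over the start of the line is the classic sign of a carriage return stored in the variable: the file has DOS line endings, the "blank" lines actually hold a lone \r, and awk (which by default splits only on spaces, tabs, and newlines) hands that \r back as $1. A quick way to confirm and fix, as a sketch:
Code:
# reveal invisible characters: a carriage return shows up as \r
printf '%s' "$myvar1" | od -c

# strip carriage returns from the data file (dos2unix does the same)
tr -d '\r' < myfile > myfile.unix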
I am trying to read certain lines within a file and output the lines that don't equal my value; I think showing you would be easier. There are multiples of these inside one file...
Code:
LV Name                     /dev/vg00/lvol1
LV Status                   available/syncd
LV Size (Mbytes)            300
[code]....
I want to read everything in the file; if the status is not available, then it should display the name (directly above the status). If they are all available, then do nothing. I think I know how to do it by putting the info in string form and placing it in a hash, but it is proving to be out of my skill range.
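awk can remember the most recent name and print it whenever the status line that follows isn't available, with no hash needed; a sketch against the layout above (lvinfo.txt is a placeholder for the real input file):
Code:
# print the LV Name of every stanza whose LV Status is not "available"
awk '/^LV Name/   { name = $3 }
     /^LV Status/ && $0 !~ /available/ { print name }' lvinfo.txt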