The order of these lines is random, so I cannot just delete line #19, for example. And you can see that the top four lines I want to delete are pairs, so there might be some clever way to detect them: if a line has both "1.9" and "1.11", then delete the line. I am new to the Perl language. The following is the code I have now. I think I just need to write some code inside the while loop checking whether I want to delete the line $dotline before I write it to a new file.
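A minimal sketch of that check, assuming a plain substring match on both version strings is enough (the file names here are hypothetical):
Code:
# equivalent test inside an existing while loop:
#   next if $dotline =~ /1\.9/ and $dotline =~ /1\.11/;
perl -ne 'print unless /1\.9/ and /1\.11/' input.txt > output.txt
Note that /1\.9/ will also match inside longer strings such as "1.90"; anchor the patterns if that matters for your data.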
I have a file with three consecutive blank lines. I want to delete two and keep one. Also, if anyone could direct me towards a guide on regular expressions, particularly as they apply to sed, I would be grateful. I am having a hell of a time figuring out the syntax.
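A sketch of two ways to squeeze runs of blank lines down to a single blank line (file names hypothetical); the classic "sed one-liners" collection and the GNU sed manual are the usual references for sed's regex dialect:
Code:
sed '/^$/N;/\n$/D' input.txt > output.txt   # squeeze consecutive blank lines to one
cat -s input.txt > output.txt               # GNU cat's --squeeze-blank does the same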
Through various Windows reinstalls and switches between Linux distros, I have a massive amount of duplication within my music archive (on the order of 7+ dupes of each file). I found a lovely program called "fdupes" and was able to build a list of all the duplicate files, and I'm trying to use "xargs" to remove them. However, when I run the command xargs -0 --arg-file="dupes.txt" rm or xargs -0 rm < "dupes.txt", it gives me the following error: "xargs: argument line too long".
Is there a fix, or perhaps a different way of accomplishing the same thing?
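The likely cause: fdupes writes one path per line, but -0 tells xargs to split its input on NUL bytes, so the whole file arrives as one gigantic "argument". A sketch of two ways around it; beware that fdupes normally lists every member of a duplicate set (its -f flag omits the first of each set), so make sure dupes.txt only names the files you want gone:
Code:
grep . dupes.txt | tr '\n' '\0' | xargs -0 rm   # drop blank separators, NUL-delimit, then remove
fdupes -rdN /path/to/music                      # or let fdupes delete, keeping the first of each set
The /path/to/music directory is hypothetical; -d deletes and -N skips the per-set prompt.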
I have a bunch of text files, all of them have a .txt extension. They are all located in subfolders of the /MyTextFiles folder (but could be anywhere, no idea what depth). If any line in any of the text files has the word "hello" I want to delete that entire line. I know sed and awk are made for this problem but I can't seem to get the syntax correct.
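A sketch using find to walk the tree at any depth and sed to delete the matching lines in place (the -i flag assumes GNU sed):
Code:
find /MyTextFiles -type f -name '*.txt' -exec sed -i '/hello/d' {} +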
I have a file that contains 100 lines. I need to write a script that reads 70 lines and redirects those 70 lines to another file, and those 70 lines then have to be erased from the first file.
When I write the command head -70 somefile.txt > test.txt or
sed -n 70p somefile.txt > test.txt
I get those 70 lines in the test.txt file, but the 70 lines still have to be deleted from the first file, somefile.txt.
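A sketch of the two-step version: head copies the slice, then GNU sed's -i deletes it from the original (note that sed -n 70p prints only line 70, not the first 70):
Code:
head -70 somefile.txt > test.txt    # copy the first 70 lines
sed -i '1,70d' somefile.txt         # then delete those lines from the original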
I've got a file with sorted words, one on each line. How could I delete the lines whose words have length 1 or 2 (1-2 letters)? I guess a good way would be AWK and its length() function, but I don't know how to actually delete those lines with it.
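awk "deletes" by simply not printing; since awk can't edit a file in place portably, write to a temporary file and move it back. A sketch, file name hypothetical:
Code:
awk 'length($0) > 2' words.txt > words.tmp && mv words.tmp words.txt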
I have two files with thousands of lines, and I am trying to combine them: I want to insert each line of one file into the other file after a certain number of lines. I am using awk with the following command, but it does not work: cat file1 | awk ' { print $0; if (NR%3004==0) {print "file2"}}' > outputfile
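print "file2" emits the literal string file2, not the file's contents. awk's getline can pull the next line from the second file each time instead. A sketch, assuming the goal is one line of file2 after every 3004 lines of file1:
Code:
awk '{print} NR%3004==0 {if ((getline line < "file2") > 0) print line}' file1 > outputfile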
I need to chop off the top 30-ish lines of several log files, up to a line starting with "Initialization completed." The trouble is that it's not always the same number of lines that need to be deleted, and they don't always contain the same information, which is why I need to delete everything prior to the line starting with "Initialization completed." Right now I have a little script I wrote based on looping each file through several "grep -v" commands with each known pattern of lines I want to ignore, but it is tedious and I have to inspect each file afterwards to make sure nothing is left above "Initialization completed".
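sed can express "keep from the first match to end of file, delete everything else" directly, which would replace the whole grep -v chain. A sketch with GNU sed's in-place flag:
Code:
for f in *.log; do
    sed -i '/Initialization completed/,$!d' "$f"   # drop everything before the marker line
done
One caveat: a file that lacks the marker line would be emptied entirely, so this assumes every log contains it.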
I need to find a string in a file, then delete the line it is on as well as the next 6 lines. Or: delete the line the string is on and all subsequent lines until the search finds the character "[".
example:
filename = test.txt
contents:
[foo]
test>test
test>test
test>test
[Code]....
So, in this example, I'd like to search the file for the string 'foo' and delete all lines from that line until [bar] (not deleting the line with [bar]).
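Both variants map naturally onto sed addresses; a sketch against the example above (the ,+N form is a GNU sed extension):
Code:
sed '/foo/,+6d' test.txt                     # delete the matching line plus the next 6
sed '/foo/,/\[bar\]/{/\[bar\]/!d}' test.txt  # delete from the match up to, but not including, [bar]
The second command selects the range from the foo line through the [bar] line, then deletes every line in it except the one matching [bar].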
For example, I have a file called "file" like this one:
type=strongsubj len=1 word=absolve pos=verb stemmed=y priorpolarity=positive
type=strongsubj len=1 word=unique pos=adj stemmed=n priorpolarity=neutral
type=strongsubj len=1 word=absolutely pos=adj stemmed=n priorpolarity=neutral
type=weaksubj len=1 word=taking pos=verb stemmed=y priorpolarity=positive
type=weaksubj len=1 word=friend pos=noun stemmed=n priorpolarity=positive
type=weaksubj len=1 word=usually pos=adverb stemmed=n priorpolarity=positive
type=strongsubj len=1 word=purecolor pos=anypos stemmed=n priorpolarity=negative
type=strongsubj len=1 word=accusingly pos=anypos stemmed=n priorpolarity=negative
I want to add the plural for the noun. For example, if I find this line:
type=weaksubj len=1 word=friend pos=noun stemmed=n priorpolarity=positive
I will add one more line:
type=weaksubj len=1 word=friends pos=noun stemmed=n priorpolarity=positive
where we add "s" to the word friend. I did try to do it like this:
Code:
while read -r line ; do
    echo "$line"                                  # keep the original line
    set -- $line                                  # split the record into $1..$6
    if [[ "${4#pos=}" == "noun" ]]; then
        # append "s" to the word= field and emit the extra plural line
        echo "$line" | sed 's/\(word=[^ ]*\)/\1s/'
    fi
done < file > file.new
Every now and then I have to indent the lines in my script by 4 space characters. I generally do it line by line. Is there an automated command in vi with which I can indent a set of lines by the desired number of spaces in one go?
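In vim, a line range with the > command (width controlled by shiftwidth) or a substitution does this in one pass; a sketch:
Code:
:set shiftwidth=4
:10,20>
:10,20s/^/    /
The first command makes one indent level 4 spaces and the second shifts lines 10-20 right by one level; the substitution form prepends four literal spaces instead. Visually selecting lines with V and pressing > works too.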
I have a plain text file with 360 lines of varying-length text. How do I add a comma or another symbol to the end of each line so that I can convert the file to CSV format and open it in a spreadsheet (45 rows, 8 columns)? That is, each group of 8 lines of text should become one row of 8 columns, giving 45 rows.
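paste can do both jobs at once: given eight "-" arguments it reads eight lines at a time from stdin and joins each group with the delimiter, producing the 45x8 layout directly. A sketch (fields that themselves contain commas would need quoting):
Code:
paste -d, - - - - - - - - < file.txt > file.csv   # 8 lines in, one comma-joined row out
sed 's/$/,/' file.txt                             # or: just append a comma to every line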
Using awk I pull the first field of a random line from my datafile:
Code:
myvar1=`awk -F" " 'NR=='$randline' {printf "%s", $1}' myfile`
This works fine. The problem is there will be empty lines at the end of the file. Rather than using awk to filter out blank lines, I would like to figure this out first. So I test $myvar1 for a blank string after setting $randline to one that I know is blank:
test -z "$myvar1" && echo "true" || echo "false"
But this returns "false"? So the string is not zero length. Why? It's a tab-separated file. Is awk storing the tab with the $1 field or something? This is where I get a headache. I try to echo my variable to see what it looks like.
echo "$myvar1" outputs: nothing echo "My variable is [$myvar1]" outputs: [y variable is [
Why is the closing bracket at the beginning? What character could be stored in $myvar1 that would do such a thing and how did it get there?
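That overprinting is the classic symptom of a stray carriage return: if the datafile has DOS line endings, awk's $1 on a "blank" line is the invisible \r character, so the variable is not zero-length, and when echoed the CR jumps the cursor back to column one, where the closing bracket overwrites the first character. A sketch of how to confirm and fix it:
Code:
printf '%s' "$myvar1" | od -c      # reveal invisible characters; \r means DOS line endings
tr -d '\r' < myfile > myfile.unix  # strip carriage returns, then rerun the awk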
I am trying to read certain lines within a file and output the lines that don't equal my value; I think showing you would be easier. There are multiples of these inside one file...
Code:
LV Name                     /dev/vg00/lvol1
LV Status                   available/syncd
LV Size (Mbytes)            300
[code]....
I want to read everything in the file; if the status is not available then it should display the name (directly above the status). If they are all available then do nothing. I think I know how to do it, which includes putting the info in string form and placing it in a hash, but it is proving to be out of my skill range.
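awk can carry the most recent name forward and print it whenever the status line that follows is unhealthy. A sketch, assuming the layout shown above (name in field 3 of "LV Name" lines, status in field 3 of "LV Status" lines; the input file name is hypothetical):
Code:
awk '/LV Name/   {name = $3}
     /LV Status/ && $3 != "available/syncd" {print name}' lvdisplay.out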
To save on writing WAY too many files with very little in them, I want to put it all in one file and read a specific few lines. Six variables will be read at a time. The format is as such:
//Set 1
string name
5
12
[code]....
From the name to the 5th number is one set. The name will be of a different length for each set. This will be a big file of probably 40+ sets. My problem lies in reading one and only one set, be it set 5 or set 34. It needs to be done in C++.
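A minimal C++ sketch, under the assumptions that each set is a "//Set N" marker line, a single-token name, and five whitespace-separated numbers (the file name sets.txt is hypothetical); stream extraction skips line breaks, so varying name lengths don't matter:
Code:
#include <fstream>
#include <iostream>
#include <string>

struct Set {
    std::string name;
    int nums[5];
};

// Scan sets sequentially until the wanted one (1-based); false if not found.
bool readSet(const char* path, int wanted, Set& out) {
    std::ifstream in(path);
    std::string marker;
    int setNo;
    while (in >> marker >> setNo) {              // consumes "//Set N"
        in >> out.name;                          // the set's name
        for (int k = 0; k < 5; ++k) in >> out.nums[k];
        if (in && setNo == wanted) return true;  // stop at the requested set
    }
    return false;
}

int main() {
    Set s;
    if (readSet("sets.txt", 5, s))
        std::cout << s.name << ' ' << s.nums[0] << '\n';
}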
I'm trying to come up with a simple way to strip a specific "entry" from a text file. I know tools like sed and perl can remove specific lines from a file, but I haven't been able to come up with an elegant way to handle my group of lines. In my file, the first "Location" line and the "SVNPath" line should be unique every time, but are they enough to strip out the whole group plus the trailing line of white space separating each group? Add to this, my file will grow as new entries are added (always appended to the end), but new entries will have the same formatting.
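If every entry runs from its Location line down to the blank line that separates groups, a sed address range can delete the whole span, blank separator included. A sketch, where the repository path and file name are hypothetical:
Code:
sed -i '/^Location \/svn\/oldrepo/,/^$/d' svn_access.conf
Because the Location line is unique per entry, the range starts only at the entry you name and stops at the first empty line after it.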
I have a massive table file with some data in it, and I want to replace the lines that are wrong with the correct ones from another table file of the same format. The wrong lines are not all together in a block but randomly distributed, so I need a loop that checks whether each line has a replacement in the other file and, if it does, swaps it in. I want to try to do it with sed or awk, but I don't really know how.
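This is the standard awk two-file idiom, which avoids an explicit loop: the first pass loads the corrections into an array keyed by the first field (assumed here to be a unique row key), and the second pass prints the replacement when the key matches, the original line otherwise. File names hypothetical:
Code:
awk 'NR==FNR  { fix[$1] = $0; next }     # first file: remember correct lines by key
     $1 in fix { print fix[$1]; next }   # key known: emit the corrected line
     { print }' correct.txt big.txt > repaired.txt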
I'm trying to extract specific lines from a flat file. I need the lines that fall within a range of coordinates. The field separator (-F) can be either ! or =. If a line is in the range, I need all of the data on that line. The ranges: latitude 36 to 39 and longitude -74 to -84.
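awk accepts a regular expression as the field separator, so -F'[!=]' handles both delimiters in one pass. A sketch assuming latitude lands in field 1 and longitude in field 2 (adjust the field numbers to your actual layout; file names hypothetical):
Code:
awk -F'[!=]' '$1 >= 36 && $1 <= 39 && $2 >= -84 && $2 <= -74' flatfile > extracted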
I've been trying to sort this out for several hours and I'm totally lost; I've been searching around but haven't found the solution to my problem. I have a directory with 100 files. I need to copy 10 lines of each file (let's say from line 45 to 55) into one single file. So I guess I could use sed's w command, but I didn't manage to write the right script. I also tried using a loop to create 100 different files (each one with the 10 lines) to concatenate them later on, but I only got 1 file, not 100.
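sed -n with a p range prints just the slice you want, and redirecting the whole loop collects everything into one file. A sketch, directory name hypothetical:
Code:
for f in /mydir/*; do
    sed -n '45,55p' "$f"     # lines 45 through 55 of each file
done > combined.txt
The per-file loop matters: given several files at once, plain sed numbers the lines as one continuous stream, so '45,55p' would match only once.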