General :: Remove All Lines In A File Containing Sameword?
Aug 6, 2010
When I want to remove all lines containing a specific word from an entire document at once, I use the following command:
awk '$columnno !~/specificword/' inputfile > outputfile
But the column number is my problem, because the word can appear in different columns, so I need a solution that does not depend on it.
How can I write such a removal command without mentioning a column number, i.e., so that it removes every line containing that specific word regardless of which column it appears in?
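One possible approach (a sketch based on the command above): match against the whole line instead of a single column, either in awk or with grep -v.
Code:
# remove every line containing the word, regardless of column
awk '!/specificword/' inputfile > outputfile
# equivalent with grep
grep -v 'specificword' inputfile > outputfile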
View 10 Replies
Jun 16, 2010
Are there any commands or scripts to remove only a selected line from the history file?
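A minimal sketch, assuming a bash history stored in ~/.bash_history and that you know either the line number or a pattern from the line (both values below are hypothetical):
Code:
# delete line 42 from the history file
sed -i '42d' ~/.bash_history
# or delete every history line matching a pattern
sed -i '/some_command/d' ~/.bash_history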
View 1 Replies
View Related
Jul 6, 2011
Does anyone have ideas on how to remove lone lines from a text file?
If I have a file that is like this:
-----------------------------------
line 1
[code]...
View 14 Replies
View Related
Mar 17, 2011
I am trying to remove lines from a syslog text file that contain duplicate strings:
Mar 10 06:51:11[http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360]
Then, a few lines down:
Mar 10 06:52:03 [http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360
It has the same thing in terms of the u: number. The issue is that I need to remove the duplicates and leave just one; the file has multiple duplicates of different u: numbers and is 14,000 lines long. Can I use awk, sed, or sort for something like this, i.e., removing lines whose u: string is a duplicate?
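A possible awk sketch, assuming the duplicate key is the u:number|number token shown in the sample lines (the regex and file names are placeholders; adjust to the real log):
Code:
awk '{
    if (match($0, /u:[0-9]+[|][0-9]+/))
        key = substr($0, RSTART, RLENGTH)
    else
        key = $0
    # print only the first line seen for each u: token
    if (!seen[key]++) print
}' syslog.txt > syslog.deduped.txt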
View 4 Replies
View Related
Jan 21, 2011
I'm trying to come up with ideas for a simple way to strip a specific "entry" from a text file. I know tools like sed and perl can remove specific lines from a file, but I haven't been able to come up with an elegant way to handle my group of lines. In my file, the first "Location" line and the "SVNPath" line should be unique every time... but are they enough to strip out the whole group plus the trailing line of whitespace separating each group? Add to this, my file will grow as new entries are added (always appended to the end), but new entries will have the same formatting.
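A possible sed sketch, assuming each entry starts at its unique "Location" line and that entries are separated by a single blank line (the marker text and file name below are hypothetical):
Code:
# delete from the matching Location line through the next blank line,
# which removes the whole entry plus its trailing separator
sed -i '/Location \/svn\/myrepo/,/^$/d' myfile.conf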
View 9 Replies
View Related
Sep 6, 2010
I am creating my own address book program in Python and I want to create a function that removes specified entries. The code looks like this now:
Code:
def remove():
delentry= raw_input('Enter the entry name to delete: ')
[code]...
View 1 Replies
View Related
Dec 16, 2010
Each of these 67 text files contains about 1 million URLs. Yes, I have 67 text files with roughly 1 million lines of URLs each, and I am sure I am swimming in duplicates. I tried opening one text file and clicking Sort -> Remove Duplicates; now Gedit is not responding, my processor is maxed out at 100%, and I think I am finally ready to delve into some command-line code. Can anyone give me idiot-proof instructions on how to sort the duplicates out of each of these 67 text files? How about removing duplicates across all 67?
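A possible sketch using sort (the file names urls*.txt are hypothetical; substitute your own):
Code:
# remove duplicates within each file, rewriting it in place
for f in urls*.txt; do
    sort -u "$f" -o "$f"
done
# remove duplicates across all 67 files into one combined file
sort -u urls*.txt > all_urls_unique.txt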
View 7 Replies
View Related
Jun 21, 2011
I have a CSV file (A.csv) with a total of 4,600,000 lines. That is too many, and only a few are necessary. I have a text file with 150 lines (X.txt); each line is a dataset name from a mainframe and looks like abc.def.123.456. How do I remove lines from A.csv in which none of the dataset names from X.txt is present?
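A possible grep sketch: keep only the lines that mention one of the dataset names (-F treats the names as fixed strings, so the dots are not interpreted as regex wildcards):
Code:
grep -F -f X.txt A.csv > A.filtered.csv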
View 13 Replies
View Related
Apr 14, 2010
I have a big file of random numbers I generated at some point in time, after working with it in different ways (how fun that was)... I want to remove duplicate lines and I'm not sure I'm doing this right.
Here's the command:
Code:
sort random.txt | uniq -u > rand-shorter.txt
The file is pretty big, with everything on its own line. I found the command on a website, so I assume it's correct (I'm a bit of a Linux command-line newbie).
Can anyone confirm whether this will remove duplicate lines (keeping one copy) and dump what is left in a file named rand-shorter.txt?
EDIT: I think it's actually working, just taking a really long time (on an old Pentium 4 from 2000).
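Worth noting (a hedged observation, not a definitive answer): uniq -u prints only the lines that are never repeated, so every line that had a duplicate is dropped entirely rather than kept once. To keep one copy of each line, the usual forms are:
Code:
sort random.txt | uniq > rand-shorter.txt
# or, equivalently
sort -u random.txt > rand-shorter.txt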
View 8 Replies
View Related
Nov 24, 2009
How do you remove parts of strings using Python? For example, if I have something like:
Code:
erme1 sdifskenklsd
erme2 sdfjksliel
[code]....
View 3 Replies
View Related
Oct 26, 2009
I have this massive table file with some data in it, and I want to replace some lines that are wrong with the correct ones from another table file of the same format. The wrong lines are not all together in a block but randomly distributed, so I need a loop that checks whether the line is in the other file and, if it is, replaces it. I want to try to do it with sed or awk, but I don't really know how.
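One possible awk sketch, assuming both files share a key in their first column that identifies which row is which (the key column and the file names are assumptions; adjust to your format):
Code:
# corrections.txt holds the correct lines; table.txt is the big file.
# Lines whose key appears in corrections.txt are replaced with the
# corrected version; all other lines pass through unchanged.
awk 'NR==FNR { fix[$1] = $0; next } ($1 in fix) { print fix[$1]; next } { print }' corrections.txt table.txt > table.fixed.txt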
View 12 Replies
View Related
Mar 14, 2011
I have a large file and need to remove all the lines containing symbol/symbols.
For example: . , ! " # $ % & / ( ) = ? ' + * { } ] [ - _ : ; , > < (maybe more)
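A possible grep sketch, assuming "symbol" means any punctuation character as defined by the [:punct:] class (which covers the characters listed above, and more; the file names are placeholders):
Code:
# keep only the lines that contain no punctuation characters
grep -v '[[:punct:]]' bigfile.txt > bigfile.clean.txt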
View 13 Replies
View Related
Aug 30, 2010
I have a file that looks like this:
1
2 3 4 5 6 7 8 9
10 11 12 13 14 15
16 17 18 19 20 21
22 23 24
1
2 3 4 5 6 7 8 9
10 11 12 13 14 15
16 17 18 19 20 21
22 23 24
1
2 3 4 5 6 7 8 9
10 11 12 13 14 15
16 17 18 19 20 21
22 23 24
...
I would like to reformat it to look like this:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
...
Is there a nifty awk/sed one-liner to do this operation?
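A possible awk sketch, assuming every record starts with a line containing only "1" (as in the sample above); it joins each record onto a single line:
Code:
awk 'NF==1 && $1=="1" { if (rec != "") print rec; rec = $0; next } { rec = rec " " $0 } END { if (rec != "") print rec }' input.txt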
View 6 Replies
View Related
Jan 28, 2009
I have a text file called file1.txt containing many lines, e.g.:
line1
line2
line3
line4
line5
line6
Then I have another text file called file2.txt containing:
3
5
6
Is there a command to remove the lines in file1.txt based on the keywords in file2.txt? Note: it should remove line3, line5, and line6 based on 3, 5, and 6.
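Two possible sketches, since the example fits two readings of file2.txt: as a list of line numbers to delete, or as a list of substrings to match (file names taken from the post):
Code:
# treat file2.txt as a list of line numbers to delete
awk 'NR==FNR { del[$1]; next } !(FNR in del)' file2.txt file1.txt > file1.cleaned.txt
# or treat file2.txt as a list of patterns and delete matching lines
grep -v -f file2.txt file1.txt > file1.cleaned.txt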
View 10 Replies
View Related
Apr 19, 2010
I need to filter the log from a massive wget. I want to remove the progress lines and only leave the last one. Now each progress line starts with a newline '
View 2 Replies
View Related
Jun 4, 2010
I have a data set that takes the form...
0.0 43
12572.9102 80.8521 263.3575 0.0200 12.6358 -86.4942
4.3870e-06 -0.3547
[code]....
View 1 Replies
View Related
Jul 20, 2011
I have model output data in ASCII format containing thousands of lines; the file holds multiple text lines with variable values. Here I copy and paste some of its contents:
Code:
73438 170 23:53:20 3.481328E-03 1.824611E+04 1.824612E+04 1.333962E+16
73439 170 23:56:40 3.481210E-03 1.824611E+04 1.824612E+04 1.333962E+16
73440 171 00:00:00 3.481093E-03 1.824611E+04 1.824612E+04 1.333962E+16
[code]....
I want to remove the lines starting with WRT and DEF.
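A possible sketch with sed or grep (the file name output.txt is a placeholder):
Code:
# delete every line that starts with WRT or DEF, in place
sed -i -e '/^WRT/d' -e '/^DEF/d' output.txt
# or write the filtered output to a new file
grep -vE '^(WRT|DEF)' output.txt > output.filtered.txt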
View 7 Replies
View Related
Oct 11, 2010
I would like to know how to remove X lines from output. I have a test file and I want the output without the first 2 lines.
[root@node1 ~]# cat test
1
2
3
[code]....
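A couple of possible one-liners for dropping the first two lines (either works on a file or on piped output):
Code:
tail -n +3 test
# or
sed '1,2d' test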
View 5 Replies
View Related
May 8, 2010
I have a file that contains lines representing the nodes of a polyline, but I only need the first point in each segment. Given the following text:
0,"013A",0.57,260739.891,4379258.87
0,"013A",0.57,260737.674,4379258.94
0,"013A",0.57,260684.628,4379258.35
1,"013A",0.545,260769.915,4379257.84
1,"013A",0.545,260739.891,4379258.87
[Code]....
The problem with uniq is that the last two columns will differ. I don't care about the x/y coordinates for any points following the first one.
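A possible awk sketch, assuming each segment is identified by the first two comma-separated fields (the id and the quoted name, as in the sample; the file names are placeholders):
Code:
# print only the first line seen for each id,"name" pair
awk -F, '!seen[$1 FS $2]++' points.csv > first_points.csv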
View 4 Replies
View Related
May 12, 2010
I have a txt file with a couple of comment lines:
Number of title = !num!
#line1
#line2
#line3
I wrote a script with sed to replace !num! in this file, which is very straightforward. However, I also want to remove a number of the "#" lines based on the !num! value. Is there an easy way to do that with sed? Otherwise, I will have to write a script to loop through the file.
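A possible sketch, assuming the intent is to substitute the value for !num! and then delete that many of the leading "#" comment lines (the value of num and the file name are hypothetical):
Code:
num=2
sed -i "s/!num!/$num/" notes.txt
# drop the first $num lines that start with '#'
awk -v n="$num" '/^#/ && c < n { c++; next } { print }' notes.txt > notes.tmp && mv notes.tmp notes.txt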
View 8 Replies
View Related
Dec 26, 2010
I have a text file in which each line contains either a domain or an IP, like this:
Code:
[me@server ~]# cat file1
122.foo.com
yahoo.com
23345229.com
[code]....
I want to remove all IPs in that file and keep the rest, so the result would be like:
Code:
[me@server ~]# cat file2
122.foo.com
yahoo.com
23345229.com
[code]....
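A possible grep sketch, assuming an "IP" here means a line that is nothing but four dot-separated number groups (IPv4):
Code:
grep -vE '^([0-9]{1,3}\.){3}[0-9]{1,3}$' file1 > file2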
View 6 Replies
View Related
Jan 23, 2010
I want to be able to remove the first character of a line when I highlight multiple lines in gedit. Example:
%Example is
%Commented Code
%Uncomment using this shortcut
I would then highlight/select these lines, and remove the first character to make it look like this:
Example is
Commented Code
Uncomment using this shortcut
I'm pretty sure there is an actual shortcut for this. If there is another Linux text editor in which this works, it would be nice to know how to do it in that editor as well.
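If a command-line fallback outside gedit is acceptable, a possible sed sketch (the leading % comes from the example above; the file name is a placeholder):
Code:
# strip a leading % from every line of the file
sed -i 's/^%//' script.m
# or strip the first character, whatever it is
sed -i 's/^.//' script.m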
View 2 Replies
View Related
Nov 10, 2010
I have a CSV file created by an application that can't output lines longer than 250 characters, but the data fields, all together, are longer than this. How would I remove the line break from every line that ends with a comma? For example:
A,B,C
D,E,
F
G,H,I
becomes:
A,B,C
D,E,F
G,H,I
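A possible awk sketch: lines ending with a comma are printed without their newline, so the following line is joined onto them (file names are placeholders):
Code:
awk '/,$/ { printf "%s", $0; next } { print }' broken.csv > fixed.csv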
View 1 Replies
View Related
Oct 4, 2010
I'm looking for a script (bash, Python, Perl, etc.) or even a one-liner (sed, awk, etc.) that can take a set of files and remove any line that has more than "x" instances of any character (case sensitive). I have been doing a lot of searching and can only come up with examples of how to remove blank lines, lines that start with a certain character, or lines that contain a certain string. This will be used on a system running a Kubuntu derivative.
As a very poor and basic example, I would like to take files that contain lines like:
Code:
And end up with the files only containing the lines:
Code:
That is, assuming I tell the script that 2 is the maximum number of times any character can appear in any line.
I know this must be possible, but for the life of me I cannot find even an example that will lead me in the right direction or better yet a piece of code I can use.
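A possible gawk sketch (splitting a line into characters with an empty separator is a gawk extension; max and the file names are placeholders):
Code:
gawk -v max=2 '{
    delete cnt
    keep = 1
    n = split($0, ch, "")
    # count each character; reject the line once any count exceeds max
    for (i = 1; i <= n; i++)
        if (++cnt[ch[i]] > max) { keep = 0; break }
    if (keep) print
}' input.txt > output.txt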
View 15 Replies
View Related
Sep 17, 2009
I'm looking for a way to insert the number of lines in a file at the start of that same file. This should be simple, but as I am not used to scripting in Linux I am finding it tough going. I can find the number of lines in a file easily enough via:
filesize=$(awk 'END {print NR}' $1)
but I am failing to insert this as the first line. I've tried some of the other approaches on these forums, but none have worked so far.
I've tried:
sed '1i$filesize' $1
but with single quotes sed's i command sees the literal text $filesize rather than the variable's value, so that's no go. I've also tried:
mv "$1" "${1}.bak" 2>/dev/null || touch "${1}.bak"
cat $filesize "${1}.bak" >"$1"
but again with no luck, as cat treats $filesize as a file name rather than as text. Just to recap, I want to insert, at the start of a given file, a line that holds the number of lines the original file has.
i.e., the file:
a
b
c
d
e
should become:
5
a
b
c
[code].....
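A possible sketch for a script that takes the file as its first argument (grouping echo and cat keeps the original content after the count):
Code:
#!/bin/bash
lines=$(wc -l < "$1")
{ echo "$lines"; cat "$1"; } > "${1}.tmp" && mv "${1}.tmp" "$1"
# the sed attempt above also works with GNU sed once double quotes
# let the variable expand:
# sed -i "1i $lines" "$1"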
View 3 Replies
View Related
May 12, 2010
I am using RHEL 5. I have a very large test file which cannot be opened in vi. The file has some 8000 lines. I need to view the lines between 5680 and 5690. How can I view these particular lines in a large file? What command and options do I need to use?
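A few possible one-liners that print just that range without opening the whole file in an editor (the file name is a placeholder):
Code:
sed -n '5680,5690p' bigfile.txt
# or
awk 'NR>=5680 && NR<=5690' bigfile.txt
# or
head -n 5690 bigfile.txt | tail -n 11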
View 1 Replies
View Related
Jan 6, 2011
I have a large text file containing over 180k lines and another text file containing about 1k lines. I would like to remove the lines in the 180k-line file that exist in the 1k-line file.
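A possible grep sketch (-F for fixed strings, -x for whole-line matches; the file names big.txt and small.txt are placeholders):
Code:
grep -v -x -F -f small.txt big.txt > big.filtered.txt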
View 5 Replies
View Related
Apr 17, 2009
I would like to modify the content of a text file in Linux in the following way.
The file has several lines like this:
./run_pest3 ./g134366.04080_0.062 x 2_d043 1 0.43 results_EC
I want to modify all such lines to be:
./run_pest3 ./g134366.04080_0.062 x 2_d043 1 0.43 results_EC0.062
i.e., the last number of $2 should be "attached" to the end of $7, for each line.
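A possible awk sketch, assuming the piece to append is whatever follows the last underscore in field 2 (which matches the example, where _0.062 yields 0.062; the file names are placeholders):
Code:
awk '{ n = split($2, parts, "_"); $7 = $7 parts[n]; print }' input.txt > output.txt
# note: awk rewrites each line with single spaces between fields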
View 5 Replies
View Related
Aug 5, 2010
I have a file similar to this:
I need to print the last line for each user into a file. The resulting file should look like this:
Is there a way that awk can match lines on $1 and then print the last line for each into a file?
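A possible awk sketch, assuming the user is in field 1 (the sample file and the desired output did not survive in the post above; the file names are placeholders):
Code:
# remember the most recent line for each value of $1, then print them all
awk '{ last[$1] = $0 } END { for (u in last) print last[u] }' input.txt > lastlines.txt
# note: END-loop output order is arbitrary; pipe through sort if order matters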
View 7 Replies
View Related
Feb 22, 2010
I'm trying to output two particular fields of a very large text file to two very small text files, then add up all the lines in each of those files to come up with a total from each (two totals).
Here's what I've got:
Code:
#!/bin/bash
echo "0" > /media/in.txt
echo "0" > /media/out.txt
[code]....
Breakdown: put 0 in a text file to be read by the respective while loops for math later
Output last 60 integers to a file for total A (new integer every minute)
Output last 60 integers to a file for total B (new integer every minute)
The two while loops are supposed to be adding the lines together. The echo commands at the end are for testing purposes, just to see the output. However, when I run this, I get the output of
Code:
0 0
This is obviously not what it's supposed to be. Is there a more efficient way to do this, or am I missing something in the script that is resetting the values to "0"?
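One likely culprit (a hedged guess, since the full script is truncated above): piping into a while read loop runs the loop in a subshell, so any total accumulated inside it is lost when the loop ends. Reading the file by redirection keeps the variable in the current shell; a minimal sketch using the in.txt file from the script:
Code:
total=0
while read -r n; do
    total=$((total + n))
done < /media/in.txt
echo "$total"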
View 2 Replies
View Related