General :: Remove X Lines From Output?
Oct 11, 2010
I would like to know how to remove X lines from output. I have a test file and I want the output without the first 2 lines:
[root@node1 ~]# cat test
1
2
3
[code]....
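If GNU coreutils or sed is available, either of the following should print everything except the first two lines (a sketch, using the "test" file from the example):
Code:
tail -n +3 test     # start printing at line 3
sed '1,2d' test     # or: delete lines 1 and 2, print the rest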
How do you remove parts of strings using Python? For example, if I have something like:
Code:
erme1 sdifskenklsd
erme2 sdfjksliel
[code]....
I have a large file and need to remove all the lines containing symbol/symbols.
For example: . , ! " # $ % & / ( ) = ? ' + * { } ] [ - _ : ; , > < (maybe more)
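If the list really amounts to "any punctuation or symbol character", grep's POSIX character class can cover it in one go (a sketch; the file name is a placeholder):
Code:
# delete every line that contains at least one punctuation/symbol character
grep -v '[[:punct:]]' bigfile > bigfile.clean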
I have a file that looks like this:
1
2 3 4 5 6 7 8 9
10 11 12 13 14 15
16 17 18 19 20 21
22 23 24
1
2 3 4 5 6 7 8 9
10 11 12 13 14 15
16 17 18 19 20 21
22 23 24
1
2 3 4 5 6 7 8 9
10 11 12 13 14 15
16 17 18 19 20 21
22 23 24
...
I would like to reformat it to look like this:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
...
Is there a nifty awk/sed one-liner to do this operation?
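Assuming every record spans exactly five physical lines, as in the sample above, either of these one-liners should reflow it ("datafile" is a placeholder name):
Code:
# join every group of five consecutive lines into one line
paste -d' ' - - - - - < datafile
# equivalent awk one-liner
awk '{ printf "%s%s", $0, (NR % 5 ? " " : "\n") }' datafile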
Consider a situation in which you want to display only specific lines of a file or of a command's output. Yes, we have the head and tail commands. But how can we view all the lines of a file except the last one, or vice versa, when we don't know the count of lines in advance?
Consider this output:
Code:
[root@localhost ~]# ps au | grep bash
root 6316 0.0 0.0 4672 1440 tty1 Ss+ Apr22 0:02 -bash
root 20847 0.2 0.0 4672 1432 pts/0 Ss Apr23 0:12 -bash
root 21167 0.0 0.0 3920 660 pts/0 S+ 01:00 0:00 grep bash
Here, I don't want the last line (the "grep bash" process itself) to be included in the result, since that line only appears because of the "grep bash" in the devised command "ps au | grep bash". Well, we can rewrite the devised command:
Quote:
"ps au | grep bash | head -n 2"
But again, here we are specifying the count of lines to be included, and in the presented problem we don't know the count in advance!
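With GNU head, a negative count drops lines from the end without knowing the total, and sed can do the same portably. A sketch:
Code:
ps au | grep bash | head -n -1   # GNU head: print all but the last line
ps au | grep bash | sed '$d'     # portable alternative: delete the last line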
I have some big log files that contain errors printed by an app. They are relevant most of the time, but many of them are similar. So I figured I could check what happened within a time interval with a find.
I'm using this one:
Code:
And I get an output similar to this one.
Code:
Is there a way to condense the output lines down to only one or two, indicating the first and last occurrence of a block? Or do I need to write a program to do so?
Right now I get thousands of similar lines, and when I'm scrolling through them I sometimes miss relevant information that I would otherwise have noticed if it weren't all that spammy.
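One possible awk sketch (the log format isn't shown, so the timestamp handling below is an assumption): strip the leading timestamp, use the rest of the line as a key, and print only the first and last line of each consecutive block:
Code:
awk '{
    key = $0
    sub(/^[^ ]+ [^ ]+ [^ ]+ /, "", key)           # assumption: the first three fields are a timestamp
    if (key != prev) {
        if (NR > 1 && last != first) print last   # last line of the previous block
        print                                     # first line of the new block
        first = $0
        prev = key
    }
    last = $0
}
END { if (NR > 0 && last != first) print last }' errors.log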
I need to filter the log from a massive wget. I want to remove the progress lines and only leave the last one. Now each progress line starts with a newline '
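The wget log itself isn't shown, but progress lines usually look something like "  1450K .......... ..........  37% 2.1M 9s", so a pattern along these lines might work; re-appending the final progress line is only a crude way to keep the last one:
Code:
# drop lines that look like wget progress updates (the pattern is an assumption)
grep -Ev '^[[:space:]]*[0-9]+K[[:space:]]+\.' wget.log > wget-filtered.log
# if the very last progress line should be kept, append it back at the end
grep -E '^[[:space:]]*[0-9]+K[[:space:]]+\.' wget.log | tail -n 1 >> wget-filtered.log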
I have a data set that takes the form...
0.0 43
12572.9102 80.8521 263.3575 0.0200 12.6358 -86.4942
4.3870e-06 -0.3547
[code]....
I have model output data in ASCII format. It contains thousands of lines. The output file contains multiple text lines with variable values. Here I copy and paste some of its contents:
Code:
73438 170 23:53:20 3.481328E-03 1.824611E+04 1.824612E+04 1.333962E+16
73439 170 23:56:40 3.481210E-03 1.824611E+04 1.824612E+04 1.333962E+16
73440 171 00:00:00 3.481093E-03 1.824611E+04 1.824612E+04 1.333962E+16
[code]....
I want to remove the lines starting with WRT or DEF.
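A sketch with sed or grep, assuming WRT and DEF appear at the very start of the lines to drop (the file names are placeholders):
Code:
sed -i '/^WRT/d; /^DEF/d' model_output.txt
# or, writing to a new file instead of editing in place:
grep -Ev '^(WRT|DEF)' model_output.txt > model_output.clean.txt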
I have a file that contains lines representing the nodes of a polyline but I only need the first point in each segment. With the following text:
0,"013A",0.57,260739.891,4379258.87
0,"013A",0.57,260737.674,4379258.94
0,"013A",0.57,260684.628,4379258.35
1,"013A",0.545,260769.915,4379257.84
1,"013A",0.545,260739.891,4379258.87
[Code]....
The problem with uniq is that the last two columns will differ. I don't care about the x/y values for any points following the first one.
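One way, assuming the first two comma-separated fields (the segment number and the "013A"-style id) identify a segment: let awk keep only the first line it sees for each such pair (file names are placeholders):
Code:
# keep the first node of each segment, keyed on fields 1 and 2
awk -F, '!seen[$1 FS $2]++' nodes.csv > first_points.csv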
I need to count the number of lines and put the output into a variable. I used wc -l filename, but I couldn't find an option to put the output into a variable. For example, if the number of lines is 5, I need echo $x to print 5.
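Command substitution does this; redirecting the file into wc keeps the filename out of the output, so the variable holds only the number. A sketch:
Code:
x=$(wc -l < filename)   # line count only, no filename in the output
echo "$x"               # prints 5 for a five-line file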
I have a txt file with a couple of comment lines:
Number of title = !num!
#line1
#line2
#line3
I wrote a script with "sed" to replace !num! in this file, which is very straightforward. However, I also want to remove a number of the "#" lines based on the !num! value. Is there an easy way to do that with "sed"? Otherwise, I will have to write a script to loop through the file.
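A possible sketch: substitute the value first, then let awk drop that many '#' lines. Which '#' lines to remove is an assumption here (the first $num of them), and num=2 is just an illustrative value:
Code:
num=2                                            # illustrative value only
sed -i "s/!num!/$num/" file.txt                  # fill in the number of titles
# drop the first $num comment lines that start with '#'
awk -v n="$num" '/^#/ && c < n { c++; next } { print }' file.txt > tmp && mv tmp file.txt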
When I want to remove, in one pass, all lines containing a specific word from an entire document, I am using the following command:
awk '$columnno !~/specificword/' inputfile > outputfile
But here the column number is my problem, because the word can appear in different columns. So I need a solution for that.
How do I write such a removal command without mentioning the column number, i.e. irrespective of the column, it has to remove all lines containing that specific word?
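Dropping the field reference makes awk (or grep) test the whole line instead of one column. A sketch:
Code:
awk '!/specificword/' inputfile > outputfile
# or, equivalently:
grep -v 'specificword' inputfile > outputfile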
I need to create a single line of output from multiple and variable lines of input in a Linux bash shell script.
My input file looks like this:
There may be any number of umsecondaryphonenumber lines; if there is no umsecondaryphonenumber line for a telephonenumber, I don't want to write any output.
So, the output file should look like:
The script I have so far is:
My question is: how do I print each of the elements of an array in one record, i.e. what do I put in place of howdoiprintarray?
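Since the input file and script aren't shown here, this is only a sketch of the usual awk idiom: it assumes "key: value" lines, collects the secondary numbers into an array, and prints them all in one record with a printf loop (only the field names come from the question; everything else is an assumption):
Code:
awk -F': ' '
function flush() {
    if (n > 0) {                                   # write nothing if there were no secondary numbers
        printf "%s", tel
        for (i = 1; i <= n; i++) printf " %s", arr[i]
        printf "\n"
    }
    n = 0
}
$1 == "telephonenumber"        { flush(); tel = $2 }
$1 == "umsecondaryphonenumber" { arr[++n] = $2 }
END { flush() }
' input.txt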
I have a text file in which each line contains either a domain or an IP, like this:
Code:
[me@server ~]# cat file1
122.foo.com
yahoo.com
23345229.com
[code]....
I want to remove all IPs in that file and keep the rest, so the result would be like:
Code:
[me@server ~]# cat file2
122.foo.com
yahoo.com
23345229.com
[code]....
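A rough sketch: treat any line that is nothing but four dot-separated number groups as an IP and drop it (this simple pattern doesn't validate the 0-255 range):
Code:
grep -Ev '^([0-9]{1,3}\.){3}[0-9]{1,3}$' file1 > file2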
Are there any commands or scripts to remove only a selected line from the history file?
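In bash, the history builtin can delete a single entry by its offset for the current session, and the result can be written back to ~/.bash_history. A sketch (1234 is an example offset):
Code:
history            # find the offset of the entry to remove
history -d 1234    # delete entry 1234 from the current session's history
history -w         # write the trimmed history back to ~/.bash_history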
Does anyone have ideas on how to remove lone lines from a text file?
If I have a file that is like this:
-----------------------------------
line 1
[code]...
I want to be able to remove the first character of a line when I highlight multiple lines in gedit. Example:
%Example is
%Commented Code
%Uncomment using this shortcut
I would then highlight/select these lines, and remove the first character to make it look like this:
Example is
Commented Code
Uncomment using this shortcut
I'm pretty sure there is an actual shortcut for this. If there is another text editor on Linux that it would work in, it would be nice to know how to do it in that editor as well.
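If a command-line alternative is acceptable, sed can strip a leading '%' from every line of a file; this is only a sketch, the filename is a placeholder, and it assumes the character to remove is always '%':
Code:
# strip a single leading '%' from every line of the file
sed -i 's/^%//' script.m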
I have a CSV file that's created by an application that can't output lines longer than 250 characters. The data fields, all together, are longer than this. How would I remove the line break from every line that ends with a comma? For example:
A,B,C
D,E,
F
G,H,I
becomes:
A,B,C
D,E,F
G,H,I
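A short awk sketch: print lines that end in a comma without their newline, so the following line gets appended to them (file names are placeholders):
Code:
awk '{ if (/,$/) printf "%s", $0; else print }' broken.csv > fixed.csv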
I'm trying to remove lines from a syslog text file that contain duplicate strings:
Mar 10 06:51:11[http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360]
then a few lines down
Mar 10 06:52:03 [http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360
It has the same u: number, but the issue is I need to remove the duplicates and just leave one, and the file has multiple duplicates of different u: numbers and is 14,000 lines long. Can anyone tell me if I can use awk, sed, or sort for something like this, to remove lines that contain a duplicate of a certain string?
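awk can do this in one pass: pull out the u:<id>|<id> token (assumed here to be the duplication key), remember the ones already seen, and keep only the first line for each (file names are placeholders):
Code:
awk '{
    if (match($0, /u:[0-9]+[|][0-9]+/)) {
        key = substr($0, RSTART, RLENGTH)
        if (seen[key]++) next        # skip later lines with the same u: number
    }
    print
}' syslog.txt > syslog.deduped.txt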
I have a WD external disk with an NTFS file system. I mounted it on my Red Hat box. While on the external disk, I deleted a directory, which was sent to the .Trash-root of that disk. I went to .Trash-root and did rm -rf to completely delete that directory, but I got the following error: cannot remove `<directory>': Input/output error. When I do ls -la on the directory that I wish to remove, it shows me it has 0 files inside. But not only can I not remove it, I can't do anything else with it (copy, etc.). And I have all the rights on that directory, so that isn't the problem.
How do I remove unwanted output that comes from executing the system() API?
For example, when I execute system("telnet 127.0.0.1") I want the output to start with login, then password, and then the command prompt directly. How can I remove the output that gets generated before the prompt is shown?
A 10.10 headless server, with CUPS queuing to an HP Deskjet via socket correctly (?), but regardless of file type the output is only the top three lines (NOT a header or date or the like).
In Windows's CMD, when you execute a command and then start typing the next one (while the former is still executing), the characters remain in the buffer and all come up nicely on a new line once the previous command has finished. In Ubuntu, when I do this, the newly typed characters annoyingly end up at the beginning of the previous command's output lines. I don't really understand why the default behavior isn't the same as in Windows's CMD. Otherwise almost _everything_ about CMD is worse compared to Unix/Linux shells/terminals (commands are longer, the syntax is annoying, etc.), so I'd like to know how to get this behavior in both Bash and Zsh.
View 1 Replies View RelatedHow can I remove all lines which contain A,,,,,, I tried the following sed statements but no luck.
Code:
sed "/A,,,,,,/d file"
sed "/A,,,,,,/d file"
I'm using Ubuntu 10.04.1. How do I remove these lines in Nautilus?
I am using 'sed -e /foo/d' to match lines which I want to delete from a file. I discovered I have some lines which contain random (extended?) characters like 'ủ', which I would also like to delete. The lines in the file should only contain alphanumeric characters.
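One hedged approach: with the C locale, a bracket expression for the printable ASCII range catches any extended character, so sed can delete every line containing one ("file" is a placeholder):
Code:
# delete lines containing any byte outside printable ASCII (space through tilde);
# note this also drops lines containing tabs -- extend the class if that matters
LC_ALL=C sed -i '/[^ -~]/d' file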
I'm trying to search through some PDF files by converting them to text with pdftotext, which is fine, but I want to count the occurrences of different words per paragraph, and pdftotext adds a newline character at what it thinks is the right-hand margin. I'm trying to remove all these single newline characters but keep the doubles, and I can't seem to work it out. I.e.
This is some text that has been broken.
Another paragraph.
becomes
This is some text that has been broken.
Another paragraph
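awk's paragraph mode (RS set to the empty string) treats blank-line-separated blocks as records, so single newlines inside a paragraph can be replaced with spaces while the blank lines between paragraphs survive. A sketch (the filename is a placeholder):
Code:
awk 'BEGIN { RS = ""; ORS = "\n\n" } { gsub(/\n/, " "); print }' extracted.txt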
I'm trying to come up with ideas for a simple way to strip a specific "entry" from a text file. I know tools like sed and perl can remove specific lines from a file, but I haven't been able to come up with an elegant way to handle my group of lines. In my file, the first "Location" line and the "SVNPath" line should be unique every time... but are they enough to strip out the whole group plus the trailing line of whitespace separating each group? Add to this, my file will grow as new entries are added (always appended to the end), but new entries will have the same formatting.
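If the entries really are blank-line-separated blocks, the same paragraph-mode trick works here too: read one whole entry per record and print only the blocks whose SVNPath doesn't match the one to strip. The path and file names below are purely hypothetical:
Code:
# drop the whole block whose SVNPath matches; "/var/svn/oldrepo" is a made-up example
awk 'BEGIN { RS = ""; ORS = "\n\n" } $0 !~ /SVNPath \/var\/svn\/oldrepo/' config.txt > config.new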
I am creating my own address book Python program and I want to create a function that removes some specified entries. The code looks like this now:
Code:
def remove():
delentry = raw_input('Enter the entry name to delete: ')
[code]...