General :: Sed - Replacing Only Text With Several Specific Lines Excluded
Jun 17, 2010
As much as I didn't want to ask a sed question, especially considering there's already one on this page, I've looked as best I could and can't find the solution. I'd like to use sed to replace occurrences of a pattern but exclude two or three specific lines that are not consecutive. For example, I know that with 1,10 I could exclude the first 10 lines, but what is the syntax if I just want to exclude lines 3 and 7? The sed command I'm working with right now is for rearranging Ethernet interfaces.
sed -e '/'"$found1fullmac"'/!s/eth1/'"$found1eth"'/' /etc/udev/rules.d/70-persistent-net.rules > /tmp/70-net.rules.new && mv /tmp/70-net.rules.new /etc/udev/rules.d/70-persistent-net.rules
I would like to replace $found1fullmac with two variables representing line numbers to exclude from the replacement.
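For the line-number part, sed's b (branch) command can skip the substitution on chosen lines: with no label, b jumps past the rest of the script for that line. A minimal sketch, assuming the two line numbers live in variables and reusing $found1eth from the command above (separate -e expressions keep the branches portable across sed implementations):
Code:
skip1=3
skip2=7
# Branch past the substitution on lines $skip1 and $skip2;
# every other line gets the eth1 replacement.
sed -e "${skip1}b" -e "${skip2}b" -e "s/eth1/${found1eth}/" \
    /etc/udev/rules.d/70-persistent-net.rules > /tmp/70-net.rules.new &&
    mv /tmp/70-net.rules.new /etc/udev/rules.d/70-persistent-net.rules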
I have a string like "A.words=Ajay,Anil" in file A, and the file contains a lot of other information as well. I want to replace "Ajay,Anil" with "Vijay,Vinay" using a sed command that edits the existing file in place (not writing to another file).
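A minimal sketch using GNU sed's -i (in-place) option; the dot in "A.words" is escaped so it can't match an arbitrary character:
Code:
# Edit file A in place; use -i.bak instead to keep a backup copy.
sed -i 's/A\.words=Ajay,Anil/A.words=Vijay,Vinay/' A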
Consider a situation in which you want to display only specific lines of a file's contents or of a command's output. Yes, we have the head and tail commands. But how do we view all the lines of a file except the last one, or vice versa, when we don't know the count of lines in advance?
Here, I don't want the last line to be included in the result, since that line comes from the "grep bash" in the devised command "ps au | grep bash". Well, we can rewrite the devised command:
Quote:
"ps au | grep bash | head -n 2"
But again, here we are specifying the count of lines to be included, and in the presented problem we don't know any count in advance!
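A sketch of two count-free ways out: sed's $ address means "last line", so deleting it drops the trailing line no matter how long the output is; GNU head also accepts a negative count.
Code:
# Print everything except the last line, no count needed:
ps au | grep bash | sed '$d'
# Equivalent with GNU coreutils head:
ps au | grep bash | head -n -1
# Conversely, only the last line:
ps au | grep bash | tail -n 1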
I used a script that renamed my files, e.g. 'echo webutil.olb | tr [A-Z] [a-z]'. I want to rename a file back to webutil.olb. How do I do this for the many other files that I have?
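A sketch of the reverse rename, assuming the script lowercased names that were entirely uppercase before (mixed case cannot be recovered automatically); the character classes are quoted so the shell cannot glob-expand them against files in the directory:
Code:
# Rename every .olb file back to an uppercase name.
for f in *.olb; do
    mv -- "$f" "$(printf '%s' "$f" | tr '[a-z]' '[A-Z]')"
done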
I would like to replace 'xxxx' with 'yyyy' in a file xyz.csproj. I'm not sure what 'xxxx' is; it can be 3055, 4056, 7089, etc. I know it always appears at line 5, character 50.
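Since the position is fixed, a sed sketch can key on line 5 and column 50, assuming the unknown value is always four characters wide:
Code:
# On line 5 only: capture the first 49 characters, match the next
# four (the unknown value), and substitute yyyy in their place.
sed '5s/^\(.\{49\}\)..../\1yyyy/' xyz.csproj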
A function named abc is called in many files. I want to copy all the lines with the function call to an output file. A simple grep on the function name doesn't help me, as the function call spans multiple lines, as follows:
abc(parameter1,
    parameter2,
    parameter3);
So I want to copy all three lines (up to the semicolon) to the output file. The problem is that there are more than 200 calls to the same function, and I cannot do it manually.
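A sketch in awk, assuming the first semicolon after "abc(" terminates the call; the glob and output file name are placeholders:
Code:
# Start grabbing at a line containing "abc(", print until the line
# carrying the terminating semicolon, then stop grabbing.
awk '/abc\(/ {grab=1} grab {print} grab && /;/ {grab=0}' *.c > calls.txt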
Is there a way, besides writing a Perl program, to read file A line by line and tell whether each line also exists in file B? Can this be done via a shell script?
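Yes; grep can take its patterns from a file, which makes this a one-liner. A sketch, treating every line of B as a fixed, whole-line string:
Code:
# Lines of fileA that also appear verbatim in fileB:
grep -Fxf fileB fileA
# Lines of fileA that do NOT appear in fileB:
grep -Fxvf fileB fileA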
Is there any way to delete certain paragraphs within a text file and then insert them into another text file? I just cannot figure out how to remove the specific lines from the one file and then insert them into another file at a certain line within that new file. Thanks again.
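A sketch with sed, assuming the paragraph sits on lines 10-15 of the source and should land after line 4 of the target (all numbers and file names are placeholders; -i is GNU sed's in-place edit):
Code:
sed -n '10,15p' source.txt > paragraph.tmp   # copy the paragraph out
sed -i '10,15d' source.txt                   # delete it from the source
sed -i '4r paragraph.tmp' target.txt         # read it in after line 4
rm paragraph.tmp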
I am working on text-processing tasks, and I found that if you assign a file's text to a variable, the trailing newline is chomped automatically:
Code:
variable=$(cat file.txt)
The problem is that I can only access the items/lines using:
Code:
for line in $variable
do
    echo $line
    # Other commands
done
How do I convert this to an indexed array? More importantly, how do I get access to the individual elements $line[0], ..., $line[n]? Another thing: if file.txt has lines with spaces, it is a mess using the for...in loop, yet echoing prints line by line... o_0
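A sketch using the mapfile builtin from bash 4, which loads one line per array element and keeps internal spaces intact:
Code:
mapfile -t lines < file.txt   # -t strips each trailing newline

echo "${lines[0]}"            # first line
echo "${lines[2]}"            # third line
echo "${#lines[@]}"           # number of lines
# Quoted expansion keeps lines with spaces as single items:
for line in "${lines[@]}"; do
    printf '%s\n' "$line"
done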
I need to insert 3-4 lines of text at the beginning of a text file. The file is a largish MySQL dump, the result of a backup shell script. This shell script should insert the required text. I've wrestled with sed, but lost.
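A sketch that sidesteps sed entirely: write the header and then the dump into a temporary file, and move it over the original only on success (the header text and file names are placeholders):
Code:
{
    printf '%s\n' \
        '-- header line one' \
        '-- header line two' \
        '-- header line three'
    cat dump.sql
} > dump.sql.new && mv dump.sql.new dump.sql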
I have a list of words that I want to grep for in many files to see which ones have them and which ones don't. In the text file I have all the words listed line by line, e.g. list.txt:
check
try this
word1
word2
open space
list
..
I want to grep for each line, one by one. Like, I want it to do:
grep "check" *.log
grep "try this" *.log
grep "word1" *.log
... etc. How can I do this?
I want to scroll back 10000+ lines in a text-mode Linux terminal. There is an unlimited-scrollback option in gnome-terminal, so I guess this might also be possible in text mode?
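The console's built-in Shift+PageUp buffer is small and not configurable to that depth; the usual workaround is to run the session inside GNU screen (or tmux) and enlarge its scrollback. A sketch for ~/.screenrc:
Code:
# ~/.screenrc: keep 10000 lines of scrollback per window.
# Enter copy mode with Ctrl-a Esc, then PgUp/PgDn to scroll.
defscrollback 10000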
I am facing a problem while splitting a text file. I need to split the file into parts of 2000 lines each. When I do it with the "split" command, the mother file is kept intact, but my requirement is to cut the mother file into parts, so it should not be kept intact.
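split has no "consume the input" mode, but chaining the delete onto a successful split gives the same effect. A sketch with placeholder names:
Code:
# Cut big.txt into 2000-line pieces (part_aa, part_ab, ...) and
# remove the mother file only if split succeeded.
split -l 2000 big.txt part_ && rm big.txt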
Trying to remove lines from a syslog text file that have duplicate strings
Mar 10 06:51:11[http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360]
then a few lines down
Mar 10 06:52:03 [http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360
We've got the same thing in terms of the u: number, but the issue is that I need to remove the duplicates and leave just one. The file has multiple duplicates of different u: numbers and is 14,000 lines long. Can anyone tell me whether I can use awk, sed, or sort for something like this, i.e. removing lines that contain a string that is a duplicate?
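awk fits well here. A sketch that keeps the first line seen for each u:<number>|<number> token and drops every later line carrying the same token (file names are placeholders; lines without any token pass through untouched):
Code:
awk 'match($0, /u:[0-9]+[|][0-9]+/) {
         key = substr($0, RSTART, RLENGTH)
         if (seen[key]++) next      # duplicate token: skip this line
     }
     { print }' syslog.txt > deduped.txt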
I am a member of a group which has written a program whose source code is held in a specific directory (~cs252/Assignments/basicAsst/project), and we want to go through and change the parameters of the function "sequentialInsert." My job is to find all occurrences of calls to "sequentialInsert" and also to list the files the code came from. Also, I have to be in the commandsAsst directory when I do this. I have tried grep and find combined together, and I am at a loss.
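A sketch: grep -rn prints the file name and line number for every match, and because the directory is given as an argument it can be run from commandsAsst or anywhere else.
Code:
grep -rn 'sequentialInsert' ~cs252/Assignments/basicAsst/project
# Equivalent with find, for greps that lack -r; the /dev/null
# argument forces grep to print file names even for a single file:
find ~cs252/Assignments/basicAsst/project -type f \
    -exec grep -n 'sequentialInsert' /dev/null {} +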
I am attempting to use the zip command with the '-x' option to exclude a folder, e.g. 'zip upload.zip public_html -x public_html/jquery/*'. However, parts of this folder are still being added to the archive. I made a shell script (saved as 'compress.sh' and run as '. compress.sh') to do the archiving so I could test adding nested wildcards for multiple subfolder levels.
Code:
#!/bin/bash
rm -f upload.zip
zip -r upload.zip public_html -x public_html/jquery
[code]....
Each new line I added here with nested wildcards made the archive a bit smaller; adding more /*'s than that didn't affect the file size. Even after all this, though, there were still a couple of megabytes of files and folders from the 'jquery' directory added to the archive.
Here are some examples of files and folders that appeared after I unzipped the archive:
public_html/jquery/js/tablesorter/addons/pager/icons [folder]
public_html/jquery/js/tablesorter/addons/pager/.svn/entries [file]
public_html/jquery/js/tablesorter/build/.svn/text-base/js.jar.svn-base [file]
Why is it that despite all the -x lines, the files and folders like these were still being added to the archive? How can I simply recursively exclude the entire public_html/jquery folder from the archive?
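The likely culprit is quoting: left unquoted, the shell expands public_html/jquery/* itself (one directory level only) before zip ever sees it. Quoting the pattern so zip receives the wildcard, and ending it with /*, excludes the whole tree, since zip's own * crosses directory boundaries. A sketch:
Code:
#!/bin/bash
rm -f upload.zip
# The quotes stop shell globbing; zip's wildcard spans directory
# levels, so one pattern excludes everything under jquery/.
zip -r upload.zip public_html -x 'public_html/jquery/*'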
To save on writing WAY too many files with very little in them, I want to put it all in one file and read a specific few lines. Six variables will be read at a time. The format is as such:
//Set 1
string name
5 12
[code]....
From the name to the 5th number is one set. The name will be of a different length in each set. This will be a big file of probably 40+ sets. My problem lies in reading one and only one set, be it set 5 or set 34. It needs to be done in C++.
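The final version has to be C++, but the seeking logic can be prototyped on the shell first: find the "//Set N" comment that opens the wanted set and stop at the next header. A sketch with a placeholder set number and file name:
Code:
# Print only set 34: start right after its header comment,
# stop when the next "//Set" header begins.
awk '/^\/\/Set 34$/ {grab=1; next} /^\/\/Set / {grab=0} grab' sets.txt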
I'm trying to come up with ideas for a simple way to strip a specific "entry" from a text file. I know tools like sed and perl can remove specific lines from a file, but I haven't been able to come up with an elegant way to remove my group of lines. In my file, the first "Location" line and the "SVNPath" line should be unique every time, but are they enough to strip out the whole group plus the trailing line of white space separating the groups? Add to this that my file will grow as new entries are added (always appended to the end), but new entries will have the same formatting.
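Those unique lines should be enough if each group ends at the blank separator line: a sed range from the unique "Location" line to the next empty line removes the whole group plus its trailing blank. A sketch, where "oldrepo" and the file name are placeholders for the entry being stripped:
Code:
# Delete the group that starts at this unique Location line,
# up to and including the blank line that separates groups.
sed -i '/Location.*oldrepo/,/^$/d' entries.conf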