General :: Text Manipulation Find / Replace Variable Efficiency?
Aug 27, 2010
What I have works, but I'm wondering what the 'right' way is to replace the digits with the letters given in this loop. Should I use a case statement or multiple sed expressions? I tried both but couldn't get them to work.
Code:
# ...
bcv=$(echo "$line" | awk '{ print $1 }' | sed 's/1/q/g;s/2/w/g;s/3/e/g') # and so on
Code:
while read -r line
do
    bcv=$(echo "$line" | awk '{ print $1 }')
    if [ -z "$bcv" ]; then    # the original test [ $bcv == "" ] fails when $bcv expands to nothing
        continue              # rest of the loop body was not shown in the post
    fi
done
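One way to collapse the chain of sed substitutions into a single step is tr, which maps each character in the first set to the corresponding character in the second. A minimal sketch, assuming the remaining digits continue the same pattern the post started (1→q, 2→w, 3→e, 4→r, and so on along the qwerty row):
Code:
bcv=$(echo "$line" | awk '{ print $1 }' | tr '1234567890' 'qwertyuiop')  # assumed mapping for 4-0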
I am trying to write a Perl script that can open a file, find text that appears between two identifying strings (for now, "start" and "end"), then modify that text by enclosing it between "term_" and "_term". Since the identified strings vary, the replacement string becomes "term_$1_term". From looking at other threads in this forum I've been able to get as far as spitting out the modified terms using the following code:
open FILE, "start2.txt" || die ("Could not open file <br> $!"); $text = <FILE>; while ($text=~ s/start (.*?) end//) {
[code]....
The problem is how to get "term_$1_term" into the file in the same while loop, which I'm guessing would be some variant of "$text=~ s/$1/$term/;" (which doesn't work as it stands).
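Rather than deleting the match and re-inserting it, the wrapper can go on the replacement side of the same substitution. A minimal sketch as a Perl one-liner run from the shell, using the start2.txt file named in the post; -0777 slurps the whole file so matches can span lines, and the output file name is just a placeholder:
Code:
perl -0777 -pe 's/start (.*?) end/term_$1_term/g' start2.txt > start2_modified.txt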
I have many directories, each with about 20 HTML files inside. All the files have the .html extension. What I'm hoping is possible is, from the command line, to find some text in each one and replace it with some other text.
Basically what I want to replace is:
/awstats/ with awstats/
I can do this easily with Dreamweaver or some other application, but because I have 960 pages total to do, I'm hoping to do it this way.
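This kind of bulk edit is a good fit for find plus sed -i. A minimal sketch, assuming GNU sed and that the directories sit under the current directory (back up first, since -i edits the files in place); the pipe delimiter keeps the slashes in the search text from clashing with sed's own:
Code:
find . -type f -name '*.html' -exec sed -i 's|/awstats/|awstats/|g' {} +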
I have an SQL dump, file.sql that has many references to a particular domain, d1.com. I would like to run a command that can replace every occurrence of d1.com with d2.com. I've tried looking into sed before but the man pages are quite daunting.
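For a straight string swap like this, one sed substitution over the dump is enough. A minimal sketch, assuming GNU sed; the dots are escaped so they match literal periods, and -i.bak keeps the original as file.sql.bak:
Code:
sed -i.bak 's/d1\.com/d2\.com/g' file.sql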
This is what I have right now. Well, I thought I knew sed, and apparently I don't... I tried writing this for someone else, and this has given me trouble, so since the user pretty much figured it out on his own, here it goes. Say VARR=1, so VARX and VARY contain the above text, appended by 1. What I am trying to do is replace the text "defaults.ctl.card 0" by VARX and "defaults.pcm.card 0" by VARY. The contents of FILE1 is the file being used to search for both text fields, and FILE2 is the output file. I tried using single quotes, double quotes, and a mixture of both, and no go whatsoever. So my question... What is the proper way of searching for text within a file and replacing with a variable?
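The usual trick is to put the sed expression in double quotes so the shell expands the variables, while the literal text being searched for stays fixed. A minimal sketch under the post's own names, assuming VARX and VARY hold the full replacement lines and that FILE1 and FILE2 are shell variables holding the input and output file names:
Code:
VARR=1
VARX="defaults.ctl.card $VARR"
VARY="defaults.pcm.card $VARR"
sed -e "s/defaults\.ctl\.card 0/$VARX/" -e "s/defaults\.pcm\.card 0/$VARY/" "$FILE1" > "$FILE2"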
I have a number of text files (26 per database x 100+ databases) which need 'correcting' in order to import into PostgreSQL. I think that I have identified all the problem characters and I need to automate the process as much as possible. I have a script to convert the characters and I do them one by one (not efficient but easier to understand).
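To avoid handling the files one by one, the same set of substitutions can be run over every export file in a loop. A minimal sketch with a placeholder glob and placeholder fixes; substitute the actual character corrections the existing script performs:
Code:
for f in /path/to/exports/*.txt; do                     # placeholder path for the 26-per-database files
    sed -i 's/\r$//' "$f"                               # example fix: strip DOS carriage returns
    tr -d '\000' < "$f" > "$f.tmp" && mv "$f.tmp" "$f"  # example fix: drop NUL bytes
done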
When I grep for kernel.exec-shield I get both lines, so I keep overwriting kernel.exec-shield-randomize in my script because my sed command finds them both.
How can I get an exact match with either sed/awk/grep in shell so I can do a find and replace?
Example: sed 's/^kernel.exec-shield =.*/kernel.exec-shield = 1/g' /etc/sysctl.conf will replace BOTH lines
Example: grep "^kernel.exec-shield" find both line and I want it to find only the exact line.
I'm having problems with Tomboy. I have a few hundred note files and I need to go through all of them and replace all instances of "<link:broken>a</link:broken>" with "a". Is there a bash command I can use to do this?
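A find/sed pass over the note files can strip the tags while keeping whatever text sits between them. A minimal sketch, assuming the notes are the .note XML files under ~/.local/share/tomboy (older Tomboy versions used ~/.tomboy) and that you back the notes up before editing in place:
Code:
find ~/.local/share/tomboy -name '*.note' \
    -exec sed -i 's|<link:broken>\([^<]*\)</link:broken>|\1|g' {} +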
I'm pretty sure this is doable from the command line, but my CLI skills have degraded a lot since my pre-Y2K admin days. The goal is to search all the files in the directory for a very long string of text and replace it with another string of text. The text being searched for is my Google Adsense code (which will be stripped from my website) and it will be replaced with a placeholder so I can easily tack something else in there in the future.
Seeing how I have that long snip of code on about 100 pages, automating the process would make life easier. If I was searching for a single word, I can see ways to do this. If I paste the code I'm searching for into a text file, is there a way to: find (contents of oldstring.txt) and replace with (contents of newstring.txt)?
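Because the AdSense snippet is long, spans lines, and is full of regex metacharacters, sed gets awkward here; a Perl one-liner that treats the contents of both files as literal text is easier. A minimal sketch, assuming oldstring.txt and newstring.txt hold the exact old and new text and that the pages are .html files under the current directory (\Q...\E turns off metacharacters, -0777 slurps each page so the match can cross lines, -i edits in place):
Code:
export OLD="$(<oldstring.txt)" NEW="$(<newstring.txt)"
find . -name '*.html' -exec perl -0777 -pi -e 's/\Q$ENV{OLD}\E/$ENV{NEW}/g' {} +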
I use this script to get the time and date of back-and-forth transactions for a particular execution ID. I use a substr command on the 5th column to cut the milliseconds off the time value; otherwise the times would look like 08:30:04.235.
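For reference, that substr step only needs the first eight characters of an HH:MM:SS.mmm value. A minimal sketch of just that piece in awk; the field number and the file name are assumptions based on the post:
Code:
awk '{ $5 = substr($5, 1, 8); print }' transactions.log   # 08:30:04.235 -> 08:30:04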
Are there some good tutorials or reference materials on how to do pattern matching and text manipulation in Linux? I have a few simple tasks I'd like taken care of, like formatting numbers in file names, stripping some text from directory names, etc.
I have a little problem with string handling in Snort alerting. I understand that Snort alerts are saved in /var/log/snort/alert and that Snort adds a new entry whenever there is an attack from anywhere. Here's my problem: because the file accumulates a lot of entries, all I want to do is parse the strings in the Snort alert file and turn them into simpler log files. I'm getting confused about how to parse that file.
Here's the simple algorithm: Snort gets the alert -> parse the alert for the fields I've configured in bash (source IP address, destination, kind of attack, and time) -> send that parsed alert to a new text file (let's call it snortsent.txt) -> after ten alerts, clear the text file and wait until the next Snort alert arrives -> back to the Snort alert file. Here's a sample of my Snort alert (/var/log/snort/alert):
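The parsing step itself can be a small awk pass over the alert file. A very rough sketch, assuming the default multi-line "full" alert format, where the attack name sits on a line between [**] markers and the timestamp/IP line looks like 08/27-10:15:00.123456 192.168.1.5:1234 -> 10.0.0.2:80; the field handling would need adjusting to the real entries:
Code:
awk '/\[\*\*\]/ { attack = $0 }
     / -> /    { split($2, src, ":"); split($4, dst, ":");
                 print $1, src[1], dst[1], attack >> "snortsent.txt" }' /var/log/snort/alert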
The problem I have is that I need to replace a more complex string, like this: Old string: /mnt/stor6-wc2-dfw1/627896/982574/ New string: /mnt/stor8-wc2-dfw1/369587/302589/ There I don't know how to do it, since the / is what separates the old from the new strings, and the strings that I want to replace have / in them. Also, I would like to know how to specify the folder under which to do the replacement; for example, I want it to search and replace in all files under the /var/www/mysite/htdocs folder.
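sed lets the s command use any delimiter, so picking a character that never appears in the paths (a pipe here) avoids the clash with /, and find restricts the operation to one folder. A minimal sketch, assuming GNU sed:
Code:
find /var/www/mysite/htdocs -type f -exec sed -i \
    's|/mnt/stor6-wc2-dfw1/627896/982574/|/mnt/stor8-wc2-dfw1/369587/302589/|g' {} +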
I need to do some text file manipulation which I think should be done with standard commands in Bash. I'm looking at comma-separated text files (stock market data). They come in the form of date, stock code, open, high, low, close, volume. What I need to do first is move all data with the same stock code sequentially into individual files.
While doing this, since the stock code will now be the file name, I need to remove the stock code from each line. Next I need to filter out overlapping data from different files with the same date, i.e. where two files contain a line with the same date, only one line will be added to the combined file. I think there must be a tutorial out there for basic text manipulation like this; I just haven't found it yet.
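Both steps map naturally onto awk. A minimal sketch under the column layout described above (date, code, open, high, low, close, volume); the file names are placeholders, and the second command shows the date-deduplication idea when merging two per-stock files:
Code:
# split by stock code (field 2) into CODE.csv, dropping the code column
awk -F, '{ print $1 "," $3 "," $4 "," $5 "," $6 "," $7 >> ($2 ".csv") }' quotes.csv

# merge two files, keeping only the first line seen for each date (field 1)
awk -F, '!seen[$1]++' ABC.csv ABC_other.csv > ABC_combined.csv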
I want to underline my user & hostname in the PS1 variable, but I can't seem to do it.
I know that my Xterm can underline text normally, because when I pass Xterm the '+nul' option, then do this:
Code:
It underlines. I just can't seem to get it to work for my PS1 variable, although it's quite possible I'm doing it incorrectly. Has anyone managed to do this? Is it possible?
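In the prompt, the underline has to be written as the ANSI "underline on" attribute and wrapped in \[ \] so bash knows the escape sequences take up no width. A minimal sketch of a prompt that underlines only the user@host part:
Code:
PS1='\[\e[4m\]\u@\h\[\e[0m\]:\w\$ '   # \e[4m = underline on, \e[0m = attributes off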
I am working on text-processing tasks, and I found that if you assign a file's text to a variable, the newline is chomped automatically:
Code:
variable=$(cat file.txt)
The problem is I can only access the items/lines using:
Code:
for line in $variable; do
    echo "$line"
    # Other commands
done
How do I convert this to an indexed array? More importantly, how do I get access to the individual elements, ${line[0]}, ..., ${line[n]}? Another thing: if file.txt has lines with spaces, it is a mess using the for...in... loop, even though echoing the variable prints line by line... o_0
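If the lines are wanted as an indexed array rather than one big string, mapfile (a bash 4 builtin, also spelled readarray) reads the file straight into one element per line, which also keeps lines containing spaces intact. A minimal sketch:
Code:
mapfile -t lines < file.txt      # -t strips the trailing newline from each element
echo "${lines[0]}"               # first line
echo "${#lines[@]}"              # number of lines
for line in "${lines[@]}"; do    # quoting keeps spaces inside a line together
    echo "$line"
done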
I have large text files with space-delimited strings (2-5). The strings can contain "'" or "-". I'd like to replace, say, the second space with a pipe. What's the best way to go? Using sed I was thinking of this:
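The post's own sed attempt was cut off; for what it's worth, sed's numeric occurrence flag already does exactly this, replacing only the Nth match on each line. A minimal sketch (the file name is a placeholder):
Code:
sed 's/ /|/2' data.txt           # replace only the second space on every line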
I need some help in determining how to have a color splash image display in place of the Linux scrolling-text during the boot-up process on an embedded Linux device. The kernel used is a stripped-down version of Linux (kernel 2.6.29), which has been custom configured. I am using syslinux as the bootloader. I was told that Plymouth might be the way to go with this, but I'm not sure.
I am working on creating a new Perl script which replaces IP addresses in a text file, e.g. if a file contains any word like 11.222.333.44, then it has to be replaced with XX.XXX.333.44.
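One way to do it is a substitution with the /e modifier, so the replacement can rebuild the first two masked octets at their original lengths. A rough sketch as a Perl one-liner run from the shell; the input file name is a placeholder, and the regex deliberately accepts "octets" like 333 since the example does:
Code:
perl -pe 's/\b(\d{1,3})\.(\d{1,3})(?=\.\d{1,3}\.\d{1,3}\b)/("X" x length($1)) . "." . ("X" x length($2))/ge' input.txt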
For example, if you want to create an alias in Linux that echoes a message, would the following command be right: alias hello="(echo)"Hello." "? I'm trying to learn some environment variables and aliases.
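For comparison, the working form just quotes the whole command so the alias expands to a single echo. A minimal sketch:
Code:
alias hello='echo "Hello."'
hello        # prints: Hello.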
Is there a way to specify to find that I only want text files (and not binary files)? Grep has an option to exclude binary files, so I thought find probably has a similar feature, but I've been unable to find it.
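find itself has no text-versus-binary test, so the usual approach is to let find list the files and have file(1) or grep make the decision. Two hedged sketches; the grep -Il variant lists files grep treats as text (empty files won't be listed):
Code:
# keep files whose file(1) description mentions "text"
find . -type f -exec sh -c 'file -b "$1" | grep -q text' _ {} \; -print

# or: list files grep considers text (binary files are skipped by -I)
find . -type f -exec grep -Il '' {} +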