I have a PHP script that checks whether a string contains a link (i.e. a URL). I have the following if statement, which uses 3 possible regular expressions to determine whether there is a link or not.
Code:
// check if we found a link
// links are denoted by strings that:
// - contain http://
// - contain www.*.*
[Code]....
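The original PHP didn't survive the paste, but the same idea can be sketched as a shell check with grep -E. This is a rough equivalent, not the original code, and the patterns are illustrative guesses:

Code:
# return success if the string looks like it contains a link
contains_link() {
    local s=$1
    echo "$s" | grep -Eq 'https?://' && return 0            # contains http:// (or https://)
    echo "$s" | grep -Eq 'www\.[^ .]+\.[^ .]+' && return 0  # contains www.*.*
    return 1
}

contains_link "see http://example.com" && echo "found a link"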
I'm not convinced yet that writing a shell script to do this is the best course of action. If someone is capable of doing this with a Perl or a Python script, that's fine too. If you want to make it super high performance and write it in assembly, that's fine as well.
I am trying to develop a method of reading files generated by other programs, and I am trying to find the most versatile approach. I have been trying bash and have been making good progress with sed, but I was wondering whether there is a "standard" approach to this sort of thing. The main features I would like to implement concern finding strings based on various forms of context and storing them in variables and/or arrays. Here are the most general tasks:
a) Read the first word (or floating-point number) that comes after a given string (solved in another thread)
b) Read the nth line after a given string
c) Read all text between two given strings
d) Save the output of task a), task b) or task c) (above) into an array if the "given string(s)" is/are not unique.
e) Read text between two non-unique strings, i.e. text between the nth occurrence of string1 and the mth occurrence of string2
As far as I can tell, scripts covering those five tasks should be able to parse just about any text pattern. I am by no means fluent in these languages, but I could use a starting point. My main concern is speed: I intend to use these scripts in a program that reads and writes hundreds of input and output files, each with a different value of some parameter(s).
The files will most likely be no more than a few dozen lines, but I can think of some applications that could generate a few hundred lines. I have the input file generator down pretty well. Parsing the output is quite a bit trickier. And, of course, the option for parallelization will be very desirable for many practical applications.
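For tasks b) and c) above, sed and awk one-liners are a reasonable starting point. A sketch, assuming the marker strings are fixed text (escape them first if they contain regex metacharacters):

Code:
# b) grab the 2nd line after a line matching MARKER
nth_line=$(awk '/MARKER/ { target = NR + 2 } NR == target { print; exit }' file.txt)

# c) grab all text between START and END (the marker lines excluded)
between=$(sed -n '/START/,/END/{ /START/d; /END/d; p; }' file.txt)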
I want to write a simple Firefox extension / script or anything to change URLs from HTTP to HTTPS for selected websites (e.g. Facebook). That would, in effect, bypass some security checks on my network.
Can anyone tell me how to proceed? I can deal with any language as long as it's C++ or Python (anything else would just take more time, that's all).
I want to search for and replace strings in a file with strings from other files. I need to do it with big strings (string1 is big), and I want to use a txt file for this. But this code is not working:
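The code didn't make it into the post, but here is one way to sketch the task: awk's index() does literal matching, which avoids the regex-metacharacter trouble that big strings usually cause in sed. The file names are hypothetical:

Code:
# read the big search and replacement strings from files (one line each);
# note: backslashes passed via -v get escape-expanded, so they need extra care
search=$(<search.txt)
replace=$(<replace.txt)

awk -v s="$search" -v r="$replace" '
{
    out = ""; rest = $0
    while ((i = index(rest, s)) > 0) {      # literal match, no regex
        out = out substr(rest, 1, i - 1) r
        rest = substr(rest, i + length(s))
    }
    print out rest
}' input.txt > output.txt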
I have recently merged two Joomla 1.0 sites that I ran into one. I imported the articles I wanted to keep into the new site, and I have the old site's domain pointing as an alias at the new site. The new site is www.theouthousers.com. The old site was www.bludblood.com.
I also have the core SEF URLs on, using the htaccess.txt file that came with Joomla.
I have one writer for the old site who linked to his articles in various places, so I am trying to set up redirects for him so that he doesn't have to change all of his links.
To redirect to the equivalent location on the new site:
[url]
And I also need specific links like:
[url]
To redirect to their new counterparts:
[url]
Keeping in mind that www.bludblood.com is now an alias of www.theouthousers.com, is there any way to do this? I have been trying with rewrite rules and redirects, and cannot seem to achieve the desired effect.
Tried various versions of:
Code: Redirect [url] [url]
I've tried it with the http, without it, as a regexp, as a 301, as permanent, etc., and it just will not work. I also tried it as a RewriteRule.
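Since www.bludblood.com is an alias of the same site, a plain Redirect cannot tell the two domains apart; the usual trick is a RewriteCond on HTTP_HOST in the site's .htaccess. A sketch (the paths are placeholders, since the real URLs were stripped from the post):

Code:
RewriteEngine On
# put more specific RewriteRules for the individual links above this catch-all
RewriteCond %{HTTP_HOST} ^(www\.)?bludblood\.com$ [NC]
RewriteRule ^(.*)$ http://www.theouthousers.com/$1 [R=301,L]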
Here's a challenge I've been struggling with for months:
I have a bash script that reads URL addresses of our internal server and then executes some test commands on them. Something like this:
Code: read -p "Enter URL: " url sh execute-what-ever-to $url
After copy-pasting the URL the user taps the enter key and the script proceeds, but here comes the tricky part: I want this to work without the need to press the enter key after copy-pasting the URL.
"read -n" does not work in this case, as the URLs vary greatly in length. However, the URLs always end to the same string. They could be like "http://url1/END", "http://url2/END" and so on. So this ending string "END" could be theoretically used to recognize that the whole URL has been pasted.
I'm using wget to retrieve a long list of URLs, a small proportion of which fail, hence:
Code:
wget --input-file=urls.txt

Is there a way to log the URLs that have failed? Unfortunately wget does not output the current URL being processed (and then its status), so it's hard to see how grepping the output would help.
Or should I use some alternative like curl, wmget?
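One workaround is to drive wget one URL at a time and log failures yourself. A sketch, assuming urls.txt holds one URL per line (failed.txt is a made-up name):

Code:
while IFS= read -r url; do
    wget -q "$url" || echo "$url" >> failed.txt   # wget exits non-zero on failure
done < urls.txt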
I am having a lot of problems trying to change one string into another using sed. The command is like this:
Code:
sed -i 's/KERNEL=="tty[A-Z]*", NAME="%k", GROUP="uucp", MODE="0660"/KERNEL=="tty[A-Z]*", NAME="%k", GROUP="uucp", MODE="0666"/g' 50-udev.rules

The idea is just to find the line with KERNEL=="tty[A-Z]*", NAME="%k", GROUP="uucp", MODE="0660" and change the mode to 0666.
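The catch is that the brackets and the asterisk are regex metacharacters, so sed never matches the literal text tty[A-Z]*. Escaping them and replacing only the MODE value might look like this (a sketch, assuming the rule line starts with KERNEL==):

Code:
# \[ \] \* match the bracket and star characters literally
sed -i '/^KERNEL=="tty\[A-Z\]\*"/ s/MODE="0660"/MODE="0666"/' 50-udev.rules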
I wrote this small program that will truncate a string entered by the user. An example of its usage: if the user enters a string, say "abcdefghijklmnopqrstuvwxyz", the program will only take the first 9 characters and truncate the rest, so that the user can be prompted for a second string without worrying about characters remaining in the stream. Now this program works OK, but I would like to find something in C that has this functionality built into it. Does anyone know of a function that will accomplish this?
I am trying to replace a section of a file between the first instances of the strings { and } with the contents of another file. An example of the format of the file I'm trying to modify:
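The example didn't survive the paste, but assuming the section is delimited by the first lines containing { and }, an awk state machine is one way to sketch it (new.txt is a hypothetical name for the replacement file):

Code:
awk '
    state == 0 && /{/ { print                       # keep the opening marker
                        while ((getline l < "new.txt") > 0) print l
                        state = 1; next }
    state == 1 && /}/ { print; state = 2; next }    # keep the closing marker
    state != 1                                      # print everything outside the section
' file.txt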
I use udhcp with some of my minimal installs. I messed around with the code a bit a few years ago when it wasn't working correctly. I will find time, I hope soonish, to figure out how to do a few other things with it.
For now, though, I'm using this string to grab my IP after startup:
I realize I could substitute ifup -a, but I'm more interested in figuring out how to make ifup wait for the IP to become available if it is not available yet.
Never mind that one; just typing out the question answered it for me once I found it in the scripting manual. But I'm still open to suggestions for a better construction of the string.
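For the ifup-waiting part, in case it helps anyone else, a polling loop is one low-tech approach (a sketch, assuming the interface is eth0):

Code:
# wait until the interface has an IPv4 address, checking once per second
until ip addr show eth0 | grep -q 'inet '; do
    sleep 1
done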
What is the best command to use to parse strings? I have a variable $str and need to parse this string. Can you provide an example of the command used to get a substring of $str based on the index values of start and end?
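Bash's built-in substring expansion, ${var:offset:length}, covers this without any external command. A sketch, with zero-based offsets:

Code:
str="Hello, world"
start=7
end=12                          # end is exclusive here
echo "${str:start:end-start}"   # prints "world"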
I have an SQL table with 2 columns. I run a script that randomly selects a word from column 1 of the table. The word is displayed on the screen and I guess what it means. I concatenate the randomly selected word and the answer, and the script looks for a match in MySQL. If it finds a match it says "Good job!"; if there is no match it says "not correct". However, when I get it right it still says "not correct", even though when I echo the variables they look exactly the same. The script is below:
#!/bin/bash
var=$(mysql translator -u root --password=* -N <<EOF
SELECT word FROM tagalog ORDER BY RAND() LIMIT 1
EOF
)
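"Looks identical but doesn't match" is very often a trailing newline or other whitespace; trimming both values before comparing is worth a try. A sketch, where $answer stands in for whatever variable holds the guess:

Code:
var_clean=$(printf '%s' "$var" | tr -d '[:space:]')
answer_clean=$(printf '%s' "$answer" | tr -d '[:space:]')

if [[ "$var_clean" == "$answer_clean" ]]; then
    echo "Good job!"
else
    echo "not correct"
fi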
I have a function that retrieves the text between the title and link tags of an XML file, but what I want is to test whether those title and link tags sit between item tags. This is my code:
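The code was cut off, but if an XML-aware tool is an option, an XPath query scopes the match to item elements for free. A sketch using xmllint, assuming an RSS-style feed.xml:

Code:
# only titles and links that live inside an <item> element
xmllint --xpath '//item/title/text() | //item/link/text()' feed.xml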
I am trying to search for a pattern in a file. The pattern can contain any character; I am pretty sure I will have $, ", ', ^, `, etc. The problem I am facing is that if I use "" (double quotes) to enclose the pattern, special meaning is given to $, ^, and " within the string. I have no control over the pattern input; I am getting it from some other file. On the other hand, if I use '' (single quotes) to enclose the pattern, a ' (apostrophe) within the string terminates the pattern prematurely. How do I disable the special meaning these characters have? For example, in Perl I could enclose the pattern within \Q and \E. Is there an equivalent in grep pattern expressions? I could not find one in the man page of grep. Is there a solution to this problem?
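grep -F (fixed strings) is the usual answer: the pattern is matched literally, with no metacharacters at all, and -f reads it from a file, which sidesteps shell quoting entirely. A sketch:

Code:
# match the pattern literally; nothing in it is special to -F
grep -F -- "$pattern" target.txt

# or take the pattern straight from the file it came from
grep -F -f pattern.txt target.txt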
I need to make a daemon which listens on port 81 for messages like [URL]. So far I have made a daemon which serves as a simple stream server: I set up a socket listening on a non-reserved port (like 9999), but I don't know how to read the query strings.
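For plain HTTP, the query string is just the part of the request line after the '?'. A sketch of pulling it apart in shell, assuming the first line read from the socket looks like "GET /page?foo=bar HTTP/1.1":

Code:
request_line='GET /page?foo=bar HTTP/1.1'   # hypothetical first line from the socket
target=${request_line#* }                   # drop the method  -> "/page?foo=bar HTTP/1.1"
target=${target%% *}                        # drop the version -> "/page?foo=bar"
query=${target#*\?}                         # keep what follows the '?' -> "foo=bar"
echo "$query"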
I have an interpreter that supports string literals. The way it works is that the lexer returns the entire string as a single token, with the quotes removed and escape sequences replaced with the literal characters they represent.
I have already implemented single-quoted strings; they don't interpret any characters specially except for the closing single quote. I have partially implemented double-quoted strings; they already support all the same backslash escape sequences that C does. But I also want to add variable substitution.
The way it would work is that "${expression}" would interpret the expression (which could just be a variable name) and replace itself with the result. But I have no idea how to do this.
In case it matters, I'm using a hand-written lexer and recursive-descent parser.
I am no expert in loops and it took me all day to write that. I couldn't really tell how to match the string in $df_file and $fs_share, so I did a little workaround with a count.
I was zsync-ing the latest Ubuntu 11.10 Alpha and thought I'd make a little GUI for it as a small project. The GUI is set up; I just need to figure out how to run zsync with content from two variables, cto and cfrom. I tried the following code:
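The code got lost, but assuming cfrom holds the .zsync URL and cto the local output file, quoting both expansions is the main thing to get right (a sketch; zsync's -o flag names the output file):

Code:
zsync -o "$cto" "$cfrom"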
Say I have a text file with 10 columns. I need to reorder them based on a list of column numbers.
My problem is this:
If I want to cut out 5 columns (columns 1, 2, 3, 9, 10) in the order 1, 10, 2, 9, 3, then I have tried using:
Code:
cut -f1,10,2,9,3 my_file.txt > reordered_file.txt

But this just extracts the columns in order, as if I had used:
Code:
cut -f1,2,3,9,10 my_file.txt > reordered_file.txt

How can I cut these columns and place them into the new file in the order I specify?
While this might seem quite trivial, I will actually need to do this for a file containing ~14000 columns, of which ~12000 need to be extracted in a particular order.
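cut always emits fields in file order, but awk prints them in whatever order you list, and the list can itself be read from a file, which matters at 12000 columns. A sketch, assuming tab-separated data; order.txt is a hypothetical file with one column number per line:

Code:
# small case: name the fields in the order you want them
awk -F'\t' -v OFS='\t' '{print $1, $10, $2, $9, $3}' my_file.txt > reordered_file.txt

# large case: read the desired column order from order.txt
awk -F'\t' -v OFS='\t' '
    NR == FNR { ord[++n] = $1; next }   # first file: the column order
    { for (i = 1; i <= n; i++)
          printf "%s%s", $(ord[i]), (i < n ? OFS : "\n") }
' order.txt my_file.txt > reordered_file.txt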
...and returning the index of the found element in its array.
I have:
for ((i=0; i < ${#array1[@]}; i++)); do
    # read line i+1 of the file "test"
    line=$(sed -n "$((i+1))p" test)
    if [[ $line == *"${array2[0]}"* ]]; then
        # stuff
    fi
done
I want to find the index of the found substring in array2, and only if it isn't found move on to the next element of array2. I don't know the size of array2, so that [0] has just got to go.
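One way to sketch that: a helper that scans all of array2 and prints the index of the first element contained in the line, so the hard-coded [0] disappears:

Code:
# print the index of the first array element found in $1; -1 if none match
find_index() {
    local line=$1; shift
    local -a arr=("$@")
    local j
    for j in "${!arr[@]}"; do
        [[ $line == *"${arr[j]}"* ]] && { echo "$j"; return 0; }
    done
    echo -1
    return 1
}

idx=$(find_index "$line" "${array2[@]}")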
I have a directory of orchestral music .ogg files from a family member. Each track is from a different artist and the CDDB entry adds a ":" character after the artist name in the track title.
I would like to parse the file names in any given directory, search for the string ":" and replace it with "_". According to this post on stackoverflow, I can use Perl to accomplish this task. I've tried perl -i.bak -pe 's/:/_/' but since I am still learning Perl I'm probably committing a PEBKAC error.
How would I go about solving this issue with regular expressions using Perl?
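One thing worth noting: perl -i -pe edits file contents, not file names, so that one-liner rewrites the tracks themselves. Renaming wants mv (or the rename utility that ships with Perl). A sketch in bash:

Code:
# replace every ":" with "_" in each matching .ogg file name
shopt -s nullglob
for f in ./*:*.ogg; do
    mv -- "$f" "${f//:/_}"
done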