Programming :: Bash: Read A Text File Line By Line?
Jul 7, 2011
bash 3.1.17(2). I'm trying to write a shell script which must operate on each line of an ASCII text file, so all the code must be inside a loop, and inside the loop the first thing should be to read the next line from the file. I know about the bash read command, but it reads from stdin. Is there any way to make it read from a file?
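A minimal sketch of the usual approach: redirect the file into the loop so read takes its input from there instead of stdin (the filename is just a placeholder):
Code:
#!/bin/bash
# read input.txt one line at a time; IFS= and -r keep whitespace and backslashes intact
while IFS= read -r line; do
    echo "processing: $line"
done < input.txt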
I have two txt files containing x and y coordinates: xcoord.txt & ycoord.txt. I need to open them; read them line by line to get each coordinate; then each time I need to update Xs and Ys parameters inside another file called "dc.in" with the grabbed values.
Finally, each time I need to run two exe files (dc_2002 and st_vac) and produce the corresponding output for each Xs and Ys (dc.in is an input file for these exe files).
I have written the following code but it does not work:
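(The code referred to above isn't quoted in this excerpt.) A hedged sketch of one way to structure it: read both coordinate files in step on separate file descriptors, patch dc.in with sed, then run the two executables. It assumes dc.in holds lines of the form "Xs = value" and "Ys = value"; adjust the sed patterns to the real file.
Code:
#!/bin/bash
# read xcoord.txt and ycoord.txt in step, one pair of coordinates per iteration
while IFS= read -r x <&3 && IFS= read -r y <&4; do
    # patch the Xs and Ys values in dc.in (assumed format: "Xs = <number>")
    sed -i "s/^ *Xs *=.*/Xs = $x/; s/^ *Ys *=.*/Ys = $y/" dc.in
    ./dc_2002      # both executables take dc.in as input
    ./st_vac
done 3< xcoord.txt 4< ycoord.txt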
I was wondering if any Perl gurus could help me with a quick log file adjustment. I have a text file that looks like so (tabs and newlines are shown so you can see what separates the data):
There are maybe 100 lines of text in this file at any given time. I need to delete all duplicate lines, looking only at the first bit of text prior to the first tab. It doesn't matter which one gets deleted, as long as no two lines begin with the same text before the first tab. So in this example, either the first line "1234" or the last line "1234" would need to be deleted. I already have code in my script that opens the files - I just need the code that reads the text into an array, finds matches based on the above criteria, and makes the deletions.
If it would be easier, I can even do a system call and use SED (v4.1.5) and/or AWK (3.1.5) instead.
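Since awk is on the table, a hedged sketch of a one-liner that keeps only the first line for each value before the first tab (which duplicate survives doesn't matter per the post, so keeping the first is fine; filenames are placeholders):
Code:
# print a line only the first time its first tab-separated field has been seen
awk -F'\t' '!seen[$1]++' logfile.txt > logfile.deduped.txt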
I need a QTimer to trigger reading of a file line by line. I have the code sort of running with the timer trigger, but as it is now the QTimer handler just reads the first line over and over.
When I run both ldapsearch commands individually, they work fine. But when I run the script, I get the first file correctly but not the second one. It looks like it's not reading the first file correctly and not setting the variable ($userdn) value correctly in the second ldapsearch command. I want to read the first line of the first file, run the second ldapsearch, and continue: read the second line, and so on.
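A hedged sketch of the usual loop shape, assuming the first file holds one DN per line (adjust if it instead holds a value that the first ldapsearch resolves to a DN); the filenames and the filter are placeholders. One common pitfall is that a command inside a while-read loop can consume the loop's stdin; redirecting its input from /dev/null, as below, avoids that.
Code:
#!/bin/bash
# loop over the first file one line at a time
while IFS= read -r userdn; do
    # second search for this entry; </dev/null stops it from consuming the loop's stdin
    ldapsearch -x -LLL -b "$userdn" '(objectClass=*)' </dev/null >> results.ldif
done < firstfile.txt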
I want to access a file and check the length of every line. Then I want to replace all lines longer than 10 characters with a message. Does anyone have a clue how to do that?
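A hedged awk sketch; the message text and the file names are placeholders:
Code:
# print a fixed message in place of any line longer than 10 characters, keep the rest as-is
awk 'length($0) > 10 { print "line too long"; next } { print }' input.txt > output.txt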
I'm trying to create a small script that will read the user's input, test whether the user entered anything, and if not display a message, otherwise display some text using the user's input.
The script is the following, but I get an error saying "[: 6: =: argument expected".
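(The script itself isn't quoted here.) That error usually means the [ test was missing an operand, often because an unquoted variable expanded to nothing. A hedged sketch of the usual shape with the variable quoted; the prompt and messages are placeholders:
Code:
#!/bin/bash
printf "Enter some text: "
read input
# quoting "$input" keeps the test valid even when the user enters nothing
if [ -z "$input" ]; then
    echo "You did not enter anything."
else
    echo "You entered: $input"
fi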
I have to delete a certain line of text from a text file via Ubuntu's shell scripting. I have done research, and it seems that most people advocate using sed's /d option. But sed does not edit the text file in place by default, so most of the options I found involved a temporary variable or text file and then overwriting the old file with the temporary new one. Is there any way I can bypass the use of temporary storage containers? I hope there is some magical combination of commands to edit the file directly.
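GNU sed (the version shipped with Ubuntu) has an -i option that edits the file in place, so the script needs no explicit temporary file. A hedged sketch deleting line 5 of a placeholder file:
Code:
# delete line 5 in place; use a /pattern/d address instead to delete by content
sed -i '5d' myfile.txt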
I'm a bit new to Python programming and hoped that someone might be able to help with a problem I'm having. What I essentially want to do is to combine two text files line for line. I know how to do this in a bash script so to give you a better idea here's the code for that:
Code:
This is basically for adding on values to the end of a CSV file that uses ';' as the delimiter. So say file1 said:
And file2 said:
Then running this command would create merged_file1_and_file2 which would be:
The code I'm using at the moment is:
Code:
As I'm sure any experienced python programmer will see, this prints out the first line of the file "csvraw" and then all of the lines of "stamps" and then the remainder of "csvraw".
What I'd like to do is something like: (pseudo code, I know it's not python ;-))
Code:
Is this possible? I've tried googling and my Python Pocket Reference hasn't been much help. I've looked at pickling but that doesn't seem appropriate.
I'm just learning Perl scripting. Could someone tell me how to simplify the code below, especially the part in red? I saw some examples on the internet that use the "next" statement.
Is there a sed command to add text before a given line number in a text file? I have a text file with 500 lines, and I want to add 3 more lines of text after line 300, or before line 302; either is fine.
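A hedged GNU sed sketch appending three placeholder lines after line 300, editing in place:
Code:
# "300a" appends after line 300; each trailing backslash continues the inserted text
sed -i '300a\
first new line\
second new line\
third new line' bigfile.txt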
I'm trying to make another file annotation script a little speedier than it has been with the up-until-now proven method of checking the last four characters in a filename before the "dot" (e.g. .jpg, .psd) against a list of known IPTC categories and Exiv2 command files. It occurred to me that if one script generated a list of files in directory foo, and the same or another script sorted that list by that four-letter tag, then that list could be used (instead of a for/do/done loop over the real files in the folder) by the command-file-matching script to "vomit out" which annotator file would go with file nastynewfile.jpg, for instance. The script I had been using for this task looks like this:
Code:
while read line; do
    sp=$(echo $line)
    vc=$(echo $sp | cut -d"," -f1)
    cv=$(echo $sp | cut -d"," -f2)
[code]....
Where I seem to be stuck is with how to sort the lines in templist, which may be any number of different lengths, from back to front. sort -k looked promising, except it seems only to work the other way round. I thought of invoking a
Code:
q=$(expr length $line); echo $q
n=$((q - 8)); echo $n
kind of thing, but that presented the problems of how to sort by those, how to tell sort where to find them (grep?) and how to "stitch them back in" to the original list, which is what I want to sort in the first place.
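A hedged sketch of one way to sort by the four characters just before the extension without computing offsets by hand: prefix each line with that tag, sort on the prefix, then strip it again. The templist name is from the description above; the field layout is an assumption.
Code:
# build a "tag<TAB>original line" list, sort by the tag, then drop the tag column
awk '{ name = $0; sub(/\.[^.]*$/, "", name); tag = substr(name, length(name) - 3); print tag "\t" $0 }' templist \
    | sort -k1,1 \
    | cut -f2- > templist.sorted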
I have been playing around with a script for a few hours, and now I need to be able to output the lines in a text file one by one, to be used later in the script. What it's going to do is read a log file and grep the usernames, write them to a file, and then run one script for each user to search for more information about them in the log. But I don't know how to output a single line from a file, and Google does not return any solution.
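Two hedged sketches (the filenames and the per-user script are placeholders): looping over every line, and printing just one line by number:
Code:
# run a script once per username in users.txt
while IFS= read -r user; do
    ./per_user.sh "$user"
done < users.txt

# print only line 3 of users.txt
sed -n '3p' users.txt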
I am trying to read in a file one line at a time, and for some reason it stops printing out at about line 62,000.
I am doing this: Code: while (fgets(c0, 1085, fstream0) != NULL)
But after about 62,000 lines it stops printing. No seg-fault, no core dump. It just stops printing to the terminal and then returns me to the command line after a couple of minutes. As a hack I am doing split -l 50000 on the input and calling my program 5 times. Is there some limitation on fgets that I am not understanding?
I'm writing a program which now accepts user input:
Code: echo "Enter a date in the format YYYY MM DD hh mm ss."
read gregyr gregmo gregdy greghr gregmn gregsc
This lets the user input a date and time, such as 2011 06 21 15 12 45, and have each number assigned to its corresponding variable. Later in the program, these variables are put into an equation, and then the terminal prints the answer. Now I have to have the program read all of the lines from a text file in this same format and assign the variables.
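A hedged sketch: the same read with its variable list works line by line once the loop's input is redirected from the file (the filename and the processing step are placeholders):
Code:
# each line of dates.txt is expected to look like: 2011 06 21 15 12 45
while read -r gregyr gregmo gregdy greghr gregmn gregsc; do
    echo "year=$gregyr month=$gregmo day=$gregdy hour=$greghr min=$gregmn sec=$gregsc"
    # ... plug the variables into the existing equation here ...
done < dates.txt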
I'm trying to add text to a file for a specific group of users. I'll need to use examples, as I can't think of an easy way to explain it; my file is like this:
Code:
users{ user1 user2
[code]....
At present my code lists all the available groups. How would I add a user to a specified group (e.g. add "members user3" to the end of group 1)? So the file ends up like this:
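(The desired end state isn't quoted here.) A hedged GNU sed sketch, assuming each group block starts with a line like "group1{" and ends with a closing brace on its own line; the group name, the new member line, and the file name are placeholders:
Code:
# inside the block that starts at "group1{", insert "members user3" just before the closing brace
sed -i '/^group1{/,/^}/ s/^}/members user3\n}/' groups.conf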
Say I have a text file like: Code: 1 3 4 How would I use ksh to put the number '2' into the second line of that file? Okay, it's not bash, it's ksh, because this computer is OpenBSD.
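A hedged sketch using ed, which edits the file in place and is in the OpenBSD base system, so it works the same from ksh (numbers.txt is a placeholder name):
Code:
# append the line "2" after line 1, then write the file and quit
printf '%s\n' '1a' '2' '.' 'w' 'q' | ed -s numbers.txt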
I have a text file called namelist.wps. In this file there is a line that reads:
Code: start_date = '2010-12-26_12:00:00', '2010-12-26_12:00:00',
I have to automatically update the year, month, and day of month for this line without changing the rest of the file. Here is the script that I have:
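(The script referred to above isn't quoted in this excerpt.) A hedged sed sketch of the substitution such a script would need: only lines containing start_date are touched, today's date replaces the date portion, and the _12:00:00 time from the example is preserved:
Code:
today=$(date +%Y-%m-%d)
# replace only the YYYY-MM-DD part of each quoted timestamp on the start_date line
sed -i "/start_date/ s/'[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}_/'${today}_/g" namelist.wps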
For example, I have a text file with data which lists numerical values from two separate individuals
Code: Person A 100 200 300 400 500 600 700 800 900 1000 1100 1200
Person B 1200 1100 1000 900 800 700 600 500 400 300 200 100
How would I go about reading the values for each Person, then being able to perform mathematical equations for each Person (finding the sum for example)?
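A hedged awk sketch, assuming each line starts with the person's name followed by that person's values, as in the example above (data.txt is a placeholder name):
Code:
# sum fields 3 onward on every "Person ..." line and report the total per person
awk '/^Person/ { sum = 0; for (i = 3; i <= NF; i++) sum += $i; print $1, $2, "sum =", sum }' data.txt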
I ran into it while googling Segmentation Fault. I'm writing a simple C program that reads a file, counts each line and numbers it, then writes to a file called sdout. I copied my program mostly from the textbook, but I'm still having problems. Here's my code:
Instead of importing a file, I would like to use the variable $x. I tried using pipes, but with no luck. My goal is to read one line at a time, but not have to export my data to another file; I would like to keep it all within one script.
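A hedged sketch: a here-string feeds the contents of $x to the loop without any temporary file, and unlike piping into while read it keeps the loop in the current shell, so variables set inside it survive:
Code:
# read the lines stored in $x one at a time
while IFS= read -r line; do
    echo "got: $line"
done <<< "$x"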
I have a text file called namelist.wps. In this file there is a line that reads:
Code: start_date = '2010-12-26_12:00:00', '2010-12-26_12:00:00',
I have to automatically update the year, month, and day of month. I set values for the year, month, and day of month using the following code in a c-shell script:
Code:
set y1 = `date +%Y`
set m1 = `date +%m`
set d1 = `date +%d`
After I do this, how do I update the year, month, and day of month without changing any of the other lines in the namelist.wps file?
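A hedged sketch of the follow-up step, using the y1, m1, and d1 values set above. sed (GNU sed, for -i) does the in-place editing, so the same line can be called from the c-shell script, and only the start_date line is touched; the time portion is left alone:
Code:
sed -i "/start_date/ s/'[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_/'${y1}-${m1}-${d1}_/g" namelist.wps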