Programming :: BASH Sort List By End Of Line To X Position In Each Line?
Aug 18, 2010
I'm trying to make another file annotation script a little speedier than it has been with the up-until-now proven method of checking the last four characters in a filename before the "dot" (e.g. .jpg, .psd) against a list of known IPTC categories and Exiv2 command files. It occurred to me that if one script generated a list of files in directory foo, and the same or another script sorted that list by that four-letter tag, then that list could be used (instead of a for/do/done loop on the real files in the folder) by the command-file-matching script to "vomit out" which annotator file would go with file nastynewfile.jpg, for instance. The script I had been using for this task looks like this:
Code:
while read -r line
do
    sp=$line                            # keep the raw line
    vc=$(echo "$sp" | cut -d"," -f1)    # first comma-separated field
    cv=$(echo "$sp" | cut -d"," -f2)    # second comma-separated field
[code]....
Where I seem to be stuck is with how to sort the lines in templist, which may be any number of different lengths, from back to front. sort -k looked promising, except it seems only to work the other way round. I thought of invoking a
Code:
q=${#line}; echo "$q"        # length of the line
n=$(( q - 8 )); echo "$n"    # position eight characters from the end
kind of thing, but that presented the problems of how to sort by those, how to tell sort where to find them (grep?) and how to "stitch them back in" to the original list, which is what I want to sort in the first place.
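One way around that (just a sketch, assuming templist holds one filename per line and the four-letter tag sits immediately before the dot; templist.sorted is only a placeholder name) is to prepend a computed key to each line, sort on the key, then strip it off again:
Code:
while IFS= read -r line; do
    name=${line%.*}                      # drop the extension
    key=${name: -4}                      # last four characters before the dot
    printf '%s\t%s\n' "$key" "$line"
done < templist | sort | cut -f2- > templist.sorted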
bash 3.1.17(2). I'm trying to write a shell script which must operate on each line of an ASCII text file. So all the code must be inside a loop, and inside the loop the first thing should be to read the next line from the file. I know the bash read command, but it reads from stdin. Is there any way to make it read from a file?
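For what it's worth, the usual pattern is to redirect the file into the loop so read takes its input from the file rather than stdin (input.txt is just a placeholder name):
Code:
while IFS= read -r line; do
    echo "processing: $line"             # placeholder for the real per-line work
done < input.txt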
I have two files, file1.traj and file2.traj. Both files contain identical data arranged in the same format. The first line of both files is a comment.
At line 7843 of both files there is a cartesian coordinate X, Y and Z (three values), and at line 15685 there are another three values. The number of lines between two cartesian coordinate lines is 7841, and there are a few hundred thousand lines in a file.
What I need to do is copy the X Y Z coordinate (three values) from file1.traj at line 7843 and paste it into file2.traj at the same line number. The next one will be line 15685 from file1.traj, replacing line 15685 of file2.traj, and so on until the end of the file; I don't want the other lines (data) in file2.traj to get altered. In other words, copy and substitute only the selected lines from file1.traj into file2.traj.
I tried to use the paste command, but I can't make it work on a specified line alone.
Here I showed the data format in the file. I used the line numbers for clarity.
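A sketch of one way to do this with awk, assuming (as described above) that the coordinate lines start at line 7843 and repeat every 7842 lines; file2.new.traj is only a placeholder output name:
Code:
awk 'NR == FNR { if (FNR >= 7843 && (FNR - 7843) % 7842 == 0) coord[FNR] = $0; next }
     FNR in coord { print coord[FNR]; next }
     { print }' file1.traj file2.traj > file2.new.traj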
I want to access a file and check the length of every line. Then I want to replace every line longer than 10 characters with a message. Does anyone have a clue how to do that?
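If awk is acceptable, a minimal sketch would be the following (the message text is only a placeholder):
Code:
awk 'length($0) > 10 { print "LINE TOO LONG"; next } { print }' input.txt > output.txt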
I am trying to write a program in C which compares two files and prints the line that is equal.
Here file1.txt has
and file2.txt has
Note: file2.txt consists of only a single string, whereas file1.txt has multiple lines. Actually I'm comparing two files of md5sum values.
Here is the code, but it compares only the first line of the files, whereas it should compare against the whole of file1. Sorry, I am a beginner in C; can anyone suggest some modification to this code so that it can compare file2 with the entire file1?
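Not a fix for the C code itself, but for reference the same comparison can be done from the shell: print every line of file1.txt that exactly matches a line (the single md5sum string) in file2.txt.
Code:
grep -Fx -f file2.txt file1.txt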
How can I do the following with grep? I want to extract 2 lines from a text file. The fixed known part, if it exists, will be static text, and the text line after it will change.
A sample file . . textline1
[code]....
If the fixed part does not exist, how can I return error code 1?
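A sketch, with "STATIC TEXT" standing in for the fixed known part: -A 1 prints the matching line plus the line after it, and grep itself already exits with status 1 when the text is not found.
Code:
grep -A 1 -F "STATIC TEXT" file.txt || exit 1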
I would like to know how to print the line number in a script. My requirement is: I have a script which is about ~5000 lines long. If any errors happen I just exit, and I would like to add the line number of the script where the error happened.
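One common approach (a sketch, not tied to the original script) is to let an ERR trap report $LINENO, so any failing command prints its own line number before exiting:
Code:
set -E                                          # make the ERR trap fire inside functions too
trap 'echo "Error at line $LINENO"; exit 1' ERR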
I simply want to read a file "data.txt" line by line, then char by char, and add them into a result var. The file is supposed to always contain numeric values.
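A rough sketch, assuming "add them" means summing every digit in the file (data.txt is the name from the post and is expected to contain only numeric characters):
Code:
result=0
while IFS= read -r line; do
    for (( i = 0; i < ${#line}; i++ )); do
        ch=${line:i:1}                   # one character at a time
        result=$(( result + ch ))
    done
done < data.txt
echo "$result"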
I'm writing a mass SNMP toner check which polls any toners available to be polled over SNMP, but when using a loop statement I get the results on different lines; which sounds good, except that the tool I use to check with (Nagios) ignores the newlines.
Is there any way I can get the output on one line? Also, I need to raise a fault if any of the toners are below a specific level (with Nagios you raise faults with the exit code); is there any way I can do this without exiting the loop? Code below with bits and bobs commented out.
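A minimal sketch of one way to do both: collect the per-toner results in a variable, remember the worst status, and only exit after the loop. Here printers, check_toner and the threshold are all hypothetical stand-ins for whatever the real script uses.
Code:
output=""
status=0                                  # 0 = OK, 2 = CRITICAL in Nagios terms
threshold=10                              # assumed toner level that counts as a fault
for printer in "${printers[@]}"; do       # printers is a hypothetical host array
    level=$(check_toner "$printer")       # check_toner is a hypothetical helper
    output+="$printer=$level% "
    if (( level < threshold )); then
        status=2                          # remember the fault but keep looping
    fi
done
echo "$output"
exit "$status"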
I would like to delete a single line from a file that contains many lines matching the same values as two parameters. Again, I would like to delete a single line and not all the lines that contain the parameters. How can I do this in bash?
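A sketch using awk, with param1 and param2 as hypothetical stand-ins for the two values: delete only the first line containing both, and keep everything else.
Code:
awk -v p1="$param1" -v p2="$param2" \
    '!done && index($0, p1) && index($0, p2) { done = 1; next } { print }' \
    file.txt > file.tmp && mv file.tmp file.txt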
I keep time sheet entries at work in an sqlite database called 'timesheet'. I have a shell script called 'today' which queries for all timesheet entries which are less than 24 hours old; it looks like this:
Below is the part of my bash script which uses awk to fetch a line from a file. choice is set by a case statement, and I know it is receiving a proper number because of the echo statement. The problem is with the syntax of the awk command; it says the error is with one of the ' characters, but when I run the command at the command line and replace "$choice" with a number it works properly. So I am not sure what is going on.
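A minimal sketch: passing the shell variable to awk with -v sidesteps the quote-nesting problem entirely. It assumes the goal is to fetch line number $choice from a hypothetical $datafile.
Code:
line=$(awk -v n="$choice" 'NR == n { print; exit }' "$datafile")
echo "$line"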
I need a command to search for a string in a file and then convert the next string on the same line from hexadecimal to binary. I was already able to put everything in capitals. The original file can look like this:
E 2 C 1 794 T ffff E 2 C 1 787
It is not always FFFF! I am trying to do this on the whole file at once, not reading it line by line (using while).
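A rough sketch with gawk, assuming the search string is the "T" token and the word right after it is the hex value to convert; the whole file is processed in one pass, with no while loop.
Code:
gawk '{
    for (i = 1; i < NF; i++) {
        if ($i == "T") {
            dec = strtonum("0x" $(i + 1))             # hex -> decimal
            bin = ""
            if (dec == 0) bin = "0"
            while (dec > 0) { bin = (dec % 2) bin; dec = int(dec / 2) }
            $(i + 1) = bin                            # replace hex with binary
        }
    }
    print
}' file.txt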
I've got a bash script I'm using to download a text-file list of links via axel. What I'd like to do is automate the movement of completed links in the for loop when axel has successfully completed the download. This is what I've got. I figure that I can just echo-append the line to a new file, but what is the easiest way to delete the line with the link I just downloaded?
Code:
#!/bin/bash
for i in $( cat $1 ); do
    axel --alternate --num-connections=6 $i
    export RC=$?
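A possible continuation inside that loop (just a sketch): on a successful download, append the link to a hypothetical completed.txt and filter it out of the original list file with grep.
Code:
if [ "$RC" -eq 0 ]; then
    echo "$i" >> completed.txt
    grep -vFx -- "$i" "$1" > "$1.tmp"     # keep every line except the finished link
    mv "$1.tmp" "$1"
fi
done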
I have a project due for my Intro to C++ class and we are supposed to generate a file listing that will take as input a C++ source file with a .cpp extension and make a copy of it with a .lst extension that has a line number preceding each and every line.
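Not the C++ program the assignment asks for, but for comparison the same effect from the shell is a one-liner (program.cpp is a hypothetical file name):
Code:
nl -ba program.cpp > program.lst     # number every line, including blank ones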
I wish to add information to one of my files based on matching IDs,
Here is an example
(the ID is the 3rd column)
(the ID is the 2nd column)
And the output i wish to be
OUTPUT:
So as you can see, the ones that do not match are still present, and the ones that do match just have the extra information from file2.txt added to them.
I thought about using join, but that only seems to join the ones that match and displays those only. I would like all the information in the output file.
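A sketch with join, assuming whitespace-separated columns with the ID in column 3 of file1.txt and column 2 of file2.txt; -a 1 keeps the file1.txt lines that have no match, which plain join would drop.
Code:
join -1 3 -2 2 -a 1 <(sort -k3,3 file1.txt) <(sort -k2,2 file2.txt) > output.txt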
At the moment I have a flat file which is being used by a few people. I want a script to remotely change the file, so I can start logging who is doing what. At this point, here is one requirement I am trying to develop. We have text blocks which pretty much look like this. I hope this is somewhat clear: I try to find $param for the right $workflow and change that. Can you help me to find $$var3 and change that?
I have two txt files containing x and y coordinates: xcoord.txt and ycoord.txt. I need to open them and read them line by line to get each coordinate; then each time I need to update the Xs and Ys parameters inside another file called "dc.in" with the grabbed values.
Finally, each time I need to run two exe files (dc_2002 and st_vac) and produce the corresponding output for each Xs and Ys (dc.in is an input file for these exe files).
I have written the following code, but it does not work:
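Since the original code is not shown above, this is only a sketch of one way it might be structured; it assumes dc.in contains lines starting with "Xs" and "Ys" whose values get replaced on every iteration, and that the two programs live in the current directory.
Code:
paste xcoord.txt ycoord.txt | while read -r x y; do
    sed -i -e "s/^Xs.*/Xs $x/" -e "s/^Ys.*/Ys $y/" dc.in   # update both parameters
    ./dc_2002
    ./st_vac
done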
Was wondering if any Perl gurus could help me with a quick log file adjustment. I have a text file that looks like so (tabs and newlines are revealed so you can see what separates the data):
There are maybe 100 lines of text in this file at any given time. I need to delete all duplicate lines, looking only at the first bit of text prior to the first tab. It doesn't matter which one gets deleted, as long as no two lines begin with the same text before the first tab. So in this example, either the first line "1234" or the last line "1234" would need to be deleted. I already have code in my script that opens the files; I just need the code that reads the text into an array, finds matches based on the above criteria, and makes the deletions.
If it would be easier, I can even do a system call and use SED (v4.1.5) and/or AWK (3.1.5) instead.
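Since awk is an acceptable fallback here, deduplicating on the text before the first tab can be a one-liner; this keeps the first occurrence of each key (logfile.txt is a placeholder name).
Code:
awk -F'\t' '!seen[$1]++' logfile.txt > logfile.dedup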