I have a script which reads in a series of files, one at a time, with a while-do-done loop; each file goes through various greps/awks, and the extracted info is saved to various files for later use, i.e....
The script is being run on Red Hat Linux.
In one of the grep/awk stages the output is currently 2 columns (min max), i.e. ...| awk '{print $1, $2}' | sort -u, which outputs (e.g.)
The number of "min max" pairs varies from file to file. I want to output a single column of the unique numbers covered by the min max pairs, and get a count of them for input to a file... i.e...
Where <PROCESS> is some process/technique that will generate a single column of integers (increment of 1) to pipe into the next command (sort -u).
i.e. (example from above)
Have tried the seq command - it only works for a single pair of input, i.e.
Is there any command like seq which will output a single column (increment 1) based on an input of min max pairs, to pipe onwards to the next command?
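A small awk loop can play the role of <PROCESS>: for each min max pair it prints every integer in the range on its own line. A minimal sketch, assuming the pairs arrive two per line on standard input:

Code:
... | awk '{for (i = $1; i <= $2; i++) print i}' | sort -u

Appending | wc -l after the sort -u then yields the count of unique numbers.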
I am trying to find a proper regex to match the two numbers in the following log entry.
Code:
15:08:16.142 INF Found 64468
15:08:16.142 ERR [Uniform test code=64469]
Basically the pattern I'm looking for will match the two different numbers spanned across the two lines. I thought I needed to use multi-line mode as follows, but this doesn't match on [URL]...
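One way to match across lines is to slurp the whole log into a single string and let the /s modifier make . match newlines. A minimal sketch in Perl (the pattern and the file name app.log are assumptions based on the sample above):

Code:
perl -0777 -ne 'print "$1 $2\n" if /Found (\d+).*?code=(\d+)/s' app.log

The -0777 switch reads the entire file in one gulp, so a single match can span both lines.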
And I want to be able to pipe it to sort on that third column, by letter first, then number. But I keep getting files sorted like:
(Field separations all start at the same place, so the columns are not jagged like above.)
I have read the sort man pages and have tried -n for the numbers and -k for the position to start sorting, among other things. I also tried supplying a second position to start sorting, which sort should supposedly fall back to when two entries are identical at the first position compared, but it seems to just ignore the second one. I just can't get it to sort the numbers properly...
For now I am manually opening the file in emacs and rearranging the lines by hand, which is, needless to say, very time consuming.
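A frequent gotcha here is that -k3 on its own means "from field 3 to the end of the line", so later fields leak into the comparison; a key needs an end position too. A hedged sketch, assuming the third column holds entries like a1, a10, b2 (a letter followed by a number):

Code:
sort -k3.1,3.1 -k3.2n file

The first key compares only the letter (character 1 of field 3, and nothing else); the second key sorts numerically starting at character 2 of field 3.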
I would like to make a file with all this data in one column, like
a1
a2
.
.
[code]....
Can it be done with awk or some other command? Also, is it possible to then add another column in front of this one with the numbers of the lines (one for every previous column), like...
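A minimal awk sketch for both parts, assuming whitespace-separated fields and a hypothetical input file data.txt:

Code:
# every field on its own line, as one single column
awk '{for (i = 1; i <= NF; i++) print $i}' data.txt

# the same, prefixed with the column number each value came from
awk '{for (i = 1; i <= NF; i++) print i, $i}' data.txt

For a running line count instead of the original column number, pipe the first command through nl.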
I am writing a small search program for my class. I have decided to use indexing for my program. I've researched online about indexing and how search engines do it. If I'm going to do that, I need to create inverted files to associate files with numbers (the numbers being the indexes of my paths). Now I was wondering, what would be the best way to create an inverted file? I was going to create SQL tables using the MySQL API in C, but then again there is no array data type or vector to store a few numbers in a single column in MySQL, and it is not advised to use ENUM or SET.
I want to calculate the average and standard deviation. As a first step I want to calculate the average of the data, then calculate (del = data - avg) for all the data.
I expect to get
Code:
For this I use AWK and the code goes like this
Code:
But I get different answers.
Code:
Why are the answers so different? Since this is wrong, I cannot continue on to calculate the standard deviation.
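If the deviations are computed in the same pass that accumulates the sum, early lines get compared against a running rather than final average, which would explain the differing answers. A two-pass sketch, assuming one value per line in a hypothetical data.txt (the file is read twice):

Code:
awk 'NR == FNR {sum += $1; n++; next}   # pass 1: accumulate the total
     FNR == 1  {avg = sum / n}          # pass 2 begins: average is final
     {print $1, $1 - avg}               # del = data - avg
' data.txt data.txt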
I am trying to find files in a directory whose names contain numbers. I have tried ls /etc *[0-9]* but that doesn't work. If I cd to /etc and run ls *[0-9]* it almost works, but it also includes results from within matching directories. My last thought was to try find /etc [0-9] -type f, but this does not work either. My second problem is that I am trying to get a list of files in a directory that were changed less than 10 hours ago, using grep, while leaving out directories. I am completely stuck on the second problem.
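The first attempt fails because the shell expands *[0-9]* against the current directory before ls ever runs. Both tasks fit find better than ls or grep; a minimal sketch, using /etc and the 10-hour window from the post:

Code:
# regular files whose names contain a digit
find /etc -maxdepth 1 -type f -name '*[0-9]*'

# regular files modified less than 10 hours (600 minutes) ago
find /etc -maxdepth 1 -type f -mmin -600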
Does anyone know of a good site/book/guide to learn about Linux web server administration? Also, how do you find your own nameserver numbers? Would that just be the IP of my web server? Networking isn't my forte, but I do intend to learn with this project.
I'm new to the Linux world and it's been quite some time since I've done any programming. However, I'm writing a program which simply calculates the load average of a process. In doing this I need to use the uptime command for Linux in a Java program. I've done a little bit of searching on the net and it mentions this is possible by using Java's Runtime command. Unfortunately, I have yet to find a working example of this. I've tried simply reading the /proc/uptime file, but I have no clue how to format the two numbers (in seconds) to match what you get by just typing uptime at the Linux command prompt.
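For the formatting half of the question, here is a shell sketch of the conversion uptime itself performs. /proc/uptime holds two floats (seconds since boot, and idle seconds summed across CPUs); the first breaks down into days, hours and minutes:

Code:
read up idle < /proc/uptime
secs=${up%.*}                      # drop the fractional part
printf 'up %d days, %02d:%02d\n' \
    $((secs / 86400)) $((secs % 86400 / 3600)) $((secs % 3600 / 60))

The same integer arithmetic carries over directly to Java once the file has been read.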
The problem is simple but I can't figure out how to solve it; I've tried every way I know with no result. I'm using a simple Perl script with DBI: I do a select from one table and an update in another table with the results from the select, but I can't preserve the literal '\n' sequences returned from the select when doing the update. I simply want a '\n' from the first table to stay a '\n' in the second, but Postgres turns them into real newlines. I tried to escape the '\n' with '\\n', "\n", and E (I mean E'value here') in front of the value being updated, but they always end up as real newlines, not '\n', in the new table.
#!/bin/bash ls -lhGg | while read line; do echo "$line"; done | awk ' { print $3" "$6 } '
What I want to do is be able to print column 3 and every column greater than 5, through to the end of the line, since different filenames can have different numbers of words in them and blank space is the separator. My current code works just fine if the filename has no blank space.
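A sketch of one way to do this with a field loop, so a filename containing spaces comes out intact (fields 6 through NF here, matching the original; adjust if the date format occupies a different number of fields):

Code:
ls -lhGg | awk '{
    printf "%s", $3
    for (i = 6; i <= NF; i++) printf " %s", $i
    print ""
}'

The while read loop in the original adds nothing and can be dropped; awk reads the pipe line by line anyway.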
I have a text file that is just a list of servers, and I need to add the word hostname in front of each of them... It must be brain fart, but I can't think of how to do this. Basically I need this:
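Prepending a fixed word to every line is a one-line sed job; servers.txt and the output name are stand-ins:

Code:
sed 's/^/hostname /' servers.txt > servers.new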
With tr ' ' '\n' < file I can make all columns become separate rows, but as you see, x3 and x4 have to be grouped when transposing. Or should I use awk for this one?
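awk is the easier tool once some fields must stay together; a sketch under the assumption of four whitespace-separated columns per line, with x3 and x4 kept on one row:

Code:
awk '{print $1; print $2; print $3, $4}' file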
There is a very conspicuous inaccuracy in the output of df. I should mention that it was noticed due to a sudden change in the amount of space that was left on the backup partition. The df -h command produces the following output.
I have some code that opens a directory and reads in the names of files, which are e.g. 0001, 0002, 0003 up to 9999. I need to read all these numbers and then generate a new number that is not one of the numbers already in use. Here is my code to check the files in the directory:
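A shell sketch of the generating step, run inside the directory in question: seq -w pads every candidate to the same width as the file names, and the first name that doesn't exist yet is free.

Code:
for n in $(seq -w 1 9999); do
    if [ ! -e "$n" ]; then
        echo "next free number: $n"
        break
    fi
done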
I have a large tab-delimited text file, about 17 GB. It only has 6 columns. Column number 4 is all numbers, ranging from 1-1000. I want to count how many times each number occurred, so the output I want is two columns: the first is a number, the second is how many times it occurred. I tried
head -n 1000 coverage | cut -f 4 | uniq -c
This didn't work for me; the first column returned is not unique.
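uniq -c only collapses adjacent duplicates, so the input has to be sorted first (and head -n 1000 only looks at the first 1000 lines). Two sketches, the second of which counts in a single pass and avoids sorting a 17 GB file:

Code:
cut -f4 coverage | sort -n | uniq -c

awk -F'\t' '{count[$4]++} END {for (n in count) print n, count[n]}' coverage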
I've been using an awk script to calculate my data... I have 3 files:
file a1.txt:
2 3 4
[code]....
The results were (3.5, 6 and 3), which is pretty easy... Now I want to combine all this into 1 file where each result gets its own column, called avg.txt, which would have something like this in the end:
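A hedged sketch of the combining step, assuming the other two files are named a2.txt and a3.txt and that each file's average becomes one column of avg.txt:

Code:
avg() { awk '{for (i = 1; i <= NF; i++) {s += $i; n++}} END {print s / n}' "$1"; }
paste <(avg a1.txt) <(avg a2.txt) <(avg a3.txt) > avg.txt

paste joins the three one-line outputs side by side, tab-separated.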
I'm having problems adding up column totals using arrays. I've got it to add up the row totals and display them at the end of each row. Here is my code so far:
Code:
#include <stdio.h>
#include <string.h>

const int maxrows = 10;
[code]...
What I need it to do is add up the columns and display the totals at the bottom of each column, similar to how the row totals display.
I need to extract the info from the RC column for the first 4 players of Liverpool. The test code I have does the same, but can anyone show me a better way of doing it? I could do it easily with gawk -F"|" and print the respective column, but I need to do this in Perl.
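A compact Perl sketch using autosplit; the field indexes are assumptions (club in field 2, RC in field 5 of a pipe-delimited file named stats.txt), so adjust them to the real layout:

Code:
perl -F'\|' -lane 'print $F[4] if $F[1] =~ /liverpool/i && $c++ < 4' stats.txt

-F'\|' splits each line on the pipe, -a puts the fields in @F, and $c++ < 4 stops printing after the first four matching players.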