I am trying to write a script which compares a log file with a reference file. The log file has a table: the LHS of the table contains constant strings, and the RHS values change whenever the configuration changes. [code]...
Here I am looking for a script which compares the test.log file (whose RHS data types are known in advance, whether digit or string) with test.Ref, the reference file for test.log. It would be really helpful if any of you could give me some ideas about writing this script.
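A minimal sketch, assuming each table line looks like "key value" with whitespace between the constant LHS and the RHS (the file names come from the question; the line format is an assumption):

Code:
# load the reference values first, then flag any key whose value
# differs in the log; assumes one "key value" pair per line
awk 'NR == FNR {ref[$1] = $2; next}
     ($1 in ref) && ref[$1] != $2 {print $1 ": expected " ref[$1] ", got " $2}' test.Ref test.log

If the RHS can itself contain spaces, split each line on the first field only instead of relying on $2.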
I have a text file with 2 columns. Column A has 69,000 rows and holds our complete product list; Column B has 49,000 rows and holds the product list from Manufacturer 1. Only some rows are common between the 2 columns, and Column B is not a subset of Column A: each column has entries the other lacks. I need to know which rows from Column B are common with Column A, and which rows from Column B are not in Column A. Essentially I want to know, from this list, how many of our products come from Manufacturer 1, and how many products the manufacturer has which we don't carry.
How would I achieve this? My natural approach to this kind of obstacle is to reach for MS Excel and use its lookup function, but that isn't working: it takes forever and hangs, since the file is so huge (and probably my Excel skills are really bad).
How can I do this from the command line? I am looking for an awk command if possible, rather than sed, since I am trying to pick up its syntax and usage. My thought process is: sort columns A and B, then for every row in A, look it up and output based on the condition. I don't know if I am on the right track.
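You are on the right track, though awk can do the lookup in one pass without sorting. A minimal sketch, assuming the two columns have first been split into one-product-per-line files colA.txt and colB.txt (hypothetical names, e.g. via cut -f1 and cut -f2 on the tab-separated file):

Code:
# rows of B that also appear in A (products we carry from Manufacturer 1)
awk 'NR == FNR {a[$0]; next} $0 in a' colA.txt colB.txt > common.txt
# rows of B missing from A (their products we do not carry)
awk 'NR == FNR {a[$0]; next} !($0 in a)' colA.txt colB.txt > b_only.txt
wc -l common.txt b_only.txt    # the two counts you are after

If you do sort both files first, comm -12 and comm -13 produce the same two lists.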
If there's a tab-delimited file under /usr/desktop, how can I determine the number of rows and columns of the file in shell? And, if told that the 3rd column of the file contains only numerical values and that all values in the 5th column are unique, how can I verify these in shell?
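A minimal sketch, assuming the file is /usr/desktop/data.txt (hypothetical name) and every row has the same number of tab-separated fields:

Code:
f=/usr/desktop/data.txt
wc -l < "$f"                                    # number of rows
awk -F'\t' 'NR == 1 {print NF; exit}' "$f"      # number of columns, from row 1
# verify column 3 is numeric: prints offenders, no output means it passed
awk -F'\t' '$3 !~ /^-?[0-9]+(\.[0-9]+)?$/ {print NR ": " $3}' "$f"
# verify column 5 is unique: prints duplicates, no output means it passed
awk -F'\t' 'seen[$5]++ {print NR ": " $5}' "$f"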
I've searched everywhere and I can't come up with a good solution. For each line I need to find the average, min, and max. I've seen plenty of solutions where the number of columns is fixed; unfortunately for me these lines can get pretty large. My thought was to read each line into an array, loop through the array, and find the avg, min, and max that way, but I haven't had much luck. I can read each line using a while loop, but I'm having trouble with the array part. Or perhaps that's not the best solution?
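That instinct is fine, but awk sidesteps the bash array handling entirely: NF always holds the number of fields on the current line, however many there are. A minimal sketch, assuming whitespace-separated numbers in data.txt (hypothetical name):

Code:
awk 'NF {                            # skip blank lines
    min = max = sum = $1
    for (i = 2; i <= NF; i++) {
        sum += $i
        if ($i < min) min = $i
        if ($i > max) max = $i
    }
    print "avg=" sum / NF, "min=" min, "max=" max
}' data.txt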
I am a newbie on openSUSE, still in transition from Windows. How can I display file properties (details other than size, date, permissions, owner, group, type) using a file manager application such as Dolphin or Konqueror? I have heaps of files ported from the Windows environment, where I used to store descriptive subjects in one of the file properties fields. On Windows I would have used a file manager such as Explorer to display the selected file's properties when searching for a particular file before opening it. I know OpenOffice supports the file properties feature, but unfortunately it would appear that Linux does not currently have a file manager application ready to display this information.
I'm writing a script and I have doubts about how to assign values to an already established variable. The value for the variable would come from a file with three columns. I'm using the awk command for this. Am I doing it correctly? Which of the following two ways is the better one, or if both are wrong, which one should I use?
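The two snippets referred to above did not survive in this post, so for comparison, this is the usual command-substitution idiom for capturing an awk field in a shell variable (the file name and column number are assumptions):

Code:
value=$(awk '{print $2}' input.txt)           # column 2 of every row
value=$(awk 'NR == 3 {print $2}' input.txt)   # column 2 of row 3 only
echo "$value"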
Each line of the file I am sorting is in the following format:
<url> <month> <day>
For example:
[URL]
I wrote the following to sort:
Code:
#!/usr/bin/perl
$in = shift;
chomp($in);
[code]....
The script worked fine on my small test files, but failed on my real input file. The input file is 18MB and contains more than 300,000 lines. The output contains some lines like this:
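The garbled output lines did not survive in this post, but for a plain <url> <month> <day> file there is no need to hold everything in memory; GNU sort streams an 18MB file easily. A minimal sketch, assuming the month field is a three-letter English name (Jan, Feb, ...):

Code:
# sort by month (field 2; -M understands Jan..Dec), then by day (field 3, numeric)
sort -k2,2M -k3,3n input.txt > sorted.txt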
Then I run cnf=`ifconfig`, giving me the config of the NICs. After that I want to compare $cnf to see if its value is listed in the file, and if it is, do things. There might also be something better to use than ifconfig, but it worked so I just stuck with it. At first I had just one subnet, but now the list is starting to grow and I want to keep it in a file instead of having every subnet hard-coded in the if-statement.
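A minimal sketch, assuming a file subnets.txt (hypothetical name) with one network prefix per line, e.g. 192.168.1.:

Code:
cnf=$(ifconfig)                     # or: ip -4 addr show
while read -r subnet; do
    if printf '%s\n' "$cnf" | grep -q "$subnet"; then
        echo "$subnet is configured here"    # "do things" goes here
    fi
done < subnets.txt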
I have a list of locked accounts in a file called lockedusers. How can I, with a bash script, compare it to /etc/passwd on the server and print the accounts that match?
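A minimal sketch, assuming lockedusers holds one username per line:

Code:
# print every locked account that also has an entry in /etc/passwd
awk -F: 'NR == FNR {locked[$1]; next} $1 in locked {print $1}' lockedusers /etc/passwd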
I am trying to take a list of patterns from one file, grep them against another file, and print out only the unique patterns. Unfortunately these files are so large that the command has yet to run to completion. Here's the command that I used:
Code:
grep -L -f file_one.txt file_two.txt > output.output

Here's some example data:
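Note that grep -L prints the names of files containing no match, not the patterns themselves, so this command cannot produce a pattern list even when it finishes. If the patterns are literal whole lines, fixed-string matching is far faster on large files; a minimal sketch under that assumption:

Code:
# lines of file_one.txt that never appear as a whole line in file_two.txt
grep -F -x -v -f file_two.txt file_one.txt > output.output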
I'm learning how to program in bash and I have a problem: I am trying to check values against pairs of bounds (taken from another file). So far my solution is a nested for loop, but that causes it to compare every value against every other value. Here is a visualization of what I want:
file.a
2,3,4,5
file.b
3 5
[code]...
I want the values 2, 3, 4, 5 from file.a to be compared against the ranges 3 5, 6 9, 1 2, 4 7 from file.b (var1 is the value I'm comparing, var2 is the lower bound, var3 is the upper bound).
for i in $var1
do
    for k in $var2
    do
[code]....
My problem with the above code is that it compares EVERYTHING pairwise, instead of testing each value against the bound pairs I want (3 5, 6 9, etc.).
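awk handles this more naturally than nested bash loops. A minimal sketch, assuming file.a is a single comma-separated line and file.b holds one "lower upper" pair per line, as in the visualization above:

Code:
awk 'NR == FNR {n = split($0, vals, ","); next}
     {for (i = 1; i <= n; i++)
          if (vals[i] + 0 >= $1 + 0 && vals[i] + 0 <= $2 + 0)
              print vals[i], "is between", $1, "and", $2}' file.a file.b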
I have an internal hard drive and an external hard drive, both with about 350 GB of data. The data came from the same source, but over the last couple of years, different people have moved files around to different directories, and some files have been deleted. Now I want to merge all the files onto the internal hard drive. I estimate that 80% of the files on the external hard drive are the same, so I don't want to copy 290+ GB of data over when I already have it.
Therefore, I need a way to find just the files on the external hard drive that don't already exist on the internal one. In other words, I need to create two lists of file names irrespective of directories and compare them, selecting only the file names that exist in one list OR the other. I've Googled for solutions but can't find anything suitable. There are ways to create text files of the file names and compare them with diff, but they have to be in the same order, and since these files are in vastly different directories, that won't work.
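A minimal sketch with GNU find, assuming the drives are mounted at /mnt/external and /mnt/internal (hypothetical mount points); sorting puts both name lists into the common order that comm requires, and directories are ignored entirely:

Code:
find /mnt/external -type f -printf '%f\n' | sort -u > external.txt
find /mnt/internal -type f -printf '%f\n' | sort -u > internal.txt
comm -23 external.txt internal.txt > copy_these.txt   # names found only on the external drive

Bear in mind this matches by name alone: two different files that happen to share a name are treated as the same file. Comparing checksums (e.g. md5sum) is safer if that is a risk.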
I need to get the names of all installed packages on 2 machines and save them in 2 text files, then compare these 2 files to find the differences, and from that I would know the differences between the 2 machines. Is it possible to do that, and what program could I use?
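Yes. A minimal sketch for an RPM-based distro such as openSUSE (on Debian/Ubuntu, substitute dpkg-query -W -f='${Package}\n'):

Code:
# run on each machine, then copy the lists to one place
rpm -qa --qf '%{NAME}\n' | sort > machine1.txt
rpm -qa --qf '%{NAME}\n' | sort > machine2.txt
comm -3 machine1.txt machine2.txt    # or: diff machine1.txt machine2.txt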
I'd like to extract a single column from 5 different files and put them together in an output file. I saw a similar question for 2 input files, and the line of code worked very well. The code is:
Code:
awk 'NR==FNR{a[NR]=$2; next} {print a[FNR], $2}' file1 file2

I added file3, file4 and file5 at the end, but it doesn't work. Does anyone know what I have to do?
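It fails because NR==FNR is only true while awk reads the first file, so that trick can distinguish exactly two files. A sketch that accumulates column 2 from however many files you list (adjust $2 if you need a different column):

Code:
awk '{row[FNR] = (FNR in row) ? row[FNR] " " $2 : $2}
     FNR > max {max = FNR}
     END {for (i = 1; i <= max; i++) print row[i]}' file1 file2 file3 file4 file5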
I want to merge columns (selectively) from several files and create a new file with the merged output. I saw some suggestions to use pr/paste to join the columns and then awk to pick up the columns:
Code:
pr -m -t -s file1 file2 | gawk '{print $4,$5,$6,$1}'

But I have hundreds of files, so I cannot manually pick out columns with awk as in the example above.
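When the field positions shift with every extra input file, it is easier to extract the wanted column from each file first and paste the results side by side. A minimal sketch, assuming whitespace-separated files of equal length; the *.dat glob, column number, and output name are assumptions:

Code:
col=2                                    # the column to pull from every file
for f in *.dat; do
    awk -v c="$col" '{print $c}' "$f" > "$f.col"
done
paste *.col > merged.out
rm -- *.col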
I'm pretty sure this is super simple, but I've searched and searched, and for the life of me I can't find any info on how to limit the columns that the interactive top program displays using arguments passed from the command line. I recall seeing something formatted like this:
~/top -f (kdx)
which would only show the %CPU, UID, and program name fields/columns. I'd like to display the results in a really narrow terminal window on the right side of my desktop.
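As far as I know, procps top has no such command-line flag: fields are chosen interactively (press f to toggle fields, then W to save the layout to ~/.toprc). If a one-shot listing is enough, ps can print exactly those columns; a minimal sketch:

Code:
# %CPU, UID and command name only, busiest processes first
ps -eo pcpu,uid,comm --sort=-pcpu | head -20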
I'm working my way through Trainsignal's CompTIA Linux+ training course and I have a question about IRQs. According to the lesson, using the command "cat /proc/interrupts", I need to memorize the system IRQ numbers 0-15. But when I use this command, I get a somewhat unordered list; see below.
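The kernel lists interrupts in its own order, but you can filter and sort the classic ISA range yourself; a minimal sketch:

Code:
# keep only IRQ lines 0-15 and sort them numerically
awk -F: '/^ *[0-9]+:/ && $1 + 0 <= 15' /proc/interrupts | sort -n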
I am running the command on a Mac, but since it is a generic Unix command and a command-line query, I thought I could ask on this forum. I am running:
Code:
df -h | grep '/dev/'

I get:

Code:
/dev/disk0s2              389Gi   62Gi    327Gi   16%   /
/dev/disk0s3              76Gi    24Gi    52Gi    32%   /Volumes/Backup
/dev/disk3s2              500Gi   47Gi    453Gi   10%   /Volumes/Misc

Note the huge space between the 1st and 2nd column. This is because I currently have some NAS drives mounted which are not showing due to the grep; when they are not mounted, the output is fine, with even spacing between each column (like between columns 2 and 3, or 3 and 4). I want to use (dare I say) sed or awk or something to reduce the space between the 1st and 2nd column so that it matches the spacing between the other columns. I am showing this output somewhere, and because of the extra space it does not display correctly. I also hope the command will still work when the NAS (afp) drives are not mounted; basically, consistency.
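column -t re-pads every field to a uniform width no matter which volumes are mounted, which gives the consistency you are after; and since you mentioned awk, a fixed-width printf works too (the widths, and the assumption of six fields as in the sample above, are mine):

Code:
# simplest: let column(1) align everything uniformly
df -h | grep '/dev/' | column -t
# or pin each field to a fixed width with awk
df -h | awk '/^\/dev\// {printf "%-14s %7s %7s %7s %5s  %s\n", $1, $2, $3, $4, $5, $6}'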