General :: Compare Files And Pull Out Required Data?
Nov 10, 2010
I have 2 files to compare and then print out the information that matches a certain pattern. I know basic scripting and was heading down the path of merging the 2 files together, but this is the wrong approach. Would really appreciate a script that can do what is required:
File 1 contains dates, times and IDs:
2010-10-28 10:42 5939697357
2010-10-28 11:56 5919543491
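A minimal awk sketch, assuming file 2 simply lists the IDs of interest, one per line (the real layout of file 2 isn't shown here): print every line of file 1 whose third field appears in file 2. Code:
awk 'NR == FNR { want[$1] = 1; next } ($3 in want)' file2 file1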
What is the best and simplest way to compare two directory structures without actually comparing the data in the files? diff -qr dir1 dir2 works, but it's really slow because it's comparing file contents too. Is there a switch for diff or another simple CLI tool to do this?
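One way to compare only the structure is to diff two sorted name listings instead of the trees themselves (bash process substitution; the directory names are examples). Code:
diff <(cd dir1 && find . | sort) <(cd dir2 && find . | sort)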
I have some data (separated by semicolons) with close to 240 rows in a text file temp1. temp2.txt stores 204 rows of data (also separated by semicolons). I want to: sort the data in both files by field 1, i.e. the first data field in every row, then compare the data in both files and print out the rows that are not equal into separate files. I was trying to do this with Excel using VLOOKUP, without a great deal of success; hence I'm exploring the shell script option.
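A rough sketch with sort and comm, assuming the files are named temp1.txt and temp2.txt; sorting whole lines also orders them by the first ';'-separated field, and comm then splits out the rows unique to each file. Code:
sort temp1.txt > temp1.sorted
sort temp2.txt > temp2.sorted
comm -23 temp1.sorted temp2.sorted > only_in_temp1.txt   # rows only in temp1
comm -13 temp1.sorted temp2.sorted > only_in_temp2.txt   # rows only in temp2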
I want to compare two files in Perl. I have two files, file1.txt and file2.txt. If column 1 of file2.txt matches column 1 of file1.txt, then I want my result in file3.txt (column1 column2 of file1.txt + column1 column2 column3 of file2.txt). This problem was solved with awk, but I want to do it in Perl.
Here I want to compare these two files with the diff command and put the common lines into a separate file. The file contents are strings of length 7, and every one starts with the character e. File 3's contents should be
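Whatever the exact expected contents, comm can pull out the lines common to both files (both inputs have to be sorted first); the output file name here is just an example. Code:
comm -12 <(sort file1) <(sort file2) > file3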
I want to compare 2 file names. For instance, I have the packages foo1.tgz and foo2.tgz, and I want a bash script that detects that foo2 is newer than foo1 and deletes foo1. Can it be done for managing a collection of Slackware packages?
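A rough bash sketch, assuming GNU sort (for the -V version-sort flag) and that the two package file names are passed as arguments and carry a version in the name; it keeps whichever name sorts as the newer version and deletes the other. Code:
#!/bin/bash
older=$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)
newer=$(printf '%s\n' "$1" "$2" | sort -V | tail -n 1)
if [ "$older" != "$newer" ]; then
    echo "keeping $newer, removing $older"
    rm -- "$older"
fi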
What command should be used to read strings from other files into the script, compare the strings from two different files to check whether they match, and then return the result to another script?
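One hedged way to do the comparison step is grep with a pattern file; the file names below are placeholders, and the exit status is what a calling script would test. Code:
grep -Fxf strings1.txt strings2.txt    # print lines of strings2 that appear verbatim in strings1
echo "match status: $?"                # 0 when at least one line matched, 1 when none did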
I have an external 3.5" USB 250Gb HDD which is showing symptoms of hardware problems (repeated /var/log/messages errors of "reset high speed USB device using ehci_hcd"). This was originally plugged in to my NSLU2 running Debian Etch. I have just installed Ubuntu Desktop 9.10 to a spare Pentium-3M laptop and was hoping to copy the contents of this HDD to a fresh drive. However, I cannot mount it even read-only; mount -o ro /dev/sde3 /mnt/disk fails, and the /var/log/messages error is "recovery required on readonly filesystem", "write access unavailable, cannot proceed". I cannot understand why mounting a disk read-only should require write access. Following advice I googled elsewhere, I tried running mke2fs -n /dev/sde3 to try to list the alternative superblocks - but once again I got the error that the device was read-only. How can I go about accessing the data on this disk?
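Assuming the partition is ext3/ext4 (the journal-recovery message suggests it), the noload mount option skips journal replay so a genuinely read-only mount becomes possible, and dumpe2fs (rather than mke2fs -n) is the safe way to list the backup superblocks. Code:
mount -o ro,noload /dev/sde3 /mnt/disk
dumpe2fs /dev/sde3 | grep -i superblock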
I'm working with Radiotap headers right now. I want to get the RSSI data. I ran into a problem that I can't figure out. The value that I need to get is:
Code: s8 IEEE80211_RADIOTAP_DBM_ANTSIGNAL
Now, when I printf it:
and I get this error simply running the program from the command line: DBD::mysql::st execute failed: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1 at ./submit.cgi line 24.
Is this the right syntax to use, both for this line:
my (@param) = $cgi->param("firstname","lastname","type") ;
as well as this one:
$sth=$dbh->prepare ("SELECT firstname,lastname,type FROM dts WHERE firstname LIKE $param[0] AND lastname LIKE $param[1] and type LIKE $param[2]" );
or should there be quotes around the $param[0] or something? (also is it $param[0] or $param(0)?)
I am using the diff command with the -r option to compare a large number of files and files in subdirectories. My main interest is to find out which files have been changed, not what the actual changes are, and since a lot of files have been changed it would be much easier to view the file names only. Is there an option for diff that might do this, or does there exist a similar tool/command that could do the job?
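diff's -q (--brief) switch reports only which files differ, not the changes themselves, so combining it with -r gives a list of changed names (directory names below are examples). Code:
diff -qr dir1 dir2
diff -qr dir1 dir2 | awk '/^Files/ {print $2}'   # just the changed file names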
I have a script that generates a report on my Linux machine. I would like to know if there is any way I can write a script to pull this report automatically from my Linux machine onto my Windows machine. Both the Linux and Windows machines are on the same network.
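One hedged option is to push the report from the Linux box to a Windows share with smbclient, e.g. from a cron job right after the report is generated; pulling it from Windows over SSH with pscp/WinSCP would work just as well. The host name, share, credentials and paths below are placeholders. Code:
smbclient //WINHOST/reports -U winuser%password -c "put /var/reports/report.txt report.txt"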
I have a gzipped dd image backup of my entire hda drive (OS X and ext4 partitions) that I created with the following command: dd if=/dev/sda | gzip > image032810.gz. Can I mount this image file to pull individual files off individual partitions?
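A gzipped image cannot be mounted directly; one sketch (needing enough free space for the uncompressed image, and a util-linux losetup that supports -P partition scanning) is the following. Code:
gunzip -c image032810.gz > image032810.img
losetup -fP --show image032810.img        # prints the loop device, e.g. /dev/loop0
mount -o ro /dev/loop0p2 /mnt/disk        # the partition number here is only an example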
I am trying to figure out a way to pull http links out of text files and then output the results in a log. The text files are in folders like this inside a source directory.
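A simple grep sketch: recursively pull anything that looks like an http(s) URL out of the .txt files under the source directory (the path is a placeholder) and write the hits, prefixed with their file names, to a log. Code:
grep -roE 'https?://[^[:space:]]+' --include='*.txt' /path/to/source > links.log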
I have two files (not sorted) and need to compare them line by line (i.e. the first line of file1 is to be compared to all the lines of file2, and so on for the rest of file1). The output will be an array of the length of file2. Any suggestion in bash other than a grep inside two read-line loops (which is time-consuming for files of ~1000s of lines)?
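An awk sketch avoids the nested read loops: load file1 into an array in one pass, then emit a 1 or 0 for every line of file2 depending on whether it appears anywhere in file1; mapfile (bash 4+) captures that into a bash array. The file names are assumptions. Code:
mapfile -t results < <(awk 'NR == FNR { seen[$0] = 1; next } { print (($0 in seen) ? 1 : 0) }' file1 file2)
echo "${results[@]}"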
I have to write a script that accepts two directory names (JIIT, JUIT) as positional parameters and checks which files are identical in the two directories; files having the same contents within the same directory are also considered identical. I tried using diff:
# both directories contain three files... file1, file2, file3
echo "Enter the directories:"
read d1
read d2
cd "$d1"
if diff file1.sh file2.sh > /dev/null; then echo "same 1,2"; else echo "different"; fi
if diff file1.sh file3.sh > /dev/null; then echo "same 1,3"; else echo "different"; fi
if diff file2.sh file3.sh > /dev/null; then echo "same 3,2"; else echo "different"; fi
cd ..
cd "$d2"
.....
I used the same code in the other directory for the three files. This is not running. I also want to know what to do when I need to compare files from different directories, i.e. JIIT and JUIT.
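A hedged alternative that takes the two directory names as positional parameters and uses cmp -s for the content test, reporting every identical pair both within and across the two directories. Code:
#!/bin/bash
d1=$1
d2=$2
for a in "$d1"/* "$d2"/*; do
    for b in "$d1"/* "$d2"/*; do
        [[ "$a" < "$b" ]] || continue      # report each pair only once
        if cmp -s "$a" "$b"; then
            echo "identical: $a $b"
        fi
    done
done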
ABD : 5869 events, relative ratio : 1.173800E-01 , sum of ratios : 1.173800E-01
VBD : 12147 events, relative ratio : 2.429400E-01 , sum of ratios : 3.603200E-01
SDF : 17000 events, relative ratio : 3.400000E-01 , sum of ratios : 7.003200E-01
I have two files with user DNs that were exported from two different LDAP directories. I want to write a script that reads (checks) the users (cn=user1) in file A to see if the users (cn=user1) exist in file B, and gives me nice output of which users are missing from file B. I have around 30k users in file A with the following format:
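Whatever the exact DN format, a sketch with grep and comm; the cn= extraction is a guess, so adjust the pattern (and the file names, which are placeholders) to match the actual exports. Code:
grep -o 'cn=[^,]*' fileA | sort -u > A.cns
grep -o 'cn=[^,]*' fileB | sort -u > B.cns
comm -23 A.cns B.cns > missing_in_fileB.txt   # cn values in fileA but not in fileB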
I want to compare the following two tab-delimited .txt files (both are subsets of the original files) by comparing Columns 3 and 4 simultaneously. It is easy to compare C3 because both C3s are just numbers, but how do I compare the C4s? Basically, in File1, "G,G" = G in File2, "C,C" = C in File2, "A,A" = A in File2, and "T,T" = T in File2. Conversely, A/T in Column 4 of File2 equals "A,T" or "T,A" in Column 4 of File1, C/T equals "C,T" or "T,C" in Column 4 of File1, and so on.
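One hedged awk sketch, assuming both files are tab-delimited, that column 1 is a shared key that lines the rows up, and that column 4 holds the genotypes; it normalises "T,A", "A/T" and bare "A" style values to a sorted allele pair before comparing. Code:
awk -F'\t' '
    # normalise a genotype into a sorted allele pair, e.g. "T,A" / "A/T" -> "AT"
    function norm(g,    n, a, t) {
        n = split(g, a, "[,/]")
        if (n == 1) a[2] = a[1]                  # a bare "G" means "G,G"
        if (a[1] > a[2]) { t = a[1]; a[1] = a[2]; a[2] = t }
        return a[1] a[2]
    }
    NR == FNR { c3[$1] = $3; c4[$1] = norm($4); next }   # read File2 first
    ($1 in c3) {
        status = ($3 == c3[$1] && norm($4) == c4[$1]) ? "MATCH" : "DIFF"
        print $1, status
    }
' File2.txt File1.txt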
I need to create a script that will compare the differences between two folders and then copy only the updated and new files to another directory. I know I need to use rsync here; I can write scripts, so the question is not really how to create a script but how to transfer only the new or changed files between two folders to a new location. Do I need to link these two folders first and then use the "--compare-dest" switch?
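A minimal rsync sketch, assuming dir_old holds the previous state, dir_new the current one, and dest is where only the new and changed files should land; --compare-dest wants an absolute path (or one relative to the destination), and there is no need to link the folders first. Code:
rsync -av --compare-dest=/absolute/path/to/dir_old/ /path/to/dir_new/ /path/to/dest/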