General :: Bash Script To Suppress Matches In Two Column List
Sep 2, 2010
I have been using comm to compare two simple single-column lists and suppress items that appear in the second list (the suppression list). That was extremely simple and basic; however, now list1 has two columns, and I must compare the second column of list1 with my suppression list.
Basically, I need to compare my user list and suppression list, suppress any users that exist in the suppression list, and then remove the second column (md5). I wasn't sure of the fastest way to make the comparison: whether there is a command similar to comm, or whether I need to build an array of users and check each one against the suppression list one by one, which seems pretty processor-intensive. Does anyone have any less cumbersome ideas?
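A minimal awk sketch of the lookup idea, assuming the user name is in column 1 of list1, the md5 in column 2, and that suppress.txt holds one user per line (swap $1/$2 if the columns are reversed):
Code:
awk 'NR == FNR { skip[$1]; next }     # first file: remember suppressed users
     !($1 in skip) { print $1 }       # second file: keep the user, drop the md5
' suppress.txt list1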
I have a text file and I need to replace the 3rd column of that file, from row 3 to the end of the file, with a column I have stored in a different text file. For example, the original file looks like the one given below:
So let's say I want to replace column 3, from row 3 to row 7, with the data from another file, which is given below:
54.00
239.00
53.00
237.00
52.00
165.00
235.0
So the final output file should look like this:
a.txt nobla 6 gadf 72.500 1.600 1.800 .850 5.250 8.540
A#  rad     ang     ht      prf     bk     sd     dia     type  blade
1   0.3081  54.00   1.9235  -17.50  18.00  -3.00  0.6250  1613  1
2   0.6509  239.00  2.0316  -17.50  18.00  -3.00  0.6250  1613  4
3   1.0128  53.00   2.1457  -17.50  18.00  -3.00  0.6250  1616  1
4   1.3748  237.00  2.2598  -17.50  18.00  -3.00  0.6250  1616  4
5   1.6986  52.00   2.3619  -17.50  18.00  -3.00  0.6250  1616  1
6   1.9347  165.00  2.4364  -17.51  18.00  -3.00  0.6250  1616  5
7   2.1327  235.00  2.4988  -17.34  18.00  -3.00  0.6250  1616  4
I will post whatever code I have tried soon. I started with the awk and cut commands, but never got it to work, and I also tried the paste command.
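A sketch of one awk approach, assuming the original data is in original.txt and the replacement values in newcol.txt, one per line (both file names are assumptions); note that assigning to $3 rebuilds the row with single spaces, so the original column alignment is lost:
Code:
awk 'NR == FNR { repl[NR] = $1; next }                     # slurp the replacement column
     FNR >= 3 && (FNR - 2) in repl { $3 = repl[FNR - 2] }  # swap in the new value
     { print }' newcol.txt original.txt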
I am doing a spreadsheet for work (for importing data into a new database), and I have hundreds of image files which need to have just their file names in one column called Product Code. Is it possible to use the ls command to list the contents of a directory in a single column so that I can copy and paste it into the spreadsheet?
Also, is there a single command to remove the file extensions for a batch of files? Bulk rename is what I need, I guess, but just to remove the file extension (.jpg on all of them). The normal use of ls lists the names in multiple columns, and when I copy and paste those it will not let me copy just one column at a time. The spreadsheet only has three columns:
New Product Code, Old Product Code, and Pictures. New Product Code will be left blank, and Old Product Code is just the image names. The Pictures column will be the path to the image for each one. I am not sure that is even possible in a spreadsheet.
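A sketch of both halves; the .jpg extension and the output file name are assumptions:
Code:
ls -1 *.jpg > filelist.txt                     # one name per line, ready for pasting
for f in *.jpg; do echo "${f%.jpg}"; done      # names with the extension stripped
# to actually rename the files themselves:
# for f in *.jpg; do mv -- "$f" "${f%.jpg}"; done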
#!/bin/bash
ls -lhGg | while read line; do echo "$line"; done | awk '{ print $3" "$6 }'
What I want to do is print column 3 and every column greater than 5, through to the end of the line, since different filenames can contain different numbers of words and blank space is the separator. My current code works just fine if the filename has no blank space.
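One way to express "field 3 plus everything from field 6 onward" in awk is a sketch like this (field numbers carried over from the one-liner above; adjust them for your ls options):
Code:
ls -lhGg | awk '{
    out = $3                       # the column kept on its own
    for (i = 6; i <= NF; i++)      # everything from field 6 to the end,
        out = out " " $i           # so names containing spaces stay intact
    print out
}'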
I have a performance report that provides all the information I need to report the following: total transactions per day, average transactions per second, and peak transactions per second. It just doesn't provide any of it in a very accessible manner, so I want to parse it and capture just the bits I care about. Ideally, I'd like the output to look something like this:
Code:
Date      Total       Avg    Peak
07/11/11  12,328,033  24.05  64
07/12/11  9,328,429   21.98  56
The problem is the format of the input file, which is somewhat complicated. The report gives a summary of all transactions within any given second, and then totals at the end of each day, with page breaks in the middle, like so:
[Code]...
So first, the easy part: getting to the daily summary, which gives me the date and the total transactions, and I can divide the total by 86400 to get the average per second, too. No problem. It's the last part that's got me stumped... the daily peak. I can't just do a while loop on the date, because it's missing from most of the records. That also means I can't use positional parameters, because depending on the page break, the total will move between $2 and $3. And I need the date as a conditional to find the daily peak, because this output will have many days' worth of data.
Any ideas? Some kind of awk or sed command to insert the date wherever it's missing (I'm not particularly good at either utility)? Is there a method to parse these things based on column location that I'm not aware of?
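Since the sample report is not shown, here is only a rough sketch of the "carry the date forward" idea in awk: remember the last date seen and track the largest per-second count for that date. The assumption that the count is the last field of a detail line would need adjusting to the real layout:
Code:
awk '
    match($0, /[0-9][0-9]\/[0-9][0-9]\/[0-9][0-9]/) {   # this line carries a date
        date = substr($0, RSTART, RLENGTH)              # remember it for later rows
    }
    date != "" && $NF ~ /^[0-9]+$/ {                    # assumed per-second count
        if ($NF + 0 > peak[date]) peak[date] = $NF + 0
    }
    END { for (d in peak) print d, peak[d] }
' report.txt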
I configured gammu and changed the version value in my gammu table from 10 to 11 to match the version of my SQL schema. Upon starting gammu-smsd, this error repeats in an infinite loop:
gammu-smsd[3429]: Error code: 1054, Error: Unknown column 'Signal' in 'field list'
gammu-smsd[3429]: Error code: 1054, Error: Unknown column 'Signal' in 'field list'
gammu-smsd[3429]: Error code: 1054, Error: Unknown column 'Signal' in 'field list'
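For what it's worth, a hedged guess at a fix, assuming the error means the older tables are missing a column the newer gammu-smsd expects; the table name, type, and default below are assumptions, and the authoritative definition is the mysql.sql shipped with your gammu version:
Code:
# assumes the database is called smsd and the missing column belongs to the phones table
mysql -u gammu -p smsd -e 'ALTER TABLE phones ADD COLUMN `Signal` integer NOT NULL DEFAULT -1;'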
I use this script to get the time and date of back-and-forth transactions for a particular execution ID. I use a substr command on the 5th column to cut the milliseconds off the time value; otherwise the times would look like 08:30:04.235.
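For reference, the substr part of that idea looks roughly like this; the log file name is an assumption and the column number is taken from the description above:
Code:
awk '{ $5 = substr($5, 1, 8); print }' transactions.log   # 08:30:04.235 -> 08:30:04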
I have a file of wildcard patterns: ./include/*, ./src/*, etc. From the current directory I would like to recursively get the list of files that do not match any of these patterns.
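A bash sketch, assuming the patterns live one per line in patterns.txt (an assumed name); inside [[ ]] the unquoted right-hand side is treated as a glob, so ./include/* matches anything under ./include:
Code:
#!/bin/bash
mapfile -t patterns < patterns.txt             # needs bash 4
find . -type f | while read -r f; do
    keep=1
    for p in "${patterns[@]}"; do
        [[ $f == $p ]] && { keep=0; break; }   # glob match against each pattern
    done
    (( keep )) && printf '%s\n' "$f"
done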
I'm trying to write a bash script that gets the list of files in a directory and puts them into a variable, then checks each entry and outputs them as follows:
item1 is a FILE
item2 is a DIR
item3 is a DIR
etc.
I am able to get the list of files into a variable, but unsure how to get the output I want.
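A minimal sketch of the classification loop, iterating over the directory entries directly rather than parsing ls output:
Code:
#!/bin/bash
for item in *; do
    if   [[ -d $item ]]; then echo "$item is a DIR"
    elif [[ -f $item ]]; then echo "$item is a FILE"
    else                      echo "$item is something else"
    fi
done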
I need to fgrep for a list of items which are stored in a file. The file I will be searching is a large text file, and I need fgrep to output the matches for each item from the list into a separate file, with the item from the list as the file name.
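A sketch of the loop, assuming the items are in list.txt (one per line) and the large text file is big.txt; both names are assumptions:
Code:
while IFS= read -r item; do
    fgrep -- "$item" big.txt > "$item"    # matches for this item, in a file named after it
done < list.txt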
I would like to make a file with all these data in one column, like
a1
a2
.
.
[code]....
Can it be done with awk or some other command? Also, is it possible to then add another column in front of this one containing the line numbers?
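Yes; a sketch of both steps with awk, assuming the values are whitespace-separated in input.txt (an assumed name):
Code:
awk '{ for (i = 1; i <= NF; i++) print $i }' input.txt > onecolumn.txt   # one value per line
awk '{ print NR, $0 }' onecolumn.txt                                     # line-number column in front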
Using rsync, which switch will allow rsync to suppress the "skipping non-regular file" messages in the output while still being reasonably verbose for everything else?
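If no single switch turns out to do it, one workaround sketch is to keep -v output and filter just that message; the paths here are placeholders:
Code:
rsync -av /src/ /dest/ 2>&1 | grep -v 'skipping non-regular file'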
I have a file which I need to display 10 lines at a time, with something like "press Enter to go on" in between. The problem is that I have absolutely no idea how to implement this.
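A minimal sketch, assuming the file is file.txt; reading the prompt from /dev/tty keeps it separate from the file being read on stdin:
Code:
#!/bin/bash
count=0
while IFS= read -r line; do
    echo "$line"
    (( ++count % 10 == 0 )) && read -r -p "Press Enter to go on..." < /dev/tty
done < file.txt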
I am an uploader to various hosts, so this tiny script would help me a lot. I make a rar archive and split it into 100 MB files. I could get 3-4 or even 76 rar parts, and it would take me some time to paste all those URLs into the remote-upload function of file-hosting sites. For example:
Code:
server:/home/cober/downloads/teevee# ls -al
total 358784
drwxrwxrwx 2 root root 4096 Dec 8 19:38 .
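A rough sketch of turning those parts into a paste-ready URL list; the base URL is purely a placeholder assumption:
Code:
base='http://yourserver.example.com/downloads/teevee'
for f in /home/cober/downloads/teevee/*.rar; do
    echo "$base/$(basename "$f")"
done > urls.txt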
I'm running Ubuntu Server 9.10, and every time I su to root and run a command, I get a little message afterwards saying: "You have mail in /var/mail/root". And in fact, I do have mail in there; it's mail that I would like to keep for future reference. How can I suppress the message, though?
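One hedged approach, assuming the notice comes from bash's own mail check rather than the login scripts: unset the variables that drive it, for example in /root/.bashrc. The mail itself stays where it is:
Code:
unset MAIL MAILCHECK MAILPATH    # stop bash's periodic mail check for this shell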
I'm writing a bash script to copy a list of files and do some stuff to them. Basically, I have the code written that does what it needs to do, but I can't quite understand why it works. I was hoping someone could clear up my understanding a bit.
Code:
The first line generates a list of files. I wrap each line in quotes because they usually have spaces in the directory names.
The second line changes IFS, and I understand what IFS itself does. What I don't quite get is what the separator becomes with that echo statement. If I'm reading that correctly, the backspace will remove the newline and essentially the result is nothing? I found this solution on a web page somewhere, but it was years old and there was no real explanation.
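The idiom in question is commonly written like the first line below (an assumption here, since the original code block is not shown). Command substitution strips trailing newlines, so the \b is only there to keep the newline from being the last character; IFS ends up as newline plus backspace rather than nothing:
Code:
IFS=$(echo -en "\n\b")   # IFS = newline + backspace
IFS=$'\n'                # a clearer equivalent: split on newlines only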
I need a command or script to list all files recursively, without directories, one line per file and no extra lines (unlike ls -AR1). It should print the file size and name, e.g.:
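The elided sample aside, a sketch with GNU find that prints the size in bytes followed by the path, one file per line:
Code:
find . -type f -printf '%s %p\n'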
I have openSUSE 11.2, and I am learning Bash scripting. I was wondering how I would make a bash program that checks multiple craigslist sites [western mass, worcester mass, etc.], e.g. in the Free category and/or for given keywords, updates every 5 minutes, and then posts the results somewhere: to a file, or even uploaded to a server. Is that even possible? (Kind of like the program "Ad Notifier for Craigslist".) Would I have to do this in C/C++? If it has to be done in C/C++, it would preferably have to be Windows-compatible.
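It can be done in bash with curl plus the standard text tools; a very rough sketch of the polling loop is below. The search URL and the link pattern are placeholders that would need adjusting to the real page layout:
Code:
#!/bin/bash
url='http://worcester.craigslist.org/search/zip?query=free'   # placeholder search URL
while true; do
    curl -s "$url" | grep -o 'http://[^"]*\.html' >> results.txt   # collect ad links
    sort -u results.txt -o results.txt                             # keep one copy of each link
    sleep 300                                                      # check every 5 minutes
done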
I have wireless and don't normally use an ethernet cable. From Googling around I found this bug report https://bugs.launchpad.net/ubuntu/+s...ux/+bug/270184 which appears to match the problem; however, I do not feel happy/competent enough to recompile the driver as suggested in the bug fix. Is there a way to configure the driver so that it only tries the eth0 connection a limited number of times? This is Ubuntu 8.04, 2.4.26-27, with a SiS 191 chipset.