Software :: Manipulating Fixed-column Records In Bash?
Jul 20, 2011
I have a performance report that provides all the information I need to report the following: total transactions per day, average transactions per second, and peak transactions per second. It just doesn't provide any of it in a very accessible manner, so I want to parse it and capture just the bits I care about. Ideally, I'd like the output to look something like this:
Code:
Date Total Avg Peak
07/11/11 12,328,033 24.05 64
07/12/11 9,328,429 21.98 56
The problem is the format of the input file, which is somewhat complicated. The report gives a summary of all transactions within any given second, and then totals at the end of each day, with page breaks in the middle, like so:
[Code]...
So first, the easy part that takes me to the daily summary, which gives me the date and the total transactions, and I can divide the total by 86400 to get the average per second, too. No problem. It's the last part that's got me stumped... the daily peak. I can't just do a while loop on the date, because it's missing from most of the records. That also means I can't use positional parameters, because depending on the page break, the total will move between $2 and $3. And I need the date as a conditional to find the daily peak, because this output will have many days' worth of data.
Any ideas? Some kind of awk or sed command to insert the date wherever it's missing (I'm not particularly good at either utility)? Is there a method to parse these things based on column location that I'm not aware of?
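Without the exact report layout I can only sketch the idea in awk: carry the last date seen forward, and track a per-date maximum as you go. The field positions here are assumptions (date as field 1 on the lines that have one, the per-second count right after it, and the count as field 1 on undated lines), so adjust to match the real report:
Code:
awk '{
    if ($1 ~ /^[0-9][0-9]\/[0-9][0-9]\/[0-9][0-9]$/) { d = $1; n = $2 }
    else                                             { n = $1 }
    # compare numerically against the running peak for the current date
    if (d != "" && n + 0 > peak[d] + 0) peak[d] = n
}
END { for (day in peak) print day, peak[day] }' report.txt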
I'm trying to manipulate a large text file full of records (metadata - one complete record per line). I need to delete every line on which certain words appear - there are five different words, all pretty simple all-caps strings with occasional whitespace. I tried using grep -v, which worked a treat, but only string-by-string. Ideally I'd like to run this as grep -v -f, where the file targeted by the -f contains the strings I need to match in order to delete the lines they're in.
i.e. grep -v -f filecontainingSTRINGS.txt targetfile.txt > outputfile.txt
When I try this, however, I don't get any matches - or more specifically, no changes are made in the output file. It works fine if there's only one string in filecontainingSTRINGS, but it doesn't work if there's more than one (I'm using newline as the delimiter). (Also my machine doesn't recognise /usr/xpg4/bin/grep - no idea what that's all about!)
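When grep -v -f behaves exactly like this (works with one pattern, silently matches nothing with several), the pattern file almost always has DOS line endings, so every pattern carries an invisible carriage return. Worth checking and stripping:
Code:
# CRLF endings show up as ^M$ at the end of each line
cat -A filecontainingSTRINGS.txt

# strip carriage returns, then treat each line as a fixed string (-F)
tr -d '\r' < filecontainingSTRINGS.txt > patterns.txt
grep -v -F -f patterns.txt targetfile.txt > outputfile.txt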
I've recently inherited a bunch of files at a new job and am trying to figure out some of the problems that have constantly popped up. The one I'm getting a huge headache from results from a bash script that is supposed to change a date format from a client-populated txt file to one we want defined a certain way. Everything in the script works fine except that one function. Below is the line I'm trying to manipulate, with date examples.
The one caveat is that the first date is non-static and changes daily. It is, however, always the current date. If it helps, the second date will always be a year away from the first date. My idea was to pull the current date via Perl's date function, but... how do I do it, and calculate a year away, without throwing the rest of the bash script off? Any help would be appreciated. I'm sure it's a simple solution, but I know absolutely nothing about these scripts and how they were written.
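Rather than shelling out to Perl, GNU date can do the year arithmetic itself. A sketch, assuming GNU date is available; the +%m/%d/%Y format string is a placeholder, since the post's date examples aren't shown:
Code:
# today's date and the date exactly one year out
today=$(date +%m/%d/%Y)
next_year=$(date -d '+1 year' +%m/%d/%Y)
echo "$today $next_year"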
Code:
#!/bin/bash
ls -lhGg | while read line; do echo "$line"; done | awk '{ print $3" "$6 }'
What I want to do is be able to print column 3 and every column greater than 5, out to the end of the line, since different filenames can have different numbers of words in them and blank space is the separator. My current code works just fine if the filename has no blank space.
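The intermediate while loop adds nothing, so the pipe can go straight to awk. A sketch that prints field 3 plus everything from field 6 through the end of the line, so names containing spaces survive intact:
Code:
ls -lhGg | awk '{
    out = $3
    for (i = 6; i <= NF; i++) out = out " " $i
    print out
}'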
I have a text file, and I need to replace the 3rd column of that file, from row 3 to the end of the file, with a column I have stored in a different text file. E.g., the original file is like the one given below:
So let's say I want to replace column 3 from row 3 to row 7 with data from another file, which is given below: 54.00 239.00 53.00 237.00 52.00 165.00 235.00
So the final output file should be like this:
Code:
a.txt nobla 6 gadf 72.500 1.600 1.800 .850 5.250 8.540
A#  rad     ang     ht      prf     bk     sd     dia     type  blade
1   0.3081  54.00   1.9235  -17.50  18.00  -3.00  0.6250  1613  1
2   0.6509  239.00  2.0316  -17.50  18.00  -3.00  0.6250  1613  4
3   1.0128  53.00   2.1457  -17.50  18.00  -3.00  0.6250  1616  1
4   1.3748  237.00  2.2598  -17.50  18.00  -3.00  0.6250  1616  4
5   1.6986  52.00   2.3619  -17.50  18.00  -3.00  0.6250  1616  1
6   1.9347  165.00  2.4364  -17.51  18.00  -3.00  0.6250  1616  5
7   2.1327  235.00  2.4988  -17.34  18.00  -3.00  0.6250  1616  4
I will post whatever code I have tried soon. I started with the awk and cut commands but never got them to work, and I also tried the paste command.
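awk can do this with getline reading the replacement file in step. A sketch, assuming whitespace-separated columns and the replacement values one per line in newcol.txt (both filenames are placeholders); note that reassigning $3 makes awk rebuild the line with single-space separators:
Code:
awk 'NR >= 3 && NR <= 7 {
    # pull the next replacement value, if one is available
    if ((getline repl < "newcol.txt") > 0) $3 = repl
} { print }' original.txt > final.txt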
I have been using comm to compare two simple column lists, and suppress items that were contained in the second list (suppression list). This was extremely simple and basic, however now list1 has two columns, and I must compare the second column in list1 with my suppression list.
Basically I need to compare my user list and suppression list, suppress any users that exist in the suppression list, and then remove the second column (md5). I wasn't sure of the fastest way to make the comparison, whether there is a command similar to comm, or whether I need to build an array of users and check them against the suppression list one by one. That seemed like it would be pretty process-intensive. Anyone have any less cumbersome ideas?
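awk handles the whole thing in one pass without sorting. A sketch, assuming the user name is in field 2 of list1.txt (swap the $2 references for $1 if your column order is the other way around) and that suppress.txt holds one user per line:
Code:
# first file loads the suppression set; second file is filtered against it,
# and only the user column is printed, dropping the md5
awk 'NR == FNR { skip[$1]; next } !($2 in skip) { print $2 }' suppress.txt list1.txt > cleaned.txt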
I would like to make a file with all these data in one column, like
a1 a2 . .
[code]....
Can it be done with awk or some other command? Also, is it possible then to add another column in front of this one containing the line numbers (one for each row of the previous column), like
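A sketch of both steps with standard tools (the input and output names are placeholders):
Code:
# flatten whitespace-separated fields into one column, dropping empty lines
tr -s ' \t' '\n' < input.txt | sed '/^$/d' > onecolumn.txt

# then prefix each line with its line number
nl -ba onecolumn.txt
# equivalently: awk '{ print NR, $0 }' onecolumn.txt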
I use this script to get the time and date of back-and-forth transactions for a particular execution ID. I use a substr command on the 5th column to cut the milliseconds off the time value - otherwise the times would look like 08:30:04.235.
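For reference, the substr trim described above looks like this in awk (field number per the description; the log name is a placeholder):
Code:
# keep only HH:MM:SS from field 5, dropping the .235 milliseconds
awk '{ $5 = substr($5, 1, 8); print }' transactions.log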
An operator saved a file with a ':' in it, creating a file stream (new concept to me). I'm wondering if anyone with wisdom can point me to how to get the data from that file piped into another file. I.e., he saved it as "wrong file: rest of wrong file title.wmv".
So first, can this be salvaged in Ubuntu? How?
This is for work, so I'm not at liberty to tinker and possibly mess it up more than it is. The encoder's log also lists ":$DATA" after the part of the filename that follows the colon.
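On Linux filesystems the colon is just an ordinary filename character, so the file itself should be intact; the ":$DATA" in the log is NTFS alternate-data-stream notation, which suggests the encoder was thinking in Windows terms. A quoted copy is usually all it takes (the filename below is the one from the post):
Code:
# quote the whole name so the shell treats the colon and spaces literally
cp 'wrong file: rest of wrong file title.wmv' 'fixed title.wmv'

# or let globbing find it
ls 'wrong file'*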
I'd like to take a regular .pdf file (I've also converted it to .ps), double up the pages (two per sheet), and switch it to landscape orientation. I've used the tools at the following website, but they don't do exactly what I want. [URL]... In particular, the left page sits too far to the left, and both pages are too high up. Please see the attached screenshot of one of the pages. I'd like to get this to work because then, instead of printing 234-page course notes, it'll be half that. Does anyone have a good solution for this?
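A sketch using pdfjam (shipped with TeX Live, if installed; the filenames are placeholders). Its --offset and --scale options pass through to pdfpages and can nudge the placement that the screenshot shows drifting left and up:
Code:
# two pages per sheet, landscape
pdfjam --nup 2x1 --landscape coursenotes.pdf --outfile coursenotes-2up.pdf

# example of nudging the layout if the pages sit off-centre
pdfjam --nup 2x1 --landscape --offset '0.5cm 0cm' --scale 0.95 coursenotes.pdf --outfile coursenotes-2up.pdf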
I just tried to record a CD (standard data CD), first with Brasero and then with GnomeBaker, and it failed both times right as it started to write, with the following error:
Code:
The errorlog from GnomeBaker says:
Code:
I already tried without burnfree, but that didn't help either. Does anyone know what the problem might be? It might be that the CD writer is defective, as I have never used it before (it's a pretty new Dell workstation), but somehow I don't think so.
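One way to tell a defective writer from a misbehaving GUI app is to burn from the command line. A sketch, assuming the drive is /dev/sr0 and an ISO image is at hand:
Code:
# probe the writer without burning anything
wodim dev=/dev/sr0 -checkdrive

# attempt a verbose disc-at-once burn with buffer-underrun protection
wodim -v dev=/dev/sr0 -dao driveropts=burnfree test.iso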
Our hardware that is running DNS is very old (Pentium 1 Redhat release 6.2), and I want to migrate the DNS Records to a new computer (Pentium 4 Debian Lenny). There is no GUI environment on the old server: no Gnome or KDE. I may consider using VMware Workstation for this project.
1. I don't have very much DNS experience. Is this a painless process?
2. Are there commands that I can run to export the DNS records from the old server?
3. Are there commands that I can run to import the DNS records to the new server?
4. What other advice do you have that could help this process?
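For a plain BIND-to-BIND move there are no special export/import commands; the zone files themselves are the records. A sketch, assuming the usual Red Hat and Debian paths (check named.conf on each box for the real ones, and example.com stands in for your domain):
Code:
# copy the zone files across
scp root@oldserver:/var/named/*.zone /etc/bind/zones/

# sanity-check the config and each zone before restarting named
named-checkconf /etc/bind/named.conf
named-checkzone example.com /etc/bind/zones/example.com.zone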
I'm using Net::Bluetooth to create an RFCOMM socket on my Linux box. I need it to register with SDP under a specific UUID. The Perl code can be viewed in this link. However, I am unable to make the connection from a remote device. This remote device (my Android phone) is able to connect to other RFCOMM devices using this UUID. I would like to troubleshoot the problem, but I can't find a way to browse the LOCAL SDP records of the PC where the Perl script is running (and waiting for incoming BT connections). How can I browse the SDP records on the local PC?
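BlueZ's sdptool can do exactly this:
Code:
# dump every SDP record registered on the local adapter, including the
# RFCOMM channel and UUID the Perl script registered (or failed to)
sdptool browse local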
I know BIND has rndc. Does Windows have a utility that allows remote editing of records on an MS DNS server FROM LINUX? I've looked at WBEM a little but have not confirmed whether it allows adding/editing DNS records.
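One avenue worth testing is nsupdate, which ships with BIND: it speaks the standard dynamic-update protocol, and an MS DNS server will accept it if the zone permits dynamic updates (secure-only zones need the -g GSS-TSIG flag). Names and addresses below are placeholders:
Code:
nsupdate <<'EOF'
server msdns.example.com
zone example.com
update add newhost.example.com 3600 A 192.0.2.10
send
EOF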
I am using AWStats for a website, and it's working fine. On the webserver I made a new virtual host for the same website with some addition to the URL. I made a separate log file for this URL and tried to process it in AWStats as well, but for this log file AWStats is dropping all records.
On the webserver the log format is the same for both log files. In AWStats the config file is the same for both logs, except the log path line of course. Why isn't AWStats able to process the same log format with the same config file that works for the previous logs?
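Running the update by hand with -showdropped makes AWStats print every line it drops and the reason, which usually pinpoints the mismatch. The script path varies by distro, so the one below is an assumption:
Code:
perl /usr/lib/cgi-bin/awstats.pl -config=newsite -update -showdropped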
I have a log file where the dates are discontinuous. If I search for a range of dates and a date in that range is missing, the result displays everything from the start date to the end of the file. How can I restrict the result to just the mentioned date range?
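Rather than matching the two endpoint dates literally, compare every line against the range. A sketch, assuming the date is field 1 in a format that sorts lexically (e.g. 2011/07/11); plain string comparison then bounds the range correctly even when some days are missing from the log:
Code:
awk -v start="2011/07/11" -v end="2011/07/15" '$1 >= start && $1 <= end' app.log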
I want to loop through the records in the below file (homedir.temp):
Code:
/home/user1
/home/user2
/home/user3
I want to do the following activities with each record:
1. du -s - to get the total usage for that directory (my variable name is SIZE)
2. divide SIZE by du -c for /home to get the percentage of usage (my variable name is PER)
3. write the directory, SIZE, and PER to a file
PROBLEM: I am using the below for loop:
Code:
for record in homedir.temp
do
(the mentioned activities)
done
The above is not looping through the records. It does the first record perfectly and exits the loop.
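The loop runs once because `for record in homedir.temp` iterates over the literal word "homedir.temp", not the file's lines. A while-read loop reads the file itself; a sketch of all three steps (the report filename is a placeholder):
Code:
#!/bin/bash
# overall /home usage, used as the denominator for the percentage
total=$(du -s /home | awk '{ print $1 }')

while read -r dir; do
    size=$(du -s "$dir" | awk '{ print $1 }')
    per=$(awk -v s="$size" -v t="$total" 'BEGIN { printf "%.2f", 100 * s / t }')
    echo "$dir $size $per" >> usage_report.txt
done < homedir.temp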
I had a major foul-up a couple of days ago, and I now have thousands of bug reports in the abrt applet. I'm trying to delete them individually, but it's taking forever. Is there a way to mass-delete these reports to clean things up? I looked at abrt-cli, and it has a --delete command, but it also only does them one at a time.
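A sketch that wraps the one-at-a-time delete in a loop; abrt-cli's list output format varies between versions, so the UUID extraction below is an assumption to adapt to what your --list actually prints. The blunt alternative is clearing abrt's spool directory (depending on version, /var/spool/abrt or /var/cache/abrt) as root:
Code:
abrt-cli --list --full | awk '/^UUID/ { print $NF }' | while read -r uuid; do
    abrt-cli --delete "$uuid"
done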
Is there any software to store recovery records that can be used for recovery from data corruption, like RAR does? I periodically rsync from a primary backup location, that I assume corruption-free, to a secondary one, that I consider corruptible - so I'd like to store in the destination some recovery data.
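par2 (the parchive tool) is the usual answer on Linux: it stores standalone recovery blocks that can later reconstruct corrupted files, much like RAR's recovery records. A sketch with placeholder paths, creating 10% redundancy alongside the synced data:
Code:
par2 create -r10 /backup/secondary/recovery.par2 /backup/secondary/*

# later, verify and (if corruption is found) repair
par2 verify /backup/secondary/recovery.par2
par2 repair /backup/secondary/recovery.par2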
I am using SQLite as my database for some portable cross platform applications I am working on with REALBasic as my IDE. I have an old Sybase 8.0 database that I can access via Microsoft Access and thereby extract the data I need from each table.
Now I know I can create .csv files from each table and load them into SQLite using the import tool, but then I can't define the primary key and other field attributes. So the other option is to load each file via SQL.
Now with most SQL editors I can create multiple queries and they will run just fine, but I can't seem to do that with the SQLite interfaces. I can paste multiple queries, but I can only run one at a time. And by that I mean I have to click Run for each one.
Umm, that's not acceptable, since my biggest table contains over 600,000 records. I have the queries all written; that was easy using a simple interface I wrote in Access.
Code: INSERT INTO tblMeters(recordId,meterId,meterName,meterSerNum,registerSerNum,mxuSerNum,meterType,manufacture,meterModel,readType,groupId,multiplier,rollover,vendorId,xfrmerCode,bldgCode,CATEGORY,energyType,unitOfMeasure,location,access,comments,dateInstalled,dateCalibrate,pipeSizeIn,pipeSizeOut,elecMeterSpecs)
[Code]...
So is there another method I can use? I can't seem to find anything relating to my particular question on the SQLite website.
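The sqlite3 command-line shell executes a whole file of statements in one go, so the queries never need to be pasted and run one by one. A sketch with placeholder names for the database and the exported SQL file:
Code:
sqlite3 meters.db < tblMeters_inserts.sql

# wrapping the inserts in one transaction makes a 600,000-row load far faster
{ echo "BEGIN TRANSACTION;"; cat tblMeters_inserts.sql; echo "COMMIT;"; } | sqlite3 meters.db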
I am no expert when it comes to BIND. I seem to be able to resolve NS and A and TXT records for my domain, but I cannot get the MX records to come out. Does anyone have an idea what might be wrong with my BIND zone file? I wonder if it might have something to do with the fact that my IP is currently on a policy Block List?
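A block list would affect mail delivery, not DNS answers, so it's worth testing the zone directly. Querying the server itself takes everything external out of the loop (example.com and the zone file path below stand in for the real ones):
Code:
dig @127.0.0.1 example.com MX +short

# named-checkzone flags zone-file syntax errors that make records silently
# fail to load, e.g. an MX line missing its priority number or a hostname
# missing its trailing dot
named-checkzone example.com /etc/bind/db.example.com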
I have to parse a file containing billions of records and populate them into a data structure. I have used a lot of C++ classes, and by creating objects of those classes I store the information retrieved from parsing the file.
Now that the file has become huge and the number of objects very large, my code is getting a bad_alloc error because it cannot find any space available on the heap for allocating new objects.
I am trying this query to compare the records of two different tables... I am getting this message!! No required output. The values for these ($jobTitle, $industry, $stationBase, $gender, $maritalStatus) are coming from textboxes! Here is the code...