Programming :: Going Through A Multi-step Process To Produce Output Files, Which Involves 25,000 Greps At One Stage?
Nov 8, 2010
I am going through a multi-step process to produce output files, which involves 25,000 greps at one stage. While I do achieve the desired result, I am wondering whether the process could be improved (sped up and/or decluttered). Input 1 is a set of dated files called ids<yyyy><mm> containing numeric IDs, one per line, 280,000 lines in total:
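The rest of the input description was cut off, but the usual cure for thousands of grep invocations is a single pass with a pattern file. A minimal sketch, assuming each ID should be looked up in one data file (ids201011 and data.txt are placeholder names):
Code:
# One grep with -f reads all the patterns at once; -F treats them as fixed
# strings and -w avoids partial ID matches.
grep -Fw -f ids201011 data.txt > matches.txt
If the IDs are join keys, sorting both files once and using join is usually faster still.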
I have a basic understanding of creating a single child from a parent using fork(), but when it comes to creating multiple children I am simply stuck. I am trying to create two processes from a parent and have it wait for both processes to finish. My attempt is below.
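The attempt itself did not survive the post, and the thread is about C's fork()/wait(), but the shape of the fix is the same in any language: launch both children first, then wait for each one. A minimal shell sketch of that pattern (child1 and child2 stand in for the real programs):
Code:
child1 &            # launch the first child in the background
pid1=$!
child2 &            # launch the second child before waiting on the first
pid2=$!
wait "$pid1"        # now block until both have finished
wait "$pid2"
echo "both children finished"
The common bug in the C version is calling wait() immediately after the first fork(), which serialises the children instead of running them in parallel.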
I have a VG that contains one LV which consists of several PVs in a concat. Now I want to pvmove the whole construct to a new set of PVs in one step! Of course I could move PV by PV, but is it possible to move them all together?
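pvmove relocates one source PV per invocation, so a strictly single-step move of several PVs is not possible, but a short loop over the old PVs gets close. A hedged sketch, assuming the new PVs have already been prepared with pvcreate (the device names and the VG name myvg are examples only):
Code:
vgextend myvg /dev/sdc1 /dev/sdd1   # add the new PVs to the VG
for pv in /dev/sda1 /dev/sdb1 ; do
    pvmove "$pv"                    # move all extents off this PV onto free space
done
vgreduce myvg /dev/sda1 /dev/sdb1   # drop the emptied old PVs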
I am developing an application in which I fork() three children (say child1, child2, child3). The parent waits for input from the keyboard. Child3 continuously receives data from child1 and child2 through pipes, which it then prints using printf. So the parent is waiting for keyboard input while child3 is continuously printing data, and I want these to happen in different terminals. Can you guide me on how to proceed so that on one terminal the parent waits for input from the user while on another terminal child3 prints the data?
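One shell-level approach (a sketch, not the only way): give child3's output stream its own terminal through a named pipe, and keep the current terminal for the parent's keyboard input. /tmp/c3pipe and ./myapp are placeholder names:
Code:
mkfifo /tmp/c3pipe
xterm -e cat /tmp/c3pipe &   # second terminal just displays whatever arrives on the pipe
./myapp > /tmp/c3pipe        # stdout (child3's printf output) goes to the pipe
Inside the C program the equivalent would be to open the second terminal's device and dup2() it onto child3's stdout, but the pipe-plus-xterm route needs no code changes beyond running the commands above.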
I needed to make a slideshow of some pictures for the first time. I thought it would be an easy thing, but I have now spent two days trying to do it. I downloaded Manslide and spent about two hours making the slides, but at the end of the render process it did not produce any output. Then I discovered that Manslide is now SMILE, which is in the Packman repo. I spent another two hours remaking my slideshow, but at the end it didn't produce the output video file. I have all the dependencies, and I switched all of them to Packman RPMs.
If you install Smile (video slideshow creator) from the Packman repository, you will probably find that it doesn't produce any output file. The project is most probably dead, the homepage is a 404, and it's practically impossible to get any information about this program. It took me 3 exhausting hours to get it working. Here is the solution: there is a missing fake.pl file, which you have to copy to the right place, namely /opt/smile (for openSUSE; for other distros it could be elsewhere, for example /usr/bin). You can obtain this file from the source package, but as it is very small I post it right here to spare your time:
[Code].....
So create a new file /opt/smile/fake.pl and copy these lines into it. By the way, the original file from the source doesn't have the first line, but according to a tip from a different forum it is supposed to be there. If Smile still doesn't work, you might try adding extra permissions to fake.pl, something like chmod 755 /opt/smile/fake.pl. And if it is still not working, check the log file ~/.logsmile.txt. This file is, by the way, also impossible to find; there is no sign of it in the documentation.
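As a recap of the fix in command form (openSUSE paths from the post; adjust for other distros):
Code:
mkdir -p /opt/smile
cp fake.pl /opt/smile/fake.pl    # fake.pl taken from the source package or the listing above
chmod 755 /opt/smile/fake.pl
cat ~/.logsmile.txt              # if it still fails, the log is here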
I use a tcl-expect script to ssh to the server. How can I eliminate the first 2 lines of the output when executing it with system("./script.sh"), given that the default output is shown on the shell and includes those first 2 lines?
Essentially I just want the "ps" result, not the login process. code...
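A minimal sketch of one way to do it, assuming the login banner is always exactly two lines (./script.sh is the expect wrapper from the post): pipe the output through tail, which starts printing at line 3. Within expect itself, log_user 0 would suppress the session chatter instead.
Code:
./script.sh | tail -n +3    # drop the first 2 lines, keep the ps output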
I noticed that one DVD I encoded to Ogg (Vorbis) resulted in files with varying volume levels (some files have much softer sound than others, and the original DVD is not like that). I wonder if it's mplayer's or oggenc's fault. I do it like this (something like):
Code:
for i in 01 02 03 04 05 ; do
    mplayer -vc null -vo null -ao pcm:file=$i.wav dvd://$i
    oggenc $i.wav -o $i.ogg
    rm $i.wav
done
I'm doing this from memory, so I may have a mistake or two in the sequence of commands, but the idea is very simple:
- Output the sound to a WAV file using mplayer.
- Encode it with oggenc.
- Delete the WAV file.
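The varying loudness most likely comes from the DVD titles themselves rather than from mplayer or oggenc, so one option is to even things out after encoding. A hedged sketch using vorbisgain (a separate package; players must honour ReplayGain tags for it to have any effect):
Code:
vorbisgain -a *.ogg    # write ReplayGain tags, treating the files as one album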
I want to execute a program at the very last step of the Linux boot process. For a Debian-based Linux distro that uses a BSD-style boot process, I came up with 2 solutions:
1) Somehow call my program from the /etc/rc.local script (although, is this the last step of the boot process?); see the sketch after this list.
2) Use the "multi_end" and "sysinit_end" hooks for executing it at the end of rc.multi rc.sysinit respectively (I don't know how to do it though).
I'm completely new to scripting and I'm trying to figure out how to write a script that will get a list of all the files in a directory and down through any subdirectories. When I have the list, I want to open each file in vi and change the file format. So far, all I have figured out is that vi can do the batch processing and that "ls -R" gets me the recursive file list. I'm still pretty clueless about how to do the batch process with the vi editor. I think I'm supposed to use Ex mode, but I don't know how to get the list of arguments from the file list into the editor so they can be processed. If it matters, the files were all written in a Windows editor and have MS carriage returns, so I want to run a :set ff=unix command on all the files without having to go into each file manually; there are over 300 files that need updating.
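One commonly used recipe for exactly this job, as a sketch: let find produce the file list and run vim non-interactively on each file (-es is silent Ex mode, and the -c commands execute after each file loads). If nothing else needs doing to the files, the dos2unix utility performs the same conversion in one pass.
Code:
find . -type f -exec vim -es -c 'set ff=unix' -c 'wq' {} \;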
I ran into a problem with my program recently: I got a core dump produced by a SIGILL signal. I wonder how my program can generate this error; I never wrote a divide by zero anywhere (and that would raise SIGFPE rather than SIGILL in any case).
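To see exactly which instruction raised SIGILL, load the core file into gdb (./myprog and core are placeholder names; build with -g for a readable backtrace):
Code:
gdb ./myprog core
# then at the (gdb) prompt:
#   bt         -- backtrace to the faulting frame
#   x/i $pc    -- disassemble the offending instruction
A SIGILL usually points at a corrupted function pointer, a stack smash that jumped into data, or a binary built for the wrong CPU, rather than at any arithmetic in the source.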
Are there any solutions by which Linux-based web server frameworks (like Zend or any other) can process data inside Microsoft Excel spreadsheets (*.xls)?
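One server-side approach (a sketch, not framework-specific): convert the .xls to CSV with a headless tool and let the framework parse the CSV. report.xls is a placeholder name, and either tool must be installed:
Code:
ssconvert report.xls report.csv                       # ssconvert ships with Gnumeric
libreoffice --headless --convert-to csv report.xls    # LibreOffice alternative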
I want the output of a program to go to 2 different files without going to standard out. Is there a way to do this in bash? I know that in Z shell it's really easy; something like:
Code: echo "test" >> file1 >> file2
would work. But in bash it doesn't seem that easy. I know that tee will send the output to 2 files, but it also sends it to STDOUT. Something like:
Code: echo "test" | tee -a file1 file2
would put the word "test" in file1, file2, and STDOUT. Is there a way to send the output just to file1 and file2?
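The usual trick is to keep tee but throw away its standard output:
Code:
echo "test" | tee -a file1 file2 > /dev/null
tee still appends to both files, and the copy it would have printed lands in /dev/null instead of the terminal.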
I want formatted output of all the files under a particular directory, and I am trying to use find. Something like: find -P ./ -type f -name '*.cpp' -printf "%p ". I want all the files with specific extensions like .c, .cpp, and .h to be printed out separated by spaces. One more thing I want is absolute path names instead of relative ones.
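Both wishes can be had in one command: give find an absolute starting directory (then %p prints absolute paths) and group the extensions with -o. A sketch:
Code:
find "$PWD" -type f \( -name '*.c' -o -name '*.cpp' -o -name '*.h' \) -printf '%p '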
I need to write a script (possibly awk) that is able to process 2 files, but I am very new to awk and I have trouble understanding how to process 2 files at the same time. The first input file is samples.txt; its format is: time_instant measure
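The standard awk idiom for two files is FNR == NR, which is true only while the first file is being read. A hedged sketch, since the second file's format did not survive the post (otherfile.txt is a placeholder); it stores the measures from samples.txt and joins on time_instant:
Code:
awk 'FNR == NR { sample[$1] = $2; next }         # 1st file: index measure by time
     $1 in sample { print $1, sample[$1], $2 }   # 2nd file: join on time_instant
' samples.txt otherfile.txt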
I have written a one-line command that parses a file, locates the IP addresses in it, trims the output the way I want it, and then sorts numerically and by uniqueness and >> appends to output.txt.
I can get all the IPs into one file, "output.txt", but what I am really looking for is some way to create a text file for each IP it finds, labelled xxx.xxx.xxx.xxx.txt, and also put that IP address into that file.
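A short sketch that builds on output.txt as produced above (assuming one IP per line):
Code:
while IFS= read -r ip ; do
    printf '%s\n' "$ip" > "$ip.txt"   # e.g. 10.0.0.1 -> 10.0.0.1.txt
done < output.txt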
I recently bought a new laptop, a Compaq Presario CQ41-109AU. It did not come with a pre-installed OS, so I decided to put Ubuntu on it (Karmic Koala 9.10). I successfully completed the installation procedure; however, it does not produce sound when logging in or when playing music files. I checked whether the settings were set to mute, but they were not. I tried installing MS Vista on the other partition and the sound worked.
I have a hard drive with several thousand photos. These photos are in different formats: some are TIF, some JPG, some raw (CR2). These files are in dozens of directories. What I want to do is produce a list of all the files, in all of the directories, sorted by file name (not sorting on the path), listing the location, file name, size, and date created. For instance, I may have a file called photo1.jpg in /photos/pics/; I may also have a file called photo1.cr2 in /photos/misc/ and a file called photo1.tif in /photos/processed/summer/.
I would like a text file that would look like this:
/photos/misc/photo1.cr2 2536658 2010-07-09 13:17
/photos/pics/photo1.jpg 320046 2010-07-07 14:47
/photos/processed/summer/photo1.tif 234456689 2010-07-10 09:22
Of course I want it to do this for all of the photos. I am pretty sure there is a way to do this with a minimal amount of work. I have no problem with using the command line.
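A sketch of one way to do it, assuming the photo tree lives under /photos and the file names contain no tabs. find cannot sort by basename itself, so emit the basename (%f) as a throwaway sort key, sort on it, and cut it off. Note that Linux records modification time rather than creation time, so %T is the closest match to "date created":
Code:
find /photos -type f -printf '%f\t%p %s %TY-%Tm-%Td %TH:%TM\n' \
    | sort -k1,1 \
    | cut -f2- > photolist.txt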
I do monthly reports by copying the previous document, updating the text, and changing the images. The images are the same size and number each month. Since I upgraded my laptop to Natty last month, my document has gone from 942 kB to 10.1 MB in .odt, and when saving to PDF the usual size of 472 kB went up to 1.9 MB. I have searched the net and the forums but haven't seen anything about a similar issue.
I'm not sure if the issue comes from the previous document having been produced in OpenOffice and now being updated and saved in LibreOffice, or if it somehow has to do with the upgrade from Maverick to Natty. I hope I don't have to uninstall LibreOffice and install OpenOffice as a solution (which I understand is not entirely easy in Natty; I read something about OpenOffice being transitional to LibreOffice). I can't email customers simple documents that are over 10 MB.
I am trying to develop a method for reading files generated by other programs, and I am trying to find the most versatile approach. I have been trying bash and have been making good progress with sed, but I was wondering if there is a "standard" approach to this sort of thing. The main features I would like to implement concern finding strings based on various forms of context and storing them in variables and/or arrays. Here are the most general tasks (a sketch of a few of them follows the list):
a) Read the first word (or floating point) that comes after a given string (solved in another thread)
b) Read the nth line after a given string
c) Read all text between two given strings
d) Save the output of task a), task b) or task c) (above) into an array if the "given string(s)" is/are not unique.
e) Read text between two non-unique strings, i.e. text between the nth occurrence of string1 and the mth occurrence of string2
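Hedged sketches for the first three tasks in awk (KEY, START, and END are placeholder strings, file is a placeholder name, and n is fixed at 3 for illustration):
Code:
# a) print the word that follows a given string
awk '{ for (i = 1; i < NF; i++) if ($i == "KEY") print $(i+1) }' file

# b) print the nth line after a line containing a given string (here n = 3)
awk '/KEY/ { target = NR + 3 } NR == target' file

# c) print all text between two given strings (exclusive)
awk '/START/ { grab = 1; next } /END/ { grab = 0 } grab' file
Tasks d) and e) fall out of the same patterns by appending matches to an array and counting occurrences of the delimiter strings.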
As far as I can tell, those five scripts should be able to parse just about any text pattern. I am by no means fluent in these languages, but I could use a starting point. My main concern is speed: I intend to use these scripts in a program that reads and writes hundreds of input and output files, each with a different value of some parameter(s).
The files will most likely be no more than a few dozen lines, but I can think of some applications that could generate a few hundred lines. I have the input file generator down pretty well. Parsing the output is quite a bit trickier. And, of course, the option for parallelization will be very desirable for many practical applications.
I have encountered a problem: I have a "while" loop, and on each iteration a set of outputs is produced which I then need to move into a corresponding folder; otherwise, on the next iteration, the new outputs will be overwritten. Furthermore, I need to pipe what I see on the screen into a file. My code is the following:
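The code itself did not survive the post, so here is a hypothetical sketch of the two fixes: number a fresh folder per iteration, and pipe the loop through tee so the screen output also lands in a file (some_condition, produce_outputs, and out*.dat all stand in for the real names):
Code:
run=1
while some_condition ; do
    produce_outputs                 # whatever creates this iteration's files
    mkdir -p "run_$run"
    mv out*.dat "run_$run"/         # shift the outputs aside before the next pass
    run=$((run + 1))
done 2>&1 | tee screen.log          # everything shown on screen is also saved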