General :: Read Three Files And Print Output In A New File?
Sep 9, 2010
file1 has DNA sequences, and each sequence begins with a > symbol. file2 has protein sequences, and each sequence starts with a > symbol. file3 is the BLAST result of file2, and each result starts with Query=. My problem is that I have to make a report file by combining these three in such a way that the first sequence from file1, the first sequence from file2, and the first result from file3 are printed together in the report file.
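One possible approach, sketched in awk under the assumption that records in file1/file2 start with ">" and results in file3 start with "Query=" (the exact BLAST output format may differ):
Code:
# interleave record i of file1, file2, file3 into report.txt; any header
# lines before the first record of a file are dropped
awk '
    FNR == 1          { rec = 0; files[++nf] = FILENAME }
    /^>/ || /^Query=/ { rec++; if (rec > maxrec) maxrec = rec }
    rec > 0           { blocks[FILENAME, rec] = blocks[FILENAME, rec] $0 "\n" }
    END {
        for (i = 1; i <= maxrec; i++)
            for (f = 1; f <= nf; f++)
                printf "%s", blocks[files[f], i]
    }
' file1 file2 file3 > report.txt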
My friend has a PC with only Windows. She does not want to format it for Ubuntu, but wants a temporary partition (without deleting the original, rented Windows installation) for Ubuntu. How can she install Ubuntu (not just the live demo) without altering the rest of her PC?
Second, how can she read Ubuntu files (ODT documents, .odt) in Windows? How can she print the documents when the printer only works under Windows?
Third, how can Ubuntu be deleted from her machine when she has to return the PC to the dealer?
I am creating a script to sync my important documents between two systems. I want my script to generate a log file of the last action. Can you suggest a way to achieve this? Question: if I execute the rsync command with the -v flag, it prints a lot of messages on the console. Is there any way I can redirect these messages to a file?
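A minimal sketch (the source and destination paths here are made up): either redirect stdout and stderr, or use rsync's own --log-file option:
Code:
# capture everything rsync prints on the console
rsync -av ~/docs/ /mnt/backup/docs/ > "$HOME/sync_$(date +%F).log" 2>&1

# or let rsync write its own log while still printing to the console
rsync -av --log-file="$HOME/sync.log" ~/docs/ /mnt/backup/docs/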
I'm using Ubuntu and programming with Eclipse CDT. My goal is to execute a PHP file and read its output from my C++ program. To do so I thought I should use fork(), dup2() and execl(). From the shell, the call "php myscript.php" works just fine, but from C++ I tried: execl("usr/bin/php", "php", "home/geiger/workspace/SemiServer/server_content/myscript.php", NULL); and it didn't work (the process wasn't terminated and I got no output). I tried different versions of this call, like dropping the "php" string and/or dropping "home/geiger" from the path string, with no better result.
I'm having a slight dilemma reading data from a text file, outputting it into a table, and then displaying it. Basically I'm writing a shell script that takes information from text files and outputs the data into a table with 4 headings. The extracting of the data is fine, but I'm having problems creating the table. I think it is possible to do with awk, but so far I'm having a lot of difficulty.
Until now I haven't had to dabble with bash scripts.
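A sketch of the usual printf-based approach in awk; the file name, field layout and headings here are invented:
Code:
# print 4 fixed-width columns with a heading row
awk 'BEGIN { printf "%-12s %-12s %-12s %-12s\n", "Name", "Size", "Owner", "Date" }
           { printf "%-12s %-12s %-12s %-12s\n", $1, $2, $3, $4 }' data.txt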
I have a program that reads in data files. These are named datafile01_R, datafile01_G, datafile01_B; they then increment, so datafile02_R etc. I have about 600 of these. The program reads in 3 data sets at a time from each run, so files_01 R, G, and B.
The program then does its magic and outputs about 40 different files which, depending on the file, go to folders named R, G, B, psa, or tracking.
The program has configuration files that say where the files should go when analyzed; there are also config files that read in the data sets.
At the moment I have to run one set of data, then go in and manually change the input file location, and run again. But even though it is a different data set, the new set overwrites the old set in one of the output folders. So I need a way to increment the output filenames after they are written and before the program is run again with the new data set.
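One way to sketch this: between runs, rename everything in the output folders, tagging each file with the run number just analyzed (the folder names match the post; the run variable is assumed):
Code:
#!/bin/bash
run=01   # set to the data set that was just analyzed
for dir in R G B psa tracking; do
    for f in "$dir"/*; do
        [ -e "$f" ] || continue
        mv -- "$f" "$dir/run${run}_${f##*/}"   # e.g. R/out.dat -> R/run01_out.dat
    done
done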
I have 10 vi files. These files contain some system-related information. I need to combine the output of all these files into a single file; the final file should contain the contents of all 10 files, and the output should be in a tabular format.
Is there any command in vi that I can use to create a table?
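vi itself has no table command, but the shell can produce one; a sketch assuming the files are named info1.txt through info10.txt:
Code:
# stack the files and align whitespace-separated fields into columns
cat info{1..10}.txt | column -t > report.txt

# or put the files side by side, one column per file
paste info{1..10}.txt > report.txt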
I have written a long piece of code above whose "main" calls openFile( &fout, filename ). filename contains the txt name in the form "data.txt". I want to read the data from the file and output it into fout for later use. The data in that file is a vector-looking integer group. I have the following code:
I wonder about awk's capability to manipulate data in consecutive multiple files by reading one batch file. For example, I have the files data1.dat, data2.dat, data3.dat and listfile.txt.
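A sketch of one way to drive awk from the list file, assuming listfile.txt holds one data file name per line (the per-file sum is just a placeholder for the real work):
Code:
while IFS= read -r f; do
    awk '{ sum += $1 } END { print FILENAME, sum }' "$f"
done < listfile.txt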
I'm working on some scheduled task script files to keep nightly backups of some of our database information in place, and it's a bit annoying when they blow up. I know how to redirect stdout and stderr to a flat file I can view when I come in, and I know that 2>&1 maps them both to the same file (whatever was named in 1). However, I'm running into some cron-time situations where it's easier to have the two streams together, and other cron-time situations where it's easier to have them separated. I can't really tell which is going to happen; is there some way I could create both kinds of output file for my scripts, so that I've got a std_err only file and an interleaved std_out/std_err file?
Note: I've looked at the 'tee' command, but I don't think it will work for what I'm after. 'tee' appears to only work with stdout; I'm trying to work with stderr.
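tee can in fact handle stderr if stderr is what you feed it; a sketch using bash process substitution, with both writers in append mode so they do not overwrite each other (the script name is hypothetical):
Code:
: > combined.log
: > stderr_only.log
# stdout appends to combined.log; stderr is tee'd to its own file and also
# appended to combined.log (exact interleaving order is not guaranteed)
./nightly_backup.sh >> combined.log 2> >(tee -a stderr_only.log >> combined.log)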
What could be the problem when Windows accesses a file from Ubuntu and gets Read Only, even though it has full permission to read, write and execute the file? Accessing the file from Ubuntu to Ubuntu there is no problem; only Windows has the problem.
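The usual suspects are the share's permission masks in smb.conf; a hedged sketch (the share name and path are made up):
Code:
[share]
   path = /srv/share
   read only = no
   create mask = 0664
   directory mask = 0775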
It's my first post here, so please be patient. I am trying to use a regex in a Perl script to detect allowed words from a file and then print the output to the screen.
As an example, I have a text file with orders and returns:
My question: is it possible to make sure that I am only outputting to the screen orders based on a few conditions like Item or order form, e.g. online? And is it possible to have multiple matches (Item2 only displayed if ordered online, etc.)?
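Not Perl, but the same idea sketched in awk, assuming colon-separated fields like order:Item2:online (the real file format was not shown in the post):
Code:
# print only lines that are orders for Item2 placed online
awk -F: '$1 == "order" && $2 == "Item2" && $3 == "online"' orders.txt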
I have a script almost working except for one thing. What I'm trying to do is read a file that lists the files that need to be FTP'd, using a bash script. I have everything working except the reading of the file. It works outside of the FTP script I've written, but once I put it inside the FTP script it doesn't.
Here's the Script:
#Here's where the problem is that I know of
I've been playing with the exclamation points to see if that could be the problem, but so far no luck.
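A common cause, sketched below: a while-read loop placed inside the ftp here-document is sent to ftp as literal text and never runs. Building the command list first and piping it in avoids that (the host, credentials and list file name are hypothetical):
Code:
{
    echo "user myuser mypass"
    echo "binary"
    while IFS= read -r f; do
        echo "put $f"          # one put command per file in the list
    done < files_to_send.txt
    echo "bye"
} | ftp -n ftp.example.com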
I've been using gv and vi to build a PostScript file. Now I would like to save the output of the program to a disk file. My problem is I can't figure out how to "print to a file".
I want Samba to keep the Windows attributes exactly as the user set them in Windows. I mean, if a file is read-only on the Windows box and it is copied to the Samba share, Samba should keep it read-only, and the same for the other attributes; but it does not do that now with my configuration:
[global]
   workgroup = DOMAIN
   server string = File Server
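A hedged sketch of the parameters that usually control DOS attribute preservation (check them against the smb.conf man page for your Samba version):
Code:
[global]
   workgroup = DOMAIN
   server string = File Server
   store dos attributes = yes   # keep read-only/archive/hidden/system bits
   map archive = yes
   map hidden = yes
   map system = yes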
Can Windows read files from a home file server with an ext4 file system? Or do I have to partition the drive with the server on ext4 and an NTFS partition holding the files?
I am trying to compare a list of patterns from one file, grep them against another file, and print out only the unique patterns. Unfortunately these files are so large that the job has yet to run to completion. Here's the command that I used:
Code:
grep -L -f file_one.txt file_two.txt > output.output
Here's some example data:
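Note that -L lists file names without matches, which is probably not what is wanted here. If both files hold one pattern per line, a sort/comm sketch scales far better on huge inputs:
Code:
# patterns present in file_one.txt but absent from file_two.txt
comm -23 <(sort -u file_one.txt) <(sort -u file_two.txt) > output.output

# equivalent grep form (slow on very large pattern lists)
grep -F -x -v -f file_two.txt file_one.txt > output.output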
My Pavilion laptop crashed and I cannot find all of my recovery discs. The computer will not allow me to do an internal recovery, so I ordered a recovery download online and burned it to a disc. When I put the disc in, the graphics came up and it started in live mode, but then the error message came: error while loading shared libraries: /usr/lib/libpci.so.3: cannot read file data: input/output error. The site said it was an ISO file, but then it had a program it had me run to change it and burn the disc.
I cannot print PDF files. I have tried using Okular and xpdf. The documents display in the program, but print preview shows a blank page. The printer then puts out blank pages. I have tried printing on 2 different printers using USB cables. Using the terminal to process the commands shows the error:
When I run a Tcl script using ns-2.30 I get a result; when I run the same script in ns-2.29 I get an error; and when I run the script in ns-2.33 I get a result, but the output file (out.tr) is smaller than the output file of ns-2.30 and also differs in the details of the time scale. For example, one out.tr file contains 19000 lines (2 MB), while the out.tr file for the same Tcl file using ns-2.33 contains 10000 lines (about 1 MB). Does that make sense? Also, in the resulting nam file for ns-2.30 there are some drops, but ns-2.33 has no drops when I run nam! Is the time scale for the simulation tunable, or can it be aligned?
I need a script that does the following: it checks whether files exist by reading input from a file, then compares them to the files listed in the directory; if they don't exist, the script reports back which file does not exist. I also need to format the output so that files are grouped in different groups (group A, B, C, etc.) based on file name. I would like the output of the non-existent files to be sorted based on the second number in the file name, then grouped accordingly. I understand some of the basics of bash scripting; something along the lines of a loop and if statements might do the trick. Below is what I have so far. I don't care so much about the script reporting back that a file exists; I prefer to only know if the file is missing and is less than 3 days old. The problem is that if a file does not exist in the reports file, the test compares against the wrong file.
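A sketch under assumptions, since the real naming scheme isn't shown: expected names in reports.txt (one per line), checked against the current directory, with missing files sorted by the second underscore-separated field:
Code:
while IFS= read -r f; do
    [ -e "$f" ] || echo "missing: $f"
done < reports.txt | sort -t_ -k2,2n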
I want to configure a file printer (print to file) on my RHEL 5 machine in such a way that if users fire a print command from Windows XP, it creates an individual, per-computer txt file on my Linux machine. The file name should be different for each printer.
Just installed drivers for a Lexmark Pro200 - S500 series. However, although it is recognised by the system, has a tick by its name, and tells me it is connected via USB, the file goes to the print queue but will not print. It also tells me it is Printing - localhost! Is this correct?
I have 2 external HDDs on which I have all my files. Yesterday I copied all the files from hdd2 to hdd1, and I want to eliminate duplicates, so I used FSlint to find them. Now I have a txt file that looks like this:
Code:
/media/My Book/!!!MIS DOCUMENTOS/Documentos/2 sep2003-jun2009 USB/!TESIS/TESIS/TESIS CVT LABVIEW Y CODEWARRIOR/LabVIEW85RuntimeEngineFull.exe
/media/My Book/HDD_Toshiba/Borrable/Pen_Drive_4GB/Tesis/Super CD de la tesis/LabView/LabVIEW85RuntimeEngineFull.exe
multiplied by millions of entries...
Now I want to make a shell script to delete all the files/entries (read from the log file) that begin with:
Code:
/media/My Book/HDD_Toshiba/****
since HDD_Toshiba is the folder in hdd1 (My Book) that contains all the files from hdd2.
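A sketch, assuming the FSlint log holds one absolute path per line (the log file name here is made up):
Code:
while IFS= read -r f; do
    case $f in
        "/media/My Book/HDD_Toshiba/"*) rm -v -- "$f" ;;   # delete hdd2 copies only
    esac
done < duplicates.txt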
I am working on a script that allows me to convert an IP address to a country name. I have 2 files: one with text like PORT.80 TCP SRC=x.x.x.x, and the other with x.x.x.x United States. How can I combine these files so that the output is PORT.80 TCP SRC=x.x.x.x United States?
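One hedged sketch in awk, assuming the lookup file has the IP first with the country after it, and that each log line carries a SRC=x.x.x.x field (file names are invented):
Code:
# first pass loads the ip -> country table, second pass appends the country
awk 'NR == FNR { ip = $1; $1 = ""; country[ip] = $0; next }
     {
         for (i = 1; i <= NF; i++)
             if (split($i, a, "=") == 2 && a[1] == "SRC")
                 $0 = $0 country[a[2]]
         print
     }' countries.txt firewall.log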