I am combining data from a couple of different input files and creating an output file in a specific format. I notice that if I use the >> operator, information gets appended on a new line in my output file. This is useful, but if I'd like to append onto the CURRENT line, is there an easy way to do it? I've been googling around and see lots of complicated answers, but nothing that suggests an easy way. For example, if my output file looks like this:
b1a:] cat test
hello my name is
b1a:]
and I'd simply like to append "Bob", how can I do it? If I use
b1a:] echo Bob >> test
b1a:] cat test
hello my name is
Bob
b1a:]
So what I would prefer is some command that would create the result:

hello my name is Bob

that is, "Bob" appended onto the current last line rather than onto a new one.
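One possible approach (a sketch, assuming GNU sed and that the file is called test, as above): edit the last existing line instead of appending a new one, or avoid the trailing newline in the first place.

# append " Bob" to the end of the last line, editing in place (GNU sed)
sed -i '$ s/$/ Bob/' test

# or avoid the problem up front: printf '%s' writes no trailing newline,
# so the next >> append lands on the same line
printf '%s' 'hello my name is' > test
printf '%s' ' Bob' >> test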
I need to be able to convert HTML email messages saved as text files (.eml or .msg) to PDF documents, one PDF per email, retaining formatting and images.
Are there any Linux tools that will allow me to do this from the command line (so it can be scripted)?
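One scriptable route (a sketch, assuming wkhtmltopdf is installed and that the HTML body has already been pulled out of each message, for example with munpack or ripmime; the *.html names are placeholders):

# convert each extracted HTML body to its own PDF
for f in *.html; do
    wkhtmltopdf "$f" "${f%.html}.pdf"
done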
I don't understand why this is so difficult. In the old days, there was lpforms, which allowed some formatting; CUPS did not see fit to implement this in its lp package. In the old days, lpr allowed you to select a font on the command line with -1=fontname; CUPS did not see fit to implement this in its lpr package. In the old days, printers had fonts installed on them that you could access; modern printers don't seem to have this. So I still need to be able to select a font when I print certain text files from the command line, but it seems this is impossible. I've been working with instances and lpoptions, which let me do a lot of the other things I need, like orientation, margins, and even font size, but I still cannot choose a font other than the default.
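A possible workaround (a sketch, assuming enscript is installed; myfile.txt and the font name are placeholders): render the text file to PostScript with a font of your choosing and hand the result to CUPS.

# -f selects the font, -p - writes PostScript to stdout, lp sends it to the printer
enscript -f Courier10 -p - myfile.txt | lp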
I've been reading tutorials on the Linux sed command, but haven't gotten anywhere yet. The problem is: I want to insert a line into my DNS database file, which has a pattern like the one below:
<Domain name> 3tabs here <IN> <A> <ip address>
The question is: how do I add a line to a file like this using the Linux sed command? I'm having trouble inserting the tabs and the spaces!
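A rough sketch, assuming GNU sed and bash; newhost, 192.0.2.10, and db.example are placeholder values. bash's $'...' quoting turns each \t into a literal tab character before sed ever sees it, and a appends the text after the addressed line:

# insert the record after line 1 of the zone file (adjust the address as needed)
sed -i $'1a newhost\t\t\tIN\tA\t192.0.2.10' db.example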
Simple Scan error as follows:

Failed to save file
ImageMagick returned error code 11
Command line: convert -adjoin /tmp/simple-scan-DA9MBV.jpg /tmp/simple-scan-XCK4BV.jpg /tmp/simple-scan-NZVYBV.pdf
Stdout:
Stderr:

Using Karmic. Note: I have AppArmor extra profiles installed but didn't notice one that related to Simple Scan or ImageMagick. Red herring or not?
I just started with Linux and I'm trying to create a new file for each line of my input file. So I have a file with 637 lines of data:
2
4
6
2
8
5
3
0
5
etc.
and I want to create a new file from each line. With
cat name.txt | awk '{ line = $0; print line }'
I nicely see all the lines, but what remains is to save each line separately into a new file. I tried the while read line command in combination with output redirection (>> $.txt), but it didn't work well.
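A short sketch in awk; the output names line_1.txt, line_2.txt, ... are just an assumed convention:

# write line N of name.txt into its own file line_N.txt
awk '{ f = "line_" NR ".txt"; print > f; close(f) }' name.txt

# split -l 1 does the same job with its own naming scheme (xaa, xab, ...)
split -l 1 name.txt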
I have huge text files with two fields; the first is a string, the second is an integer. The files are sorted by the first field. What I'd like to get in the output is one line per unique string and the sum of the numbers for the identical strings. Some strings appear only once while others appear multiple times. Given the sample data below, for the string glehnia I'd like to get 10+22=32 in the result. How can I do this, either with GnuWin32 command-line tools or in a Linux shell?
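A sketch in awk, which works the same under GnuWin32 or a Linux shell; input.txt is a placeholder name:

# sum the second field for each distinct first field
awk '{ sum[$1] += $2 } END { for (k in sum) print k, sum[k] }' input.txt

Since the file is already sorted by the first field, piping the result through sort restores that order if it matters.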
My PHP webpage is below. I would like to enter "Hello, in the main inputbox field" below (You are editing: textfile.txt) and click "SAVE", directly from the command line.
Sort of: wput_php "hello, in the main inputbox field", click save, and there it is; the text would be uploaded.
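One hedged way to do this from the command line (a sketch using curl; the URL and the form field names content and save are assumptions, so check the action and name attributes in your PHP form):

# POST the text the same way the browser's SAVE button would
curl -d 'content=Hello, in the main inputbox field' \
     -d 'save=SAVE' \
     http://example.com/edit.php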
I'm trying to output a list of running processes via a shell script. At the moment I have this, which outputs the processes to a text file called out:
echo $(ps aux) >>out
The problem, though, is that the processes are all just one big block of text, which makes it hard to read. Does anyone know how to format the output to the text file so that it prints one process per line? I know it's probably simple, but I'm very new to Linux.
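The block-of-text effect most likely comes from echo $(...), which collapses all the newlines through word splitting; a minimal sketch of the fix:

# redirect ps directly so its newlines survive, one process per line
ps aux >> out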
I'm using Ubuntu with the GUI. I have a .pps (PowerPoint presentation) on the desktop. I installed the PowerPoint viewer and made it the default program for opening the file. When I double-click on the file, everything works. My problem is that I need this on a schedule, so I downloaded Scheduled Tasks. In Scheduled Tasks they ask me for the command line I want to execute, and that's where it doesn't work. I checked the "Allow executing file as program" box on the file, but I get the error "cannot execute binary file".
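A .pps file is a document, not a program, so the scheduler needs a command that opens it rather than executes it. A hedged sketch (the path is a placeholder):

# hand the file to the default viewer, just like double-clicking it
xdg-open /home/yourname/Desktop/presentation.pps

Depending on the scheduler, you may also need DISPLAY=:0 in the command's environment so the viewer can find your X session.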
I am making a text search engine. I first need to convert binary documents to text. I want to go with a cross-platform (we develop on both Windows and Linux) command-line tool (so that I can get the output via a Python subprocess). What are the choices for this?
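As one example of the kind of tool that fits (a sketch; Poppler's pdftotext runs on both Windows and Linux and covers the PDF case, while other formats need their own converters such as antiword or catdoc for .doc):

# extract plain text from a PDF so the search engine can index it
pdftotext document.pdf document.txt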
In Windows, if I have a console window open, type winmine, and press enter, Minesweeper will appear, completely separate from the cmd program. The Minesweeper instance is not tied to the command prompt in any way that I know of, with the exception of Minesweeper's parent being set to that instance of the command prompt. It's different in Linux, however.
In Linux, if I have a console window open, type emacs and press enter, Emacs will open, but it seems tied to the command line. Specifically, it appears that I can't use the command line anymore until that instance of Emacs is closed. Is there a way to replicate the Windows behavior in Linux?
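The usual ways to get the Windows-style behaviour (a sketch):

# run emacs in the background so the shell prompt comes straight back
emacs &

# also detach it from the terminal so closing the window won't kill it
nohup emacs >/dev/null 2>&1 &

# or, in bash, background it and then drop it from the shell's job table
emacs & disown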
I was wondering if it is possible to append some text to the output of ls. Say, if I wanted to create symbolic links in a folder on my desktop for all the files under a folder on my hard disk, I could say something like echo ln -s | ls (pretty sure this won't work, but I'm looking for something along these lines). This should tack ln -s onto every file name that ls prints.
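ls can't prepend text itself, but a loop or xargs gets the same effect; a sketch where /path/to/folder and ~/Desktop/links are placeholders:

# one symlink per file, created in the destination directory
for f in /path/to/folder/*; do
    ln -s "$f" ~/Desktop/links/
done

# the xargs equivalent of "append ln -s to every name ls prints"
ls /path/to/folder | xargs -I{} ln -s /path/to/folder/{} ~/Desktop/links/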
I have to delete a certain line of text from a text file via Ubuntu shell scripting. I have done research, and it seems that most people advocate using sed's /d option. sed does not edit the text file in place by default, hence most options I discovered involved the use of a temporary variable/text file and then overwriting the old file with the new temporary file. Is there any way I can bypass the use of temporary storage containers? I hope there is some magical combination of commands to edit the file directly.
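GNU sed's -i option does exactly this: it handles the temporary file for you, so the script never has to manage one. A minimal sketch (the line number, pattern, and file name are examples):

# delete line 5 in place
sed -i '5d' file.txt

# or delete every line matching a pattern
sed -i '/unwanted text/d' file.txt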
I have large text files with space-delimited strings (2-5 per line). The strings can contain "'" or "-". I'd like to replace, say, the second space with a pipe. What's the best way to go? Using sed I was thinking of this:
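For what it's worth, here is one hedged possibility (a sketch, not necessarily the command the question refers to; input.txt and output.txt are placeholders): sed's numeric flag targets the Nth occurrence of a match directly.

# replace only the second space on each line with a pipe
sed 's/ /|/2' input.txt > output.txt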
I have a utility that works with files. The utility crashes after about 120 files. The input to the utility is a file containing a file list. I want to cut the file with the file names in it into separate files containing about one hundred names each. My thought was to determine the number of lines divided by 100 and then use head and delete to create temporary files so I can run the utility multiple times and prevent the crash. When I tried to create a variable using the wc -l command, the output gives me the total number of lines, but it also includes the filename of the input file (873 Filename.txt). I cannot figure out how to remove the Filename.txt from the variable.
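wc -l only prints the filename when the file is passed as an argument; reading it on stdin (or trimming the output afterwards) avoids that. A short sketch:

# no filename in the output when wc reads from stdin
lines=$(wc -l < Filename.txt)
echo "$lines"

# an alternative: keep the original command and strip the name afterwards
lines=$(wc -l Filename.txt | awk '{print $1}')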
I have a nearly 10-year-old iMac at home and installed Lenny on it (somehow; I had to trash Mac OS 9.2 :-P). Everything was successful until now, but on entering "startkde", the screen fills with "kpersonalizer: cannot connect to X server" lines. Looking at other threads here, I tried mdetect and installing X11, but to no avail.
I have two txt files containing x and y coordinates: xcoord.txt and ycoord.txt. I need to open them, read them line by line to get each coordinate, and then each time update the Xs and Ys parameters inside another file called "dc.in" with the grabbed values.
Finally, each time I need to run two exe files (dc_2002 and st_vac) and produce the corresponding output for each Xs and Ys (dc.in is an input file for these exe files).
I have written the following code but it does not work:
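Since the code itself isn't shown here, the following is only a rough sketch of one way the loop could look, not the asker's script; it assumes dc.in is regenerated from a template file dc.in.template containing the placeholder tokens XS_VALUE and YS_VALUE:

# read one x and one y per iteration, in step
paste xcoord.txt ycoord.txt | while read -r x y; do
    # substitute the current coordinates into dc.in
    sed -e "s/XS_VALUE/$x/" -e "s/YS_VALUE/$y/" dc.in.template > dc.in
    # run both executables on the freshly written input
    ./dc_2002
    ./st_vac
done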
I was wondering if any Perl gurus could help me with a quick log file adjustment. I have a text file that looks like so (tabs and newlines are revealed so you can see what separates the data):
There are maybe 100 lines of text in this file at any given time. I need to delete all duplicate lines, looking only at the first bit of text prior to the first tab. It doesn't matter which one gets deleted, as long as no two lines begin with the same text before the first tab. So in this example, either the first line "1234" or the last line "1234" would need to be deleted. I already have code in my script that opens the files; I just need the code to read the text into an array, find the matches based on the above criteria, and make the deletions.
If it would be easier, I can even do a system call and use SED (v4.1.5) and/or AWK (3.1.5) instead.
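If the sed/awk route is acceptable, a one-line awk sketch does it, keeping the first occurrence of each leading field; logfile.txt is a placeholder name:

# print a line only the first time its text before the first tab has been seen
awk -F'\t' '!seen[$1]++' logfile.txt > logfile.deduped.txt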
Is there a sed command to add text before a line number in a text file? I have a text file with 500 lines, and I want to add 3 more lines of text after line 300, or before line 302; either is no problem.
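A minimal sketch, assuming GNU sed; extra.txt holds the 3 new lines and file.txt is a placeholder name:

# r reads extra.txt and inserts its contents after line 300
sed -i '300r extra.txt' file.txt

# for a single literal line, a (append after the addressed line) also works
sed -i '300a this text goes in after line 300' file.txt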
bash 3.1.17(2). I'm trying to write a shell script which must operate on each line of an ASCII text file. So, all the code must be inside a loop, and inside the loop, the first thing should be to read the next line from the file. I know about the bash read command, but it reads from stdin. Is there any way to make it read from a file?
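The standard pattern is to redirect the file into the loop, so read takes each line from the file instead of from stdin; a sketch with input.txt as a placeholder:

while IFS= read -r line; do
    # replace this with the real per-line processing
    printf 'got: %s\n' "$line"
done < input.txt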