I have a 2.2 MB text file (edited in gedit). I want to split it into two or three smaller files/volumes so I can upload them separately to web pages. Does anyone know a quick and easy way to do this?
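One possible approach is the split command from GNU coreutils; "bigfile.txt" and the "part_" prefix below are placeholder names:

Code:
# count the lines, then split into roughly three equal pieces without breaking lines
lines=$(wc -l < bigfile.txt)
split -l $(( (lines + 2) / 3 )) -d bigfile.txt part_

This produces part_00, part_01 and part_02, which can be rejoined later with cat part_* > bigfile.txt.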
How do you convert OpenOffice (ODT) documents to text files? I have written a report in LibreOffice and now want to continue editing it in LyX (a LaTeX front end), so the ODT file needs to be saved as a .tex file.
I don't see an option for this in the File menu (Export/Save As). Is there a plugin or another tool that can do it?
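One hedged option, assuming pandoc is installed ("report.odt" and "report.tex" are placeholder names), is to convert the ODT directly to LaTeX and then import the .tex into LyX:

Code:
pandoc report.odt -o report.tex

Writer2LaTeX (a LibreOffice extension) is another route; either way the result usually needs some manual cleanup before LyX is happy with it.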
I need to be able to convert HTML email messages saved as text files (.eml or .msg) to PDF documents, one PDF per email, retaining formatting and images.
Are there any Linux tools that will allow me to do this from the command line (so it can be scripted)?
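A rough sketch of one approach, assuming ripmime and wkhtmltopdf are installed; the file names are placeholders, and inline images referenced via cid: URLs may need extra handling:

Code:
for f in *.eml; do
    dir=$(mktemp -d)
    ripmime -i "$f" -d "$dir"                        # unpack the MIME parts
    html=$(ls "$dir"/*.html "$dir"/text* 2>/dev/null | head -n 1)
    [ -n "$html" ] && wkhtmltopdf "$html" "${f%.eml}.pdf"
    rm -rf "$dir"
done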
I'm trying to convert an HTML file into a text file. When I simply run "html2text <filename>", the output displayed is the way we want, but when I redirect it using "-o" or ">>", the file contains extra characters. I even tried -ascii, but it didn't help much.
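The extra characters are very likely the backspace sequences the classic html2text emits for bold and underlined text; a terminal renders them invisibly, but in a file they show up as ^H. Two hedged ways around it, assuming your html2text supports -nobs:

Code:
html2text -nobs -ascii input.html > output.txt   # -nobs suppresses the backspace overstriking
col -b < dirty.txt > clean.txt                   # or strip backspaces from an existing file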
I'm working with a simulator tool that requires an input file in .BIN format; basically I need to convert a plain text file into a BIN file. How can I do that? Is there a command (or commands) that would let me do it?
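It depends on what the simulator means by "BIN". If the text file is a plain hex dump of the bytes the simulator expects, xxd can reverse it; if the tool simply wants the same bytes under a .bin name, a copy is enough. Both file names below are placeholders:

Code:
xxd -r -p input.txt output.bin    # text containing hex digits -> raw binary
cp input.txt input.bin            # if only the extension matters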
I have a PDF file on my Linux RHEL 4.7 machine. I can open the file, but when I click 'Save As' to save it in text format, I don't see any option for that. I need to save the PDF file as plain text. Could anyone tell me how to do this? I am using KDE.
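pdftotext (part of the xpdf/poppler utilities, often already installed on RHEL) handles this from the command line; "file.pdf" is a placeholder:

Code:
pdftotext -layout file.pdf file.txt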
I am collecting USB usage details for all users and want to convert them into CSV files so I can export them into a database. The desired output is CSV, produced with some batch or awk script.
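A minimal awk sketch, assuming the raw log is whitespace-separated with (hypothetically) date, time, user, device and action fields; adjust the field list to your actual layout:

Code:
awk 'BEGIN { OFS="," } { print $1, $2, $3, $4, $5 }' usb_usage.log > usb_usage.csv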
In order to make this conversion I have to use a text editor. This is tedious. Is there an easier way to do it, like some program I can run from the Linux or OSX terminal?
I want to convert many text files (copied from a Windows workstation) to UTF-8 encoding. Yes, iconv is available for this, but I have to supply the source encoding as a command-line parameter. The problem is that in most cases I am not sure what the source encoding is. I would also like a script that converts many files recursively.
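A hedged sketch that feeds file's encoding guess to iconv; the detection is only a guess (files from Windows are often really CP1252), so spot-check the results:

Code:
find . -type f -name '*.txt' | while read -r f; do
    enc=$(file -b --mime-encoding "$f")              # guessed source encoding
    case "$enc" in utf-8|us-ascii) continue ;; esac  # already fine
    iconv -f "$enc" -t UTF-8 "$f" > "$f.utf8" && mv "$f.utf8" "$f"
done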
I am new to Linux. How can I convert a core dump file into a readable text file that includes all the information in the core dump, such as all variables, thread information, the call trace for each task, and so on? I know GDB can view this, but it won't dump all of the information into one text file, and sometimes people want to review the cause of the core dump without a Linux environment.
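GDB's batch mode can write everything to one file in a single run; "./your_program" and "core.12345" are placeholders for your binary and core file:

Code:
gdb -batch \
    -ex "set pagination off" \
    -ex "info threads" \
    -ex "thread apply all bt full" \
    -ex "info registers" \
    ./your_program core.12345 > core_report.txt 2>&1

The resulting text file can then be read on any machine, Linux or not.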
I have several hundred files on my Windows machine in .nc / .ncs format for a CNC machine. I need to convert them to .txt, which is as easy as opening each one in Notepad and saving it as .txt, but there are so many that doing it by hand would take far too long.
The reason I am writing to LinuxQuestions is that I would feel more comfortable loading a live CD and using some sort of terminal command for this than downloading one of the many "freeware"-type programs I have found for Windows (especially since I have had a rootkit before and had to start over completely to get rid of it).
I need to know:
1. Is this possible to do from the terminal without super advanced knowledge?
2. Can someone please point me in the right direction: something to read, or an example like the sketch below?
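To answer 1: yes, a simple shell loop from a live CD will do it without anything advanced. A sketch, assuming the files all sit in one directory (the path is a placeholder):

Code:
cd /media/windows/cnc_files
for f in *.nc *.ncs; do
    [ -e "$f" ] || continue          # skip the pattern itself if nothing matches
    cp -- "$f" "${f%.*}.txt"         # use mv instead of cp to rename rather than copy
done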
cat file1.txt
15 this is a sentence containing various words and spaces
34 this is a another sentence containing various words and spaces
cat file2.txt
2 this is sentence1file2
6 this is sentence2file2
54 this is sentence3file2
I would like to join these 2 files. The result should look as follows :
cat joinedfile.txt
2 this is sentence1file2
6 this is sentence2file2
15 this is a sentence containing various words and spaces
34 this is a another sentence containing various words and spaces
54 this is sentence3file2
==> so the joined file must be sorted on the first number. Any ideas how this can be achieved ?
I'm trying to convert the file extensions of files in many sub-directories from uppercase to lowercase. I have two problems. First, how do I list the absolute paths of the files recursively across many sub-directories? So far I have this:
Code: find ~/Photos -print
which would be fine, except that it also lists the directories themselves when it finds them, rather than just the files with absolute paths. I couldn't find a switch for the "ls" command to do this, so I had to improvise with "find". And second, once I grab each absolute file name, how do I change just the file extension rather than the entire file name, which is what my current attempt does?
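find's -type f test answers the first problem (files only, full paths), and a parameter expansion handles the second. A sketch, assuming the file names contain no newlines:

Code:
find ~/Photos -type f -name '*.*' | while read -r f; do
    ext="${f##*.}"                                            # current extension
    lower=$(printf '%s' "$ext" | tr '[:upper:]' '[:lower:]')
    [ "$ext" = "$lower" ] && continue                         # already lowercase
    mv -- "$f" "${f%.*}.$lower"
done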
Through various Windows reinstalls and switches between Linux distros, I have a massive amount of duplication within my music archive (on the order of 7+ copies of each file). I found a lovely program called "fdupes" and was able to build a list of all the duplicate files, and I'm trying to use "xargs" to remove them. However, when I run "xargs -0 --arg-file="dupes.txt" rm" or "xargs -0 rm < "dupes.txt"", I get the following error: "xargs: argument line too long".
How can I fix this, or is there perhaps a different way of accomplishing the same thing?
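The -0 flag tells xargs to expect NUL-separated input, but dupes.txt is newline-separated, so xargs sees the whole file as one enormous argument. Two hedged alternatives; note that a plain fdupes listing contains every copy, so the xargs route is only safe if the list was made with fdupes -f (omit the first file of each set):

Code:
# let fdupes do the deleting itself, keeping the first copy of each set
fdupes -r -d -N ~/Music

# or feed the newline-separated list to GNU xargs, skipping the blank lines between sets
grep -v '^$' dupes.txt | xargs -d '\n' rm --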
I am using Ubuntu 10.04 LTS (Lucid), and I notice the free space on the root partition is getting smaller and smaller.
Five months ago there was about 3.9 GB of free space on root, but now it is only 1.6 GB. I always run sudo apt-get autoremove and sudo apt-get autoclean after every update, and I also use BleachBit to clean the system, but neither helps.
I never faced this problem with older versions of Ubuntu; is there any way to fix it?
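A quick way to see where the space is actually going, before blaming the package cache, is du; the directories checked below are just the usual suspects:

Code:
sudo du -x --max-depth=1 / 2>/dev/null | sort -rn | head -n 15
sudo du -sh /var/log /var/cache /var/crash /root 2>/dev/null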
I've got a text file with a list of .gz files. These .gz files are in various sub-directories of one parent directory, and I've hacked together a little script to copy them from their current location to a new one and write any it can't find to "/home/user/not_found", but for the life of me I can't get it to run properly!
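Without the original script it is hard to say what is failing, but a minimal working sketch looks like this; "gz_list.txt", the parent directory and the destination are placeholders, and the list is assumed to contain bare file names, one per line:

Code:
parent=/path/to/parent_dir
dest=/path/to/new_location
while read -r name; do
    src=$(find "$parent" -type f -name "$name" -print -quit)
    if [ -n "$src" ]; then
        cp -- "$src" "$dest/"
    else
        echo "$name" >> /home/user/not_found
    fi
done < gz_list.txt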
I'm trying to output a list of running processes via a shell script. At the moment I have this, which outputs the processes to a text file called out:
echo $(ps aux) >>out
The problem, though, is that the processes come out as one big block of text, which makes it hard to read. Does anyone know how to write the output to the text file so that it prints one process per line? I know it's probably simple, but I'm very new to Linux.
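The culprit is echo $(ps aux): the command substitution splits the output into words, and echo glues them back together with single spaces, destroying the newlines. Redirecting ps directly keeps one process per line:

Code:
ps aux > out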
I have Ubuntu 9.04 and just installed Sound Converter. I am trying to convert a bunch of .ogg files to MP3 to play on my iPod, and it's not working very well. In the Sound Converter options I have it set to convert to high-quality MP3. I choose the folder the files are in and, after a moment (slow laptop), Sound Converter populates; I hit 'Convert' and it reports that the conversion completes in two seconds. All it actually did was create the new artist/album folder structure, but there is nothing in it. I'm not sure what I am missing. I have used Sound Converter before and it worked fine.
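If the GUI keeps failing, a hedged command-line fallback is ffmpeg with the LAME encoder (this assumes the MP3 encoder is available, e.g. via the restricted/extras packages):

Code:
for f in *.ogg; do
    ffmpeg -i "$f" -acodec libmp3lame -ab 192k "${f%.ogg}.mp3"
done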
I'm trying to use convert; I have ImageMagick installed. I use this line: convert *.jpg test.pdf, but I'm only able to convert a single JPG file to PDF, not multiple files at once. When there's more than one file, I get the following error: Segmentation fault.
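The segfault with many JPGs is often ImageMagick running out of resources; a hedged workaround is to convert one image at a time and then join the pages (pdfunite comes with poppler-utils):

Code:
for f in *.jpg; do
    convert "$f" "${f%.jpg}.pdf"        # one single-page PDF per image
done
pdfunite *.pdf test.pdf                 # last argument is the output file

Deleting any old test.pdf first avoids merging it into itself on a rerun.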
I checked the 'Run executable text files when they are opened' option in Nautilus preferences. I have noticed that files such as .sh and .bin launch by simply clicking on them (which is great). However, I have also noticed that an ordinary .txt or .html file must not be marked as executable in order to open it in gedit or Firefox, respectively, by clicking; otherwise you must right-click and choose Open With every time. Which file types need execute permissions, and which file types should never have them?
I am trying to upload some pictures to my Facebook account using Firefox. When I click on Facebook's file upload icon, Firefox brings up a 'File Upload' window. I noticed that smaller image files are previewed in the lower right-hand corner, while bigger image files are not. Is there any way I can change this behavior, or maybe change what Firefox uses to browse my files?