I need to insert 3-4 lines of text at the beginning of a text file. The file is a largish MySQL dump, the result of a backup shell script. This shell script should insert the required text. I've wrestled with sed, but lost.
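A minimal sketch of one way to do it with GNU sed, assuming the dump is called backup.sql and the header lines are fixed (both are illustrative):

# prepend three header lines to the dump, editing in place
sed -i '1i\
-- header line one\
-- header line two\
-- header line three' backup.sql

If the sed syntax feels fragile, writing the header lines and then the old dump into a new file and renaming it back achieves the same result.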
I am on Ubuntu 11.04 and using LibreOffice 3.3.2 to compose new documents, and I am saving them as .doc, .ppt and .xls files (due to having to share them with others who are on Windows systems).
I have a lot of doc files and I need to search for text INSIDE these files. I am perplexed by the fact that no search tool is able to search for text INSIDE these file types. "cat" can display them of course, but grep is not able to locate text INSIDE these file types. I even tried to save a .doc file as an .odt file, but no luck. Applications > Accessories > Search for Files does not search INSIDE doc, xls or ppt with the option "Contains the text".
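A minimal sketch of one workaround, assuming the catdoc package is installed (it provides catdoc for .doc, catppt for .ppt and xls2csv for .xls); the search term and folder are illustrative:

for f in *.doc; do
    if catdoc "$f" | grep -qi "search term"; then
        echo "$f"    # print the name of each .doc that contains the term
    fi
done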
Was wondering if any perl gurus could help me with a quick log file adjustment. I have a text file that looks like so (tabs and newlines are revealed so you can see what separates the data):
There are maybe 100 lines of text in this file at any given time. I need to delete all duplicate lines, looking only at the first bit of text prior to the first tab. It doesn't matter which one gets deleted, as long as no two lines begin with the same text before the first tab. So in this example, either the first line "1234" or the last line "1234" would need to be deleted. I already have code in my script that opens the files - I just need the code to read the text into an array, the part that would find matches based on the above criteria, and make the deletions.
If it would be easier, I can even do a system call and use SED (v4.1.5) and/or AWK (3.1.5) instead.
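Since awk is acceptable, a minimal sketch that keeps only the first line seen for each key before the first tab (the file names are illustrative):

# print a line only the first time its first tab-separated field is seen
awk -F'\t' '!seen[$1]++' logfile.txt > logfile.deduped.txt

The same idea in Perl would be to hash on the text before the first tab and skip any line whose key has already been seen.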
cat file1.txt
15 this is a sentence containing various words and spaces
34 this is another sentence containing various words and spaces
cat file2.txt
2 this is sentence1file2
6 this is sentence2file2
54 this is sentence3file2
I would like to join these 2 files. The result should look as follows :
cat joinedfile.txt
2 this is sentence1file2
6 this is sentence2file2
15 this is a sentence containing various words and spaces
34 this is another sentence containing various words and spaces
54 this is sentence3file2
==> so the joined file must be sorted on the first number. Any ideas how this can be achieved?
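A minimal sketch, assuming the first field is always numeric and fields are space-separated:

# merge the two files and sort numerically on the first column
sort -n -k1,1 file1.txt file2.txt > joinedfile.txt

If both inputs are already sorted on that column, sort -m -n file1.txt file2.txt does a pure merge instead of a full re-sort.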
I'm trying to output a list of running processes via a shell script. At the moment I have this, which outputs the processes to a text file called out.
echo $(ps aux) >>out
The problem is, though, that the processes all come out as one big block of text, which makes it hard to read. Does anyone know how to write the output to the text file so that it prints one process per line? I know it's probably simple, but I'm very new to Linux.
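A minimal sketch: the echo $(...) wrapper is what collapses the newlines into one long line of words, so redirecting ps directly keeps one process per line (using the output file name from the question):

# redirect ps output as-is; each process stays on its own line
ps aux > out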
I have a lot of folders, each named by a number, and in each of these folders I have a specific file (stddev.dat) containing a single line of numbers. I need to end up with a single file in which each line is one of the stddev.dat lines (sorted or not, it doesn't matter), and I also need to add at the beginning of each line the number of the folder it comes from.
I'm no bash expert, and the "add at the beginning of the line" part is a bit of a problem for me. Here is what I've come up with so far, just to put everything in one file (and if you know a better/more elegant way to do the same thing I've done, I'm listening).
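A minimal sketch, assuming the numbered folders sit in the current directory and the combined output file is collected.dat (an illustrative name):

for d in */; do
    n=${d%/}                              # folder name, i.e. the number
    if [ -f "$d/stddev.dat" ]; then
        # prefix the folder number, then the single line from stddev.dat
        printf '%s %s\n' "$n" "$(cat "$d/stddev.dat")" >> collected.dat
    fi
done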
I would like to write a text user interface (TUI) to adjust some text config files, etc. Is there a tool or application for creating TUIs like this? I'm talking about the type of config tool you see run at first boot.
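A minimal sketch using whiptail, which ships with Ubuntu (dialog is a similar alternative); the menu entries are illustrative:

#!/bin/bash
# simple menu; whiptail writes the chosen tag to stderr, so swap the streams to capture it
CHOICE=$(whiptail --title "Setup" --menu "Choose a setting to edit" 15 60 2 \
    "network" "Edit network settings" \
    "hostname" "Change the hostname" 3>&1 1>&2 2>&3)
echo "You picked: $CHOICE"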
I have a hidden folder with a lot of text files in it. I would like to search in this folder for all files containing a given text. The File Browser's FIND searches only in the file names, not in their contents. The FIND function of Ubuntu does not allow me to search ONLY in the given hidden folder. So, how can I find the files within the hidden folder that have the given text within them?
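A minimal sketch from the command line, assuming the folder is ~/.myhidden (an illustrative name) and the files are plain text:

# -r searches recursively, -l prints only the matching file names, -i ignores case
grep -ril "given text" ~/.myhidden/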
This may be an advanced question, but I need to know how to do this. Here at work I am in charge of recruiting and we have about 1,000 resumes in already. All of the resumes are in .pdf format. I need to rename every .pdf in the following format: {firstnameLastname}.pdf. The only way I know how to do this is to convert all the .pdf files to text, extract the name out of the first few lines of text, import into Excel, and then use VBA to rename the files en masse.
Here is my logic so far: ~/Desktop/a houses all the .pdf resumes. Open terminal: Code: cd ~/Desktop/a
for f in *.pdf; do pdftotext -raw $f; done
That will convert all of the preceding resumes into text files. Now I would like to append the name of the text file to the last line of the text file. So, for example, for Resume1.txt, I want to append "Resume1.txt" to the last line within Resume1.txt. So after I run the command, I open Resume1.txt and I want to see "Resume1.txt" on the last line, at the end of the resume. How can I do this? I would like to use a loop and have the terminal append the filename to the body of the text file until all of them have been appended.
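A minimal sketch for the append step, assuming the converted .txt files sit in ~/Desktop/a:

cd ~/Desktop/a
for f in *.txt; do
    echo "$f" >> "$f"    # adds the file's own name as a new last line
done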
I put a text file on my desktop and added a couple lines of text with gedit. File type shows text/plain. Double-click opens the file in gedit which is what I want. I'm using the file to temporarily hold some snips of code that I copy from file to file, but when I copy some html into the file and save it, now file properties show it's text/html and a double-click opens the file in firefox, which isn't what I want. Is there some way to keep the file type from changing itself?
I have to delete a certain line of text from a text file via Ubuntu shell scripting. I have done research, and it seems that most people advocate using sed's /d option. By default, sed does not edit the text file itself. Hence, most options I discovered involved the use of a temporary variable/text file and then overwriting the old file with the temporary new one. Is there any way I can bypass the use of temporary storage containers? I hope there is some magical combination of commands to edit the file directly.
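A minimal sketch using GNU sed's in-place flag, assuming the line to drop is, say, line 7 of data.txt (both illustrative):

# delete line 7 in place; use -i.bak instead if you want a backup copy
sed -i '7d' data.txt

# or delete by content instead of line number
sed -i '/text to remove/d' data.txt

Under the hood sed -i still writes a temporary file and renames it over the original, but you don't have to manage that yourself.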
I want to display something in my text view widget in Glade using C code; that part is all right. Now I need to attach a save button beneath the text view, so that on click the text view content is saved as a .txt file.
I want to display the contents of a particular log file (a simple text file, I mean, on Linux). But there is a problem: the contents need to be organized in a fixed format. Have a look at this log file:
So, while displaying the contents of above file on a web page, I want to format the field names found in the log file: User Name:, Reported Problems Description:, and Remarks:. These fields may contain a variable length of text and no specific line number is assumed for them to appear on.
Well, what I am trying to do may sound weird to some of you. The field "Reported Problems Description:" can possibly contain text which itself embeds a colon.
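A minimal sketch of one way to mark up the field names before the page is rendered, assuming the log is problem.log and the labels always start a line (both assumptions are illustrative):

# wrap each known field label in <strong> tags; everything else passes through untouched
sed -e 's|^User Name:|<strong>User Name:</strong>|' \
    -e 's|^Reported Problems Description:|<strong>Reported Problems Description:</strong>|' \
    -e 's|^Remarks:|<strong>Remarks:</strong>|' problem.log

Because the substitutions anchor on the label at the start of the line, a colon appearing later inside the description text is left alone.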
Basically I have a dir that contains my makefile and another directory inside this, called source, which holds the main source files. External to these I have a couple of dirs, common and drivers.
In my makefile I use
to include the prototypes from the headers in the common and drivers folders used by source; this works fine. However, in common and drivers I use a few variables that are set in the source dir. I set these with externs inside the common and driver files. However, I'm sure I should be able to set the directory path for source inside my makefile. So say I have hardware.h with prototypes inside source; I set DINCDIR = -I/Source -I../Drivers -I../CommonFiles
Then from a C file inside my common folder I say #include "hardware.h"; the file should be able to see hardware.h and its prototypes. However I get:
Is there some way I can get the external dirs to see the source dir?
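Not knowing the exact layout, a sketch of the idea: whichever directory the compiler is run from, the -I flags have to point at every folder that holds headers, including source itself, so a compile line for a file in common might expand to something like the following (all paths are illustrative):

# run from the directory that holds the makefile; source/ must be on the include path
gcc -Isource -I../drivers -I../common -c ../common/foo.c -o ../common/foo.o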
How do I list the contents of a folder to a text file? I'm trying to list all my music, including all subfolders, etc., to a text file, but I can't remember the command.
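A minimal sketch, assuming the music lives under ~/Music and the list goes to music.txt (both names are illustrative):

# every file, with full paths, including all subfolders
find ~/Music -type f > music.txt

# or, if you prefer a folder-by-folder layout
ls -R ~/Music > music.txt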
I am looking for a way to keep a log and make if-then statements based on whether a line exists in the log. I am also looking for a way to make a simple loop, like a goto to a line number, and I am also wondering how to add/remove bits of text from a text file (the plugins line in server.properties).
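A minimal sketch of all three pieces, with illustrative file names and entries:

# 1) if-then on whether a line exists in the log
if grep -qx "backup finished" actions.log; then
    echo "already logged"
fi

# 2) a simple loop instead of a goto
while ! grep -qx "backup finished" actions.log; do
    sleep 5
done

# 3) add or remove text on the plugins line of server.properties
sed -i 's/^plugins=.*/&,myplugin/' server.properties      # append ,myplugin to that line
sed -i 's/,myplugin//' server.properties                  # remove it again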
Is there a sed command to add text before a given line number in a text file? I have a text file with 500 lines, and I want to add 3 more lines of text after line 300, OR before line 302; either is no problem.
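A minimal sketch with GNU sed, editing in place; the inserted text is illustrative:

# append three lines after line 300
sed -i '300a\
first new line\
second new line\
third new line' file.txt

If the three lines already live in another file, sed -i '300r newlines.txt' file.txt reads them in at the same spot.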
How would I go about copying all .jpg or .JPG files from a folder and all its subfolders to my /usr/name/pictures folder? I'm guessing I'd have to use some sort of .[jJ][pP][gG] pattern to get all the pictures, from other examples I've seen, but I'm really not sure how to use that in a recursive cp.
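A minimal sketch using find instead of a recursive cp, assuming the source folder is ~/photos (an illustrative name):

# -iname matches .jpg, .JPG, .Jpg, and so on
find ~/photos -type f -iname '*.jpg' -exec cp {} /usr/name/pictures/ \;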
I want to copy the location of every .avi and .jpg file present in a folder or in any subfolder of a directory and save it in a text file. How do I do this? For example: /home/username/Desktop/bookofeli/video/book.avi. It should give the full path of each file.
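A minimal sketch, assuming the top directory is ~/Desktop/bookofeli and the list goes to locations.txt (both illustrative):

# write the full path of every .avi and .jpg (any case) to locations.txt
find ~/Desktop/bookofeli -type f \( -iname '*.avi' -o -iname '*.jpg' \) > locations.txt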
Is there some kind of universally recognized pragma that one can insert at the beginning of a text document to designate it to be UTF-8 encoded (or any other encoding)? I've seen certain editors insert encoding comments, and one or two compilers that have an encoding pragma. But I was wondering if anyone has tried to establish some kind of universal tag format for text documents.
I have a folder with 250 subfolders, and each one of them has (at least) one image in it (along with other stuff). How can I 1) copy all the images from those subfolders and paste them into one folder together (other than by hand, obviously)? 2) optionally: copy only the images of a certain size and above?
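A minimal sketch, run from the top folder, assuming the images are .jpg and "size" means file size (the destination folder and the 100 kB threshold are illustrative):

mkdir -p ~/allimages

# 1) copy every image from every subfolder into one place
find . -type f -iname '*.jpg' -exec cp {} ~/allimages/ \;

# 2) only copy images of at least 100 kB
find . -type f -iname '*.jpg' -size +100k -exec cp {} ~/allimages/ \;

Files with identical names will overwrite each other in the destination, so adding -n to cp (no clobber) may be worth considering.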
I would like to know how to move all the files from a single folder and its subfolders to a single, different location in as few steps as possible. For example when I download files from one of my school's websites, the file I want is located in a deep sub-directory. So, I have to cd many times just to get to the file I want. Is there a way to recursively move all the files within a folder's subdirectories into a new location?
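A minimal sketch, with illustrative source and destination paths:

# move every file, at any depth, under the downloaded folder into ~/schoolwork
find ~/Downloads/coursefiles -type f -exec mv {} ~/schoolwork/ \;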