I am working on text-processing tasks, and I found that if you assign a file's contents to a variable, the trailing newline is chomped automatically:
Code:
variable=$(cat file.txt)
The problem is I can only access the items/lines using:
Code:
for line in $variable
do
    echo $line
    # Other commands
done
How do I convert this to an indexed array? More importantly, how do I get access to the individual elements ${line[0]}, ..., ${line[n]}? Another thing: if file.txt has lines with spaces, it is a mess using the for...in loop, yet echoing the variable prints it line by line.
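A minimal sketch, assuming bash 4+ where mapfile (a.k.a. readarray) is available; the quoting is what keeps lines containing spaces intact:
Code:
mapfile -t lines < file.txt        # read file.txt into the indexed array "lines"
echo "${lines[0]}"                 # first line
echo "${lines[2]}"                 # third line
echo "${#lines[@]}"                # number of lines
for line in "${lines[@]}"; do      # quoted expansion keeps each line whole, spaces and all
    echo "$line"
done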
I need to insert 3-4 lines of text at the beginning of a text file. The file is a largish MySQL dump, the result of a backup shell script, and that script should insert the required text. I've wrestled with sed, but lost.
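One hedged sketch, assuming the 3-4 lines live in a separate file (header.txt and dump.sql are illustrative names); concatenating through a temporary file avoids fighting sed at all:
Code:
cat header.txt dump.sql > dump.sql.tmp && mv dump.sql.tmp dump.sql
# or, with GNU sed, insert a single line before line 1 in place:
sed -i '1i -- added by backup script' dump.sql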
I have a plain text file with 360 lines of varying-length text. How do I add a comma or other symbol to the end of each line so that I can convert the file to CSV format that I can open in a spreadsheet (45 rows, 8 columns)? That means each 8 lines of text form one row of 8 columns, giving 45 rows.
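A sketch with two equivalent approaches (input.txt is an illustrative name); both join every 8 lines into one comma-separated row, giving 45 rows:
Code:
paste -d, - - - - - - - - < input.txt > output.csv
# or with awk: end each line with a comma, except every 8th line, which ends the row
awk '{ printf "%s%s", $0, (NR % 8 ? "," : "\n") }' input.txt > output.csv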
I have a clump of text that needs to be broken up: gdbm Sat 07 Feb 2009 03:28:18 AM EST libattr Sat 07 Feb 2009 03:28:18 AM EST db4 Sat 07 Feb 2009 03:28:19 AM EST mktemp Sat 07 Feb 2009 03:28:19 AM EST keyutils Sat 07 Feb 2009 03:28:20 AM EST pcre Sat 07 Feb 2009 03:28:21 AM EST setserial Sat 07 Feb 2009 03:28:24 AM EST zlib Sat 07 Feb 2009 03:28:24 AM EST gawk Sat 07 Feb 2009 03:28:25 AM EST readline Sat 07 Feb 2009 03:28:26 AM EST rhpl Sat 07 Feb 2009 03:28:28 AM EST cracklib-dicts Sat 07 Feb 2009 03:28:37 AM EST setools Sat 07 Feb 2009 03:28:37 AM EST hal Sat 07 Feb 2009 03:28:38 AM EST which Sat 07 Feb 2009 03:28:39 AM EST Is there a way to get everything after the EST in the text moved to a new line?
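A sketch assuming GNU sed, which accepts \n in the replacement text (clump.txt is an illustrative name); each occurrence of "EST " keeps the EST and pushes whatever follows onto a new line:
Code:
sed 's/EST /EST\n/g' clump.txt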
For example, I have a text file with data which lists numerical values from two separate individuals:
Code:
Person A 100 200 300 400 500 600 700 800 900 1000 1100 1200
Person B 1200 1100 1000 900 800 700 600 500 400 300 200 100
How would I go about reading the values for each Person, and then performing calculations on each Person's values (finding the sum, for example)?
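A hedged sketch in awk, assuming each person's values sit on one line in the form shown above (data.txt is an illustrative name):
Code:
awk '{
    sum = 0
    for (i = 3; i <= NF; i++)   # fields 1-2 are "Person A"/"Person B", the rest are numbers
        sum += $i
    print $1, $2, "sum =", sum
}' data.txt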
I am trying to construct a quick regex that will search for six lines of text without a blank line between them. It only needs to search, not replace, as I will be using it in gEdit (with the regex plugin) anyway.
It's for editing subtitle files. The video player I will be using them on can only cope with 3-line subtitles, so I just need to edit any subtitles in the .srt file that contain four or more lines. There won't be many, so I can do it manually. For example:
26
00:01:47,357 --> 00:01:49,359
a motivated business professional with clearly defined goals.
[Code].....
but .* seems to mean "any character, or none", so that doesn't work. My experience of regular expressions is limited, but I do know they are very powerful when used correctly!
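A sketch that may work in gEdit's regex search, assuming . does not match a newline (the usual default): it looks for six consecutive non-empty lines, i.e. the index, the timestamp and at least four text lines before the blank separator:
Code:
(.+\n){6}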
I would really like to disable the two-tone (alternating) line colour scheme that most KDE programs use in boxes that contain text. I am probably not explaining it well, so I will attach a picture. I like the color scheme I have now, but for some reason it uses black for the second color. I have looked in the appearance settings and haven't found anything that changes it.
I run Ubuntu 10.10 with Moovida 1.0.9 (Elisa), and I get these red lines over all text in Moovida. The image below is not my screenshot, but it shows exactly the same problem, except mine has red lines.
Is there a way, besides writing a Perl program, to read each line one by one in file A and tell if that line also exists in file B? Can this be done via a shell script?
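Two hedged sketches (fileA and fileB are illustrative names). grep alone can do it, or a loop can report both cases:
Code:
grep -Fxf fileB fileA                  # prints every line of fileA that also appears, whole, in fileB
# or line by line:
while IFS= read -r line; do
    if grep -qxF -- "$line" fileB; then
        echo "FOUND:   $line"
    else
        echo "MISSING: $line"
    fi
done < fileA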
Is there any way to delete certain paragraphs from a text file and then insert them into another text file? I just cannot figure out how to remove the specific lines from one file and then insert them into another file at a certain line within that new file. Thanks again.
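A sketch assuming the paragraph occupies known line numbers (10-20 here, purely illustrative) and should land after line 5 of the other file:
Code:
sed -n '10,20p' source.txt > paragraph.tmp   # copy the paragraph out
sed -i '10,20d' source.txt                   # delete it from the source file
sed -i '5r paragraph.tmp' target.txt         # insert it after line 5 of the target
rm paragraph.tmp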
I have a list of words that I want to grep in many files to see which ones have each word and which ones don't. In the text file I have all the words listed line by line, e.g. list.txt:
check
try this
word1
word2
open space
list
..
I want to grep each line one by one, like:
grep "check" *.log
grep "try this" *.log
grep "word1" *.log
...etc. How can I do this?
I want to scroll back 10,000+ lines in a text-mode Linux terminal. There is an unlimited-scrollback option in gnome-terminal, so I wonder whether this is also possible in text mode.
Contained within each of these 67 text files is about 1 million URLs. Yes, I have 67 text files that each contain 1 million lines of URLs. I am sure I am swimming in duplicates. I tried opening one text file and clicking Sort -> Remove Duplicates. Now gedit is not responding, my processor is maxed out at 100%, and I think I am finally ready to delve into some command-line code. Can anyone give me idiot-proof instructions on how to sort the duplicates out of each one of these 67 text files? How about no duplicates across all 67?
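A sketch using sort -u (file names illustrative; expect it to need some temporary disk space at these sizes):
Code:
for f in *.txt; do
    sort -u "$f" > "${f%.txt}.dedup"   # duplicates removed within each file
done
sort -u *.txt > all_unique.list        # one list with no duplicates across all 67 files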
I installed Ubuntu Server (32-bit) on a Dimension 2200 (256 MB RAM, Celeron processor). When I boot, I get white and black lines over the screen which make the text beneath them impossible to see. I've (attempted to) post an image showing this. Is there any way to fix this? The text-mode installer and the GRUB menu display fine. I've used Ubuntu on many PCs and never seen this happen, and I couldn't find any results when I googled the issue.
As much as I didn't want to ask a sed question, especially considering there's already one on this page, I've looked as best I could and can't find the solution. I'd like to use sed to replace occurrences of a pattern but exclude two or three specific lines that are not consecutive. For example, I know that with 1,10 I could exclude the first ten lines, but what is the syntax if I just want to exclude lines 3 and 7? The sed command I'm working with right now is for rearranging Ethernet interfaces.
sed -e '/'"$found1fullmac"'/!s/eth1/'"$found1eth"'/' /etc/udev/rules.d/70-persistent-net.rules > /tmp/70-persistent-net.rules && mv /tmp/70-persistent-net.rules /etc/udev/rules.d/70-persistent-net.rules
I would like to replace $found1fullmac with two variables representing line numbers to exclude from the replacement.
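A sketch of the line-number form: an address followed by b (branch to the end of the script) skips the substitution for that line, so lines 3 and 7 are left untouched while every other line gets the replacement (the line numbers and output file are illustrative):
Code:
sed -e '3b' -e '7b' -e 's/eth1/'"$found1eth"'/' /etc/udev/rules.d/70-persistent-net.rules > /tmp/70-persistent-net.rules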
I need to chop off the top 30-ish lines of several log files, up to a line starting with "Initialization completed." The trouble is that it's not always the same number of lines that needs to be deleted, and they don't always contain the same information, which is why I need to delete everything prior to the line starting with "Initialization completed." Right now I have a little script I wrote based on looping each file through several "grep -v" commands, one for each known pattern of lines I want to ignore, but it is tedious and I have to inspect each file afterwards to make sure nothing is left above "Initialization completed."
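A hedged sketch: print from the first line starting with "Initialization completed" to the end of the file, which drops the variable-length preamble regardless of how many lines it has (file names illustrative):
Code:
for f in *.log; do
    sed -n '/^Initialization completed/,$p' "$f" > "$f.trimmed"
done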
I'm using GNOME terminal to SSH into a Debian server and would really like a way to paste multiple lines of text into configuration files (using the nano text editor if possible).
So far, whatever I try dumps all the text onto a single line, meaning I have to go through manually inserting line feeds, which is tedious and can introduce errors.
Is there a way to paste text with the line feeds intact, rather than copying each line individually?
I have a big file of random numbers that I generated at some point in time, after working with it on different things (how fun that was). I want to remove duplicate lines, and I'm not sure I'm doing this right.
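Two hedged sketches (numbers.txt is an illustrative name): sort -u sorts and deduplicates, while the awk one-liner keeps the first occurrence of each line in its original order:
Code:
sort -u numbers.txt > numbers.sorted
awk '!seen[$0]++' numbers.txt > numbers.uniq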
I wanted to get rid of the Ubuntu splash screen, so I edited the line GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash text". Now it won't load at all. It just shows the same purple background without the Ubuntu logo, then turns black and shows a few lines of text. After displaying "Checking battery state... [OK]", it freezes and does not move further. I can't even start recovery mode because I have no dual boot and do not get that option at startup.