I'm trying to output two particular fields of a very large text file to two very small text files, then take those files and add up all their lines to come up with a total from each file (two totals).
Breakdown: Put 0 in a text file to be drawn by respective while loops for math later
Output last 60 integers to a file for total A (new integer every minute)
Output last 60 integers to a file for total B (new integer every minute)
The two while loops are supposed to be adding the lines together. The echo commands at the end are for testing purposes, just to see the output. However, when I run this, I get the output of
Code:
0 0
Which is obviously not what it's supposed to be. Is there a more efficient way to do this or am I missing something in the script that would reset the values to "0"?
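One common cause, assuming the script pipes cat into the while loops: the pipe runs each loop in a subshell, so the running total is thrown away when the loop ends. A minimal sketch of one loop with the file redirected into it instead (the file name totalA.txt is a placeholder):
Code:
#!/bin/bash
totalA=0
# Redirecting the file into the loop keeps it in the current shell,
# so $totalA still holds the sum after the loop finishes.
while read -r n; do
    totalA=$((totalA + n))
done < totalA.txt
echo "$totalA"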
I have this file and I need a command to permanently add a line to the file and sort the file by ID. I was able to add a line with the echo command, but it's not permanent.
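A minimal sketch, assuming a colon-delimited file with a numeric ID in the first field; the new record and the file name are placeholders. The >> redirection writes the line into the file itself (which is what makes it permanent), and sort -o lets the sorted result overwrite the same file:
Code:
# Append the new record to the file, then sort it in place on the ID field.
echo "105:newuser:somevalue" >> data.txt
sort -t: -k1,1n -o data.txt data.txt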
I need to insert 3-4 lines of text at the beginning of a text file. The file is a largish MySQL dump, the result of a backup shell script. That shell script should insert the required text. I've wrestled with sed, but lost.
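A minimal sketch that avoids sed entirely: print the new lines, then the original dump, into a temporary file and move it back over the original. The header text and file names are placeholders:
Code:
# Write the extra lines followed by the original dump to a temp file,
# then replace the original with it.
{
    printf '%s\n' "-- header line 1" "-- header line 2" "-- header line 3"
    cat backup.sql
} > backup.sql.tmp
mv backup.sql.tmp backup.sql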
I have this massive table file with some data in it, and I want to replace some lines that are wrong with the correct ones from another table file of the same format. The wrong lines are not all together in a block but randomly distributed, so I need a loop that checks whether each line is in the other file and, if it is, replaces it. I want to try to do it with sed or awk, but I don't really know how.
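A sketch with awk, assuming each line has a unique key in its first field that identifies which record it is; the file names are placeholders. The corrected file is read first and indexed by key, then each line of the main file is replaced if a correction exists for its key:
Code:
# First pass (NR==FNR): store every corrected line under its key.
# Second pass: print the correction when the key is known, else the original.
awk 'NR == FNR { fix[$1] = $0; next }
     $1 in fix { print fix[$1]; next }
     { print }' corrected.txt main.txt > main.fixed.txt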
I'm looking for a way to insert the number of lines in a file at the start of the aforementioned file. This should be simple, but as I am not used to scripts in Linux, I am finding it tough going. I can find the number of lines in a file easily enough via
filesize=$(awk 'END {print NR}' $1)
but as for inserting this into the first line, I'm failing to do so. I've tried some of the other approaches on these forums, but none so far have worked.
I've tried:
sed '1i$filesize' $1
but sed's i seems to require a string, not a variable, so no go. I've also tried:
but again with no luck, as cat seems to need an input stream. Just to recap, I want to insert a line at the start of a given file that holds the number of lines the original file has.
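A sketch assuming GNU sed: double quotes let the shell expand the variable inside the sed expression (single quotes keep it literal, which is why the first attempt inserted the text "$filesize"), and -i edits the file in place:
Code:
#!/bin/bash
filesize=$(awk 'END {print NR}' "$1")
# Double quotes so $filesize is expanded before sed sees it;
# GNU sed accepts this one-line form of the i (insert) command.
sed -i "1i$filesize" "$1"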
I am using RHEL 5. I have a very large test file which cannot be opened in vi. The file has some 8000 lines, and I need to view the lines from 5680 to 5690. How can I view these particular lines in a large file? What command and options do I need to use?
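A sketch with sed, which works on a stream and never loads the whole file; bigfile.txt is a placeholder. The q command at line 5690 stops reading the rest of the file:
Code:
# Print only lines 5680-5690, then quit so the rest of the file is skipped.
sed -n '5680,5690p; 5690q' bigfile.txt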
I am trying to install Koha on CentOS 5.5. After installing MySQL, it starts normally, but when I add the following lines into my.cnf and try to restart MySQL, the restart fails.
Here are the lines that I added to the [mysqld] section:
I have a large text file containing over 180k lines and another text file containing about 1k. I would like to remove lines in the 180k-line file that exist in the 1k-line file.
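A sketch with grep, assuming the lines in the small file should match whole lines in the big one exactly; the file names are placeholders:
Code:
# -F fixed strings, -x whole-line match, -v invert, -f read patterns from a file:
# keep only the big-file lines that are NOT present in the small file.
grep -F -x -v -f small.txt big.txt > big.filtered.txt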
I would like to modify the content of a text file in Linux in the following way. The file has several lines like this:
./run_pest3 ./g134366.04080_0.062 x 2_d043 1 0.43 results_EC
and I want to modify all such lines to be:
./run_pest3 ./g134366.04080_0.062 x 2_d043 1 0.43 results_EC0.062
i.e., the last number of $2 should be "attached" to the end of $7, for each line.
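A sketch with awk, assuming the fields are whitespace-separated and the wanted number is everything after the last underscore in $2 (0.062 in the example); note that awk will reprint each line with single spaces between fields. File names are placeholders:
Code:
# Copy field 2, strip everything up to the last underscore, and
# append what is left to field 7 before printing the line.
awk '{ n = $2; sub(/.*_/, "", n); $7 = $7 n; print }' table.txt > table.new.txt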
I have a file with a particular text line repeated many times, and I need to view the file's content ignoring these lines. Is there any way I can achieve this using vi?
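One option inside vi/vim, if temporarily deleting the repeated lines from the buffer is acceptable (quit with :q! afterwards so the file on disk is untouched); REPEATED TEXT is a placeholder for the actual line:
Code:
:g/^REPEATED TEXT$/d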
I am facing a problem with deleting lines from a text file. The file contains lines that have more than 6 occurrences of the : character. Each such line should be deleted completely, and the line next to it shifted up.
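A sketch with awk, using : as the field separator: a line with more than 6 colons has more than 7 fields, so keeping only the lines with 7 fields or fewer drops the unwanted ones (the following lines move up automatically). File names are placeholders:
Code:
# Keep only lines with at most 6 occurrences of ":" (i.e. at most 7 fields).
awk -F: 'NF <= 7' file.txt > file.cleaned.txt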
I have a file that contains 100 lines. I need to write a script that reads 70 lines, redirects those 70 lines to another file, and erases those 70 lines from the first file.
When I write this command:
head -70 somefile.txt > test.txt
or:
sed -n 1,70p somefile.txt > test.txt
I have these 70 lines in test.txt, but these 70 lines also have to be deleted from the first file, somefile.txt.
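A sketch that does both steps, assuming GNU sed for in-place editing; head copies the first 70 lines out, and sed then deletes those same lines from the original:
Code:
# Copy the first 70 lines to test.txt, then remove lines 1-70
# from somefile.txt in place.
head -70 somefile.txt > test.txt
sed -i '1,70d' somefile.txt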
I've got a file with sorted words, one on each line. How could I delete the lines that have words of length 1 or 2 (1-2 letters)? I guess a good way would be with awk and its length() function, but I don't know how to delete those very lines.
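A sketch with awk, assuming one word per line with no trailing whitespace; words.txt is a placeholder. Lines shorter than 3 characters are simply not printed, which removes them from the output:
Code:
# Print only the lines whose word is 3 characters or longer.
awk 'length($0) >= 3' words.txt > words.filtered.txt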
I have a list of words that I want to grep in many files to see which files have them and which don't. In the text file I have all the words listed line by line, e.g. list.txt:
check
try this
word1
word2
open space
list
..
I want to grep for each line one by one, like:
grep "check" *.log
grep "try this" *.log
grep "word1" *.log
... etc. How can I do this?
I'm looking for a command to swap the even/odd numbered lines in a file. Example input file:
Code:
1
2
3
4
Example output file:
Code:
2
1
4
3
I'm sure there's a way to do it with sed, awk, grep and the like but it's been many years since I've used these commands on a daily basis and I can't seem to figure out the correct syntax.
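A sketch with awk, assuming plain pairwise swapping is wanted; file.txt is a placeholder, and an unpaired last line (odd line count) is printed unchanged:
Code:
# Remember each odd-numbered line, print the following even-numbered
# line first and then the remembered one.
awk 'NR % 2 == 0 { print; print prev; next }
     { prev = $0 }
     END { if (NR % 2) print prev }' file.txt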
I am currently using a command like this to remove blank lines and lines which contain (not necessarily begin with) a #. Is there a better/simpler command?
cat /etc/apache2/default-server.conf | sed /^$/d | grep -v '#'
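A slightly simpler equivalent using grep alone, reading the file directly instead of through cat; -v inverts the match and each -e adds a pattern:
Code:
# Drop empty lines and any line containing a #.
grep -v -e '^$' -e '#' /etc/apache2/default-server.conf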
Trying to remove lines from a syslog text file that have duplicate strings, e.g.
Mar 10 06:51:11[http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360]
then a few lines down
Mar 10 06:52:03 [http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360
It has the same thing in terms of a u: number, but the issue is that I need to remove the duplicates and leave just one. The file has multiple duplicates of different u: numbers and it's 14,000 lines long. Can anyone tell me if I can use awk, sed, or sort for something like this, to remove lines that contain a duplicated string?
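A sketch with awk, assuming the u:<digits>|<digits> token is what identifies a duplicate; the file names and the exact regex are assumptions about the log format. The first line seen for each token is kept and later ones are dropped:
Code:
# Extract the u:<number>|<number> token and print a line only the
# first time its token appears; lines without a token pass through.
awk 'match($0, /u:[0-9]+[|][0-9]+/) {
         key = substr($0, RSTART, RLENGTH)
         if (seen[key]++) next
     }
     { print }' syslog.txt > syslog.deduped.txt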
I need to count the number of lines in a file and put the output into a variable. I used wc -l filename, but I couldn't find an option to put the output into a variable. For example, if the number of lines is 5, I need echo $x to give 5.
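A minimal sketch using command substitution; reading the file on stdin keeps wc from printing the file name next to the count. The file name is a placeholder:
Code:
# Capture the line count in a variable.
x=$(wc -l < filename)
echo "$x"    # prints 5 if the file has 5 lines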