Every now and then I have to indent the lines in my script by four spaces. I generally do it line by line. Is there an automated command in vi with which I can indent a set of lines by the desired number of spaces in one go?
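A minimal sketch in vim, assuming an arbitrary target range of lines 10 to 20. The first command makes each shift four spaces wide and uses spaces rather than tabs; the second shifts the range right one level:

Code:
:set shiftwidth=4 expandtab
:10,20>

You can also select the lines with V in visual mode and press > once for the same effect.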
I have two files with thousands of lines each. I am trying to combine these two files, but I want to insert a line of one file into the other after a certain number of lines. I am using awk with the following command, but it does not work:

cat file1 | awk '{ print $0; if (NR % 3004 == 0) { print "file2" } }' > outputfile
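The likely bug: print "file2" emits the literal string file2, not a line read from that file. A sketch of a fix with awk's getline, keeping the file names above:

Code:
awk '{
    print
    if (NR % 3004 == 0) {
        # read the next line from file2 and emit it, if any are left
        if ((getline line < "file2") > 0) print line
    }
}' file1 > outputfile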
For example, I have a file called "file" like this one:

type=strongsubj len=1 word=absolve pos=verb stemmed=y priorpolarity=positive
type=strongsubj len=1 word=unique pos=adj stemmed=n priorpolarity=neutral
type=strongsubj len=1 word=absolutely pos=adj stemmed=n priorpolarity=neutral
type=weaksubj len=1 word=taking pos=verb stemmed=y priorpolarity=positive
type=weaksubj len=1 word=friend pos=noun stemmed=n priorpolarity=positive
type=weaksubj len=1 word=usually pos=adverb stemmed=n priorpolarity=positive
type=strongsubj len=1 word=purecolor pos=anypos stemmed=n priorpolarity=negative
type=strongsubj len=1 word=accusingly pos=anypos stemmed=n priorpolarity=negative
I want to add the plural for the nouns. For example, if it finds this line:

type=weaksubj len=1 word=friend pos=noun stemmed=n priorpolarity=positive

it will add one more line:

type=weaksubj len=1 word=friends pos=noun stemmed=n priorpolarity=positive

where we add "s" to the word friend. I tried to do it like this:

<code>
cat file | while read line ; do
    set -- ${line}
    if [[ "${4#pos=}" == "noun" ]]; then
        # I tried this line but it doesn't work properly:
        v3=$(echo "$line" | sed "s/$3/$3s/")
        # I want to find the third field "word=friend" in that line and add "s" after that word
        # I don't know what command to use to add this new line "$v3" to the file ???
    fi
done
</code>
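For comparison, a sketch of a one-pass awk alternative, assuming the layout shown above; in sub() the & stands for the matched text, so word=friend becomes word=friends:

Code:
awk '{
    print
    if ($4 == "pos=noun") {
        line = $0
        sub(/word=[^ ]*/, "&s", line)   # append s to the word= field
        print line                      # emit the plural line right after
    }
}' file > file.new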
I have a plain text file with 360 lines of varying-length text. How do I add a comma or other symbol to the end of each line, so that I can convert the file to CSV format and open it in a spreadsheet (45 rows, 8 columns)? That means each 8 lines of text form one row of 8 columns, giving 45 rows.
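Adding the comma alone will not fold eight lines into one row; paste can do both at once. A sketch, assuming the input is named file:

Code:
# join every 8 consecutive lines with commas: 360 lines -> 45 CSV rows
paste -d, - - - - - - - - < file > file.csv

The same in awk: end each line with a comma, except every eighth, which gets a newline:

Code:
awk '{ ORS = (NR % 8 ? "," : "\n") } 1' file > file.csv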
Using awk, I pull the first field of a random line from my data file:

myvar1=`awk -F" " 'NR=='$randline' {printf "%s", $1}' myfile`

This works fine. The problem is that there will be empty lines at the end of the file. Rather than using awk to filter out blank lines, I would like to figure this out first. So I test $myvar1 for a blank string after setting $randline to one that I know is blank:

test -z "$myvar1" && echo "true" || echo "false"

But this returns "false"? So the string is not zero length. Why? It's a tab-separated file. Is awk storing the tab with the $1 field or something? This is where I get a headache. I try to echo my variable to see what it looks like.
echo "$myvar1" outputs: nothing echo "My variable is [$myvar1]" outputs: [y variable is [
Why is the closing bracket at the beginning? What character could be stored in $myvar1 that would do such a thing and how did it get there?
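Output like that usually means the variable holds a carriage return: when echo prints it, the \r sends the cursor back to column one and the closing bracket is printed over the start of the line. That is typical of a file with DOS line endings, where the "blank" lines actually contain a \r. A sketch of how to check and strip it in bash:

Code:
# reveal invisible characters (a carriage return shows up as \r)
printf '%s' "$myvar1" | od -c

# strip any carriage returns, then retest
myvar1=${myvar1//$'\r'/}
test -z "$myvar1" && echo "true" || echo "false"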
I am trying to read certain lines within a file and produce output for the entries that don't equal my value. I think showing you would be easier. There are multiples of these inside one file...
Code:
LV Name                 /dev/vg00/lvol1
LV Status               available/syncd
LV Size (Mbytes)        300
[code]....
I want to read everything in the file; if the status is not available, it should display the name (directly above the status). If they are all available, do nothing. I think I know how to do it by putting the info in string form and placing it in a hash, but it is proving to be out of my skill range.
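If the name line always sits directly above the status line, as in the sample, no hash is needed; awk can remember the last name seen. A sketch (the input file name is a placeholder):

Code:
awk '/^LV Name/   { name = $3 }
     /^LV Status/ && $3 !~ /available/ { print name }' lvinfo.txt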
To save on the writing of WAY too many files with very little in them, I want to put it all in one file and read a specific few lines. There will be six variables to be read at a time. The format is as such:
//Set 1
string name
5 12
[code]....
From the name to the fifth number is one set. The name will be a different length for each set. This will be a big file of probably 40+ sets. My problem lies in reading one and only one set, be it set 5 or set 34. It needs to be done in C++.
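Since it has to be C++, here is a minimal sketch under assumptions the [code] block hides: each set begins with a "//Set N" line, the name is a single token, and the five numbers are whitespace-separated:

Code:
#include <cstdio>
#include <fstream>
#include <iostream>
#include <string>

// Scan the file for set `wanted` and read its six variables.
bool readSet(const char *path, int wanted, std::string &name, double num[5])
{
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        int n;
        if (std::sscanf(line.c_str(), "//Set %d", &n) == 1 && n == wanted) {
            in >> name;                                // the set's name
            for (int i = 0; i < 5; ++i) in >> num[i];  // its five numbers
            return static_cast<bool>(in);              // false if a read failed
        }
    }
    return false;  // set not found
}

int main()
{
    std::string name;
    double num[5];
    if (readSet("sets.txt", 34, name, num))
        std::cout << name << " starts with " << num[0] << "\n";
}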
I'm trying to come up with ideas for a simple way to strip a specific "entry" from a text file. I know tools like sed and perl can remove specific lines from a file, but I haven't been able to come up with an elegant way to remove my group of lines. In my file, the first "Location" line and the "SVNPath" line should be unique every time... but are they enough to strip out the whole group, plus the one trailing line of white space separating each group? Add to this, my file will grow as new entries are added (always appended to the end), but new entries will have the same formatting.
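A sketch with sed range addressing, assuming each group starts with its unique Location line and ends at the blank separator line; the path here is a made-up stand-in:

Code:
# delete from the matching Location line through the next blank line,
# removing the whole group plus its trailing separator
sed '/^Location \/svn\/project1$/,/^$/d' entries.conf > entries.new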
I'm trying to extract specific lines from a flat file. I need the lines that fall within a range of coordinates. The -F separator can be either ! or =. If a line is in this range, I need all of the data on that line. The ranges are latitude 36 to 39 and longitude -74 to -84.
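A sketch where the field positions are assumptions (latitude in field 2, longitude in field 3); the bracket expression given to -F lets awk split on either ! or =:

Code:
awk -F'[!=]' '$2 >= 36 && $2 <= 39 && $3 >= -84 && $3 <= -74' flatfile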
I've been trying to sort this out for several hours and I'm totally lost. I've been searching around, but haven't found the solution to my problem. I have a directory with 100 files. I need to copy 10 lines of each file (let's say from line 45 to 55) into one unique file. So I guess I could use sed's w command, but I didn't manage to write the right script. I also tried using a loop to create 100 different files (each one with the 10 lines) to concatenate them later on. But I only got 1 file, not 100.
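A sketch of the loop with sed -n, where the directory and output names are placeholders:

Code:
# append lines 45-55 of every file in the directory to one output file
for f in /path/to/dir/*; do
    sed -n '45,55p' "$f" >> combined.txt
done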
I have a very, very large log file (360MB) that I'm trying to thin out. As it turns out, the majority of this file has entries that aren't necessary, so I'm attempting to build a command that will strip them out. The following command works to display only the data that I do not want:
This displays exactly the data I want to delete from the file, by showing the expression along with the six lines above it and five lines below it. However, I'm at a loss as to how to remove this data from the output and display everything else. I looked into the -v option with grep, redirecting the output to a new file:
However, it doesn't work; the new file is the same size as the old one. What am I doing wrong? Is there a better method of doing this? I'm a bit out of my element, since the method I'd normally use can't handle files of this size.
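grep -v drops only the matching lines themselves, never their context lines, which is why the file barely shrinks. A two-pass sketch, assuming GNU grep and awk, with ERROR standing in for the real expression:

Code:
# pass 1: collect the line numbers of every line in the unwanted blocks
# (grep -n marks matches as "N:" and context lines as "N-")
grep -n -B6 -A5 'ERROR' big.log | sed -n 's/^\([0-9]\{1,\}\)[:-].*/\1/p' > /tmp/kill.lines

# pass 2: print everything except those lines
awk 'NR == FNR { kill[$1]; next } !(FNR in kill)' /tmp/kill.lines big.log > thinned.log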
The order of these lines is random... so I cannot just delete line #19, for example. And you can see that the top four lines I want to delete are pairs, so there might be some clever way to detect the lines: if a line has both "1.9" and "1.11", then delete it... I am new to the perl language. The following is the code I have now... I think I just need to write some code inside the while loop, checking whether I want to delete the line $dotline before I write it to a NEW file.
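Since the script body is not shown, this is only a guess at the check that belongs inside the while loop, using the $dotline variable mentioned above; NEWFILE is a hypothetical output filehandle:

Code:
# skip the line entirely when it mentions both versions
next if $dotline =~ /1\.9/ and $dotline =~ /1\.11/;
print NEWFILE $dotline;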
I have a csv-file (A.csv) with a total of 4,600,000 lines. That's too many, and only a few are necessary. I have a txt-file with 150 lines (X.txt); each line is a dataset name from a mainframe and looks like abc.def.123.456. How do I remove the lines from A.csv where none of the dataset names from X.txt is present?
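grep can do this in one pass: -f reads the 150 names from X.txt as patterns, and -F treats them as fixed strings so the dots are not wildcards. Keeping only the lines that mention one of the names is the same as removing the rest:

Code:
grep -F -f X.txt A.csv > A.filtered.csv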
I have a file with three consecutive blank lines. I want to delete two and keep one. Also, if anyone could direct me towards a guide on regular expressions, particularly as they apply to sed, I would be grateful. I am having a hell of a time figuring out the syntax.
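No regular expression is needed for the squeeze itself; cat -s collapses every run of blank lines to a single one:

Code:
cat -s file > file.new

The sed equivalent, from the classic one-liner collection, appends the next line whenever the current one is blank and deletes the first of two consecutive blanks:

Code:
sed '/^$/N;/\n$/D' file > file.new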
Consolidate several lines of a CSV file with firewall rules, in order to parse them more easily?
I have a .csv file, which I created using an HTML export from a Check Point firewall. The objective is to get all the firewall configuration lines where a given host is present. I have to do this for a few hundred hosts, so doing it manually is not a reasonable option. I'm going to write a simple Python script for this.
The problem is that the output from the Check Point firewall is complicated to work with. If a firewall rule involves several source or destination hosts, services or other configurations, I get a new line for each, instead of having them separated by some symbol other than a comma.
This prevents me from exporting the line where the host is present, since I would be missing info.
Let me show you an example (hostnames are modified, of course):
For example, I have a text file with data which lists numerical values from two separate individuals:
Code:
Person A
100 200 300 400 500 600 700 800 900 1000 1100 1200

Person B
1200 1100 1000 900 800 700 600 500 400 300 200 100
How would I go about reading the values for each person, and then performing mathematical operations on each person's values (finding the sum, for example)?
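A sketch in awk, assuming the layout above, where a Person label line precedes that person's values:

Code:
awk '/^Person/ { who = $0; next }                     # remember whose values follow
     { for (i = 1; i <= NF; i++) sum[who] += $i }     # accumulate per person
     END { for (w in sum) print w ": sum = " sum[w] }' datafile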
All I want is a command that reads one data file with several columns and prints it into another one. However, whenever the value in one specific column changes, it should print one empty line in the new file. For example, consider the file
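A sketch in awk, with the watched column assumed to be the first:

Code:
# print a blank line whenever the value in column 1 changes
awk 'NR > 1 && $1 != prev { print "" } { print; prev = $1 }' input.dat > output.dat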
Each line of the file I am sorting is in the following format:
<url> <month> <day>
For example:
[URL]
I wrote the following to sort:
Code:
#!/usr/bin/perl
$in = shift;
chomp($in);
[code]....
The script worked fine for my small testing files, but failed on my real input file. The input file is 18MB and contains more than 300,000 lines. The output contains some lines like this:
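Without the elided script body it is hard to say what breaks at 300,000 lines, but for a <url> <month> <day> file the external sort utility avoids holding everything in perl at all. A sketch, assuming GNU sort and three-letter month names:

Code:
# -k2,2M compares field 2 as month names, -k3,3n compares field 3 numerically
sort -k2,2M -k3,3n "$in" > sorted.txt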
I have a CSV file, which I created using an HTML export from a Check Point firewall policy. In some cases, a single rule is represented as several lines. That occurs when a rule has several address sources, destinations or services.
I need the output to have each rule described in only one line. It's easy to distinguish where each rule begins: in the first column, there's the rule ID, which is a number.
Here's an example. The strings that should be moved are marked in bold:
Read the first column of the next line:
- If there's a number, a new rule begins there and the current one is complete.
- If there's no number there, concatenate (separating with a comma) the strings in the columns of this line with those of the last one, and eliminate the text in the current one.
The output should be something like this. The strings in bold are the ones that were moved:
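A sketch of that logic in Python, since a Python script was the plan; the file names are placeholders, and the test that a new rule starts where the first column holds a number comes from the description above:

Code:
import csv

with open("rules.csv", newline="") as src, open("merged.csv", "w", newline="") as dst:
    reader, writer = csv.reader(src), csv.writer(dst)
    merged = None
    for row in reader:
        if row and row[0].strip().isdigit():   # rule ID present: a new rule starts
            if merged is not None:
                writer.writerow(merged)
            merged = row
        elif merged is not None:               # continuation: fold cells into the rule
            for i, cell in enumerate(row[:len(merged)]):
                if cell.strip():
                    merged[i] = merged[i] + "," + cell if merged[i].strip() else cell
    if merged is not None:
        writer.writerow(merged)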
I want to grep the lines which do not start with # or a blank space, like:
bla bla bla bla
How do I do this? I tried grep --invert-match '^#', which gives lines not starting with # but gives me blank lines too. I tried grep --invert-match '^#|^ ', which will give lines not starting with # OR not starting with a blank (which means any line, including ones starting with #).
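The alternation fails because basic grep treats | as a literal character, so '^#|^ ' matches almost nothing and the inverted match lets everything through. A bracket expression tests both leading characters at once:

Code:
# drop lines starting with # or a space
grep -v '^[# ]' file

With GNU grep, this variant also drops empty lines, tab-indented lines and indented comments:

Code:
grep -v '^[[:space:]]*\(#\|$\)' file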