I've been trying to sort this out for several hours and I'm totally lost. I've been searching around but haven't found the solution to my problem. I have a directory with 100 files. I need to copy 10 lines of each file (let's say from line 45 to 55) into one unique file. So I guess I could use sed 'w', but I didn't manage to write the right script. I also tried using a loop to create 100 different files (each one with the 10 lines) to concatenate them later on. But I only got 1 file, not 100.
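A minimal sketch of the loop approach, assuming all the files sit in one directory and combined.txt is the single output file:

Code:
for f in /path/to/dir/*; do
    sed -n '45,55p' "$f" >> combined.txt   # print only lines 45-55 of each file
done

The >> append is what makes every extract land in the same file; a plain > inside the loop would overwrite it on each iteration and leave only the last file's lines, which matches the one-file-instead-of-many symptom.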
I have a very, very large log file (360MB) that I'm trying to thin out. As it turns out, the majority of this file consists of entries that aren't necessary, so I'm attempting to build a command that will strip them out. The following command works to display only the data that I do not want:

This displays exactly the data I want to delete from the file: the expression itself, the six lines above it, and the five lines below it. However, I'm at a loss as to how to remove this data and display everything else. I looked into grep's -v option, redirecting the output to a new file:

However, it doesn't work; the new file is the same size as the old one. What am I doing wrong? Is there a better method of doing this? I'm a bit out of my element, since the method I'd normally use can't handle files of this size.
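The catch: -v removes only the lines that match the expression itself, never the context lines grep printed around them, so nearly the whole file survives. One possible workaround, assuming GNU grep and sed, and assuming matches sit far enough apart that their ranges don't overlap: collect each match's line number, expand it to the six-above/five-below range, and delete those ranges in a second pass.

Code:
# UNWANTED stands in for the real expression
grep -n 'UNWANTED' big.log | cut -d: -f1 | while read -r n; do
    start=$(( n > 6 ? n - 6 : 1 ))   # clamp so the range never starts before line 1
    echo "${start},$((n + 5))d"
done > delete.sed
sed -f delete.sed big.log > thinned.log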
The order of these lines is random, so I cannot delete line #19, for example. And you can see that the top four lines I want to delete are pairs, so there might be some clever way to detect the lines: if a line has both "1.9" and "1.11", then delete the line. I am new to the Perl language. The following is the code I have now. I think I just need to write some code inside the while loop that checks whether I want to delete the line $dotline before I write it to a new file.
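A sketch of that check, assuming the markers really are the literal strings "1.9" and "1.11", and with IN/OUT standing in for whatever filehandles the existing script already uses (the backslashes stop the dots from matching any character):

Code:
while (my $dotline = <IN>) {
    # skip the line entirely if it contains both markers
    next if $dotline =~ /1\.9/ && $dotline =~ /1\.11/;
    print OUT $dotline;
}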
I'm trying to split a text file into various parts. Everything in between "123" and "break" (including linebreaks) goes into its own split file.
e.g. using this text file:
This should split into 4 files. However, I'm only getting 2 files: one for the line "123break" and one for "123 blah break". The two occurrences that contain linebreaks are being ignored. The .* part of my match should capture linebreaks, seeing that I'm using the /s modifier, shouldn't it? Even when I use the match /(123 break)/gs it still doesn't capture the first occurrence. I'm using Perl v5.12.3 (from ActiveState) on Windows XP. The text file is also in Windows format.
Code listed below.
The above code generates two files, Output_1.txt and Output_2.txt, which contain "123break" and "123 blah break" respectively. I want it to generate four files.
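Without seeing the full code this is a guess, but one common cause fits these symptoms exactly: if the file is read line by line, the regex only ever sees one line at a time, so /s has no embedded newlines to let .* cross. Slurping the whole file first fixes that. A sketch, reusing the Output_N.txt names from the question:

Code:
# -0777 slurps the whole file into $_ so a match can span lines
perl -0777 -ne '
    my $i = 1;
    while (/(123.*?break)/sg) {
        open my $out, ">", "Output_" . $i++ . ".txt" or die $!;
        print $out $1;
    }
' input.txt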
For example, I have a text file with data listing numerical values from two separate individuals:

Code:
Person A 100 200 300 400 500 600 700 800 900 1000 1100 1200
Person B 1200 1100 1000 900 800 700 600 500 400 300 200 100

How would I go about reading the values for each Person and then performing mathematical operations on each Person's values (finding the sum, for example)?
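One lightweight way, assuming the layout above (two name fields followed by the numbers) and a file named data.txt:

Code:
# sum fields 3..NF on each line; fields 1-2 are the person's name
awk '{ sum = 0; for (i = 3; i <= NF; i++) sum += $i; print $1, $2, "sum =", sum }' data.txt

Both lines above happen to total 7800, so this would print "Person A sum = 7800" and "Person B sum = 7800".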
I have written a regular expression (tested in regexpal and regextester alpha something) with which I want to replace something like code...
but it only matches functions that occupy a single line, despite my tests showing multiline matching in online JavaScript testers, and despite my using the m and s flags (which should make it multiline, no?)
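One thing worth ruling out, assuming the replacement ultimately runs in JavaScript: before ES2018, JavaScript had no s (dotAll) flag at all, so . never matches a newline, and m only changes what ^ and $ anchor to; an online tester may simply be running a newer engine than the target. The portable workaround is [\s\S] in place of the dot:

Code:
# not shell; run with node to see the difference
node -e '
  const src = "function f() {\n  return 1;\n}";
  console.log(/function .*\}/m.test(src));       // false: . stops at the newline
  console.log(/function [\s\S]*\}/.test(src));   // true: crosses the newline
'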
Each line of the file I am sorting is in the following format:
<url> <month> <day>
For example:
[URL]
I wrote the following to sort:
Code:
#!/usr/bin/perl
$in = shift;
chomp($in);
[code]....
The script worked fine on my small test files, but failed on my real input file, which is 18MB and contains more than 300,000 lines. The output contains some lines like this:
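For input that size it may be worth sidestepping the Perl script entirely: GNU sort handles large files without holding everything in memory. A sketch, assuming the fields are space-separated as shown and the month is a name like Jan or Feb (use -k2,2n instead if it is numeric):

Code:
# -k2,2M: sort field 2 as a month name; -k3,3n: sort field 3 numerically
sort -k2,2M -k3,3n input.txt > sorted.txt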
I want to remove duplicate or similar lines from multiple files. I.e., if I have four files, file1.txt, file2.txt, file3.txt, and file4.txt, I would like to find and remove similar lines from all these files, keeping only one line from each set of similar lines. I only know that uniq can be used to remove duplicate lines from a sorted file.
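If merging everything into one deduplicated file is acceptable, awk can do it without sorting, keeping the first occurrence of each line in the order encountered:

Code:
# !seen[$0]++ is true only the first time a given line appears
awk '!seen[$0]++' file1.txt file2.txt file3.txt file4.txt > deduped.txt

Writing the survivors back into the four separate original files would take extra bookkeeping, since this only decides keep-or-drop per line.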
I have been experiencing a problem where the screen loads and, after the first few lines, breaks up into multiple repetitions of lines. Reloading helps, but has to be repeated when paging down. Mail is no problem; it is supplied by my network provider. The OS is openSUSE 11.2, which I update when advised. Below is a sample from the error console:
I often use the rpl command to make changes to multiple html files at once. For example:
Code:
rpl -R '<br />' '<br /><br />' mydirectory

However, I haven't been able to figure out how to change multiple lines. For example, let's say I want to change all occurrences of:
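rpl works a line at a time, so a replacement that spans lines needs a tool that sees the whole file at once. A sketch using perl in slurp mode; the OLD/NEW lines are placeholders for the real text:

Code:
# -0777 reads each file whole; -pi edits the files in place
perl -0777 -pi -e 's/OLD LINE 1\nOLD LINE 2/NEW LINE 1\nNEW LINE 2/g' mydirectory/*.html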
How do I search for multiple words across multiple lines in the files of a directory, including sub-directories? Please give an easy example. I want to find the files (in the /xx folder and all subfolders) that include header.h and use the x() function. I tried: $ grep -r "header.h" | grep -r "x(" /Folder/subfolder/ > search.log
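That pipeline doesn't quite work: the second grep -r ignores its stdin and searches the directory again, so the first grep's results are discarded. One way to chain the two conditions, assuming GNU grep:

Code:
# -rl lists files containing header.h; the second grep keeps only
# those that also mention x(. -Z and -0 keep odd file names intact.
grep -rlZ 'header\.h' /xx | xargs -0 grep -l 'x(' > search.log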
I'm writing a script to replace some text that exists in about 50 .lex, .y, and .cc source code files, sometimes more than once in a file. Sometimes the text is in a multiline C comment, and other times it's within a multiline C string.
I use sed to grab the start and end of each line and wrap the new text in the old whitespace and/or quotes. The problem is, sed is changing the \n characters into a real newline.
Is there a way to tell sed to not process escape sequences? I tried using several variations of
Code:
To no avail. Or could it be bash?
I would give up on the script and do it by hand, but this is something that I must do from time to time.
Here's the function which replaces the first occurrence found:
Code:
When $post is printed by echo, it shows the \n, but by the time the file is on disk, it becomes a newline. What should I do to ensure that it stays as the two characters \n?
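Two usual suspects, either of which fits: in the replacement side of sed's s///, \n means a newline (doubling the backslash makes it literal), and echo in some shells also expands \n (printf '%s' never does). A sketch of both fixes:

Code:
printf '%s\n' "$post" | sed 's/X/\\n/'   # X is a placeholder; \\n stays as the characters \n
printf '%s' "$post" > file               # printf %s does not interpret backslash escapes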
I want to create an alias or function that, when used, prints something like this on the command line so I can further modify it before pressing Enter myself.

Code:
$ FILE=exercise1; cc -o $FILE $FILE.c && ./$FILE; FILE=

The idea is that I'm studying C and want to change the name of the file once instead of
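Bash has no direct way for a function to pre-type text at the prompt, but one common trick is to push the template into history so a single Up-arrow recalls it for editing. A sketch; the function name ccgo is made up:

Code:
ccgo() {
    # history -s stores its argument as the most recent history entry
    history -s 'FILE=exercise1; cc -o $FILE $FILE.c && ./$FILE; FILE='
    echo 'Press the Up arrow to edit and run the template.'
}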
I have several (VHDL) files containing a pattern with newline characters that I need to replace with another pattern that also contains newline characters.
I start with something like:
Code:
I want to replace it by something like:
Code:
(I need to paste some lines)
As I need to do this (very) often, I want to use a shell script.
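Since neither pattern was shown, here is just the skeleton: perl in slurp mode can match and substitute across newlines, wrapped in a script so it can be rerun. The OLD/NEW lines are placeholders, with one extra NEW line to reflect the pasting:

Code:
#!/bin/sh
# replace a multi-line pattern in place across all VHDL files
perl -0777 -pi -e 's/OLD LINE 1\nOLD LINE 2/NEW LINE 1\nNEW LINE 2\nNEW LINE 3/g' *.vhd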
I am using Ubuntu and MySQL. I have a list of many .sql files, like 1.sql, 2.sql, 3.sql ... 100000.sql. I need to insert them into the database:

Code:
mysql mydb < *.sql

This gives me: -bash: *.sql: ambiguous redirect
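Redirection takes exactly one file, so a glob can't expand there. Feeding the files through cat works, and a loop also sidesteps the argument-length limit that 100,000 names can hit:

Code:
cat *.sql | mysql mydb       # all dumps over one connection
for f in *.sql; do           # or one file at a time
    mysql mydb < "$f"
done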
I can't get X to work with a mouse, so I have to use a Windows computer for that for now. The problem is, I remember there being something about Windows using a newline AND a carriage return, while Linux uses just a newline. I was about to cut and paste code, but the lines go on and on instead of breaking off where they did in Linux. I was going to write a Perl script, but I don't know how to add a carriage return to the end of each line.
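The one-liner version, assuming the file currently has Unix (LF-only) endings:

Code:
# insert a carriage return before every newline: LF -> CRLF
perl -pe 's/\n/\r\n/' unixfile.txt > windowsfile.txt

The unix2dos utility does the same conversion, if it happens to be installed.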
Using xsel, I pass a selection into a variable. I then check that the variable includes an embedded newline, to be sure that the selection returned by xsel is complete. If the selection content preceding the newline is just a single word, the check fails to detect the newline, thus
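Without seeing the actual test this is only a guess, but the classic cause of exactly this symptom is an unquoted expansion: word splitting collapses the whitespace, including the newline, before the test ever sees it. A quoted check that preserves embedded newlines (note that $(...) strips trailing newlines, so only an embedded one can be detected this way):

Code:
sel="$(xsel -o)"
case "$sel" in
    *$'\n'*) echo "selection contains a newline" ;;   # $'\n' is bash syntax
    *)       echo "no newline found" ;;
esac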
I'm having a hard time figuring out why the program posted below prints an extra newline every time I press the Enter key. This program uses the master pseudo-terminal to send the password to, and receive the output from, the slave (connected to the passwd program). I suspect this has to do with the terminal line discipline(s) (two of them, considering the master and slave), but I can't really understand why. I have tried turning several terminal special characters on and off, but to no avail.
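A guess rather than a diagnosis: with two line disciplines in play, the typed Enter can come back twice, once as the slave's echo of the keystroke (CR translated to NL, then NL to CR-NL by ONLCR) and once in the program's own output. The slave-side flags worth examining are echo and onlcr; stty can inspect and toggle them against the slave device:

Code:
stty -a < /dev/pts/N             # N is the slave's number; look for echo and onlcr
stty -onlcr -echo < /dev/pts/N   # toggle them off to see which one doubles the newline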
I need to either locate a script that is similar to what I need or figure out a better way of doing this. I have multiple shops with AIX Unix servers, using ksh with virtual terminals that connect. Since these are on an internal network, we have them connecting to the server as usr01, usr02, etc. What I need to do is add 15 users, ranging from usr01 to usr15, into /etc/passwd. Each usr is identical, in that each line contains
Code:
usr01::0:0::/usr/tops:/bin/ksh
The only difference is that the usr# changes. I wrote a script where I was just adding these all to /etc/passwd, but now I have been tasked with adding them at these shops without any duplicates. Is there any way to have a script check the file to see if a usr# already exists, skip to the next number if so, and then insert usr#::0:0::/usr/tops:/bin/ksh into the file?
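A sketch in ksh/sh, assuming the fixed line layout shown above; it appends a user only when that name is not already present:

Code:
for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15; do
    # ^usr$i: anchors on the name field, so usr01 cannot match usr011
    grep -q "^usr$i:" /etc/passwd ||
        echo "usr$i::0:0::/usr/tops:/bin/ksh" >> /etc/passwd
done

On AIX it is usually safer to go through the system tools (mkuser) than to append to /etc/passwd directly, but the above mirrors what the question describes.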
I'm going to replace my machine's IP address and hostname using an awk command. The pattern in the file is like the following: ip address="192.168.1.100". The script must ask the user for the IP address and replace the one inside the quotation marks.
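A sketch for the IP half; the file name config.txt is a placeholder:

Code:
read -r -p 'New IP address: ' newip
awk -v ip="$newip" '
    /^ip address=/ { sub(/"[^"]*"/, "\"" ip "\"") }   # swap the quoted value
    { print }
' config.txt > config.txt.new && mv config.txt.new config.txt

The same pattern, with a different match line, would handle the hostname.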
I'm writing a C program and using Autotools. I have a large text file that I need to include verbatim, as data for my program. I used to have a hacked-together Perl script that would take a file like this:

Code:
A Rabbi, a Priest, and a Minister walked into a bar.
The bartender said, "What is this, a joke?"

I found this site that contains instructions for doing exactly what I want, but that technique requires GNU's ld, and the whole point of using Autotools in the first place is to make my project platform- and compiler-independent. I should point out that, according to the Autotools help, I can do this with a script called either "txtc.sh" or "txtc.sh.in". Unfortunately, Google can't find such a script, and it's not in any package that I can find.
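One portable route, sketched below: generate a C source file from the text at build time, so it works with any compiler and needs no linker tricks. The names joke.txt and joke.c are placeholders; od and sed are POSIX, so this can run from a Makefile rule:

Code:
{
    echo 'const char joke_text[] = {'
    od -An -v -tx1 joke.txt | sed 's/ \([0-9a-f][0-9a-f]\)/0x\1,/g'
    echo '0x00 };'            # NUL terminator so it is usable as a C string
} > joke.c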
In GUI-style editors, you can generally select multiple lines and press Tab a few times to move all the lines across (or Shift-Tab to go back). I have no idea how to do this in Vim. I googled around and couldn't find a straight answer, so I came here.
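For reference, the standard Vim answer, shown here as commented keystrokes rather than shell:

Code:
# V          start linewise visual mode; extend the selection with j / k
# >          indent the selected lines one shiftwidth ( < to unindent )
# gv .       reselect the same lines and repeat the indent
# :10,20>    the ex-command form: indent lines 10-20 directly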
Alright, I have an LFS script and I'm pretty sure I know how to use it correctly. The only big question I have with it is up where it says WGETLIST="": should I insert the website where all the packages are contained? And I assume the same applies for the MD5 checks as well. I'm just a little lost as to what should go in between the quotes.
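A guess, based on the stock LFS book layout: WGETLIST usually holds the URL of the book's wget-list file (one package URL per line), and the MD5 setting would point at the matching md5sums file. The URL below is the stable-book path and only an assumption about this particular script:

Code:
WGETLIST="http://www.linuxfromscratch.org/lfs/view/stable/wget-list"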
I'm familiar with load balancing, but is it possible to actually bond multiple DSL lines together? I hear of ways to bond using MLPPP, but that requires support from the ISP. Is there a way to bond without support from my ISP, or to use, say, a cable modem and a DSL line together for faster speed / diversity?