I need to grep for a pattern that can appear on a single line or be split across two lines. Normal grep won't work in this case. Can anyone help with this? There are hundreds of files in which I need to search for this pattern, so time is also a constraint.
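If pcregrep is available, its -M flag lets a pattern span the line break; \s+ matches the newline between the two halves (the pattern and path below are placeholders):

[code]
# -M enables multiline matching; -r recurses through all the files in one pass.
pcregrep -rM 'first_half\s+second_half' /path/to/files
[/code]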
I've come across an unusual requirement for a service on my Ubuntu system. Simply put, I need a way to find every instance of a term in a file, delete the lines containing that term, and delete the four lines below each instance. Either that, or copy the entire file to a new file, skipping all lines containing the term plus the four below it. This sounds kind of weird, I know. Without going too far into detail, I either have to change the log file format for a server I'm running, which is a huge pain in the butt, or I can just run a script to edit an HTML report generated from said logs. (Said report is really just for managers to peruse, and I like my log format, so I'm pursuing option 2.)
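With GNU sed this is a one-liner: the address /TERM/,+4 covers each matching line plus the four lines after it (TERM and the file name below are placeholders):

[code]
# Delete every line containing TERM and the four lines that follow it, in place.
sed -i '/TERM/,+4d' report.html
[/code]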
I have been experiencing a problem where the screen loads and, after the first few lines, breaks up into multiple repetitions of lines. Reloading helps but has to be repeated when paging down. Mail is no problem; it is supplied by my network provider. The OS is openSUSE 11.2, which I update when advised. Below is a sample from the error console:
I've just installed Kubuntu 11.04 and switched on the wobbly windows effect. It runs very smoothly on my Nvidia GeForce 7600 GS with dual-screen TwinView turned on. However, I get these lines when I drag/move a window upwards - see screenshot:
I have this massive table file with some data in it, and I want to replace some lines that are wrong with the correct ones from another table file of the same format. The wrong lines are not grouped together in a block but randomly distributed, so I need a loop that checks whether the line is in the other file and, if it is, replaces it. I want to try to do it with sed or awk, but I don't really know how.
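One possible awk approach, assuming the first column is a unique key shared by both files: load the corrected rows into an array, then print the replacement whenever a key matches (file names are placeholders):

[code]
# Pass 1 (NR==FNR) reads corrected.txt into fix[]; pass 2 rewrites original.txt.
awk 'NR == FNR   { fix[$1] = $0; next }
     ($1 in fix) { print fix[$1]; next }
                 { print }' corrected.txt original.txt > merged.txt
[/code]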
I'm trying to upload my tarred site to a server, but I have an upload limit. My FTP program extracts the tar after uploading it. How do I split this one tar into two, so that I could upload one tar and let it extract itself, and then upload the second one and let it extract itself too?
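Splitting the finished archive byte-wise would leave two halves that cannot extract on their own, so one workable approach is to pack the site into two independent tars (the directory names below are placeholders):

[code]
# Each archive extracts on its own; together they restore the whole site.
tar czf part1.tar.gz site/img site/css
tar czf part2.tar.gz site/pages site/scripts
[/code]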
I have a folder of size 50 GB on my Fedora 13 system. I want to split the folder into 5 pieces of 10 GB each. How do I do that? The purpose is that I am going to gzip these pieces separately and write them to DVDs.
When I attempted to gzip that 50 GB folder, I got a 12 GB tar.gz file, which does not match the size of a DVD, and I don't want to split that file.
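A minimal sketch of splitting the folder up front, assuming its top-level entries live under data/ and that no single entry exceeds 10 GB (all paths are placeholders):

[code]
#!/bin/bash
# Distribute top-level entries into part1, part2, ... buckets of about 10 GB each.
budget=$((10 * 1024 * 1024))   # 10 GB expressed in KiB, to match du -k
bucket=1; used=0
mkdir -p part$bucket
for entry in data/*; do
    size=$(du -sk "$entry" | cut -f1)
    if (( used + size > budget )); then
        bucket=$((bucket + 1)); used=0
        mkdir -p part$bucket
    fi
    mv "$entry" part$bucket/
    used=$((used + size))
done
[/code]

Each partN directory can then be tarred and gzipped on its own and burned to a separate DVD.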
I have a 7 GB VOB file which I created from a DVD using an ffmpeg dump to remove the CSS protection (it is legal where I live to do so). Now I want to create a DVD/.iso that will be understood by regular DVD players/appliances. How do I do it?
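One possible route, assuming the VOB is a compliant MPEG-2 program stream: rebuild a DVD structure with dvdauthor, then wrap it in an ISO with genisoimage (file and directory names are placeholders):

[code]
export VIDEO_FORMAT=PAL        # or NTSC; recent dvdauthor releases require this
dvdauthor -o dvd/ -t movie.vob # create the title set under dvd/VIDEO_TS
dvdauthor -o dvd/ -T           # write the table of contents
genisoimage -dvd-video -o movie.iso dvd/
[/code]

The resulting movie.iso can then be burned with growisofs or any DVD burning tool.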
Is there a single keystroke with which I can do it? E.g., going to the word "to" and striking that key would put the rest of the words on a new line. (I want to do it in normal mode, not in the usual insert mode, where it can obviously be done by typing <Enter>.)
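Vim has no default single-key split in normal mode, but you can map one to any free key; a minimal ~/.vimrc sketch (the choice of Ctrl-j is arbitrary):

[code]
" With the cursor on the first word to push down, Ctrl-j breaks the line there.
nnoremap <C-j> i<CR><Esc>
" Without a mapping, r<Enter> on the space before the word is the closest
" built-in: it replaces that space with a newline.
[/code]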
I created two physical volumes, then added a volume group on top of them, then created a logical volume, formatted it, and mounted it on a directory. Now I want to split the volume group but am unable to do it. If I try, an error message says the existing volume group is active and that I have to deactivate it first.
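The usual sequence is to unmount and deactivate before splitting, since vgsplit refuses to touch active logical volumes; a sketch with placeholder names:

[code]
umount /mnt/data
vgchange -an myvg              # deactivate all LVs in the group
vgsplit myvg newvg /dev/sdb1   # move /dev/sdb1 into a new group, newvg
vgchange -ay myvg newvg        # reactivate both groups
[/code]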
Is there a tool already out there that will split error logs based on the virtual host they belong to? Or perhaps a somewhat simple way to write a script that can do this? I'll keep looking for a solution but I thought I'd ask in case someone here has one to offer.
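If each error-log line can be made to start with the virtual host name (e.g. via a custom log format), a one-line awk script does the splitting; this is a sketch under that assumption:

[code]
# Route every line to <vhost>.log, keyed on the first field.
awk '{ print > ($1 ".log") }' error_log
[/code]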
How can I split the output of a command between two terminals, so that one gets stdout and the other gets stderr? The best I could do is the following, on the first terminal: code...
This works OK, but it prints the errors over and over again every time. Is there a better way to redirect the errors to another terminal?
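One common trick is to find the second terminal's device with tty and point stderr straight at it, so nothing is printed repeatedly; /dev/pts/3 below is a placeholder:

[code]
# On the second terminal, run `tty` and note its device, e.g. /dev/pts/3.
# Then, on the first terminal:
some_command 2> /dev/pts/3
# stdout stays on the current terminal; stderr appears on the other one.
[/code]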
I have a file with 5 columns. Column 4 contains numbers. Is it possible to split the file into multiple files using a condition on the contents of column 4? I.e., if column 4 contains a value between 0 and 10, print the line to a new file called less_than_10.txt.
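A sketch with awk, assuming whitespace-separated columns; only the 0-10 bucket is named in the question, so everything else goes to a catch-all file here:

[code]
awk '$4 >= 0 && $4 <= 10 { print > "less_than_10.txt"; next }
                         { print > "other.txt" }' input.txt
[/code]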
I have a text file of some number of tab-delimited lines ("INPUT") which I would like to parse line by line into a text output file, depending on the SampleID of each line. Each line contains a SampleID, and each subject has several lines of data.
[code]....
I also have a text file of relevant SampleIDs ("INPUT2"). The basic idea is that I read a line from INPUT, split the tab-delimited line, extract the SampleID, and compare it to my list of relevant SampleIDs. If there is a match, I print the line from INPUT to OUTPUT and move on to the next line of INPUT; if there is no match, I just move on to the next line. I tried to script this (I'm an extreme newbie at Perl right now) and failed miserably, but here is what I have at the moment:
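For comparison, a minimal working sketch of that logic, not the poster's original attempt (the file names and the position of the SampleID are assumptions; here it is taken to be the first tab-delimited field):

[code]
#!/usr/bin/perl
use strict;
use warnings;

# Build a lookup of relevant SampleIDs, one per line in INPUT2.
open my $ids, '<', 'INPUT2' or die "INPUT2: $!";
my %keep = map { chomp; $_ => 1 } <$ids>;
close $ids;

open my $in,  '<', 'INPUT'  or die "INPUT: $!";
open my $out, '>', 'OUTPUT' or die "OUTPUT: $!";
while (my $line = <$in>) {
    my ($sample_id) = split /\t/, $line;  # first field is the SampleID
    print $out $line if $keep{$sample_id};
}
close $in;
close $out;
[/code]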
I have a very large directory with probably millions of small files in it. It's taking forever to run ls on the directory.
Is there an easy script I can run to split the directory into smaller ones, based on the prefixes of the filenames? My goal is to wind up with something similar to what the Debian archive's pool directory looks like.
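A minimal sketch that buckets files by their first character, Debian-pool style (assumes plain files in the current directory; lengthen the prefix as needed):

[code]
#!/bin/bash
for f in ./*; do
    [ -f "$f" ] || continue       # skip anything that is not a regular file
    name=$(basename "$f")
    prefix=${name:0:1}            # first character; use ${name:0:2} for two-char bins
    mkdir -p "$prefix"
    mv "$f" "$prefix/"
done
[/code]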
I need to split up a large file on Windows so I can upload it in parts to a Linux machine. I'm looking to do the opposite of this, hopefully with some native utilities to keep it simple.
I understand the Linux side of the equation to be: cat filea fileb > file
What is the simplest way to split files on a Windows machine so that they can then be joined together via cat on a Linux machine?
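One hedged option: 7-Zip's file manager has a Split command that cuts a file into raw .001, .002, ... slices with no archive wrapper, and GNU split itself runs under Git Bash or Cygwin. Either way the pieces are plain byte ranges, so the Linux side is just:

[code]
# The numeric suffixes sort lexically, so a glob joins them in order.
cat bigfile.0* > bigfile
[/code]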
I know that one can use ffmpeg to extract a smallfile.avi from a largefile.avi. But what I am looking for is a tool/command to split a large file into several files of a given size.
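If the pieces only need to be reassembled later rather than played individually, plain split handles any file; the chunk size below is just an example:

[code]
split -b 700m largefile.avi largefile.avi.part-
cat largefile.avi.part-* > largefile.avi   # to rejoin
[/code]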
I am backing up parts of my computer with dd, and I was wondering whether there is a quick way to split the files created into 4.4 GB files that will fit onto a DVD. Does anyone have any idea how to do this?
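One possible approach is to split while imaging, so the full-size file never has to exist; the device and names below are placeholders:

[code]
# Image the partition and cut it into DVD-sized chunks in one pass.
dd if=/dev/sda2 bs=1M | split -b 4400m - sda2.img.part-
# Restore later by streaming the parts back through dd:
cat sda2.img.part-* | dd of=/dev/sda2 bs=1M
[/code]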
So I have a Perl script that contains an array like this: @hostNames = ('ABC123R:192.168.1.1', 'CBA321CBP:192.168.1.2', 'ZYX987R:192.168.1.3', etc.). In the first element, 'ABC123R:192.168.1.1', ABC123R is the hostname and 192.168.1.1 is its IP address. I am trying to write a regular expression that will split each element with a '-' wherever a LETTER is next to a NUMBER, like so: ABC-123-R:192.168.1.1. I tried an expression but am struggling with using regexes for slightly complicated matching criteria.
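A sketch of one way to do it: split each element at the colon, hyphenate the letter/digit boundaries in the hostname half only, and write the result back (the lookaheads keep each replacement from consuming the following character):

[code]
foreach my $entry (@hostNames) {             # foreach aliases $entry, so
    my ($host, $ip) = split /:/, $entry, 2;  # edits write back into the array
    $host =~ s/([A-Za-z])(?=\d)/$1-/g;       # letter followed by digit
    $host =~ s/(\d)(?=[A-Za-z])/$1-/g;       # digit followed by letter
    $entry = "$host:$ip";
}
[/code]

On 'ABC123R:192.168.1.1' this yields 'ABC-123-R:192.168.1.1' and leaves the IP address untouched.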
I need to transfer a 4 GB file from my Linux netbook to a friend's WinXP desktop, and I'd like to do it with a USB flash drive. But the drive can't handle a file larger than 2 GB, a limitation due to the underlying FAT32 filesystem, and I don't wish to reformat my USB drive as ext3 either.
So I need to split my 4 GB file into smaller chunks, and the split utility needs to be available on both the Linux and WinXP operating systems.
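One way to sidestep that requirement: only the Linux side needs split, because WinXP's built-in copy /b can do the joining; file names below are placeholders:

[code]
# On the Linux netbook: cut the file into FAT32-safe chunks.
split -b 1900m bigfile.dat bigfile.part-
# On the WinXP desktop, in cmd.exe (no extra software needed):
#   copy /b bigfile.part-aa + bigfile.part-ab + bigfile.part-ac bigfile.dat
[/code]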
I used split -b 32m "file.bz2" "file.bz2.part-" to split a file, and it created more than 50 parts. From googling, the way I found to reassemble the parts is cat file.bz2.part-aa file.bz2.part-ab > file.bz2, enumerating all 50+ parts. Is there an easier way to reassemble the parts that doesn't require listing them all explicitly?
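Yes: the shell expands globs in lexical order, which is exactly the aa, ab, ac, ... suffix sequence that split generates, so a single wildcard covers all the parts:

[code]
cat file.bz2.part-* > file.bz2
[/code]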
I have a file with a size of 6 GB. I would like to compress this file and split it into smaller files. I was also thinking of using bzip2 to compress it, because it offers a good compression ratio. How can I compress this file and split it into small ones?
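One hedged sketch: compress and split in a single pipeline, so the intermediate compressed file never hits the disk in full (the chunk size is an example):

[code]
bzip2 -c bigfile | split -b 650m - bigfile.bz2.part-
# Reassemble and decompress later:
cat bigfile.bz2.part-* | bunzip2 -c > bigfile
[/code]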