I have a txt file with a couple of comment lines:
Number of title = !num!
#line1
#line2
#line3
I wrote a script with "sed" to replace !num! in this file, which is very straightforward. However, I also want to remove a number of the "#" lines based on the !num! value. Is there an easy way to do that with "sed"? Otherwise, I will have to write a script to loop through the file.
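One possible approach (a sketch, reading the question as "delete the first !num! lines that start with #"; the variable name num and the file name input.txt are placeholders of mine): do the substitution with sed as before, then let awk drop the matching lines.

Code:
num=2                                  # the value substituted for !num!
sed -i "s/!num!/$num/" input.txt       # replace the placeholder
# delete the first $num lines beginning with '#'
awk -v n="$num" '/^#/ && c < n { c++; next } { print }' input.txt > tmp && mv tmp input.txt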
I want to be able to remove the first character of a line when I highlight multiple lines in gedit. Example:
%Example is
%Commented Code
%Uncomment using this shortcut
I would then highlight/select these lines, and remove the first character to make it look like this:
Example is
Commented Code
Uncomment using this shortcut
I'm pretty sure there is an actual shortcut for this. If this works in another text editor on Linux, it would be nice to know how to do it in that editor as well.
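I don't know a built-in gedit shortcut for this, but Vim handles it with a visual block: put the cursor on the first "%", press Ctrl-v, extend the selection down over the lines, then press x to delete that column. From the command line, sed can do the same thing to a whole file (the name code.txt is a placeholder):

Code:
sed -i 's/^%//' code.txt    # strip one leading % from every line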
My server was hit with an injection script which has placed code across many of my clients' files. I need a script that can remove a block of PHP code that spans multiple lines and multiple directories/files and is dynamic, meaning that part of the code changes. I think find/sed is what I need, but I cannot seem to figure out how to get it to work. The following is the script that is being injected everywhere. The catch is that they have generated dynamic code at the start/end of the script. (I have commented the parts that are dynamically changing on EVERY instance.) PLEASE NOTE: Directly following this script is the start of a valid PHP script that I do not want to delete.
<?php //{{65281980 - DYNAMIC!! GLOBAL $alreadyxxx;
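Only the opening line survives here, so both regexes below are guesses that MUST be adjusted to the real start and end markers before running anything destructive; back the tree up first. Assuming the injected block always opens with "<?php //{{" plus digits and closes on a line containing "//}}" plus digits, a range delete with sed across the tree might look like this (the path /var/www is a placeholder):

Code:
# dry run: replace 'sed -i' with 'sed -n' plus 'p' on the range to preview matches
find /var/www -name '*.php' -exec \
  sed -i '/<?php \/\/{{[0-9]\+/,/\/\/}}[0-9]\+/d' {} +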
I want to remove duplicate or similar lines from multiple files. I.e., if I have four files (file1.txt, file2.txt, file3.txt and file4.txt), I would like to find and remove similar lines from all these files, keeping only one copy of each such line. I only know that uniq can be used to remove similar lines from a sorted file.
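A sketch with awk, which remembers every line it has seen across all four files and writes each file's surviving lines to FILENAME.new (check the results, then rename them back; a file whose lines all appeared earlier in another file produces no .new file):

Code:
awk '!seen[$0]++ { print > (FILENAME ".new") }' file1.txt file2.txt file3.txt file4.txt
for f in file1 file2 file3 file4; do mv "$f.txt.new" "$f.txt"; done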
How do I search for multiple words across multiple lines, inside a directory including sub-directories? Please give an easy example. I want to find the files (in the /xx folder and all subfolders) that include header.h and use the x() function. I tried $grep -r "header.h" | grep -r "x(" /Folder/subfolder/ > search.log
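The first grep in that pipeline has no files to search, so it reads stdin and never finishes. One way (a sketch; /xx stands for your folder): list the files that contain the include, then keep only those that also call x().

Code:
grep -rl 'header\.h' /xx | xargs grep -l 'x(' > search.log

If any filenames contain spaces, use grep -rlZ with xargs -0 instead.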
I have been experiencing a problem where the screen loads and, after the initial first few lines, breaks up into multiple repetitions of lines. Reloading helps but has to be repeated when paging down. Mail is no problem; it is supplied by my network provider. The OS is openSUSE 11.2, which I update when advised. Below is a sample from the error console:
In GUI style editors, you can generally select multiple lines and press Tab a few times to move all the lines across (or Shift-Tab to go back). I have no idea how to do this in Vim. I googled around and couldn't find any straight answer, so I came here.
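In Vim, select the lines with V (linewise visual mode), then press > to shift them right or < to shift left; gv reselects the same lines so you can press > again to shift further. A range works too, e.g. :10,20> shifts lines 10-20. The distance moved per press is controlled by an option:

Code:
:set shiftwidth=4    " how far > and < shift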
I've seen a few tutorials that have commands and parameters on multiple line, like the one below:
Code:
chkconfig --levels 235 mysqld on
/etc/init.d/mysqld start
I can copy and paste this in Putty, but what if I want to manually type it? If I press return, the first line gets processed, so how do I insert a new line?
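When typing, you generally don't need them on one entry: pressing Return after the first command simply runs it before you type the next, which is fine for independent commands like these. If you do want a single multi-line entry, end the line with a backslash and the shell will prompt for a continuation line; alternatively, separate the commands with a semicolon on one line.

Code:
chkconfig --levels 235 mysqld on \
  && /etc/init.d/mysqld start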
I need to filter the log from a massive wget. I want to remove the progress lines and only leave the last one. Now each progress line starts with a newline '
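The quoted character was cut off above, but wget progress lines typically look like "    50K .......... ..........  42% 1.2M 3s". Assuming that shape, this awk sketch collapses each run of consecutive progress lines down to just its last line and passes everything else through:

Code:
awk '/^ *[0-9]+K[ .]/ { last = $0; next }
     { if (last != "") { print last; last = "" } print }
     END { if (last != "") print last }' wget.log > filtered.log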
I have model output data in ASCII format. It contains thousands of lines. The output file contains multiple text lines with variable values. Here I copy-paste some of its contents.
I have a list of words that I want to grep in many files to see which ones have them and which ones don't. In the text file I have all the words listed line by line, e.g. list.txt:
check
try this
word1
word2
open space
list
..
I want to grep each line one by one, like this:
grep "check" *.log
grep "try this" *.log
grep "word1" *.log
.. etc. How can I do this?
Need to make a script to append a line to the bottom of multiple files (only certain files, but 100s spread over directories). Doing a find/replace inside multiple files is easy; I use the following:
Code:
find /base/dir -name "*.txt" -exec perl -pi -w -e 's/FIND/REPLACE/g;' {} \;
So I tried doing the following:
Code:
find /base/dir -name "*.txt" -exec echo "Append this" >> {} \;
However, this just appends all the text into a file called "{}", whereas {} should be replaced with each file that's found.
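The >> is interpreted by the interactive shell before find ever runs, which is why everything lands in a literal file named "{}". Let a child shell perform the redirection instead:

Code:
find /base/dir -name "*.txt" -exec sh -c 'echo "Append this" >> "$1"' _ {} \;

Here sh receives each filename as $1 and does the append itself; the underscore merely fills $0.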
I often use the rpl command to make changes to multiple html files at once. For example:
rpl -R '<br />' '<br /><br />' mydirectory
However, I haven't been able to figure out how to change multiple lines. For example, let's say I want to change all occurrences of:
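The example didn't survive here, but as far as I know rpl only matches within a single line. For replacements that span lines, perl can slurp each file whole with -0777 so the regex sees the newlines (a sketch; the pattern and extension are placeholders to adjust):

Code:
find mydirectory -name '*.html' -exec \
  perl -0777 -pi -w -e 's/first old line\nsecond old line/replacement text/g' {} \;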
I have a CSV file that's created in an application that can't output lines longer than 250 characters, and the data fields, all together, are longer than this. How would I remove the line break from every line that ends with a comma? For example:
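A GNU sed sketch (file.csv is a placeholder name): whenever the current line ends with a comma, pull in the next line and delete the newline between them, looping in case the joined line still ends with a comma.

Code:
sed ':a; /,$/ { N; s/\n//; ba }' file.csv > fixed.csv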
I need to create a single line of output from multiple and variable lines of input in a Linux bash shell script.
My input file looks like this:
There may be any number of umsecondaryphonenumber lines; if there is no umsecondaryphonenumber line for a telephonenumber, I don't want to write any output.
So, the output file should look like:
The script I have so far is:
My question is: how do I print each of the elements of an array in one record, i.e. what do I put in place of howdoiprintarray?
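The sample input and script didn't survive here, so this sketch guesses at the shape: it assumes records like "telephonenumber: 123" followed by zero or more "umsecondaryphonenumber: 456" lines in input.txt (all names hypothetical). The for loop that joins the array elements into one string is the part that replaces howdoiprintarray.

Code:
awk '
function flush(   i, line) {
    if (n > 0) {                       # skip records with no secondary numbers
        line = tel
        for (i = 0; i < n; i++)        # join the array into one record
            line = line " " sec[i]
        print line
    }
    n = 0
}
/^telephonenumber:/        { flush(); tel = $2 }
/^umsecondaryphonenumber:/ { sec[n++] = $2 }
END                        { flush() }
' input.txt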
Trying to remove lines from a syslog text file that have duplicate strings
Mar 10 06:51:11[http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360]
then a few lines down
Mar 10 06:52:03 [http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360
It has the same thing in terms of a u: number, but the issue is that I need to remove the duplicates and just leave one, and the file has multiple duplicates of different u: numbers and is 14,000 lines long. Can anyone tell me if I can use awk, sed, or sort for something like this, i.e. removing lines that contain a duplicated string?
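awk can do this in one pass: extract the u:...|... token from each line and keep only the first line per token, letting lines without one pass through untouched (syslog.txt is a placeholder name):

Code:
awk '{
    if (match($0, /u:[0-9]+\|[0-9]+/)) {   # the u:...|... identifier
        id = substr($0, RSTART, RLENGTH)
        if (seen[id]++) next               # skip later duplicates
    }
    print
}' syslog.txt > deduped.txt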
A function named abc is called in many files. I want to copy all the lines with the function call to an output file. A simple grep on the function name doesn't help me, as the function call spans multiple lines, as follows:
abc(parameter1,
    parameter2,
    parameter3);
So I want to copy all three lines (up to the semicolon) to the output file. The problem is that there are more than 200 calls to the same function, and I cannot do it manually.
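An awk sketch that starts printing at any line containing "abc(" and stops after the line carrying the closing semicolon (it assumes nothing else interesting shares those lines; *.c stands for whatever your source files are):

Code:
awk '/abc *\(/    { grab = 1 }
     grab         { print }
     grab && /;/  { grab = 0 }' *.c > calls.txt

For a whole tree, run it through find: find . -name '*.c' -exec awk '...' {} + > calls.txt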
I have hundreds of directories in various subdirs that I need to remove. I want to remove all of these dirs, but can only find solutions on how to remove files (or how to remove subdirs from within the current dir).
I think I need something like
find -iname 'testfile*' | xargs rm -i
where I want to remove every directory that contains the word 'testfile' within the directory name. I know xargs won't work for dirs.
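find can match and delete the directories itself; -prune keeps find from descending into a directory it is about to remove. Preview with -print before deleting anything:

Code:
find . -type d -iname '*testfile*' -prune -print          # dry run: list them
find . -type d -iname '*testfile*' -prune -exec rm -r {} +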
I have a directory (Linux user) with a number of files which have an added [!] at the end of each file name, so that the files read:
foo something [!].zip
bar something [!].zip
helloworld [!].zip
etc. What is the quickest way to batch rename these to remove the ending [!] character combination from these file names?
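A bash loop sketch, assuming every affected file ends in " [!].zip" exactly as shown: strip that suffix with a parameter expansion (quoted so the brackets are literal) and put ".zip" back.

Code:
for f in *' [!].zip'; do
  mv -- "$f" "${f%' [!].zip'}.zip"
done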