I have a requirement to list files using the find command. My folder contains a list of files without extensions. I need to exclude only files of the ABC.123.* type and list the others. Files whose names contain MNO should not be excluded even if they match this pattern, and neither should files ending in .txt or .doc; that is, ABC.123.1234.txt should not be excluded. But I am not getting what is required. Can anyone please let me know if I am doing something wrong anywhere? As per my requirement, I cannot use grep or the -regex option of find.
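A sketch that stays inside those constraints, using only -name tests (the directory is assumed to be the current one):

# keep every file except those that match ABC.123.* AND do not
# contain MNO AND do not end in .txt or .doc
find . -type f ! \( -name 'ABC.123.*' ! -name '*MNO*' ! -name '*.txt' ! -name '*.doc' \)

The negated parenthesised group excludes a file only when all four tests inside it hold, which is exactly the stated rule.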
I want to list all the files that don't have a copy with the same filename but with -1 somewhere in it. So, in my example, the result would be 3.png.
NB: the file and its copy with "-1" in it will be the same filesize, if that helps.
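A minimal sketch, assuming the copies are named like the original with "-1" inserted before the extension (e.g. 3-1.png next to 3.png):

for f in *; do
    case "$f" in *-1*) continue ;; esac      # skip the copies themselves
    base=${f%.*} ext=${f##*.}
    # list the file only if no "-1" variant of it exists
    [ -e "${base}-1.${ext}" ] || printf '%s\n' "$f"
done

The identical filesize could serve as a cross-check (e.g. comparing the output of stat -c %s for the pair), but matching on the name alone is simpler.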
I need to add some text using sed before and after the matching pattern. Does anyone have a clue? E.g.

cat /my/file | sed -e "s/first pattern/New Pattern/g" > /my/file.bak

Now I need the result to have some text before and after New Pattern.
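One way is "&" in the replacement, which sed expands to whatever the pattern matched (BEFORE and AFTER are placeholders for the added text):

sed -e 's/first pattern/BEFORE & AFTER/g' /my/file > /my/file.bak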
I tried searching on Google but found it difficult to say exactly what I was looking for. Task: capitalise x number of letters at the start of words. E.g.

Original line: one.two.three.four
Revised line: One.Two.three.four (here only requiring 2 changes)

Test data:
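Whatever the test data, one awk sketch treats the words as dot-separated fields and takes the number of words to capitalise as a parameter:

echo "one.two.three.four" | awk -F. -v OFS=. -v n=2 '{
    for (i = 1; i <= n && i <= NF; i++)
        $i = toupper(substr($i, 1, 1)) substr($i, 2)   # uppercase the first letter of field i
    print
}'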
I have a question about sed programming, actually a one-liner for which I cannot find a solution right now. I need to delete a line matching a specific pattern, but only if it is the last line. In practice, I would put together something like the following:
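One candidate for that one-liner: sed's $ address restricts a command to the last line, so the position and the match can be combined ("pattern" is a placeholder):

sed '${/pattern/d;}' file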
I've written a script to parse a file and print each line that ends with a matching pattern, if the next line is blank. The pattern lines are the result of md5sum "$i" | sed 's|/path/||g', so that only the md5 and the filename appear. Here's what I'm using:
for fline in `sed -n '/.*\.ext$/p' file1`
do
    if [ "`sed -n -e '/'"$fline"'/ {n; p;}' file1`" = "" ]
    then
        echo "$fline has no info" >> file2
    fi
done
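For what it's worth, the whole check can also be done in a single awk pass; this sketch flags each *.ext line whose following line is blank:

awk '/\.ext$/ { prev = $0; next }
     $0 == "" && prev != "" { print prev " has no info" }
     { prev = "" }' file1 >> file2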
Are there some good tutorials or reference materials on how to do pattern matching and text manipulation in Linux? I have a few simple tasks I'd like taken care of, like formatting numbers in file names, stripping some text from directory names, etc.
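As a flavour of what plain bash can do for the file-name case, here is a hypothetical zero-padding rename (the fileN.txt naming is made up):

for f in file[0-9]*.txt; do
    n=${f#file}; n=${n%.txt}                     # pull the number out of the name
    new=$(printf 'file%03d.txt' "$((10#$n))")    # 10# guards against leading zeros being read as octal
    [ "$f" = "$new" ] || mv -- "$f" "$new"       # file1.txt -> file001.txt
done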
I am interested in the following problem: given a string (pattern), find a regexp which matches this string. I will need this for developing an idea of 'pattern-based filtration'.
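The degenerate but always-correct answer is the string itself with its regex metacharacters escaped, which may serve as a starting point for anything cleverer (a sketch for BRE metacharacters):

# turn the literal string in $string into a BRE that matches exactly that string
printf '%s\n' "$string" | sed 's,[][\.*^$/],\\&,g'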
I can't seem to find a way to do this, so maybe it's not actually possible, but I was wondering if there was a way to list all files that link to a file.
For example.
touch a
ln a b
ln a c
I want to find out what files link to a (not symlinks, mind you), assuming that this is more complicated (they are spread around to different directories).
I kind of understand about the filesystem storing links in one area and data in another, so I understand it probably takes more work to find a link from a file location than the other way around.
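Hard links to one file share an inode, so the search goes by inode; GNU find even has a shortcut for it (and -xdev is safe to add, because hard links cannot cross filesystems):

find / -xdev -samefile a 2>/dev/null     # GNU find: every name of a's inode

# more portable: look up the inode number, then search for it
ls -i a                                  # prints e.g.: 1234567 a
find / -xdev -inum 1234567 2>/dev/null

ls -l also shows the link count in its second column, so you know how many names to expect.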
How would I limit this to searching for the text 'SomeString' or 'SomeOtherString', but only in files with the extension .php, .inc or .js? Also, what does piping to xargs do here? I don't understand how this command actually works.
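The command being asked about isn't shown, but a typical shape for it is below; the role of xargs is simply to read the filenames that find writes to stdout and turn them into command-line arguments for grep:

find . -type f \( -name '*.php' -o -name '*.inc' -o -name '*.js' \) -print0 |
    xargs -0 grep -lE 'SomeString|SomeOtherString'

-print0 and -0 keep filenames with spaces intact; grep -l prints only the names of matching files, and -E enables the | alternation.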
I have a file of wildcard (glob) patterns:

./include/*
./src/*
etc.

From the current directory I would like to recursively get the list of files that do not match these patterns.
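A bash sketch: case does glob matching when the pattern variable is left unquoted, so each file can be tested against every pattern in turn (patterns.txt is an assumed name for the pattern file):

while IFS= read -r -d '' f; do
    keep=1
    while IFS= read -r pat; do
        case "$f" in $pat) keep=0; break ;; esac   # unquoted $pat acts as a glob
    done < patterns.txt
    [ "$keep" -eq 1 ] && printf '%s\n' "$f"
done < <(find . -type f -print0)

Note that in case patterns a * also matches across slashes, so ./include/* excludes files in deeper subdirectories as well.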
I have 2 massive duplicate dirs of the same format as below:

dir1
  subdir1
    file1
  subdir2
    file1
  subdir3
    file1
  ...
Dir2 is the same, but it has some newer files of the same names. I want to copy all the file1's from Dir2 into the same names and folders in dir1. So basically something like:

cp -pr bkpDir1/*/*-big.gif Dir2/*/*-big.gif

This works for singular cases:

cp -pr bkpDir1/uniquesubdir/*-big.gif Dir2/uniquesubdir/*-big.gif

But not for wildcards:

cp -pr bkpDir1/subdir*/*-big.gif Dir2/subdir*/*-big.gif
Anyway, the aim is to do the first cp above. I have tried a few options using find. In trying to show an example I stumbled upon a way that worked, while in dir2:

find */*-big.gif | xargs -i cp -rp {} ../dir1/{}

Surely there are better ways too...
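One such way, with GNU cp's --parents option, which recreates each source file's relative path under the destination (directory names as in the example above):

cd Dir2
find . -name '*-big.gif' -exec cp -p --parents {} ../dir1/ \;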
I am writing a shell script that finds all files named <myFile> in a directory <dir> or any of its subdirectories, recursively. I also need to take care of symbolic links that may form cycles, to avoid infinite loops. I am not supposed to use the find command for this.
I started writing the code but got stuck. I thought using recursion might be a smart way, but it's not working.
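A minimal recursive sketch (bash; "myFile" stands for the real name). It avoids symlink cycles by simply not descending into symlinked directories; tracking visited device/inode pairs would be a more thorough policy:

search() {
    local entry
    for entry in "$1"/* "$1"/.[!.]*; do
        [ -e "$entry" ] || continue                # glob that matched nothing
        if [ -d "$entry" ] && [ ! -L "$entry" ]; then
            search "$entry"                        # recurse into real directories only
        elif [ -f "$entry" ] && [ "${entry##*/}" = "myFile" ]; then
            printf '%s\n' "$entry"
        fi
    done
}

search "${1:-.}"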
It's a very basic question but I am not getting it right now. I have to list all the PDF files on my desktop, including the PDF files that are inside folders on the desktop.

ls *.pdf

only lists the files directly on the desktop, not the ones in the folders on the desktop that contain PDF files.
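Recursion is what ls alone won't give you; two common ways (the Desktop path is an assumption):

find ~/Desktop -type f -iname '*.pdf'

# or with bash's recursive globbing:
shopt -s globstar
ls ~/Desktop/**/*.pdf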
What is the best and simplest way to compare two directory structures, without actually comparing the data in the files? This works fine:

diff -qr dir1 dir2

But it's really slow because it's comparing the files too. Is there a switch for diff, or another simple CLI tool, to do this?
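A content-free comparison is to diff just the sorted name lists (bash, for the <() process substitution):

diff <(cd dir1 && find . | sort) <(cd dir2 && find . | sort)

Only paths are compared, so it is fast no matter how big the files are.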
From this directory, I want to know how I could use grep to display files based on part of their filename, for example those starting with "Account" or those ending in ".sh".
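The trick is anchoring: ^ pins a match to the start of the name and $ to the end:

ls | grep '^Account'            # names starting with "Account"
ls | grep '\.sh$'               # names ending in ".sh"
ls | grep -E '^Account|\.sh$'   # either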
I want to go through a log file and find pattern1, and then a pattern2 only after pattern1. So, for example, I want to know what howManyRecords was at 13:30. I figured I'd grep for "start time for the job" and then, only after that (and before the next occurrence of it), grep for "howManyRecords". Is this a sane way?
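It can work, but awk states "pattern2 only inside the block opened by pattern1" directly; this sketch assumes the 13:30 timestamp appears on the "start time" line itself:

awk '/start time for the job/ { grab = /13:30/ }
     grab && /howManyRecords/ { print }' logfile

grab turns on at the 13:30 job-start line and off again at the next job-start line, so only that job's howManyRecords lines print.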
Suppose in my current directory I have 50 sub-directories. Now, I am interested in only about 20 of those sub-directories (whose names match a pattern). I would like to recursively list the contents of those 20 sub-directories. How do I do that? I would like to do this on Solaris 10 and on Linux (RHEL 5.x).
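Plain shell globbing keeps this portable across Solaris 10 and RHEL ("pattern*" is a placeholder for the real name pattern):

ls -R pattern*/

# or, if each directory needs separate handling:
for d in pattern*/; do
    ls -R "$d"
done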
I'm trying to make a shell script that will list the 50 newest files in a directory with several subdirectories in it. I've been trying with the find command with no luck, and now I figure I should probably use ls. The problem is that "ls -lRt | head -50" works one directory at a time: it does not build the full list first and then sort it, so the first directory's items are displayed sorted, then the next directory's, and so on. So I figured I have to sort the whole output of ls before limiting it with head. This is where I am at now:

ls -lRt | sort <something clever here> | head -50
Doing just "| sort |" will sort by name, if I understand it right, and I don't know how to solve that. Here's also my first attempt, if that is of any interest or help. It was limited by the change-status time of the files (so some lists got very large), and those lists did not get sorted by time, and I could not find any way to do so:

find $ftpDir -ctime $time -type f -print > $ftpFileLs

Any help on this would be appreciated since I'm sort of stuck now; after reading the manuals for all the options I can think of, there's still just a big blur in my head.
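With GNU find this gets simple: print each file's modification time as an epoch number, sort on that, and then cut it away again:

find "$ftpDir" -type f -printf '%T@ %p\n' |
    sort -rn |            # newest first
    head -50 |
    cut -d' ' -f2-        # drop the timestamp, keep the path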
A function by the name abc is called in many files. I want to copy all the lines making up each call to an output file. A simple grep on the function name doesn't help me, as the function call spans multiple lines, as follows:
abc(parameter1,
    parameter2,
    parameter3);
So I want to copy all three lines (up to the semicolon) to the output file. The problem is that there are more than 200 calls to this function, so I cannot do it manually.
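An awk sketch that starts printing at each call and stops at the first semicolon, which also copes with single-line calls (the input file names are placeholders):

awk '/abc\(/ { grab = 1 }
     grab { print }
     grab && /;/ { grab = 0 }' *.c >> output.txt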
I am trying to do a find/grep/wc command to find matching files, print the filename and then the word count of a specific pattern per file. My best attempts so far have not worked.
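grep can do the per-file counting itself; given several file arguments, -c prints filename:count pairs ('pattern' and '*.log' are placeholders):

find . -type f -name '*.log' -exec grep -c 'pattern' {} +

# -c counts matching lines; to count every occurrence instead:
find . -type f -name '*.log' -exec sh -c '
    for f; do
        printf "%s %s\n" "$f" "$(grep -o pattern "$f" | wc -l)"
    done' sh {} +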