Programming :: Find The Orphaned Images Using Grep
Jun 16, 2010
I have inherited a WordPress theme with a folder of images that I think are no longer being used. I wanted to find the orphaned images using grep, so I wrote this script:
Code:
#!/bin/bash
echo $PWD
for i in *.*; do
cd ..
[Code].....
It seems like I got some false positives out of it, but it worked okay, I guess. :| Of course, it does not check for images referenced in the content of the database.
Orphan finding has to be a wheel that is already invented.
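A hedged sketch of one approach, assuming the script is run from the theme's images folder and that the theme's source files live one level up (both are assumptions about the layout): any image whose filename never appears in the theme source is reported as a candidate orphan.
Code:
#!/bin/bash
# Run from the theme's images/ directory (assumed layout).
for img in *.*; do
    if ! grep -qr "$img" ../ --include='*.php' --include='*.css'; then
        echo "possible orphan: $img"
    fi
done
As noted above, this still misses images referenced only from the database content.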
I'm trying to find exact matches of some users in the /etc/passwd file using "grep -w", but it doesn't always work. For example, I have the following users: [URL] So, let's say I want to search for the user "stewart" (which doesn't exist).
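For an exact username match it is usually more reliable to anchor against the passwd field separator; -w still treats characters like "." as word boundaries, so "stewart" would match inside "stewart.smith":
Code:
grep '^stewart:' /etc/passwd    # matches only the user named exactly stewart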
I'm trying to match all class references in a C++ file using grep with a regular expression. I'm trying to find out whether a specific include is useless or not, so I have to know if there is a reference in the cpp. I wrote this RE that searches for a reference to class ABCZ, but unfortunately it isn't working as I expected:
grep -E '^[^(/*)(//)].*[^a-zA-Z]ABCZ[]*[*(<:;,{& ]'
Here ^[^(/*)(//)] is meant to avoid matching comments at the beginning of the line ( // or /* ), and .* allows any characters to follow.
[code]....
Well, I can get patterns like this:
class Test: public ABCZ{
class Test: public ABCZ {
class Test : public ABCZ<T>
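Note that [^(/*)(//)] is a single bracket expression excluding the characters ( ) / *, not a negation of the strings // and /*. A hedged alternative (file.cpp is a placeholder name) is to drop comment lines first, then match the identifier on word boundaries:
Code:
grep -vE '^[[:space:]]*(//|/\*)' file.cpp | grep -E '\<ABCZ\>'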
I am trying to do a find/grep/wc command to find matching files, print the filename and then the word count of a specific pattern per file. Here is my best (non-working) attempt so far:
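One working combination, assuming the goal is each matching filename followed by its per-file count (*.log and PATTERN are placeholder names):
Code:
find . -name '*.log' -exec grep -c 'PATTERN' {} +     # filename:count of matching lines
# or, counting every occurrence rather than matching lines:
find . -name '*.log' -exec sh -c 'printf "%s: " "$1"; grep -o "PATTERN" "$1" | wc -l' _ {} \;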
I have a server hosting 100+ websites. I need to quickly identify which websites are configured with a database. There are way too many to manually check every website for a PHP file with a database name. So, I created a list of all databases from MySQL and put them in a text file. I then exported the text file to a shell variable and used it in a for loop.
bash variable
Code:
DBLIST=`cat dblist.txt`
Example of $DBLIST:
Code:
db1 db_testing2 database_clientname production words4cheap
for loop:
Code:
for db in $DBLIST; do find . -type f -iname "*.php" -exec grep -i "$db" '{}' \; -print; done
Note: my find statement starts searching at ., which is the directory that contains all of my websites and their data; each website is set up in a subdirectory identified by its domain name.
Example: I'm in /var/www. Beneath /var/www are a list of directories:
[URL]
However, this is taking too long (it's been running most of the day) and I was wondering if there wasn't an easier way to accomplish what I'm trying to achieve?
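For what it's worth, the slowness likely comes from scanning every PHP file once per database name. grep can read all the names at once and scan each file only a single time (assuming dblist.txt lists one name per line):
Code:
grep -rlFf dblist.txt --include='*.php' .
Here -f reads one pattern per line from dblist.txt, -F treats them as literal strings, and -l prints each matching file once.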
I am looking for all the files that contain the text string 'moo.sql'. I ran the following:
find . -name '*.php' | grep -lir 'moo.sql' *
Unfortunately it seems to return non-php files in addition to php files. I thought the find portion of this would filter the file names so grep would only search php files.
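The likely culprit: because of the trailing * and the -r switch, grep ignores the filenames arriving on stdin and recursively searches everything in the current directory itself. One form that actually restricts the search to the PHP files found by find:
Code:
find . -name '*.php' -exec grep -l 'moo\.sql' {} +
(The dot is escaped so it matches literally rather than as "any character".)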
For searching a file or directory I normally use the grep command. Can you kindly explain the difference between the grep and find commands? I have used both, but what is the difference between them? Are they the same, or is grep newer than find?
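In short, find matches files by their names and attributes, while grep matches text inside files. A pair of examples:
Code:
find /etc -name 'hosts'        # files whose *name* matches
grep -r 'localhost' /etc       # lines *inside* files that match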
I'm trying to use grep to find the words in the dictionary that contain the letters "th" and the letter m.
I tried grep 'th m*.' Desktop/Dictionary/words (that's where the dictionary word file is located).
grep 'th' Desktop/Dictionary/words works, but only for the words with "th". I have no idea what expression to use to combine it with "m".
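Since the requirement is an AND of two patterns, one simple approach is to chain two greps:
Code:
grep 'th' Desktop/Dictionary/words | grep 'm'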
I want to find files containing the "$" char (ASCII 0x24). 'grep -irl $ *' would output the names of every file in path *, of course, because $ means end of line (EOL). So giving grep the string "$" won't do. So I tried 'grep -irl \$ *'. But this doesn't work either, and I do not understand why. Am I not escaping the dollar sign? grep should interpret it literally. Neither does 'grep -irl "\$" *' work. Fortunately, there's LQ, besides grep's man page.
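One form that avoids the escaping question entirely is grep's -F switch, which treats the pattern as a fixed string instead of a regular expression:
Code:
grep -rlF '$' .
Within single quotes the shell leaves the $ alone, and -F stops grep from reading it as an end-of-line anchor.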
Just after the booting process is finished, running ps I get this:
Code:
As you can see, there are a lot of names beginning with the letter 'k'. Are these processes needed for the GUI to be fully functional when it is run (X Window System + WM + Xfce4)? And can I set up the system so that they are never run?
The reason for this question is that I am testing some programs and need the CPU to have as little time stolen as possible.
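For what it's worth, names like these are usually kernel threads rather than GUI processes; on Linux they appear as children of kthreadd (PID 2) and consume essentially no CPU when idle. A quick way to list them:
Code:
ps --ppid 2 -o pid,comm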
I am new to Linux as well as to awk, grep, and sed. I need a find-and-replace one-liner or script that loops through the input file (file1), finds each entry in file2, and adds "!" in front of the found string.
Example:
input file: file1
g+h=o+p
a+b=c+d
file2 (file that needs to be searched)
a+b=c+d1e105
x+y=z+s5e105
g+h=o+pabcdefg
t+r=w+qxvyderf
Output file (file3 should look like this)
!a+b=c+d1e105
x+y=z+s5e105
!g+h=o+pabcdefg
t+r=w+qxvyderf
I have tried many awk and sed methods of find and replace, but they did not work the way I wanted, mainly due to my lack of experience with awk and sed. The program should loop through file1, find each pattern in file2, and write the output to file3: first for the first set (g+h=o+p), then repeating the same process for set 2 (a+b=c+d).
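A hedged awk sketch: treat each line of file1 as a literal prefix (index() avoids regex surprises from the + and = characters) and mark the lines of file2 that start with it:
Code:
awk 'NR == FNR { pats[$0]; next }
     { for (p in pats) if (index($0, p) == 1) { $0 = "!" $0; break }
       print }' file1 file2 > file3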
I am busy with a little bash script, but now I have a problem. I have a file on the server whose text is different every time. Somewhere in this text is the following line:
PHP Code:
<BR><DIV CLASS='itemTotalsTitle'>2 Matching Service Entries Displayed</DIV>
I want to make a bash script that replaces this line. When it says "0 Matching Service Entries Displayed", replace it with other text like: "There are no known problems at this moment." If there is a number other than "0", replace this line with something like: "2 problems have been found at this moment, we are busy fixing this problem."
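A minimal sketch, assuming the page is saved as status.html (a hypothetical name): pull the count out with grep -o, then rewrite the whole line with sed:
Code:
count=$(grep -o '[0-9]\+ Matching Service Entries Displayed' status.html | grep -o '^[0-9]\+')
if [ "$count" = "0" ]; then
    sed -i 's/.*Matching Service Entries Displayed.*/There are no known problems at this moment./' status.html
else
    sed -i "s/.*Matching Service Entries Displayed.*/$count problems have been found at this moment, we are busy fixing this problem./" status.html
fi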
When I use the find command, I almost always need to search the local drives. But I almost always have super large network shares mounted, and these are included in the search. Is there an easy way to exclude those in the find command, grep, and other similar commands? Example:
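Two common ways to exclude them, assuming the shares are mounted under /mnt (a hypothetical mount point): find's -xdev keeps it from crossing filesystem boundaries, and -prune skips a directory outright; GNU grep has --exclude-dir:
Code:
find / -xdev -name '*.conf'                          # stay on the starting filesystem
find / -path /mnt -prune -o -name '*.conf' -print    # skip /mnt explicitly
grep -r --exclude-dir=mnt 'pattern' /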
I have some big log files that contain errors printed by an app. They are relevant most of the time, but many of them are similar. So I figured I could check what happened in a given time interval with a find.
I'm using this one:
Code:
And I get output similar to this one.
Code:
Is there a way to condense the output lines to get only one or two, indicating the first and last occurrence of a block? Or do I need to write a program to do so?
Because right now I get thousands of similar lines, and when I'm scrolling through them I sometimes miss relevant information that I would otherwise have noticed if it weren't all so spammy.
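A hedged awk sketch, assuming each log line starts with a timestamp in the first two fields and the message follows (app.log is a placeholder): it prints one line per distinct message with its first and last timestamps.
Code:
awk '{ ts = $1 " " $2; msg = substr($0, index($0, $3))
       if (!(msg in first)) first[msg] = ts
       last[msg] = ts; count[msg]++ }
     END { for (m in first) print first[m], "->", last[m], "(" count[m] "x)", m }' app.log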
If I click on Places, instead of being able to look at my pictures/music/home folder etc., Eye of GNOME loads up and says it is unable to find any images. How have I done this? Is it an Ubuntu bug? Any tips on how I can resolve this? BTW, I can still access these folders if I go in through Shotwell, so the folders themselves are working.
I have a folder with over 70,000 images (within it is a complex hierarchy of subfolders). Of these 70,000 images I assume that only ~10,000 are unique; the rest are copies that have been resized. I would like to somehow delete all of the resized copies, keeping only the original image.
Is there a way to use imagemagick (or any other application) to scan this folder recursively and determine which files may be (resized) copies?
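One rough approach (a sketch under strong assumptions, not a polished tool; dedicated utilities such as findimagedupes do this far more robustly): shrink every image to a tiny normalized thumbnail with ImageMagick so that exact resizes of the same picture produce the same fingerprint, then look for duplicate fingerprints. Real-world resizes with compression noise may need a fuzzier comparison.
Code:
find . -type f \( -iname '*.jpg' -o -iname '*.png' \) | while read -r f; do
    sig=$(convert "$f" -resize '8x8!' -colorspace Gray -depth 4 txt:- | md5sum | cut -d' ' -f1)
    printf '%s %s\n' "$sig" "$f"
done | sort > fingerprints.txt
cut -d' ' -f1 fingerprints.txt | uniq -d    # fingerprints seen more than once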
I am trying to monitor how long an LDAP search takes, and maybe notify or something when a search takes longer than, say, 10 seconds.
Code:
tail -n 1000 /var/log/ldap.log
for SRCH in $( cat monitorldap.log | grep 'SRCH' ); do
echo search string is
echo $SRCH
[Code]....
OK, so to start off with, it doesn't appear to get the whole line, just a piece ("Aug"). How can I get the whole line into a variable so I can then cut it up into the pieces I need?
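The for loop splits its input on whitespace, so each word becomes a separate $SRCH. Reading with a while loop keeps whole lines intact:
Code:
grep 'SRCH' monitorldap.log | while read -r line; do
    echo "search line is: $line"
done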
I'm using Zabbix, through which I can send bash commands to the agent. This one-liner gives me all the interfaces with their IPv4 addresses. I have a second expression which returns a checksum, so I can detect a difference whenever someone deletes/adds/changes an IPv4 interface. This is the output on my Ubuntu server:
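A sketch of one such one-liner, assuming the iproute2 ip tool is available:
Code:
ip -4 -o addr show | awk '{ print $2, $4 }'             # interface and IPv4 address
ip -4 -o addr show | awk '{ print $2, $4 }' | md5sum    # checksum for change detection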
1) I need to search a field value to check for an exact 0. If the number is 0, it should throw an error.
The line to be searched looks like as below. "Output Rows [1], Affected Rows [1], Applied Rows [1], Rejected Rows [0]"
Here I have to check whether the affected rows value is 0. But the code below picks up other values as well (like 10, 20, etc.). How do we write it to get an exact match for 0?
Code:
affected=`echo ${line} | cut -f6 -d" "`
affectedcount=`echo ${affected} | grep 0`
2) Also, I need to check whether the rejected rows > 0
Code:
rejected=`echo ${line} | cut -f12 -d" "`
rejectedcount=`echo ${rejected} | grep [1-9]`
3) Can we combine these two statements in a better way to get the desired results?
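A sketch that extracts the bare numbers with sed instead of cut + grep, so 0 is matched exactly and both checks collapse into one test:
Code:
affected=$(echo "$line" | sed 's/.*Affected Rows \[\([0-9]*\)\].*/\1/')
rejected=$(echo "$line" | sed 's/.*Rejected Rows \[\([0-9]*\)\].*/\1/')
if [ "$affected" -eq 0 ] || [ "$rejected" -gt 0 ]; then
    echo "ERROR: affected=$affected, rejected=$rejected"
fi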
I want to see if all the records in the file are present in the contents of the files of a particular directory.
Basically I want to say if grep doesn't return anything, then report.
For example, in the /tmp dir I have 4 files, and the last 2 values (787862348 and 766428634) are present in the files of the /tmp dir, but the first one (979798707) is not. I want to echo that into a reporting file.
something like:
while read line
do
    # if ! grep -rl $line /tmp
    echo $line >> are_not_present
done < "myFile"
How do I achieve "if ! grep -rl $line /tmp"? That is, if the line is found by grep, then grep will print the output, but if grep doesn't find it, it will print nothing. How can I check whether grep found nothing (i.e. printed nothing)?
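grep's exit status already encodes this: 0 when a match is found, 1 when not, so there is no need to inspect what it prints. A minimal sketch:
Code:
while read -r line; do
    if ! grep -rqF "$line" /tmp; then
        echo "$line" >> are_not_present
    fi
done < myFile
-q suppresses the output and just sets the exit status; -F treats each record as a literal string.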
I'm just starting out with bash scripting (yesterday, really). I want to add a file to each user's home directory, pretty simple really, and send it out via our Apple Remote Desktop system to our Macs. Here is my script:
Code:
#!/bin/bash
for i in $(ls -d /Users/*)
do
    if [ -e $i/.tcshrc ]
    then
        echo "$i/.tcshrc exists!"
    else
        echo "$i/.tcshrc does not exist"
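A hedged completion of the idea, assuming the file to distribute sits at /path/to/tcshrc (a hypothetical source path): glob directly instead of parsing ls, quote the paths, and copy only when the file is missing:
Code:
#!/bin/bash
for i in /Users/*; do
    if [ -e "$i/.tcshrc" ]; then
        echo "$i/.tcshrc exists!"
    else
        echo "$i/.tcshrc does not exist"
        cp /path/to/tcshrc "$i/.tcshrc"          # hypothetical source path
        chown "$(basename "$i")" "$i/.tcshrc"    # give it to the home's owner
    fi
done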