Programming :: Bash Script To Find Newest Files And Count Them?
Feb 25, 2010
I'm working on a bash script that will go through a directory, find the sub-directories that have been created since the last time the script ran, count the results, and output that integer (most likely '1' or less per run) to a file. Given the circumstances, my previous (and very limited) experience with bash is not sufficient to pull this off. Since it probably has bearing: my mail server stores files that it flags as viruses in a folder, creating a sub-directory for each virus it quarantines. I want to count those sub-directories and graph them with MRTG, hence the script. I'm going to post what I've got so far and its purpose, because I'm told I have a very odd and efficient way of scripting.
[Code]...
But then it dawned on me that it wouldn't work, because I would have to skip the directories that have already been counted and count only the ones that have not been. Given that the purpose of this is to generate a graph about every 5 minutes, using find won't work because, to my knowledge, it only matches on whole-day values; I need it almost down to the minute.
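For what it's worth, find is not limited to whole days: GNU find's -mmin tests age in minutes, and -newer compares against another file's timestamp, which also sidesteps the already-counted problem. A minimal sketch along those lines (every path here is a placeholder, not from the original post):
Code:
#!/bin/bash
# Count sub-directories created since the last run by comparing against
# a timestamp file, then append the count for MRTG to pick up.
QUARANTINE_DIR=/var/quarantine     # placeholder path
STAMP=/var/tmp/quarantine.stamp    # records the time of the last run
COUNT_LOG=/var/tmp/quarantine.count

if [ -e "$STAMP" ]; then
    # -newer matches entries modified more recently than the stamp file
    count=$(find "$QUARANTINE_DIR" -mindepth 1 -type d -newer "$STAMP" | wc -l)
else
    count=$(find "$QUARANTINE_DIR" -mindepth 1 -type d | wc -l)
fi
touch "$STAMP"              # reset the reference point for the next run
echo "$count" >> "$COUNT_LOG"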
I have log files that should be parsed and then deleted by a script on a regular basis. Sometimes things don't work for a variety of reasons and the log files sit and sit and are never dealt with. What I need is a small script that can give me the files older than X days and a count of those files.
What I have so far helps me take care of things manually, but I need a little automation in my life. Here is what I have: I can count all the files in the necessary directories recursively with ls -laR | wc -l, and I can find all the files older than 10 days that haven't been deleted yet with find /home/mike/logs -type f -mtime +10. But how do I put both of them into a script that will just give me the end number of both?
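One way to get both numbers from a single script, reusing the path from the command above (a sketch):
Code:
#!/bin/bash
LOGDIR=/home/mike/logs
old=$(find "$LOGDIR" -type f -mtime +10)   # files older than 10 days
total=$(find "$LOGDIR" -type f | wc -l)    # every file, recursively
echo "total files: $total"
echo "files older than 10 days: $(printf '%s\n' "$old" | grep -c .)"
grep -c . counts non-empty lines, so an empty find result correctly reports 0 rather than 1.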
I am trying to combine find, grep and wc to locate matching files, then print each filename along with the count of a specific pattern in that file. My best attempts so far haven't worked.
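A sketch of one way to do it (the *.log glob and PATTERN are placeholders): grep's -c flag reports a per-file match count and -H forces the filename to be printed, covering both requirements at once.
Code:
find . -type f -name '*.log' -exec grep -cH 'PATTERN' {} +
Note that -c counts matching lines, not total occurrences; if a line can match more than once, grep -o 'PATTERN' file | wc -l per file is the safer count.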
I am trying to check whether a nightly backup was successfully copied over, then rename it and send it with curl, but the age check always passes even if the file is older than specified. From the command line it does what it should. Example is here:
Code:
find /backup -type f -mmin +4440 -exec echo "found" {} \;
Nothing is returned (good). Then I change the time
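One common cause of this symptom is testing the command's exit status rather than its output: find exits 0 whether or not it matched anything, so if find ...; then always succeeds. A sketch of an age check that only acts when a sufficiently old file actually exists (the path is from the command above; -print -quit is GNU find):
Code:
#!/bin/bash
# -mmin +4440 = modified more than 4440 minutes (~3 days) ago
old=$(find /backup -type f -mmin +4440 -print -quit)
if [ -n "$old" ]; then
    echo "stale backup found: $old"   # handle the rename/curl here
else
    echo "backup is fresh"
fi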
Unfortunately, the second grep is greedy, swallowing everything up to the last </ul> close tag. (The desired result is 2.) Speed is an issue, as I will be searching through 350,000 files.
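The original pattern didn't survive in this post, but assuming it is something like <ul>.*</ul>, GNU grep's -P switch enables Perl-compatible regexes, where .*? is non-greedy, and -o prints each match on its own line so the matches can be counted:
Code:
grep -oP '<ul>.*?</ul>' file.html | wc -l
Two caveats: -P is a GNU extension, and grep works line by line, so this only counts matches that sit on a single line; tags spanning lines need perl or awk instead.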
I would like to parse an input file that has two columns in each row. I want to see how many lines are duplicated, where I define a duplicate as having the same second field but a different first field.
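A sketch in awk of one way to count second-field values that appear with more than one distinct first field (the input filename is a placeholder):
Code:
# seen[] dedups (field2,field1) pairs; count[] then holds, per second
# field, the number of distinct first fields; dups counts the repeats
awk '{ if (!seen[$2 FS $1]++) count[$2]++ }
     END { for (k in count) if (count[k] > 1) dups++; print dups+0 }' input.txt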
I want to remove duplicate or similar lines from multiple files. I.e., if I have four files, file1.txt through file4.txt, I would like to find and remove lines repeated across all of these files, keeping only one copy of each. I only know that uniq can be used to remove duplicate lines from a sorted file.
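A sketch using awk, which, unlike uniq, needs no sorting: the !seen[$0]++ idiom keeps only the first occurrence of each exact line across all input files, writing cleaned copies alongside the originals (the .dedup suffix is a placeholder choice):
Code:
awk '!seen[$0]++ { print > (FILENAME ".dedup") }' file1.txt file2.txt file3.txt file4.txt
This only catches byte-identical lines; "similar" lines that differ in whitespace or case would need normalizing first.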
I need to count files in a dir which were updated yesterday.
ls -lth | grep -i 'Jul 7' | wc -l
The directory holds the last 15 days of files, a total count of 2067476. Is it more efficient to count the files using perl? I have developed the following perl script making use of system().
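As an aside, parsing ls output by date string is fragile (the format changes with file age and locale). GNU find can do the filtering directly; a sketch, with the directory as a placeholder:
Code:
# -daystart anchors age tests to midnight, so -mtime 1 means
# "modified yesterday" rather than "24-48 hours ago" (GNU find)
find /path/to/dir -maxdepth 1 -type f -daystart -mtime 1 -printf '.' | wc -c
Counting printed dots instead of lines keeps filenames containing newlines from skewing the count.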
#!/usr/bin/perl
use DBI;
my ($db, $user, $pw) = ('dbname', '****', '***********');
my $dbh = DBI->connect("DBI:mysql:$db", $user, $pw)
    or die "Cannot connect to $db: $DBI::errstr";
[code].....
The error message is:
[Wed Feb 24 13:03:27 2010] myscript.cgi: DBD::mysql::st execute failed: Column count doesn't match value count at row 1 at myscript.cgi. [Wed Feb 24 13:03:27 2010] myscript.cgi: DBI::db=HASH(0x8a30c60)->errstr
(This MySQL error means the INSERT statement lists a different number of columns than the number of values it supplies.)
I am new to perl scripting and wrote a perl script to read the directories and files, count the number of files in each directory, and generate a log file. The problem is that it is not printing anything to the log file. I am copying the script below.
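For comparison, the same job is short in shell; a sketch (the root and log paths are placeholders):
Code:
#!/bin/bash
# For each sub-directory of ROOT, log "<dir> <file count>".
ROOT=/path/to/root
for d in "$ROOT"/*/; do
    n=$(find "$d" -maxdepth 1 -type f | wc -l)
    printf '%s %d\n' "$d" "$n"
done >> /tmp/filecount.log
In the perl version, the usual culprits for an empty log are opening the filehandle in read mode or unflushed buffering; closing the handle (or enabling autoflush) before exit flushes pending output.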
Basically, I am trying to locate and copy the newest .json bookmark backup in my .mozilla/firefox/w987sdg9.default/bookmarkbackups directory.
I tried this
Code:
ls -t ~/.mozilla/firefox/b1ahb1ah.default/bookmarkbackups/ | head -1
which does return the newest file, but only the filename itself, not the full path. I found readlink, but I haven't gotten it to output a full path which I can then feed to cp. So it seems to me that find might work well here; I know how to find based on absolute dates, but not relative ones.
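One way around needing readlink at all: give ls full paths to start with by letting the shell expand a glob, so head -1 already returns something cp can use. A sketch (the destination filename is a placeholder):
Code:
dir=~/.mozilla/firefox/b1ahb1ah.default/bookmarkbackups
newest=$(ls -t "$dir"/*.json | head -1)   # -t sorts newest first
cp "$newest" ~/bookmarks-latest.json
This relies on the backup filenames not containing newlines, which is safe for Firefox's generated names.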
I'm looking for assistance to create a script to find and replace files. Probably best if I give you the background. Our server uses a specific application which stores user data; each user data account (a folder on the server) has a file called 'Profile.xml'. This file gets updated and replaced about every 30 minutes, rotated in a fashion similar to logrotate, i.e. Profile.xml.1, Profile.xml.2 -> .10.
What we experience is that if the application crashes unexpectedly while it is doing its profile-refresh task, we end up with sometimes a few hundred Profile.xml files that are 0 KB (they should be around 4 KB), and our server sees these as corrupted profiles and will not load them. Our fix is to go back through and rename Profile.xml.1 to Profile.xml (or sometimes up to Profile.xml.5 to Profile.xml). We want a script we can manually run to automate this process.
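A sketch of that repair, assuming the tree root below (a placeholder) and that the lowest-numbered non-empty rotation is the one to promote:
Code:
#!/bin/bash
# For every zero-byte Profile.xml, copy the most recent non-empty
# rotated version (Profile.xml.1 .. .10) over it.
find /srv/userdata -type f -name 'Profile.xml' -size 0 | while read -r f; do
    dir=$(dirname "$f")
    for n in 1 2 3 4 5 6 7 8 9 10; do
        if [ -s "$dir/Profile.xml.$n" ]; then   # -s: exists and non-empty
            cp -p "$dir/Profile.xml.$n" "$f"
            echo "restored $f from Profile.xml.$n"
            break
        fi
    done
done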
I need to rename the files found by a loop. I have the following code:
find . -type d -name 'DOC*' | while read -r i; do
    find "$i" -type f -name '*.txt'
done
Basically, I am searching for all .txt files inside any folder whose name starts with DOC. This code is working fine for me; now I need to rename those .txt files to .txtOLD. OS: Ubuntu 10.04, Bash shell.
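A sketch of the rename step, appending OLD to each name inside the same loop:
Code:
find . -type d -name 'DOC*' | while read -r dir; do
    find "$dir" -type f -name '*.txt' | while read -r f; do
        mv "$f" "${f}OLD"     # file.txt -> file.txtOLD
    done
done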
I am currently writing some convenience methods for my terminal in my bash_profile and am not sure whether what I am writing is "the best way". I figure a good way to verify would be to find the source code of more established programs and see how they do it. My question then is: where can I find this code on my Mac? An example: with MacPorts installed, where is the source code that opens the port interactive console when I type nothing but "port" in my shell? (I added Linux to the title even though I am on a Mac because I assume the answer would be the same for both.)
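A sketch of the usual first steps for tracking a command back to its source (port is just the example from the question):
Code:
type port              # is it a binary, alias, shell function, or builtin?
which port             # path to the executable, e.g. /opt/local/bin/port
file "$(which port)"   # a script can simply be read; a binary cannot
If file reports a script (MacPorts' port is a Tcl script, for instance), less "$(which port)" shows the code directly; for compiled programs, the source has to come from the project's own repository.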
What kinds of methods are there to find duplicate files on Linux?
1. How do I find duplicates just by file name? People often copy their files to another directory, and I want to find out whether the same file name exists anywhere else on the box.
2. What about finding duplicate files based on their contents? For example, with picture files from a digital camera, users first save the file under its default name, but when they want to give that picture to others they rename it. I've used an MD5 method for this situation in a python script, but it takes a long time.
I'm asking because I use bash scripts a lot at work and want to test out fdupes at home: does fdupes use a similar MD5 scan to find duplicate files?
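A sketch of both approaches (the search roots are placeholders). For what it's worth, fdupes narrows candidates by file size first and only then compares content, which is why it usually beats a naive hash-everything script:
Code:
# 1. Duplicate names: print basenames that occur more than once (GNU find)
find /home -type f -printf '%f\n' | sort | uniq -d

# 2. Duplicate contents: hash everything, keep groups with equal MD5s;
#    -w32 compares only the 32-char hash, -D prints all group members
find ~/pictures -type f -exec md5sum {} + | sort | uniq -w32 -D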
I had a problem with the find command in bash (which I deem close enough to a programming language; if not, please move this thread :P). I have tried to reduce the command to the core problem: I want the backticks, or $() for that matter, to be evaluated by find's -exec, not by bash. Is that a caveat of find?
Code:
$ find testd -exec echo `basename {}` \;   # confused me
test
test/a
test/b
[code]...
Edit: I found out what's causing this: `basename {}` gets evaluated by bash before find is invoked, returns {}, and `find . -exec echo {} \;` is what actually runs. Now my question is: how do I stop this evaluation from happening early?
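The standard workaround is to hand the substitution to a shell that find spawns per file, so the command substitution runs after find has filled in the path. A sketch:
Code:
# sh -c receives the found path as $1 ("_" fills in $0)
find testd -exec sh -c 'echo "$(basename "$1")"' _ {} \;
For this particular case, GNU find can also print the basename directly with -printf '%f\n', with no subshell at all.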
I am running 9.04 and would like to get the newest version of Songbird. I don't want to do a full upgrade just yet if that is what's needed, so are there other ways of doing so? If no one knows where to find .deb packages, could someone let me know how I could install the .tar.gz format offered on the website?
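For a binary tarball, installation is typically just unpacking it somewhere in your home directory and running the bundled launcher; a sketch with placeholder names (the archive name and launcher path are guesses, not taken from Songbird's site):
Code:
mkdir -p ~/apps
tar -xzf Songbird_x.y.z.tar.gz -C ~/apps   # unpack the download
~/apps/Songbird/songbird                   # launcher name is a guess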
Help me write such a script (bash script). I have a text file named filename.txt. I must check whether this file contains the string "test-string-first"; if so, I must cut from this file the string that follows "keyword-string:" up to the first white-space, and save it to some variable.
For example, the file contains:
Code:
Start 15022011 Eng 12-3-42 SN1232324422 11 test-string-first SN322211 securities HH keyword-string:123456321-net mark (11-22)
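A sketch of the extraction (the desired result for the sample above would be 123456321-net):
Code:
#!/bin/bash
if grep -q 'test-string-first' filename.txt; then
    # grab "keyword-string:" plus the run of non-space characters
    # after it, then drop the label itself
    value=$(grep -o 'keyword-string:[^[:space:]]*' filename.txt \
            | head -1 | cut -d: -f2-)
    echo "$value"   # -> 123456321-net
fi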
This may be a basic bash array/string-operation question, but I couldn't find any direct answer, so here it goes: I have a lot of data sorted into various directories. All directories need the same processing, except for a special group of directories. I have a symbolic link to the script in question in each directory. I want the script to get the name of the current directory, check whether it belongs to the special group, and do specific operations. So I get the name of the directory:
Code:
mm=$(basename "$(pwd)")
(Backticks don't nest, so mm=`basename `pwd`` breaks; the $( ) form nests cleanly.) Now, the group of directories that needs something different done contains these:
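A sketch of the branch itself, with made-up names standing in for the special group (the original list didn't come through):
Code:
#!/bin/bash
mm=$(basename "$(pwd)")          # name of the directory we run in
case "$mm" in
    special1|special2|special3)  # placeholder names for the group
        echo "special processing for $mm"
        ;;
    *)
        echo "default processing for $mm"
        ;;
esac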
I'm relatively experienced with UNIX and Linux, but this has me thrown for quite a loop, and it seemed like such a simple question: how would I go about finding the newest file in a file system? I thought something like:
Code:
ls -ltr `find /usr -type f`
would work, but I seem to be exceeding the argument maximum for ls:
ksh: 0403-029 There is not enough memory available now
I thought something involving xargs might work, but I really suck with that command.
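xargs would only batch the paths, and ls -ltr then sorts within each batch, so the overall newest file can still be missed. If GNU find is available, a sketch that avoids handing paths to ls entirely:
Code:
# %T@ = modification time in seconds since the epoch, %p = path;
# sorting numerically puts the newest file last
find /usr -type f -printf '%T@ %p\n' | sort -n | tail -1
On systems without -printf (the ksh error above suggests AIX), perl's -M file-age test or stat in a loop can supply the timestamp instead.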
The problem I have is that I need to replace a more complex string, like this: Old string: /mnt/stor6-wc2-dfw1/627896/982574/ New string: /mnt/stor8-wc2-dfw1/369587/302589/ There I don't know how to do it, since the / that separates the old from the new string also appears inside the strings I want to replace. Also, I would like to know how to restrict which folder the replacement runs under; for example, I want it to search and replace in all files under the /var/www/mysite/htdocs folder.
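sed lets you pick any delimiter character after the s, so a | sidesteps the slashes, and find restricts the scope to one tree. A sketch (GNU sed's -i edits files in place):
Code:
find /var/www/mysite/htdocs -type f -exec sed -i \
    's|/mnt/stor6-wc2-dfw1/627896/982574/|/mnt/stor8-wc2-dfw1/369587/302589/|g' {} +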