I need to count the files in a directory that were updated yesterday.
ls -lth | grep -i 'Jul 7' | wc -l
The directory holds the last 15 days of files, 2067476 in total. Is it efficient to count the files using Perl? I have developed the following Perl script making use of system().
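For what it's worth, parsing ls output is fragile (the timestamp column changes format with file age and locale), and at two million files a Perl wrapper around system() only adds overhead. A minimal sketch with GNU find, assuming the directory is /path/to/dir:

#!/bin/bash
# Count files modified yesterday. -daystart anchors -mtime to midnight,
# so -mtime 1 means "modified between 00:00 yesterday and 00:00 today".
find /path/to/dir -maxdepth 1 -type f -daystart -mtime 1 -printf '.' | wc -c

Printing one dot per file and counting characters avoids miscounting filenames that happen to contain newlines.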
I am new to Perl scripting and wrote a Perl script to read the directories and files, count the number of files in each directory, and generate a log file. The problem is that it is not printing anything to the log file. I am copying the script below.
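The usual Perl suspect for an empty log is an open() whose failure is never checked (open my $fh, '>', $logfile or die $!;). As a cross-check, the same per-directory report can be produced from the shell; a sketch, with hypothetical paths:

#!/bin/bash
# Walk a tree and log how many plain files sit directly in each directory.
root=/path/to/root        # hypothetical tree root
log=counts.log            # hypothetical log file
find "$root" -type d | while read -r dir; do
    n=$(find "$dir" -maxdepth 1 -type f | wc -l)
    printf '%s: %d files\n' "$dir" "$n"
done > "$log"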
I have log files that should be parsed and then deleted by a script on a regular basis. Sometimes things don't work for a variety of reasons and the log files sit and sit and are never dealt with. What I need is a small script that can give me the files older than X days and a count of those files.
What I have so far helps me take care of things manually, but I need a little automation in my life. Here is what I have: I can count all the files in the necessary directories recursively with ls -laR | wc -l, and I can find all the files older than 10 days that haven't been deleted yet with find /home/mike/logs -type f -mtime +10. But how do I put both of them into a script that will just give me the end number of both?
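A minimal sketch that reports both numbers, assuming the same directory and a 10-day cutoff:

#!/bin/bash
# Report the total file count and the count of files older than 10 days.
dir=/home/mike/logs
total=$(find "$dir" -type f | wc -l)
old=$(find "$dir" -type f -mtime +10 | wc -l)
echo "total files:        $total"
echo "older than 10 days: $old"

find is used for both counts because ls -laR | wc -l also counts directories, the . and .. entries, and the per-directory "total" lines, so its number is inflated.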
#!/usr/bin/perl
use DBI;
my ($db, $user, $pw) = ('dbname', '****', '***********');
my $dbh = DBI->connect("DBI:mysql:$db", $user, $pw)
    or die "Cannot connect to $db: $DBI::errstr";
[code].....
The error message is:
[Wed Feb 24 13:03:27 2010] myscript.cgi: DBD::mysql::st execute failed: Column count doesn't match value count at row 1 at myscript.cgi.
[Wed Feb 24 13:03:27 2010] myscript.cgi: DBI::db=HASH(0x8a30c60)->errstr
(MySQL raises "Column count doesn't match value count" when an INSERT names a different number of columns than the values or placeholders it supplies, so the fix is to make the two lists the same length.)
I'm working on a bash script that will go through a directory, find the sub-directories that have been created since the last time the script ran, count the results, and output that integer (most likely '1' or less per instance run) to a file. Given the circumstances, my previous (and very limited) experience with bash is not sufficient for me to pull this off. Since it probably has bearing: my mail server stores files that it flags as viruses in a folder, creating a sub-directory for each virus that it quarantines. I want to count those subdirectories and graph them with MRTG, hence the script. I'm going to post what I've got so far and the purpose of it, because I'm told I have a very odd and efficient way of doing scripting.
[Code]...
But then it dawned on me that it wouldn't work, because I would have to skip the directories that have already been counted and count only the ones that have not been. Given that the purpose of this is to generate a graph about every 5 minutes, using find won't work because, to my knowledge, it only finds things based on whole-day values, and I need it almost down to the minute.
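For what it's worth, find is not limited to whole days: -mmin works in minutes, and -newer compares against a reference file's timestamp, which matches the "since the last run" requirement exactly. A sketch using a stamp file, with hypothetical paths:

#!/bin/bash
# Count quarantine sub-directories created since the previous run,
# append the number for MRTG to pick up, then reset the stamp.
quarantine=/var/quarantine          # hypothetical quarantine folder
stamp=/var/tmp/quarantine.stamp     # holds the time of the previous run
out=/var/tmp/quarantine.count
[ -f "$stamp" ] || touch -d '5 minutes ago' "$stamp"
find "$quarantine" -mindepth 1 -maxdepth 1 -type d -newer "$stamp" | wc -l >> "$out"
touch "$stamp"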
I am looking for some suggestions, if possible, regarding processing files with a Perl script. The scenario: I have a location where new files are always being added, and I need to process these files for some validation. I wrote a Perl script to do this, and I thought I could rename the files once they are processed so that I don't process the same files again. But now I can't rename the files due to some restrictions. My second thought was to process them based on the date stamp, but as my Perl script is automated and runs every hour to process the files, I can't go by date stamp.
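One technique that needs neither renaming nor timestamps is to keep a list of names already handled and diff it against the current directory listing on each run. A sketch in shell, with hypothetical paths and a stand-in validation command:

#!/bin/bash
# Process only files not seen on a previous run, tracked in a "seen" list.
dir=/path/to/incoming            # hypothetical drop directory
seen=/var/tmp/processed.list     # names handled on earlier runs
touch "$seen"
ls "$dir" | sort > /tmp/current.list
comm -23 /tmp/current.list <(sort "$seen") | while read -r f; do
    ./validate "$dir/$f"         # stand-in for the real validation step
    echo "$f" >> "$seen"
done

comm -23 prints the lines unique to the first (sorted) input, i.e. the filenames with no entry in the seen list.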
I want to compare the following two tab-delimited .txt files (both subsets of the original files) by comparing Columns 3 and 4 simultaneously. It is easy to compare C3 because both C3s are just numbers, but how do I compare C4s? Basically, in File1, "G,G" = G in File2, "C,C" = C in File2, "A,A" = A in File2, and "T,T" = T in File2. An A/T in Column 4 of File2 equals "A,T" or "T,A" in Column 4 of File1, a C/T equals "C,T" or "T,C", and so on.
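A sketch in awk, assuming the files are named file1.txt and file2.txt (hypothetical): normalize every column-4 spelling to a sorted allele pair, so that "T,A", "A,T", and "A/T" all compare equal and "G,G" collapses to the same key as "G", then match on column 3 plus the normalized column 4.

#!/bin/bash
awk -F'\t' '
function norm(g,    a, n, t) {
    n = split(g, a, /[,\/]/)             # split on "," or "/"
    if (n == 1) return a[1] a[1]         # single allele: "G" -> "GG"
    if (a[1] > a[2]) { t = a[1]; a[1] = a[2]; a[2] = t }
    return a[1] a[2]                     # sorted pair: "T,A" -> "AT"
}
NR == FNR { seen[$3 SUBSEP norm($4)] = 1; next }   # first pass: file1
($3 SUBSEP norm($4)) in seen                       # second pass: print file2 matches
' file1.txt file2.txt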
I need to write a script that can compare multiple input files and output a file. The basic idea is:
1: All my input files are in the same format.
2: I want to find the in-common lines (in-common 1) from some of my input files (e.g., inputs 1, 2, and 3), and the in-common lines (in-common 2) from the rest of my input files (e.g., inputs 4, 5, 6, and 7). Then, compare in-common 1 and in-common 2 and remove any overlap from in-common 1.
3: Output the remaining in-common 1 file after removing any of its overlap with in-common 2.
I know how to write this script by putting all the filenames in one script and comparing them. But the thing is, if I have more input files, say 100, it might not be efficient to write all the filenames into one script and compare them.
I am wondering if there is any way to do such as:
1: Put all the input filenames in a text file (file1).
2: Write a script.
3: Every time I run this script, it reads file1 directly, no matter how many input files I have, and gives an output.
I want this because I will have more and more input files, and I don't want to add lines to the script just to read the new input files and compare them with the previous ones. So I guess this is about standardizing the script and making it easy to use in the future, no matter how many input files I have.
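A sketch of that driver, assuming two hypothetical list files, group1.list and group2.list, each naming one input file per line (adding a 101st input then means adding one line to a list, not editing the script):

#!/bin/bash
# Print the lines common to every file named (one per line) in $1.
common() {
    local acc tmp f first=1
    acc=$(mktemp); tmp=$(mktemp)
    while read -r f; do
        if [ "$first" = 1 ]; then
            sort -u "$f" > "$acc"; first=0
        else
            comm -12 "$acc" <(sort -u "$f") > "$tmp"   # running intersection
            mv "$tmp" "$acc"
        fi
    done < "$1"
    cat "$acc"; rm -f "$acc" "$tmp"
}
common group1.list > incommon1
common group2.list > incommon2
comm -23 incommon1 incommon2    # in-common-1 minus its overlap with in-common-2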
What I've got: from a crontab, run a script (I understand that part). This script needs to count the number of files in /outgoing/, take 30 less that number, and move that many files over from /readycalls/. I need to keep the Asterisk outgoing queue full of .call files without having too many in there at any given time.
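A minimal sketch, assuming the target queue depth is 30 and both directories hold .call files:

#!/bin/bash
# Top the Asterisk outgoing queue back up to 30 .call files.
out=/outgoing
ready=/readycalls
have=$(find "$out" -maxdepth 1 -type f -name '*.call' | wc -l)
need=$((30 - have))
if [ "$need" -gt 0 ]; then
    find "$ready" -maxdepth 1 -type f -name '*.call' | head -n "$need" |
    while read -r f; do
        mv "$f" "$out/"
    done
fi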
If I pass in /home, I would like for it to return 4 files. Or, bonus points if it returns 4 files, 2 directories. Basically, I want the equivalent of right-clicking a folder on Windows and selecting properties and seeing how many files/folders are contained in that folder.
How can I most easily do this? I have a solution involving a Python script I wrote, but why isn't this as easy as running ls | wc or similar?
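ls | wc gets close, but it counts every entry, lumps files and directories together, and miscounts names containing newlines. A sketch that reports the two numbers separately for the directory passed as the first argument:

#!/bin/bash
# Count the files and directories directly inside a directory (non-recursive).
dir=${1:-.}
files=$(find "$dir" -mindepth 1 -maxdepth 1 -type f -printf '.' | wc -c)
dirs=$(find "$dir" -mindepth 1 -maxdepth 1 -type d -printf '.' | wc -c)
echo "$files files, $dirs directories"

Run as ./count.sh /home to get output like "4 files, 2 directories".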
I have written code on Linux that searches a long dictionary. I have used the hsearch() function, but the problem is that it does not work. This is my code:
//Search the count values from the dictionary.
I open each DIC file, get the words from it, search the hash table, and extract the key. The problem with the above code is that it builds the hash table fine but returns NULL when searching. It should never return NULL, because every word from the DIC files is in the dictionary, and I cannot figure out why. (One classic cause with hsearch(): it stores the key pointer rather than a copy, so keys ENTERed from a reused buffer all end up pointing at the same string; each key must be duplicated, e.g. with strdup(), before insertion.)
And I'm trying to count the number of slashes in each line. I figured (with my limited knowledge of bash) that the best thing to use would be sed, so I ran this to strip everything that is not a "/": sed '!s////g' file, eventually adding " | wc -m" to it. And I got the same result as if I had run cat: no modification at all.
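The forum has almost certainly eaten some escaping in that sed expression, so it is hard to say what sed actually received; either way, per-line character counting is simpler in awk. Two sketches:

# Count the "/" characters on each line: gsub() returns how many it replaced.
awk '{ print gsub(/\//, "&") }' file

# Or: delete everything that is not a slash, then measure what is left.
sed 's|[^/]||g' file | awk '{ print length }'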
Unfortunately, the second grep is greedy, swallowing everything up to the last </ul> close tag. (The desired result is 2.) Speed is an issue, as I will be searching through 350,000 files.
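The original pattern isn't shown, but the usual fix for a greedy .* in grep is either a non-greedy quantifier (GNU grep with -P) or making the match structural by splitting tags onto their own lines first. Two sketches, the first assuming each <ul>...</ul> block sits on one line:

# Non-greedy: stop each match at the first </ul> rather than the last.
grep -oP '<ul>.*?</ul>' file | wc -l

# Tag-per-line: works even when a block spans several lines (GNU sed).
sed 's/</\n</g' file | grep -c '^<ul>'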
I am trying to count the number of characters in a word, but it comes out one more than it actually should be.
Code:
I can work around it by subtracting 1 from the output (6 - 1 = 5 in this case). But I am just curious to know why the character count comes out as 6 and not 5.
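The code isn't shown, but the classic cause is the trailing newline: echo appends one, and wc counts it. A quick demonstration:

echo "hello" | wc -m          # 6: five letters plus the newline echo appends
echo -n "hello" | wc -m       # 5: -n suppresses the trailing newline
printf '%s' "hello" | wc -m   # 5: printf adds nothing you do not ask for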
What I want to do is, from a file having blocks like
<event>
8 3 0.2685416E-02
2 -1 0
21 -1 0
[code]...
The first line after the "<event>" is its process id, so I would like to have at the end a summary of how many "event" blocks I have of each type, i.e. how many
6 1 0.2685416E-02
or how many
7 2 0.2685416E-02
etc etc
I do not know in advance how many different kinds of blocks I will have, so it has to be a bit smart: scan the file and make a new "summary" entry for each unique type. I was using something like
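A sketch in awk, assuming the type is identified by the first three fields of the line that follows each <event> tag and the input file is events.dat (hypothetical name):

awk '
prev == "<event>" { count[$1 " " $2 " " $3]++ }   # tally the line after <event>
{ prev = $0 }
END { for (k in count) print count[k], "blocks of type:", k }
' events.dat

Because the tallies live in an associative array, every distinct type gets its own summary line without being known in advance.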
I need a script that will search some logs and extract the IP hits from one country only, let's say the UK. I guess I need to use GeoIP or some such database. I just need a very simple bash, perl, or php script that will do this job: search through the (Apache) logs and then give me the number of hits found from the UK.
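A sketch using geoiplookup from the legacy geoip-bin package, assuming a combined-format Apache log at /var/log/apache2/access.log (newer setups would use the GeoIP2/mmdblookup tooling instead):

#!/bin/bash
# Count Apache hits whose client IP geolocates to the United Kingdom.
log=/var/log/apache2/access.log
awk '{ print $1 }' "$log" | sort | uniq -c |
while read -r n ip; do
    # geoiplookup prints e.g. "GeoIP Country Edition: GB, United Kingdom"
    geoiplookup "$ip" | grep -q ': GB,' && echo "$n"
done | awk '{ total += $1 } END { print total + 0, "hits from the UK" }'

Looking up each unique address once, rather than every log line, keeps this workable on large logs.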
This also has to show the line count; I can get it to show the files but not the line count. What is the single command used to identify only the matching count of all lines within files under the /etc directory that contain the word "HOST"? List only the files with matches and suppress any error messages.
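A sketch of the usual answer: grep -c prints a matching-line count per file, and filtering out the :0 entries leaves only the files with matches; 2>/dev/null suppresses the permission errors /etc tends to produce.

grep -rc HOST /etc 2>/dev/null | grep -v ':0$'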