Programming :: Shell Script To Monitor Files Created On A Folder
Aug 27, 2009
Can someone please help me create a script that will monitor file creation in a single folder and send each newly created file to a separate folder? Only the newly created file must be transferred or copied to the other folder; the old ones remain. I urgently need this for a production deployment.
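A minimal sketch using inotify-tools (the inotifywait utility must be installed; both paths are placeholders). It copies each new file as it appears, leaving older files where they are:

Code:
#!/bin/sh
# Watch one folder and copy each newly created file to another.
WATCH_DIR=/path/to/incoming    # hypothetical source folder
DEST_DIR=/path/to/archive      # hypothetical destination folder

inotifywait -m -e create --format '%f' "$WATCH_DIR" |
while read -r name; do
    cp -- "$WATCH_DIR/$name" "$DEST_DIR/"   # only the new file is copied
done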
My program needs to monitor the folder to know which file under it is being opened/created for writing. I add the folder to the watch list using inotify_add_watch; when a file -- say 'AA' -- is created, I get the event through the read API call. But the inotify_event only carries the file name 'AA' and an event mask; these parameters can't tell me how 'AA' was created/opened. So I have to scan the /proc folder to find out how 'AA' was created/opened. I don't think this is an efficient way, especially if lots of files are opened/created in a short time span.
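There is no inotify field that names the writing process, so pairing the event with lsof is one workaround. A hedged shell sketch (note it is inherently racy: a short-lived writer may close the file before lsof runs, which is the same weakness as scanning /proc):

Code:
#!/bin/sh
# Report which process still holds each newly created file open.
inotifywait -m -e create --format '%w%f' /path/to/watched |
while read -r path; do
    lsof -- "$path" 2>/dev/null   # prints COMMAND and PID if still open
done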
I created a little bash script for renaming files in a folder. Every time, I have to put that bash script file (rename.sh) into the folder. Is there any way I can call rename.sh from the terminal without moving rename.sh into the folder? One more question: whenever I run any .sh file, a .sh~ file is automatically created. Is that my programming mistake, or is it expected?
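The usual fix is to keep one copy of the script in a directory that is on your PATH, for example ~/bin:

Code:
mkdir -p ~/bin
cp rename.sh ~/bin/
chmod +x ~/bin/rename.sh
export PATH="$HOME/bin:$PATH"   # add this line to ~/.bashrc to make it permanent
rename.sh                        # now callable from any directory

As for the .sh~ files: those are backup copies left behind by many text editors (gedit and emacs, for example), not something your script creates. They are safe to delete.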
I'm dipping my toes into some bash scripting and was wondering if there is a way to delete a file based not on how old it is, but rather on how many other files are currently in the folder... or something to that effect.
What I'm doing is creating a script to back up a folder nightly. I'd like to keep a maximum of 3 backups. However, in case the script for some reason fails to run one night (computer turned off, possibly), I don't want the deletion condition to be the date.
I know that if I run:
Code: find /path/to/files* -mtime +3 -exec rm {} \;
it will delete everything older than three days. -atime and -ctime don't seem to be what I'm looking for... is there another command I can use to achieve what I'm trying to do?
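A count-based alternative to -mtime: keep the three newest entries and delete the rest, regardless of age. This is a sketch that assumes the backup file names contain no newlines:

Code:
#!/bin/sh
# ls -t sorts newest first; tail -n +4 lists everything from the 4th
# entry on, i.e. the deletion candidates.
cd /path/to/backups || exit 1
ls -t | tail -n +4 | while read -r old; do
    rm -f -- "$old"
done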
I have created a folder on my Red Hat server and shared it via Samba, mapping the shared folder on 5 Windows machines. The problem I am facing: if anyone creates a file in that mapped drive, the other users can't edit that same file, though they can read it. This happens only for files, not folders.
I gave full permission to the folders and subfolders, and in the smb file I gave readable, writable, and browsable permissions. I also disabled SELinux and the firewall.
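A common cause is that newly created files default to a mode without group write, so other users can read but not edit them. A hedged smb.conf sketch (share name, path, and group are placeholders; all five users would need to belong to the group):

Code:
[shared]
   path = /srv/shared
   writable = yes
   browseable = yes
   create mask = 0664        # new files become group-writable
   directory mask = 2775     # setgid keeps group ownership on subfolders
   force group = sambashare  # hypothetical group shared by all users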
I have an Ubuntu server on which a file is dumped every hour, then a new file for the next hour, and the process continues. If a problem stops the creation of files, empty files are created every minute until the process is killed and started again. I need help writing a shell script that checks whether empty files are being created and, if so, kills the process and starts it again. It would be a great help if anyone can assist with this.
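A hedged watchdog sketch, meant to run every few minutes from cron. The dump directory, process name, and restart command are all hypothetical placeholders:

Code:
#!/bin/sh
# If the newest file in the dump directory is empty, restart the producer.
DUMP_DIR=/var/data/dumps
PROC_NAME=dumper

newest=$(ls -t "$DUMP_DIR" | head -n 1)
if [ -n "$newest" ] && [ ! -s "$DUMP_DIR/$newest" ]; then
    pkill -x "$PROC_NAME"        # kill the stuck producer
    /usr/local/bin/dumper &      # hypothetical restart command
fi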
Kernel 2.6.21.5, GNU (Slackware 12.0). Bash 3.1.17.
I want to search an entire subtree of / in the file system for all files with the extension html created on the hard disk. In addition, these have to be the last five created. I think I could split the problem into two parts: (a) forget about the last condition; then this is a job for the find command. (b) Sort the output of find using the date as the key, then use 'head' to print the desired output. But even two such simple steps are enough to justify writing a shell script. And here lies my weakness.
My script-writing knowledge is rudimentary. What's the final purpose? Well, I lately saved four or five LQ pages to disk containing information I consider valuable, but I don't exactly remember where on the disk. So either the problem posed is of a very simple nature or it is not, in the latter case a script being mandatory. One drawback of the algorithm described above is that find may run for a long time. My machine's resources are scarce (RAM and CPU speed are low), and there are possibly a large number of HTML files on the disk.
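The two steps collapse into one pipeline with GNU find (which Slackware 12 ships). One caveat: most Linux filesystems do not record a creation time, so modification time is the usual stand-in:

Code:
# Print the five most recently modified .html files under /target/subtree.
# -printf '%T@\t%p\n' emits "mtime<TAB>path"; sort -rn puts newest first.
find /target/subtree -type f -name '*.html' -printf '%T@\t%p\n' \
    | sort -rn | head -n 5 | cut -f2-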
I have this nasty habit of refreshing the desktop in quick succession by right-clicking and selecting 'Refresh' on my XP system at the office (and I'm sure most of us do the same). With Ubuntu, if I right-click on the desktop slowly and select 'Align by...', it simulates the XP refresh action described above. But if I perform the same action rapidly, it picks the first option from the right-click context menu, which is 'Create Folder', and an empty folder is created on the desktop. I tried double right-clicking, and again it created an empty folder. Is there any workaround for this? I mean: can the right-click context menu items be shuffled so that the 'Create Folder' option is moved out of first place?
There is a folder. It's empty. Whenever I drag a new file into it, the script should echo "there is file in there" and keep monitoring the folder. How can I do that?
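Besides the inotifywait approach sketched earlier, a plain polling loop works anywhere without extra tools. A sketch with a placeholder path and interval:

Code:
#!/bin/sh
# Announce whenever the folder is non-empty; check every 5 seconds.
while true; do
    if [ -n "$(ls -A /path/to/folder)" ]; then   # -A ignores . and ..
        echo "there is file in there"
    fi
    sleep 5
done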
Using C++, I want to process the sub-folders of my home folder sequentially, each with a special naming format and containing some binary files:
Code: 1/ 2/ 3/ 4/ 5/ 6/ ...
Given the above folders, I will process the files in 1/ first, 2/ second, 3/ third, and so on.
For some folder n/, if I find that n/ does not actually exist in the local file system, I do not want to wait for it; I will keep processing folder (n+1)/, and so on.
However, while processing some folder (n+m)/, the previously unprocessed folder n/ may meanwhile have been created in the local file system. In that case I do not want to miss it: I want to detect its creation somehow and process it. After processing n/, I want to continue from (n+m+1)/.
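The poster wants C++, but the bookkeeping can be sketched in shell under the same assumptions: skipped folders go on a pending list that is retried on every pass. The upper bound and process_folder are placeholders for the real work:

Code:
#!/bin/bash
process_folder() { echo "processing $1"; }   # placeholder for real work

pending=()                                   # folders skipped earlier
for ((n=1; n<=100; n++)); do                 # 100 is a hypothetical bound
    still_missing=()
    for p in "${pending[@]}"; do             # retry previously missing folders
        if [ -d "$p" ]; then process_folder "$p"; else still_missing+=("$p"); fi
    done
    pending=("${still_missing[@]}")
    if [ -d "$n" ]; then
        process_folder "$n"
    else
        pending+=("$n")                      # skip for now, catch it later
    fi
done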
Is there a recursive shell or Perl script to delete files with the same name as their parent folder? I wish to pass the starting folder name as an argument to the script.
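A shell sketch, assuming the unwanted file sits inside each folder and carries exactly the folder's name:

Code:
#!/bin/sh
# Usage: ./clean.sh /starting/folder
start=${1:?usage: $0 starting-folder}
find "$start" -type d | while read -r dir; do
    name=$(basename "$dir")
    [ -f "$dir/$name" ] && rm -v -- "$dir/$name"
done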
I need to write a shell script that reads the contents of a folder and places the files on a remote FTP server. I need to make sure that a file already placed on the remote FTP server is not uploaded a second time. The file names will be something like Records-2011-05-09, and the files will be generated by MySQL every hour.
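One hedged approach, suitable for an hourly cron job: track successful uploads in a local log and skip anything already listed. Host, credentials, and paths are placeholders:

Code:
#!/bin/sh
SRC=/var/exports
LOG=/var/exports/.uploaded
touch "$LOG"

for f in "$SRC"/Records-*; do
    name=$(basename "$f")
    grep -qx "$name" "$LOG" && continue      # already sent; skip it
    if curl -T "$f" "ftp://user:pass@ftp.example.com/incoming/"; then
        echo "$name" >> "$LOG"               # remember the success
    fi
done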
I continue to work on automating the update and deployment of a vendor's WAR files, and have bumped into my next challenge... The vendor provides web.xml files with entries that look like this
I need to search the file for a param-name and replace the param-value below it with the correct value. I expect sed or awk is the trick for this, but I am not sure how to have it search for one line and then update the line below it.
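sed can do exactly this pairing: the n command pulls the next line into the pattern space, so the substitution applies only to the line below the match. The param name and new value here are hypothetical:

Code:
sed '/<param-name>dbHost<\/param-name>/{n;s|<param-value>.*</param-value>|<param-value>db01.example.com</param-value>|;}' web.xml

This assumes the param-value sits on the very next line, as in the poster's description; an extra blank line between them would break the pairing.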
I created a folder in Linux with the current time set to 1978 (for example), then moved the folder to a USB drive (FAT32 file system). When viewing the folder in Windows, its creation time stamp is not shown, because the FAT32 file system only supports years from 1980 onward. But when I put the same folder back in Linux, it shows the correct time stamp. How is that possible? FAT32 only supports timestamps after 1980, yet the Linux system still shows 1978.
I want to compare the following two tab-delimited .txt files (both subsets of the original files) by comparing columns 3 and 4 simultaneously. It is easy to compare C3, because both C3s are just numbers; but how do I compare the C4s? Basically, in File1, "G,G" = G in File2, "C,C" = C in File2, "A,A" = A in File2, and "T,T" = T in File2. A/T in column 4 of File2 equals "A,T" or "T,A" in column 4 of File1; C/T in column 4 of File2 equals "C,T" or "T,C" in column 4 of File1; and so on.
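An awk sketch of the normalization, assuming both files are tab-delimited and File2's column 3 is the join key (file names are placeholders):

Code:
awk -F'\t' '
    NR == FNR { val[$3] = $4; next }      # File2: remember C4 keyed by C3
    {
        split($4, a, ",")
        if (a[1] == a[2]) norm = a[1]     # "G,G" -> "G"
        else norm = a[1] "/" a[2]         # "A,T" -> "A/T"
        rev = a[2] "/" a[1]               # allow the reversed order too
        if (val[$3] == norm || val[$3] == rev)
            print $3, "match"
        else
            print $3, "MISMATCH", $4, "vs", val[$3]
    }
' File2.txt File1.txt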
Perl has the concept of pseudo virtual files, where data can be appended to the end of the script and Perl will treat it as a legitimate source of data:
Code:
$ cat pseudo_file.pl
#!/usr/bin/env perl
use strict;
while (<DATA>) {
    print;
}
__DATA__
1
2
3
$ perl pseudo_file.pl
1
2
3
$ _
I would really like to be able to do something similar in a Bourne shell script, where I don't have to manage a collateral text file that is prone to getting lost. Does anyone know if there is a similar construct in shell programming? I would have thought this feature was a remnant Perl inherited from its shell-programming roots, but searching the Web has revealed nothing of consequence.
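The closest shell analogue is a here-document: the data lives at the point of use, inside the script itself, so there is no separate file to lose. A minimal sketch:

Code:
#!/bin/sh
while read -r line; do
    echo "$line"
done <<'EOF'
1
2
3
EOF

Quoting the delimiter ('EOF') keeps the data literal, with no variable expansion, which matches the spirit of Perl's __DATA__ section.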
I have a database with x number of files (192 at the moment, but this will vary from time to time). I am going to copy these files to another location on the same server through a shell script. The catch: the total size of the 192 files is approx. 900 GB (again, this will vary from time to time).
My shell script should calculate the free space currently available on each mount point of the server (each can be filled until it reaches 95%; 5% should always be kept free for future growth).
After calculating, it should prepare another flat file with the following details:
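The flat-file details are cut off above, but the free-space side of the calculation might look like this sketch (the mount point is a placeholder; df -P reports portable output in 1K blocks):

Code:
#!/bin/sh
# How much can still be written before the filesystem hits 95% full?
MOUNT=/data                               # hypothetical mount point
df -P "$MOUNT" | awk -v mnt="$MOUNT" 'NR == 2 {
    total = $2; used = $3                 # sizes in 1K blocks
    ceiling = total * 0.95                # fill no further than 95%
    avail_kb = ceiling - used
    if (avail_kb < 0) avail_kb = 0
    printf "%s usable KB before hitting 95%%: %d\n", mnt, avail_kb
}'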
The script should take as input, at the beginning, the username of a user, and then delete all of that user's files and folders everywhere he has them. The script must also check that the parameters have been given correctly (only one, and it must be a username). Don't all of a user's files live in a folder with his name? What if I just delete that folder? Would something like this work?
Quote:
E_NOARGS=65
if [ -z "$1" ]   # Exit if no argument given.
then
    echo "Usage: `basename $0` directory-to-copy-to"
    exit $E_NOARGS
fi
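Not necessarily: a user can own files outside /home/<name> (in /tmp, /var/mail, and so on), so searching by ownership is safer than deleting the home folder alone. A hedged sketch; test with -print before trusting -delete:

Code:
#!/bin/sh
if [ $# -ne 1 ]; then
    echo "Usage: $(basename "$0") username" >&2
    exit 65
fi
id "$1" >/dev/null 2>&1 || { echo "no such user: $1" >&2; exit 1; }
find / -user "$1" -delete 2>/dev/null   # DANGEROUS: run with -print first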
I'm pretty sure I'm in over my head with this one. Here's the situation: my practice has been, after downloading and making changes to files, to use a certain GUI cataloging app to move them to categorized sub-folders inside one "umbrella" folder, then copy them to (in most cases) identically named sub-folders in another one on the same HD. Emulation of this process on the command line would look like:
This was in anticipation of making a "twin" of the second, now bigger, "umbrella" folder on an external HD and continuing the practice. At some point I intended to get rid of the original second "parent folder" and keep just the first one on the drive with my OS install, using the twin on the backup drive as I'd been doing when it was on the same drive.
I'm very close (a matter of 2 to 3 weeks, it looks like) to getting a backup/external drive in a reasonably reliable external enclosure. With the backup, originally the "twin" of the bigger "umbrella" folder and its sub-directories, on that other drive, I also anticipate that I will likely go at least a few days between backing up any new files to it.
For the sake of argument, let's say I gave the external drive the name "tuxs_twin". Is there any way to monitor copying activity from /Pictures/ on the boot HD to /media/tuxs_twin/Pictures/ and log the dates and times, then have something running that checks the log every so many hours and puts up a reminder dialog saying /tuxs_twin/Pictures/ hasn't had any new files copied to it since date X at time Y? I hope this was clear enough; if not, I'll try to break it down further when I know which details are confusing folks.
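One hedged approach skips the log entirely: the newest file's timestamp in the backup already records the last copy. Run something like this hourly from cron in a desktop session (paths, the threshold, and the use of notify-send are assumptions):

Code:
#!/bin/sh
# Pop up a reminder if nothing in the backup is newer than MAX_HOURS.
BACKUP=/media/tuxs_twin/Pictures
MAX_HOURS=72

recent=$(find "$BACKUP" -type f -mmin -$((MAX_HOURS * 60)) -print -quit)
if [ -z "$recent" ]; then
    notify-send "Backup reminder" \
        "$BACKUP has had no new files in over $MAX_HOURS hours"
fi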
I have a lot of PDF files and I want to convert them to a lower quality for the web. I tried the following command (using Ghostscript): Code: gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/default -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf Is there a way to make a batch that does this for all PDFs in a folder?
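A simple loop handles the batch. Note that -dPDFSETTINGS=/screen (or /ebook) produces smaller files than /default, which fits the goal of lower quality for the web; output goes to a web/ subfolder so the originals stay untouched:

Code:
#!/bin/sh
mkdir -p web
for f in *.pdf; do
    gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/screen \
       -dNOPAUSE -dQUIET -dBATCH -sOutputFile="web/$f" "$f"
done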
I am facing a problem in Windows due to a virus called Newfolder.exe, which creates files with the same name as their parent directory plus an .exe extension, and this happens for every directory in the entire hierarchy of the infected pen drive. The antivirus detects them, but it is painfully slow. So I thought this was a good opportunity to use the concepts of the almighty shell script to remove them, since they follow the same pattern. Say my complete path is
Code:
/home/pkd/fol1/
The virus would have created a file with the complete path
Quote:
/home/pkd/fol1.exe
If fol1 has two more directories, fol11 and fol12, then there would be two more virus-created .exe files at the corresponding paths.
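Since the virus follows one pattern (each directory gets an evil twin named <dirname>.exe beside it), a hedged sketch over the mounted pen drive:

Code:
#!/bin/sh
# Usage: ./disinfect.sh /media/pendrive
root=${1:?usage: $0 /media/pendrive}
find "$root" -type d | while read -r dir; do
    [ -f "$dir.exe" ] && rm -v -- "$dir.exe"   # remove the sibling .exe
done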
I have 2 external HDDs in which I keep all my files. Yesterday I copied all the files from hdd2 to hdd1, and I want to eliminate duplicates, so I used FSLint to find them. Now I want to make a shell script to delete all the files/entries (read from the log file) that begin with.
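The post cuts off before naming the prefix, so this sketch uses a placeholder; review the output with echo before switching to rm:

Code:
#!/bin/sh
# Delete every path in the FSLint log that begins with $PREFIX.
PREFIX=/media/hdd1/from_hdd2          # hypothetical prefix
while IFS= read -r path; do
    case $path in
        "$PREFIX"*) rm -v -- "$path" ;;
    esac
done < fslint_duplicates.log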
I have one directory with 3 levels of sub-directories, and about a hundred files under those directories. I need a shell script to rename all pattern-matched directories and files.
For example: the pattern is AA in the directory or file name.
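A hedged sketch that replaces AA with BB (a placeholder replacement) in every matching name. find's -depth renames contents before their parent directories, so paths stay valid during the walk:

Code:
#!/bin/sh
find /target/dir -depth -name '*AA*' | while read -r old; do
    new=$(dirname "$old")/$(basename "$old" | sed 's/AA/BB/g')
    mv -v -- "$old" "$new"
done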