I have text files whose filenames contain the date of creation (e.g. 2010.05.02.log). I would like to create a script that: asks for a start date, asks for an end date, and concatenates all files in the requested period, in date order.
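A minimal sketch, assuming the dates are typed in the same YYYY.MM.DD format as the filenames, so plain string comparison and the shell's own glob ordering already match chronological order:

Code:
#!/bin/bash
# Concatenate YYYY.MM.DD.log files between two dates (inclusive).
# Assumes the files live in the current directory.
read -p "Start date (YYYY.MM.DD): " start
read -p "End date   (YYYY.MM.DD): " end

> merged.out
for f in *.log; do                       # globs expand in lexical = date order here
    d=${f%.log}                          # strip the .log suffix to get the date
    if [[ ! $d < $start && ! $d > $end ]]; then
        cat "$f" >> merged.out
    fi
done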
I have two different tables in a MySQL database that have no shared key, and I want to sort them together on their datetime columns. That is, when I sort, a row X of table A that is older than a row Y of table B should come first.
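If the goal is simply one interleaved, chronologically ordered listing, a UNION ALL with a shared ORDER BY does it. A sketch, where the table and column names (table_a, table_b, msg, created_at) and the connection details are placeholders:

Code:
#!/bin/bash
# Merge two unrelated tables into one result set, ordered by their datetime columns.
mysql -u someuser -p somedb <<'SQL'
SELECT 'A' AS src, msg, created_at FROM table_a
UNION ALL
SELECT 'B' AS src, msg, created_at FROM table_b
ORDER BY created_at;
SQL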
I would really like to preserve a file's original modified date and set it back on the file after a script has worked on it. I get a lot of JPEG files from different places on the Net which I either turn around and upload or burn to disk, and keeping the "original" date (of download, or of the last edit in a graphics app) would, in the long run, be a lot more helpful when deciding, for instance, which files to recycle or which to back up more than once. I've tried doing this on my own every now and then. Where I run into problems is that "stat" and "date" appear to use different formats for date information, and I can't puzzle out how to translate one into the other for the latter command.
Just to give an example, stat foo.jpg | grep Modify gives me: Modify: 2010-07-12 06:28:56.890625000 -0400
Passing that string as-is to date for foo.jpg, I get the error "date: unknown option -- 0" and the usual semi-courteous suggestion to try 'date --help' for more information. Somehow my Texinfo database got screwed up somewhere along the line, and info date gives me the short article on date input formats, not the full documentation for the command.
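For what it's worth, there is usually no need to translate stat's output for date at all; touch can either copy a timestamp from a reference file or accept an epoch value. A sketch (some_graphics_tool is a placeholder for whatever actually edits the file):

Code:
#!/bin/bash
f=foo.jpg

# Option 1: keep a timestamp reference and copy it back with touch -r
cp --preserve=timestamps "$f" "$f.ref"
some_graphics_tool "$f"            # placeholder for the real processing step
touch -r "$f.ref" "$f"             # copy the saved mtime back onto the file
rm "$f.ref"

# Option 2: GNU stat and touch can talk via epoch seconds
old=$(stat -c %Y "$f")             # mtime as seconds since the epoch
some_graphics_tool "$f"
touch -d "@$old" "$f"              # GNU touch understands the @epoch form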
I need a script that will take all the files in a given directory, create new monthly sub-directories, and sort all the files into the appropriate directory based on the creation date. For example, all files created between 01/01/09 and 01/31/09 would be placed in 'JAN-2009'.
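A sketch; note that most Linux filesystems don't expose a true creation date, so this falls back on the modification time, and the JAN-2009 naming follows the example above:

Code:
#!/bin/bash
# Sort files in the current directory into MON-YYYY sub-directories by mtime.
for f in *; do
    [ -f "$f" ] || continue                                    # skip directories
    dir=$(date -r "$f" +%b-%Y | tr '[:lower:]' '[:upper:]')    # e.g. JAN-2009
    mkdir -p "$dir"
    mv -- "$f" "$dir/"
done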
I am supposed to take some small files, and print them to a specific printer, such that the small files are concatenated into one file. The file name has to be included in the file that gets printed.
Should I be looking to concatenate the files into one file with the file names included, and then print them?
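That is the usual approach: concatenate the files on the fly with a header line in front of each one and send the stream to the printer as a single job. A sketch (the queue name and the *.txt glob are placeholders):

Code:
#!/bin/bash
# Print every file in the directory as one job, with its name as a header.
for f in *.txt; do
    echo "===== $f ====="
    cat "$f"
    echo                       # blank line between files
done | lp -d myprinter         # "myprinter" is a placeholder queue name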
I am new to scripting. In a script, I am trying to find out whether a particular file has been modified in the last hour or not, and if it has been modified in the last hour I need to copy it to another directory. Can anyone please show me how to check whether the file has been modified within the last hour?
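find can do the one-hour test on its own with -mmin. A sketch with placeholder paths:

Code:
#!/bin/bash
# Copy the file to $dest if it was modified within the last 60 minutes.
file=/path/to/watched.file
dest=/some/dir

if [ -n "$(find "$file" -mmin -60 2>/dev/null)" ]; then
    cp -p "$file" "$dest/"
fi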
Code:
#!/usr/bin/python
# -*- coding: iso-8859-1 -*-
import re
# @description "This is a describing text about the file currently documentet";
# DocC documentation prototype
I have a file like below. For all the lines (except the ones listed as 'Unknown Owner' and 'N/A') I would like to change the names to lower case and concatenate the first and last names (a sketch follows the sample data). Before:
Code:
aaa.bbb.ccc.ddd,Unknown Owner
ddd.eee.fff.ggg,N/A
hhh.iii.jjj.kkk,John Doe
aaa.bbb.ccc.ddd,Mary Jane
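A sketch with awk, assuming the desired result for the normal lines looks like hhh.iii.jjj.kkk,johndoe and that infile/outfile are placeholder names:

Code:
awk -F, 'BEGIN{OFS=","}
    $2=="Unknown Owner" || $2=="N/A" {print; next}   # leave these lines alone
    {
        gsub(/ /, "", $2)                            # join first and last name
        print $1, tolower($2)                        # lower-case the result
    }' infile > outfile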
I have log files that are downloaded from my webserver every day in the format: Code: samplesite.com.xxxxxxxxxx.gz where xxxxxxxxxx is a 10-digit epoch time. I am trying to figure out a way, in a batch script, to do the following (a sketch is below the list):
1. Find all existing files containing the pattern (after the first run there will only be one a day)
2. Isolate the epoch string
3. Convert the epoch string to a human-readable date/time
4. Rename the original file as samplesite.com.mmddYYYY.gz
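A sketch using GNU date for the epoch conversion; the 10-digit test also keeps already-renamed files from being touched on later runs:

Code:
#!/bin/bash
# Rename samplesite.com.<epoch>.gz to samplesite.com.<mmddYYYY>.gz
for f in samplesite.com.*.gz; do
    epoch=${f#samplesite.com.}                # strip the prefix
    epoch=${epoch%.gz}                        # strip the suffix, leaving the epoch
    [[ $epoch =~ ^[0-9]{10}$ ]] || continue   # skip files already renamed
    stamp=$(date -d "@$epoch" +%m%d%Y)        # epoch -> mmddYYYY
    mv -- "$f" "samplesite.com.$stamp.gz"
done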
I need this script but I don't know how to write it. I have one folder with several folders inside, and each of those folders has one MKV or AVI file inside. What I need is a script that changes the "modification date" of each folder to the "modification date" of the MKV or AVI it contains.
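A sketch, run from the parent folder, that copies the video's timestamp onto its directory with touch -r:

Code:
#!/bin/bash
# For every sub-directory, copy the mtime of the MKV/AVI inside it onto the directory.
for d in */; do
    for v in "$d"*.mkv "$d"*.avi "$d"*.MKV "$d"*.AVI; do
        [ -e "$v" ] || continue
        touch -r "$v" "$d"       # -r: use the video's timestamp as the reference
        break                    # each folder is said to contain one video
    done
done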
I am using cron to create a new, blank file every minute in a specific location on my web server. After web searching and reading man pages, I get the impression that the following command is supposed to work:

touch /home/mydomain/var/folder/attachments/`date +%H%M`.txt

This should give me a new file with a file name that is the current hour and minute. However, when executed, the cron mailer reports:

touch /home/mydomain/var/folder/attachments/`date +
/bin/sh: -c: line 0: unexpected EOF while looking for matching `
/bin/sh: -c: line 1: syntax error: unexpected end of file

So it looks like the shell is seeing the plus (+) sign as an EOF. Obviously, nothing gets created. What would be the easiest single-line command to create an empty file, at a given location, with a time-based file name?
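For what it's worth, the likely culprit is not the + but the % sign: in a crontab, % is special (it ends the command, and everything after it becomes stdin), so the backquoted date never gets closed. A sketch of the usual fixes:

Code:
# Escape every % in the crontab entry:
* * * * * touch /home/mydomain/var/folder/attachments/`date +\%H\%M`.txt

# Or move the command into a small script (where % needs no escaping)
# and call that from cron; the path below is just an example:
* * * * * /home/mydomain/bin/make-stamp.sh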
I'm new to UNIX scripting; I'm stuck with the following. I have an Oracle SQL script that takes three parameters:
1- File Name
2- File Path
3- File creation date
Under UNIX I have a folder where files will be placed frequently, and I need to upload those files to Oracle. What I need is a UNIX script that can do the following:
Loop through the directory "/home/applmgr/snktmp"
Pick only files
Pass the file name to parameter &1
[code]....
Is the above possible? I already know how to call the Oracle script from UNIX; I'm only stuck on writing the UNIX part that lists the file attributes (name, path, date), stores them in parameters, and loops until the last file in the directory. If the above is not possible, then how can I do this from the command line?
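A sketch of the looping part; sqlplus is assumed to be on the PATH, and load_file.sql and the user/pass@db connect string are placeholders for the real script and credentials:

Code:
#!/bin/bash
# Loop over regular files in the drop directory and hand each one to the SQL script.
dir=/home/applmgr/snktmp

for f in "$dir"/*; do
    [ -f "$f" ] || continue                      # picks only files
    name=$(basename "$f")
    path=$(dirname "$f")
    created=$(date -r "$f" +%Y-%m-%d)            # mtime stands in for creation date
    sqlplus -s user/pass@db @load_file.sql "$name" "$path" "$created"
done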
So I was wondering how I could do a simple find which would order the results by most recently modified. Here is the current find I am using (I am doing a shell escape in PHP, so that is the reason for the variables):

find '$dir' -name '$str'* -print | head -10
How could I have this order the search results by most recently modified? (Note: I do not want it to sort 'after' the search, but rather find the results based on what was most recently modified.)
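find itself has no ordering option, so some sort step is unavoidable; the difference from post-processing the final output is that GNU find can emit the raw mtime alongside each name, so the ordering happens on the timestamp in one pass. A sketch using the same variables:

Code:
find "$dir" -name "$str*" -printf '%T@ %p\n' | sort -rn | head -10 | cut -d' ' -f2-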
I like ordering my images by date modified, but Eye of GNOME only lets you view them in alphabetical order.
Important features for me are:
- Going through items with the arrow keys.
- Zooming in and out with the mouse wheel.
- Being able to sort by modification date, type, or name.
- Being able to right click and open with either another window of the same viewer, or with another viewer.
- Having a simple interface.
So far, I've tried:
Eye of GNOME - I love how simple it is, and if it wasn't for this sorting problem I'd keep using it. Well, that and the fact that you can't right click and open an image in a separate Eye of GNOME window while continuing to scroll images in the current window.
gThumb - Damn. So close to being a winner. I can't step through images with the arrow keys or zoom in and out with the mouse wheel, but I can sort by modification date, it's simple, and it can open another window of the same viewer. But those first two points are also important for me.
Fspot - A little too cluttered when opening a single image. I don't really need to see a top panel with the other images, even if it's nice. I can go through images with the arrow keys, and zoom in and out, but no sorting by modification date.
Shotwell - Shotwell's viewer is pretty fast and simple, however it has lots of flaws for me: can't sort by modification date, can't zoom in and out with mouse wheel, can't open an image in another window while viewing it. At least it's simple and I can navigate with the arrow keys.
I've tried what feels like at least a dozen different image viewers and I'm still hunting for one that handles viewing by date taken. The trick is that I'd like to just click a jpeg as they come off the camera and then browse through all the photos in the order taken... without having to load them into a photo management application, sort them in there, and then start viewing from there. Too many extra steps for too many pictures when you're looking at a collection that is already organized (using the file system, i.e., nested directories) back to 2000. I need (ideally) a viewer where I click the jpeg, then go next, next, next through them in the order in which they were taken. Does anyone know if this sort of thing exists?
Each line of the file I am sorting is in the following format:
<url> <month> <day>
For example:
[URL]
I wrote the following to sort:
Code:
#!/usr/bin/perl
$in = shift;
chomp($in);
[code]....
The script worked fine for my small test files, but failed on my real input file. The input file is 18MB and contains more than 300,000 lines. The output contains some lines like this:
Every once in a while on a computer I'm ssh'd into, I will accidentally type "cat largefile.txt" and my screen will start rushing with text for the next 10 minutes. I'm always working in a screen session, so my current solution is to just log out and then log back in; since the output can go 100x faster when I'm logged out, it'll finish in the short time it takes me to type my password in again. Is there a better way, either using the fact that I'm in a screen session, or within SSH itself? What doesn't work:
- detaching from the screen session (doesn't respond until the file is done outputting)
- the command to move to a different window in the screen session (also doesn't respond)
- typing Ctrl+C to kill the cat command (also doesn't respond, probably because the command is already done and the buffers just have to catch up)
I have a mess in my photos folder; I want to sort the photos according to the date in the EXIF information and rename them according to that date (like 001.jpg, 002.jpg and so on).
How can I do this in Linux? I have used ImageMagick for some basic bulk processing tasks before (converting and resizing, etc.); is it possible to use it for this task?
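ImageMagick's identify can read EXIF tags, but for the renaming part a sketch built on exiftool (assumed to be installed, and assuming every file carries DateTimeOriginal and that the originals aren't already named NNN.jpg) is shorter:

Code:
#!/bin/bash
# Number the JPEGs 001.jpg, 002.jpg, ... in the order they were taken.
n=0
exiftool -q -p '$DateTimeOriginal|$FileName' -d '%Y%m%d%H%M%S' *.jpg |
sort |
while IFS='|' read -r stamp file; do
    n=$((n + 1))
    mv -- "$file" "$(printf '%03d.jpg' "$n")"
done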
What options should I use with the sort command to show the top 5 CPU processes (ps -eo user,pid,ppid,%cpu,%mem,fname | sort ??? | head -5), from max to min usage?
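With that ps field list, %cpu is column 4, so a reverse numeric sort on that key works; note the header line has to be dropped (or counted). A sketch:

Code:
# Numeric, descending sort on the 4th column (%cpu); header line removed first.
ps -eo user,pid,ppid,%cpu,%mem,fname | tail -n +2 | sort -k4,4 -nr | head -5

# ps can also do the sorting itself (head -6 keeps the header plus 5 rows):
ps -eo user,pid,ppid,%cpu,%mem,fname --sort=-%cpu | head -6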
I have very little Linux experience and need some help with a bash script. I need a script that I can run from cron to sort files out of a holding folder into final folders. It doesn't necessarily have to be bash, but I think bash would be sufficient for this. File names are formatted as such when created: Dest-Date-Time-CID-Destination# I want the files to be moved from an all-in-one holding folder to a folder structure like this.
So the script will need to make directories based on information in the file name, which is delimited by single dashes, and then move the files from the holding folder into the newly created "sorted" folders.
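A cron-friendly sketch; since the target structure wasn't shown, the Dest/Date layout below is only a guess, and both paths are placeholders:

Code:
#!/bin/bash
# Move Dest-Date-Time-CID-Destination# files out of the holding folder.
holding=/path/to/holding
target=/path/to/sorted

for f in "$holding"/*; do
    [ -f "$f" ] || continue
    IFS=- read -r dest date time cid destno <<< "$(basename "$f")"
    mkdir -p "$target/$dest/$date"        # adjust to the real folder structure
    mv -- "$f" "$target/$dest/$date/"
done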
We switched from Unix to Linux, and we have an old report that extracted data from a database, output it to an ASCII file, and then sorted the results in the file based on different arguments. The report now blows up when it runs, and I can only guess it is because the options for sort on Linux differ slightly from Unix. For example, here is one of the commands issued from within the report app that ran on the old Unix box:
I will eventually rewrite the report to store the data in a local table, but for now I would like to simply adjust the options to suit the requirements of Linux. Basically, I need to know if this can be a quick fix for the short term.
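Without seeing the old command this is only a guess, but the most common breakage when moving old Unix sort invocations to GNU sort is the obsolete zero-based +POS key syntax. A sketch of the usual translation (report.txt is a placeholder), plus a short-term workaround I believe GNU coreutils still honours:

Code:
# Old System V style keys (zero-based +start -end):
sort +1 -2 -n report.txt

# GNU/POSIX equivalent (one-based -k start,end):
sort -k 2,2 -n report.txt

# Short-term workaround that re-enables the obsolete syntax on GNU sort:
_POSIX2_VERSION=199209 sort +1 -2 -n report.txt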
I just switched from a basic digital camera to a more advanced one that stores both JPEG and raw (.NEF, it's a Nikon) files for me. When importing files in Digikam, I rename the files so that they start with date and time. Example: 20110121-223748.JPG for a photo taken on Jan 21st 2011 at 22:37:48. I was a bit surprised, when importing both the JPEG and the raw version of the same photo, that the filenames differ by a few seconds (no constant offset, sometimes they are the same):
20110121-223748.JPG
20110121-223750.NEF
I did some "research" by looking at the exif data of both files (using "exiftool 20110121-223748.JPG" from the command line). Here is what I got back
(amongst other data):
20110121-223748.JPG
File Modification Date/Time : 2011:01:21 22:37:48+01:00
Modify Date                 : 2011:01:21 22:37:48
Date/Time Original          : 2011:01:21 22:37:48
[code]....
So it seems that Digikam is using the "File Modification Date/Time" (different between the JPEG and raw versions from my camera) rather than the "Create Date" (the same for both). (The few seconds difference in "File Modification Date/Time" between the two versions of the same photo is probably due to the time my camera needs to write the data to the SD memory card, I guess.) Is there a way to have Digikam use the Create Date? (Or the Date/Time Original?)
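I'm not sure which date field Digikam's importer can be told to use, but as a workaround exiftool can rename both files from the embedded Create Date after import. A sketch:

Code:
# Rename every photo in the current directory from its Create Date,
# e.g. 20110121-223748.JPG / 20110121-223748.NEF
# (%%-c appends a copy number if two names would collide, %%e keeps the extension)
exiftool '-FileName<CreateDate' -d '%Y%m%d-%H%M%S%%-c.%%e' .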
I know that ImageMagick's convert program can be used as follows to convert a collection of images -- say, in PNG format -- to a PDF file:
convert *png output.pdf
The problem with this is that each image is then stretched to fit on one page, whereas I would like to keep the original dimensions of the images and put as many as possible on one page in the PDF file before moving on to another page.
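ImageMagick's montage command is meant for exactly this: it tiles several images per page at their own size. The 2x3 tiling below is just an example:

Code:
# Tile the PNGs 2 across and 3 down per PDF page, keeping original pixel sizes
montage *.png -mode concatenate -tile 2x3 output.pdf

# The same with a little whitespace between the images
montage *.png -tile 2x3 -geometry +10+10 output.pdf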
I need to get the modified date of a file in Linux to use in a script. I tried using 'ls -l' on the file, but this caused problems when the day of the month went from a single digit to a double digit, because I was parsing the result string on spaces. How can I get the date a file was last modified so I can use it in a script? For example, if a file was modified on 1/11/2010, I need the 11.
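Two GNU coreutils ways that avoid parsing ls output; a sketch (myfile is a placeholder):

Code:
# Full modification time in one unambiguous string:
stat -c %y myfile            # e.g. 2010-01-11 14:02:33.000000000 +0100

# Or pull out exactly the piece you need with date -r (here, the day of month):
day=$(date -r myfile +%d)    # gives "11" for a file modified on 1/11/2010
echo "$day"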