I am using cron to create a new, blank file every minute in a specific location on my web server. After web searching and reading man pages, I get the impression that the following command is supposed to work:

touch /home/mydomain/var/folder/attachments/`date +%H%M`.txt

This should give me a new file with a file name that is the current hour and minute. However, when executed, the cron mailer reports:

touch /home/mydomain/var/folder/attachments/`date +
/bin/sh: -c: line 0: unexpected EOF while looking for matching `` ' ''
/bin/sh: -c: line 1: syntax error: unexpected end of file

So it looks like the shell is seeing the plus (+) sign as an EOF. Obviously, nothing gets created. What would be the easiest single-line command to create an empty file, at a given location, with a time-based file name?
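A likely fix, assuming this runs from a crontab: cron treats an unescaped % as a newline, so the job is cut off right after `date +` and the backtick is never closed, which matches the EOF error above. Escaping the percent signs should be enough:

* * * * * touch /home/mydomain/var/folder/attachments/$(date +\%H\%M).txt

The $(...) form is interchangeable with backticks here; the key part is the \% escapes.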
I am trying to copy all the files in a directory based on the modification date (i.e. created on Dec 29), and I am not able to find the proper command for this. This is what I have tried.
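One possibility, as a sketch: GNU find can select a one-day modification window and copy the matches (the year and the destination path are assumptions):

find . -maxdepth 1 -type f -newermt "2011-12-29" ! -newermt "2011-12-30" -exec cp {} /path/to/dest/ \;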
I need a script that will take all the files in a given directory, create new monthly sub-directories, and sort the files into the appropriate directory based on the creation date. For example, all files created between 01/01/09 and 01/31/09 will be placed in 'JAN-2009'.
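Most Linux filesystems don't record a creation date, so here is a sketch keyed on modification time instead (the source path is a placeholder; directory names follow the MON-YYYY example):

for f in /path/to/dir/*; do
    [ -f "$f" ] || continue
    m=$(date -d "@$(stat -c %Y "$f")" +%b-%Y | tr '[:lower:]' '[:upper:]')   # e.g. JAN-2009
    mkdir -p "/path/to/dir/$m"
    mv "$f" "/path/to/dir/$m/"
done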
### TO DO: Determine the report file name based on the source directory name and current date
### The report name and thumbnail directory must follow this pattern: source-%j-%H
### for example, for pictures in /home/you/pictures, the file name will be: pictures-%j-%H
### HINT: Use sed to extract the directory name from the path and combine it with date command output
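Following the HINT, one way to build that name (srcdir is an assumed variable holding the source path):

srcdir=/home/you/pictures
name=$(echo "$srcdir" | sed 's|.*/||')    # strip everything up to the last slash -> pictures
report="${name}-$(date +%j-%H)"           # e.g. pictures-365-14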
As I'm going to transfer a large amount of data (folders) from one hard drive to another, I want to make sure that the transfer has not corrupted the data. How could I generate MD5 sums of an entire directory, including subdirectories, in a single file, and how could I later verify it against the data I've just transferred?
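A common approach (the paths are placeholders): generate the checksums recursively with find and md5sum, then re-run the check on the copy:

cd /source/dir && find . -type f -exec md5sum {} + > /tmp/checksums.md5
cd /destination/dir && md5sum -c /tmp/checksums.md5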
I am in need of Linux help. I am at college and I need this backup/restore script to pass the final part of an assessment. I require a backup script that will not only back up but also restore files to the relevant directories. E.g. users are instructed to store all word-processor files in a directory named wp. So I need to create a backup directory and 3 directories within that, and some files within the 3 directories, and then back them up and restore them. I know I should have to do this myself, and I have been trying to get/understand info for the last few days, but came up with zero.
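A minimal sketch with tar (only wp is named in the assignment; the other two directory names and the sample files are made up):

mkdir -p ~/backup/{wp,sheets,db}
touch ~/backup/wp/letter.doc ~/backup/sheets/budget.csv ~/backup/db/records.dat
tar -czf ~/backup.tar.gz -C ~ backup      # back up the whole tree
tar -xzf ~/backup.tar.gz -C ~             # restore it to the same relative directories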
How do I get the week number in Linux using gawk, with a different first day of the week? The date command can give me the week number with +%V, but that is based on Monday (1-53), or with +%U (based on Sunday, 0-53).
I tried to do this: date -d "ddmmyy+2days" +%V, but the result is not correct. I want the first day of the week to be based on Saturday.
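One hedged approach: shift the date forward one day and use the Sunday-based %U, so every Saturday lands on a Sunday and starts a new week. In shell and in gawk (2011-07-02, a Saturday, is just a sample date):

date -d "2011-07-02 + 1 day" +%U
gawk 'BEGIN { t = mktime("2011 07 02 12 0 0"); print strftime("%U", t + 86400) }'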
I know find can do what I am looking for, but I am wondering if there is an alternative way to find files on the filesystem either created before/after a certain point, or at a certain time.
Typically I rely on updatedb & locate for most of my file searching needs. The issue with those tools, though, is that the database only contains directory and file names, and it only covers local directories, not anything mounted via CIFS|NFS or via -o loop (e.g., .iso images).
So if I need to find files created after yesterday across the entire system (local and remote filesystems), I am currently needing to use find.
What other tools, if any, would accomplish this in a similar fashion?
I have tried ls and grep, but that requires (in my attempts so far) multiple searches:
ls -lR | grep Aug | grep 10
ls -lR | grep Aug | grep 11
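For time-based selection, GNU find can express both searches in one pass; sketches (the dates mirror the greps above, with the current year assumed):

find / -type f -newermt "Aug 10" ! -newermt "Aug 12"
find / -type f -newermt "yesterday"                     # everything modified since yesterday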
I've spent ages trying to build this and had a good look around for a way to do it. I have a directory tree which contains a set of folders and files. Some of the folders contain more than one file but most contain only a single one. I'm trying to move all of the files which are on their own in directories one level below the root into the root. E.g.:
Root is: /volume3
Single file in a sub folder: /volume3/20110103/20110103.log
File should end up as: /volume3/20110103.log
I know how to flatten the entire structure fairly easily, but it's the conditional part which I can't figure out how to do.
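A sketch of the conditional move, run from /volume3 (only sub-folders holding exactly one entry are flattened):

cd /volume3
for d in */; do
    if [ "$(ls -A "$d" | wc -l)" -eq 1 ]; then
        mv "$d"* .        # move the lone file up
        rmdir "$d"        # the folder is empty now
    fi
done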
I have tried to find the solution for my problem on this site and other sites but haven't found a good enough answer yet. Maybe some of you can help me out here? What I need is a script (bash preferably) that can delete directories based on a date in the dirname. For example, I have a bunch of directories that are named
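Assuming names of the form YYYYMMDD and a 30-day cutoff (both are guesses), a sketch:

cutoff=$(date -d "30 days ago" +%Y%m%d)
for d in [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]/; do
    name=${d%/}
    [ "$name" -lt "$cutoff" ] && rm -rf -- "$d"   # try echo "$d" first as a dry run
done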
I'm not sure if this is possible or even where to start; I assume that this can be done with an sh script using tar or similar. I have several very large zip files that contain images for all of the products in my online store. Each image is named after its 13-digit SKU (for example, 9987788000012.jpg). In order to import products into my store, all images are placed into a media directory. Unfortunately, there are over 100,000 images.
So I would like to break the images into sub-folders based on file name. For example, when I extract store_images.zip (or tar or whatever), my extract script would create directories (if they don't already exist) based on the first three digits of each image name, placing each image into the appropriate bottom-level directory. For example, "9987788000012.jpg" would be placed in the directory "media/9/9/8", with media as the root and "8" as the directory that holds any images that start with "998". Perhaps two sub-folders would be less cumbersome. Assuming this requires a script, particularly since it involves scanning image names, creating folders, and saving images to specific directories, which language would serve my needs best? PHP? Has anyone had to do something similar?
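Plain bash can do it; a sketch (the staging path is an assumption, the layout follows the media/9/9/8 example):

unzip store_images.zip -d /tmp/staging
for f in /tmp/staging/*.jpg; do
    name=$(basename "$f")
    dir="media/${name:0:1}/${name:1:1}/${name:2:1}"   # 998... -> media/9/9/8
    mkdir -p "$dir"
    mv "$f" "$dir/"
done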
Relational databases usually have their data over in /var/lib/something. Users are in /home (with data in /var/www). How can I apply a single total disk space quota across all of these independent software systems (file systems, RDBMS, etc.)?
P.S. There's a bet going on around me as to just how awesome SU is. Let's see what you've got.
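Filesystem quotas can't span mount points or see inside an RDBMS, so one workaround is periodic accounting rather than a kernel-enforced quota; a sketch for a cron job (the paths, user name, and schema naming are all illustrative, and MySQL is assumed):

u=alice
fs=$(du -sb "/home/$u" "/var/www/$u" 2>/dev/null | awk '{s+=$1} END {print s}')
db=$(mysql -N -e "SELECT COALESCE(SUM(data_length+index_length),0) FROM information_schema.tables WHERE table_schema='${u}_db';")
echo "$u: $(( fs + db )) bytes"           # compare against the combined quota here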
Newbie 1st post here. Trying to find the most efficient way to copy a file to a different directory and rename it with a date stamp extension. Looking to accomplish this with one command if possible.
File = make_file
Full path: /home/user1/bin/scripts/make_file
I would like to move it to the following directory: /home/user1/bin/scripts/archive/
I'm trying to find out how to use command substitution along with the date command so that when I copy the file to the archive directory it gets renamed with a time-stamp extension. It should look something like "make_page_12:00:00-24-10-2010". I've tried a few different combinations using the cp and mv commands but can't seem to get it to work the way I want.
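A single command matching that pattern (the timestamp format copies the example; colons are legal in Linux file names, though they can confuse some tools):

cp /home/user1/bin/scripts/make_file "/home/user1/bin/scripts/archive/make_file_$(date +%H:%M:%S-%d-%m-%Y)"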
I have a very large directory with probably millions of small files in it. It's taking forever to run ls on the directory.
Is there an easy script that I can run to split the directory into smaller ones, based on the prefixes of the filenames. My goal is to wind up with something similar to what the Debian archives' pool directory looks like.
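A sketch that splits on the first character of each name, loosely like Debian's pool layout (pool as the target directory name is an assumption):

cd /path/to/hugedir
for f in *; do
    [ -f "$f" ] || continue
    mkdir -p "pool/${f:0:1}"
    mv -- "$f" "pool/${f:0:1}/"
done

With millions of entries the glob expansion itself is slow; find piped into a while read loop avoids holding the whole list in memory at once.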
I'm trying to make a script that gives one output if a directory in /home is older than one month, and another if the directory is less than one month old. I looked around and saw that the creation date for directories isn't stored, or at least I couldn't find it. How is this possible to do, then?
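Since most Linux filesystems don't store creation time, a common stand-in is the modification time; a sketch (30 days approximates a month, the path is an example):

d=/home/somedir
if [ -n "$(find "$d" -maxdepth 0 -mtime +30)" ]; then
    echo "older than one month"
else
    echo "less than one month old"
fi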
Recently I have been trying to compare rsync speeds for a single file (a 2.4GB iso) versus a directory (900MB of files). When I run rsync for the single file, the speed I get averages 50MBps. However, for the directory, the average speed is 10MBps. Is there any reason behind this? I tried to Google it but was unable to get the concept.
I'm pretty new to bash scripting, but I really want to wrap my head around it. What I'm trying to do is: from directory "A", go into all subdirectories and rename all files within incrementally according to the directory name. So:
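One reading of that, as a sketch (files in each sub-directory become dirname_1, dirname_2, ...; files are assumed to have extensions):

cd /path/to/A
for d in */; do
    name=${d%/}
    n=1
    for f in "$d"*; do
        [ -f "$f" ] || continue
        mv -- "$f" "${d}${name}_${n}.${f##*.}"   # keep the original extension
        n=$((n+1))
    done
done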
I have hundreds of MTS and AVI files since 2000 and would like to rename them in the following manner based on the date created: DD-MMM-YYYY HH.MM.SS_X, where X begins at 1 and increments by 1 if there are duplicate date/time-stamped videos.
Ex: 19-Nov-2002 08.12.30.avi, 19-Nov-2002 08.13.30_1.avi and 19-Nov-2002 08.13.30_2.avi
Someone previously wrote the following script for me, and it works great for photos. It uses EXIV2 to get the image date created info. I have tried to understand the script, but am struggling. The video files I have can use the date modified since I have not modified them since I filmed them.
#!/usr/bin/env python
import os
import stat
import pyexiv2
import time

directory = '/home/david/Desktop/test'
[Code].....
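For the videos, the same idea works from the shell using the modification time; a sketch (the directory is the one from the script, and the _X suffix handling follows the stated convention):

cd /home/david/Desktop/test
for f in *.MTS *.AVI *.mts *.avi; do
    [ -f "$f" ] || continue
    stamp=$(date -d "@$(stat -c %Y "$f")" '+%d-%b-%Y %H.%M.%S')
    ext=${f##*.}
    new="$stamp.$ext"
    n=0
    while [ -e "$new" ]; do            # add _1, _2, ... on collisions
        n=$((n+1))
        new="${stamp}_${n}.${ext}"
    done
    mv -n -- "$f" "$new"
done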
I'd like to copy a file, say widgets/water.txt, to all subfolders in the folder widgets using a single command. So if the folder widgets has 10 subfolders like widgets/blue, widgets/green, etc. I'd like to copy water.txt to all of them with one command.
I tried the commands
cp water.txt ./*/water.txt
cp water.txt ./*/
However, these don't seem to work. The latter gives 'cp: omitting directory' errors.
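cp accepts only one destination per invocation, which is why both forms fail; the usual fix is a loop (run from the folder that contains widgets):

for d in widgets/*/; do
    cp widgets/water.txt "$d"
done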
I am trying to write a very simple script that will go to every subdirectory of a single directory and run a command (let's call it make_ndx). I know I could write this the long way in a text document with something like:
cd /"the directory"/"the 1st subdirectory" make_ndx cd .. cd "the 2nd subdirectory" cd ..
Alternatively, I also tried: for i in 'find /path/somemorepath -type d -mindepth 1'; do cd $i; make_ndx -f *.gro; done, which returns the error cd: find: no such file or directory. But if I run the find command by itself to test whether I am calling the right directories, it gives me exactly the output I am looking for. Any ideas? Should I just write the find results to a file and loop through the contents of the file (which seems a little like overkill), or am I just making a simple typographical mistake that I'm not seeing?
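It is most likely the quotes: 'find ...' in single quotes is a literal string, not command substitution, so the loop's first word is the word find itself, hence cd: find. A sketch with proper substitution and quoting (make_ndx and the path are from the question):

while IFS= read -r d; do
    ( cd "$d" && make_ndx -f *.gro )   # subshell, so we return automatically
done < <(find /path/somemorepath -mindepth 1 -type d)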
Is there a way to install Ubuntu with up-to-date versions of all packages right away? To clarify: with the normal LiveCDs, in order to install an up-to-date Ubuntu Lucid, I have to download a 700 MB LiveCD, install Ubuntu, and then use the Update Manager (or apt-get) to upgrade all outdated packages, which by now should be another roughly 300 MB. Old versions of SUSE Linux had the option of downloading a ~40 MB installer ISO which did not contain any packages itself, but would download and install the most recent versions of all necessary packages.
Is there such a facility for Ubuntu as well? Or a way of using an outdated Ubuntu LiveCD (e.g. Lucid Beta 1) to still install an up-to-date system in a single pass? I am *not* talking about netboot images such as netboot.me or boot.kernel.org, which AFAIK will download the full normal Ubuntu ISO during boot, so that I would still have to upgrade the system afterwards.
I have recently joined an 11.04 server to an AD and want to configure home directories based on group membership for all AD users that log in. Basically, I want one home directory for "Domain Users" and another for "Domain Admins".
Sequentially number files based on date modified (rename cli)
I'm almost done with a larger script which takes all the pictures in a folder, converts them to video, and emails it to me. Everything worked fine until I realized the picture filenames didn't always start at 1, and then ffmpeg chokes.
I have a bunch of files in a folder which I need to rename to:
I don't want to install any additional packages and I'd like this to run in a single command if possible.
If not possible, then a bash script would work too.
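A sketch that needs nothing beyond coreutils, numbering by modification time (the .jpg extension and the zero-padding are assumptions, since the target naming was cut off above):

cd /path/to/pictures
n=1
ls -tr *.jpg | while IFS= read -r f; do
    mv -n -- "$f" "$(printf '%04d.jpg' "$n")"
    n=$((n+1))
done

ls -tr sorts oldest first; this breaks on file names containing newlines, which is usually a safe assumption for camera output.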