General :: Copy Wildcard Files From Directories For Matching
Feb 11, 2011
I have 2 massive duplicate dirs of the same format as below:
dir1
subdir1
file1
subdir2
file1
subdir3
file1
...
Dir2 is the same, but it has some newer files of the same names. I want to copy every file1 from dir2 into the same-named file and folder in dir1. So basically something like:
cp -pr bkpDir1/*/*-big.gif Dir2/*/*-big.gif
This works for singular cases:
cp -pr bkpDir1/uniquesubdir/*-big.gif Dir2/uniquesubdir/*-big.gif
But not for wildcards:
cp -pr bkpDir1/subdir*/*-big.gif Dir2/subdir*/*-big.gif
Anyway, the aim is to do the first cp above. I have tried a few options using find, and while putting together an example I stumbled upon a way that worked, run from inside dir2:
find */*-big.gif | xargs -i cp -rp {} ../dir1/{}
Sure there are better ways also...
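For the record, a hedged alternative that also handles the nesting in one pass, assuming GNU find (which substitutes {} even inside a larger argument) and assuming dir1 already contains the same subdirectories; run it from inside dir2:
find . -name '*-big.gif' -exec cp -p {} ../dir1/{} \;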
I have a question which has been answered in part many times, but nothing I found relates completely to my situation. I am sure some people will say RTFM, but believe me I did, and searched as well, to no avail. I want to copy files created within the last hour in one directory into another one. The problem is that the directories are at different levels in the directory tree, so the absolute paths are different. But I want to keep the relative path the same.
I want to copy new files from /mnt/path_to_webdav/user to /home/user. So if there is a new file /mnt/path_to_webdav/user/doc/xy.txt, I want it copied to /home/user/doc/xy.txt. Also, if there is a new directory, say /mnt/path_to_webdav/user/newdir, I want a new directory created at /home/user/newdir with all the files in it, should there be any. I can do find with exec and copy all the files into one directory, but this is not what I want. How do I preserve the relative path and get the files copied into their corresponding directories?
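A minimal sketch of one way to do this, assuming GNU find and cpio are available: run from the source directory so the paths stay relative, and let cpio's pass-through mode recreate them at the destination:
cd /mnt/path_to_webdav/user
find . -mmin -60 | cpio -pdm /home/user
Here -mmin -60 selects entries modified within the last 60 minutes, and cpio -pdm copies them under /home/user, creating directories (-d) and preserving modification times (-m) as it goes.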
I have been searching for a solution to the following problem:
When my distro of choice updates the Firefox web browser, the directory name is '/usr/lib/firefox-<version>'. The problem here is that the directory name is dynamic by nature and doesn't allow a simple static solution, e.g. 'cp -rf /usr/local/files/bookmarks.html /usr/lib/firefox/defaults/profile'.
The same quandary applies when adding extensions, changing prefs, etc. I have looked at the following commands: find, sed, xargs, grep, awk, fprint. Unfortunately my grasp of syntax and programming is rudimentary at best.
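Because the shell expands globs before cp ever runs, the version suffix need not be known in advance. A hedged sketch, assuming only one firefox-* directory is installed at a time:
for d in /usr/lib/firefox-*/defaults/profile; do
    cp -f /usr/local/files/bookmarks.html "$d"
done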
I am in need of Linux help. I am at college and need this backup/restore script to pass the final part of an assessment. I require a backup script that will not only back up but also restore files to the relevant directories. E.g. users are instructed to store all word-processor files in a directory named wp. So I need to create a backup directory with three directories within it and some files within those three directories, and then back them up or restore them. I know I should do this myself; I have been trying to find and understand the relevant info for the last few days and came up with zero.
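Without doing the assignment outright, a minimal sketch of the shape such a script can take, using tar and three hypothetical subdirectories (wp, spread, db) under $HOME:
#!/bin/bash
# usage: ./bkp.sh backup|restore   (bkp.sh is a hypothetical name)
case "$1" in
    backup)  tar -czf "$HOME/backup.tar.gz" -C "$HOME" wp spread db ;;
    restore) tar -xzf "$HOME/backup.tar.gz" -C "$HOME" ;;
    *)       echo "usage: $0 backup|restore" >&2; exit 1 ;;
esac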
I am writing a script. The requirement is: all the file types are stored in one directory, and from there we need to separate them into different directories based on file type.
For example, a directory (anish) contains 5 items of different types: 1 directory, 2 .txt files, and 2 .sh files.
The requirement is that the directory is moved to one new directory (dir) given in the script, the 2 .txt files are moved to another new directory (test) given in the script, and the 2 .sh files are moved to another new directory (bash) given in the script; finally the directory anish should be empty. How is this possible using a bash script?
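A hedged sketch of that logic in bash, assuming anish and the three target directories all live under the current directory:
#!/bin/bash
mkdir -p dir test bash
for f in anish/*; do
    if [ -d "$f" ]; then
        mv "$f" dir/              # the subdirectory goes to dir/
    elif [[ $f == *.txt ]]; then
        mv "$f" test/             # .txt files go to test/
    elif [[ $f == *.sh ]]; then
        mv "$f" bash/             # .sh files go to bash/
    fi
done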
I have a requirement to list files using the find command. My folder contains a list of files without extensions. I need to exclude only ABC.123.* type files and list the others. Files containing MNO should not be excluded even though they contain this pattern. And if a file ends with .txt or .doc it should not be excluded either; that is, ABC.123.1234.txt should not be excluded. But I am not getting what is required. Can anyone please let me know if I am doing anything wrong? As per my requirement I cannot use grep or the -regex/-iregex options of the find command.
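Within those constraints, one reading that needs only -name tests (a sketch, not verified against the real file list): drop a file only when it matches ABC.123.* and does not end in .txt or .doc:
find . -type f ! \( -name 'ABC.123.*' ! -name '*.txt' ! -name '*.doc' \)
ABC.123.1234.txt survives the filter because of its .txt suffix, and names that merely contain the pattern somewhere in the middle never match -name 'ABC.123.*' at all.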
I have a file with wildcard patterns: ./include/*, ./src/*, etc. From the current directory I would like to recursively get the list of files that do not match these patterns.
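A hedged sketch, assuming the patterns sit one per line in a file (patterns.txt is a hypothetical name): build one ! -path test per pattern and hand the whole list to find:
args=()
while IFS= read -r pat; do
    args+=( ! -path "$pat" )    # e.g. ! -path './include/*'
done < patterns.txt
find . -type f "${args[@]}"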
I am writing a shell script that finds all files named <myFile> in a directory <dir> or any of its subdirectories, recursively. I also need to take care of symbolic links that may form cycles, to avoid infinite loops. I am not supposed to use the find command for this.
I started writing the code but got stuck. I thought recursion might be a smart way, but it's not working.
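A sketch of one way to structure the recursion, assuming bash 4 (for the associative array) and GNU stat: remember each directory's device:inode pair so a symlink cycle is only entered once:
#!/bin/bash
# usage: ./search.sh <dir> <myFile>   (search.sh is a hypothetical name)
shopt -s dotglob nullglob       # include hidden entries; empty dirs expand to nothing
declare -A seen                 # device:inode pairs already visited

search() {
    local dir=$1 name=$2 key entry
    key=$(stat -c '%d:%i' "$dir") || return
    [[ ${seen[$key]} ]] && return    # been here before: break the cycle
    seen[$key]=1
    for entry in "$dir"/*; do
        [[ ${entry##*/} == "$name" ]] && printf '%s\n' "$entry"
        [[ -d $entry ]] && search "$entry" "$name"
    done
}

search "$1" "$2"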
What is the best and simplest way to compare two directory structures without actually comparing the data in the files? This works fine: diff -qr dir1 dir2. But it's really slow because it's comparing the files too. Is there a switch for diff, or another simple CLI tool, that does this?
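One common trick (a sketch): diff just the sorted name lists, so no file contents are ever read:
diff <(cd dir1 && find . | sort) <(cd dir2 && find . | sort)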
From this directory, I want to know how I could use grep to display files based on part of their filename - for example those starting with "Account" or those ending in ".sh".
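Anchoring the pattern is enough here, for example:
ls | grep '^Account'     # names starting with "Account"
ls | grep '\.sh$'        # names ending in ".sh"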
I am attempting to copy a set of sub folders from their multiple parent directories to a new location.
For example, I have three folders to copy:
I would like them to be copied to:
In actuality there are many folders besides folder1, folder2, folder3, and no numerical order exists. So the folder named 'photos' would be copied into a folder named after its parent in a new location. I would need this to occur for all folders in the '/home/user' directory.
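A hedged sketch, with /new/location standing in for the unspecified destination: loop over every parent that holds a photos folder and reuse the parent's name on the other side:
for d in /home/user/*/photos; do
    parent=${d%/photos}            # strip the trailing /photos
    parent=${parent##*/}           # keep just the parent's name, e.g. folder1
    mkdir -p "/new/location/$parent"
    cp -r "$d" "/new/location/$parent/"
done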
I want to copy all directories, files, and hidden files and hidden directories with one command. I want these items to replace any same items in the target directory.
I have tried several things, such as:
cp -r *
cp -aR *
but I only seem to get visible files and directories. Obviously, I am missing something. (A brain, probably....)
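The missing piece is that '*' skips dotfiles by default. Two hedged ways around that:
cp -a /source/. /target/     # copy the directory's contents, dotfiles included
shopt -s dotglob             # or (bash) make '*' match hidden entries too
cp -a /source/* /target/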
I am trying to combine find, grep, and wc into a command that finds matching files, then prints each filename and the count of a specific pattern in that file. Here is my best (non-working) attempt so far:
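A sketch of one combination that works, with '*.log' and PATTERN standing in for the real file test and pattern: grep -o prints one line per match, so wc -l yields a per-file occurrence count:
find . -type f -name '*.log' -exec sh -c '
    for f; do
        printf "%s: %s\n" "$f" "$(grep -o "PATTERN" "$f" | wc -l)"
    done
' sh {} +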
Passing a wildcard array to cp: I know it can be done but I can't figure it out. I've got 3 files that all start with the word somefile but have 3 different extensions:
somefile.conf somefile.dat somefile.py
They're in a folder with a bunch of other files, and I just want to copy them to a different location. I remember it being something like cp somefile.[py conf dat] /somewhere, but this isn't working. I've searched the net and can't find it with the keywords I'm using.
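The half-remembered syntax is brace expansion, written with commas and no spaces:
cp somefile.{conf,dat,py} /somewhere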
Can I dereference a wildcard in a command? For example, if I want to create a file with the MD5 hashes of compressed versions of the files in a directory: gzip -rc ./source/* | md5sum -b - >> hash.txt. This command gives me a file with just one hash rather than one per file. I would like a file with the hashes and filenames for every file gzip compresses.
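A hedged sketch: compress and hash each file separately, then append the filename yourself (swap the glob for find if ./source has subdirectories):
for f in ./source/*; do
    [ -f "$f" ] || continue
    printf '%s  %s\n' "$(gzip -c "$f" | md5sum | cut -d' ' -f1)" "$f"
done >> hash.txt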
I would like help with modifying the following content:
toolbox/perl/man/man3/ExtUtils::Command.3::Command.3 differ
toolbox/perl/man/man3/ExtUtils::Command::MM.3::Command::MM.3 differ
I would like the content to be changed to:
toolbox/perl/man/man3/ExtUtils::Command.3
toolbox/perl/man/man3/ExtUtils::Command::MM.3
I was not sure how to tell sed what to look for. I tried the following, but it did not work: sed -i 's/::* differ//g' mandiff.log
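Since the unwanted tail just repeats the final ::Name.3 component, a back-reference can match it without spelling the names out (a sketch, assuming every line has this shape):
sed -i 's/\(::.*\.3\)\1 differ$/\1/' mandiff.log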
I'm totally new to Linux and this website. I was wondering if anyone could help me create a shell script that merges two files from two different directories and puts the merged file in a third, different directory. The merged file would need to eliminate duplicates and sort the contents.
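sort does both jobs on its own; a minimal sketch with hypothetical paths:
sort -u /dir1/file1 /dir2/file2 > /dir3/merged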
I'd like to remove all directories of a certain depth that don't contain .txt or .log files -- is this possible? So far I have: find ~ -mindepth 3 -maxdepth 4 -type d -exec rm -r '{}' \; Is it possible to add "only if the directory doesn't contain .txt and/or .log files"? Or do I have to start learning Perl to do that?
For example:
dir 1: hello.txt runme.sh
dir 2: runme.sh oct12.log
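A hedged sketch, assuming GNU find: test each candidate directory for at least one .txt or .log anywhere beneath it before removing anything (-depth so children are decided before their parents):
find ~ -mindepth 3 -maxdepth 4 -depth -type d -exec sh -c '
    if [ -z "$(find "$1" \( -name "*.txt" -o -name "*.log" \) -print -quit)" ]; then
        rm -r "$1"
    fi
' sh {} \;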
I have two table files with x (1st column) and y (2nd column) coordinates and an intensity (3rd column). I need to match the two tables and divide the intensities in the 3rd column at corresponding coordinates. The problem is that the tables are not the same size, and I want to ignore lines that appear in only one of the two files.
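A sketch in awk, assuming whitespace-separated columns, nonzero intensities in the second file, and table1/table2 as hypothetical filenames: load the first table keyed on (x,y), then divide wherever the second table has the same coordinates; unmatched lines simply fall through:
awk 'NR==FNR { a[$1,$2] = $3; next }
     ($1,$2) in a { print $1, $2, a[$1,$2] / $3 }' table1 table2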
If I execute the following command: cp -R /myfiles /mydestination
If myfiles contains several sub-directories and files, in what order will they be copied? For example, directories might be named 0123a, 9993c, myfolder, xfolder.
They are not copied in alphabetical order OR in date order OR in the order they appear when using a standard ls command as far as I can tell, so what actually does determine the order?
Edit: I am trying to determine the order the cp command uses so I can tell how far along my copy got before it stopped. For example, I was hoping to be able to determine that it copied 3 of the 4 directories successfully.
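cp processes entries in the raw order readdir() hands them back, which depends on the filesystem rather than on names or dates. ls can display that same unsorted order:
ls -U /myfiles     # -U: do not sort; list in directory order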
The rm command man page discusses removing files or directories recursively. So what is meant by deleting a file or directory recursively? And what are some reasons for doing so?
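Recursively means descending into the directory and deleting everything beneath it as well, e.g.:
rm -r project/     # removes project plus every file and subdirectory under it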
I then installed a new version of Ubuntu 10.04 from disk and copied the files in /home from the CD to the hard drive. I am able to open, view, etc. all the files in most directories except those in /home/documents. There are text files created by gedit, OOWP files, and several PDF files. I cannot open or view these files: for gedit and PDF files I get an error message, "Don't recognize file type" (the file is clearly marked PDF). The OOWP files look like rows of 'high bits', and a dialogue box opens giving me the options to change the character set, font, language, or paragraph break.
I have 60+ directories, each containing multiple .doc files. I need to move them to a single directory and keep their file names intact. I don't think cp will do that without listing all the file names. I was thinking of something like: cp -r /dir/*.doc /newdir. Or should I use a combo like find -type *.doc | cp /newdir?
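Two hedged sketches, assuming /newdir already exists: a glob suffices if the .doc files sit exactly one level down, while find with GNU mv's -t option handles arbitrary depth (beware that name collisions overwrite silently):
mv /dir/*/*.doc /newdir/
find /dir -type f -name '*.doc' -exec mv -t /newdir {} +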