A fairly small issue has been bothering me for some time now. I am trying to search recursively for some .jpg files (using find -name). This is a Red Hat environment with bash.
The search itself works, but I need to copy ALL of the matches into a separate folder AND keep the directory structure intact after copying.
For example, if I find a JPG file under /home/usr/new/1/, then the destination also needs to be /test/old/new/1/.
At the moment I am simply putting all the files directly under /test/old/, and I can't get the trailing /new/1/ part of the path created under /test/old/.
I understand this could well be done with a while loop or an if/else, but if someone can just guide me with a hint, I would be really grateful.
I will complete the rest of the steps myself; I am asking here because I am not yet comfortable with shell/bash scripts, though I plan to get really good at them over the next couple of months.
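As a hint rather than a full solution: GNU cp (present on Red Hat) has a --parents flag that recreates the source path under the destination. A minimal sketch, assuming the files live under /home/usr and should land under /test/old:
Code:
cd /home/usr
find . -name '*.jpg' -exec cp --parents {} /test/old/ \;
Because find runs from /home/usr with a relative path, a match like ./new/1/pic.jpg is copied to /test/old/new/1/pic.jpg, and the intermediate folders are created automatically.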
The directories look like dir1/subdir1/subdir2, etc., and at the lowest level they contain all of these jpegs that I need. The problem is that I only need some of them. They're named like this:
pic1.jpg pic1_med.jpg pic1_small.jpg pic2.jpg pic2_med.jpg etc.
I want to just grab the ones without the size suffix and copy them all to another set of folders, while preserving the directory structure. The numbering all starts at 1 for each low level subdirectory, so I think that the directory structure is the only way to not get them mixed up.
I know that cp has a recursive option -r but how do I just extract the ones without the underscore? And then how do I preserve the directory structure when I move them over?
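One hedged approach: let find filter out the suffixed names and GNU cp's --parents flag rebuild the directory structure. Run it from the top of the source tree; /path/to/dest is a placeholder:
Code:
# ! -name '*_*' drops pic1_med.jpg, pic1_small.jpg, etc.
find . -name '*.jpg' ! -name '*_*' -exec cp --parents {} /path/to/dest/ \;
Each match is copied together with its relative directory path, so pics from different low-level subdirectories never collide.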
I want to copy all files with the name XYZ* into one folder. The problem is that the files are in different subfolders and that not even the depth of the folder structure is the same for all files. Luckily, at least each file has a unique name.
Of course, I thought about the cp command but I guess the depth of the folder structure needs to be the same for this to work.
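Since each file name is unique, the folder depth doesn't actually matter: find can walk arbitrary depths and flatten everything into one destination (both paths are placeholders):
Code:
find /path/to/source -type f -name 'XYZ*' -exec cp {} /single/folder/ \;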
I moved to Mac OS X recently and bumped into the "feature" where copying files from an external drive resets the file modification timestamp to the current date (which Windows does not), causing a disaster for my 10+ years of backup work files, where the date is important. Before I learned how to avoid that (e.g. using the -p "preserve" flag of the cp command), I had in the meantime added many more files to my new Mac hard drive, as well as updating existing old files.
I have a backup external hard drive with all my old data and proper modification dates. I have a Mac hard drive with reset file modification dates (one or two particular days). The Mac hard drive has all the "true" and "current" file contents, with files modified and added. I need to copy all the original files from the external hard drive, preserving file metadata (really only the modified date), but ONLY overriding a file on the new internal Mac hard drive IF:
- the file contents (md5 or whatever) are the same, or
- the file was updated after the day (which of course I can see on all files) on which the original disastrous copy was performed (implying the file is new or modified).
Ensure the copy leaves all the new and modified files completely intact on the Mac internal hard drive. No prompting/stopping of the copy of any kind (i.e., not verbose) is required, but it is OK. Recursive copy: obviously I would like to copy ALL files, folders and subfolders found in export.
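One cautious sketch under stated assumptions: the external drive is mounted at /Volumes/Backup and the Mac copy lives at $HOME/work (both placeholders). It compares contents and, only where they are byte-identical, restores the old modification date with touch -r; any file that differs (i.e. was modified or added after the bad copy) is never touched, and nothing is overwritten at all:
Code:
#!/bin/bash
SRC=/Volumes/Backup    # external drive (assumed path)
DST="$HOME/work"       # internal Mac copy (assumed path)
find "$SRC" -type f | while IFS= read -r f; do
    rel=${f#"$SRC"/}
    dst="$DST/$rel"
    [ -f "$dst" ] || continue        # files that only exist on the Mac stay intact
    if cmp -s "$f" "$dst"; then
        touch -r "$f" "$dst"         # identical contents: restore the original mtime
    fi
done
This sidesteps the override logic entirely: identical files get their dates back, while modified and new files keep their current contents and dates.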
I have a system where the permissions of many files are messed up. I have another system that has the same files; if I put that hard drive in, is there a way to recursively set the permissions of each file to those of the corresponding file in the other directory tree, without simply overwriting the files?
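If both trees are mounted at once, GNU chmod can copy each file's mode from a reference file; a sketch with made-up mount points, /mnt/good for the intact tree and /mnt/broken for the damaged one:
Code:
cd /mnt/good
# GNU find substitutes {} even inside a larger argument
find . -exec chmod --reference='{}' "/mnt/broken/{}" \; 2>/dev/null
chown --reference works the same way if ownership is also wrong; errors for files that exist on only one side are discarded by the redirect.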
I would like to find the command that copies my Eclipse options to another workspace...
It doesn't work, and writing the .metadata/.plugins path manually could be a source of error. Surely it would be a better idea to create a complete script?
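For what it's worth, workspace preferences normally live in .metadata/.plugins/org.eclipse.core.runtime/.settings, so a script only needs that one path; a sketch where both workspace locations are placeholders:
Code:
OLD=/path/to/old_workspace    # placeholder
NEW=/path/to/new_workspace    # placeholder
cp -a "$OLD/.metadata/.plugins/org.eclipse.core.runtime/.settings" \
      "$NEW/.metadata/.plugins/org.eclipse.core.runtime/"
Run it while Eclipse is closed, otherwise Eclipse may overwrite the copied settings on exit.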
I had a situation in which the path of the file to be copied was written in another file, and I had to copy it using a shell script. I can use cp $(cat /home/robert/location.txt) /media/sda1 in a normal Linux shell, but I am using a buildroot script where $(cat /home/robert/location.txt) evaluates to nothing; it is just blank.
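If the restricted shell is the problem, one sketch avoids command substitution entirely by reading the path with the shell's built-in read:
Code:
# Plain Bourne-compatible; no $(...) needed
while IFS= read -r path; do
    cp "$path" /media/sda1
done < /home/robert/location.txt
Old-style backticks (cp `cat /home/robert/location.txt` /media/sda1) are also worth a try, since some minimal shells accept them where $(...) fails.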
I'm facing a little trouble copying .txt files (only) from a directory and its subdirectories to another directory. I don't think cp -R will do what I want here, since I don't want to copy the subdirectories themselves.
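A sketch using find, which descends into the subdirectories but copies only the matching files, flattened into the destination (both paths are placeholders):
Code:
find /path/to/src -type f -name '*.txt' -exec cp {} /path/to/dest/ \;
If only the top directory and one level below should be searched, GNU find's -maxdepth 2 restricts the descent.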
I am trying to create a simple bash script to rsync some folders within a directory structure. I am using wildcards in the rsync source directory structure, but my command always fails. I believe it is the way I am using wildcards within my for loop. Here is my command:
Code:
for seq in `cat test.txt` ; do rsync -nvP /folder/folder/folder/folder/folder/**/$seq /folder/folder/folder/ ; done

This always fails, whereas if I do an ls to test the path, it always works.
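One likely culprit: in bash, ** is not recursive unless the globstar option is enabled (bash 4+); without it, ** behaves like a plain * and never matches across directory levels. A sketch of the same loop with that fixed (the /folder placeholders are kept from the question):
Code:
shopt -s globstar                 # let ** match across directory levels (bash 4+)
while IFS= read -r seq; do
    rsync -nvP /folder/folder/folder/folder/folder/**/"$seq" /folder/folder/folder/
done < test.txt
Reading test.txt with a while loop also avoids the word-splitting surprises of for seq in `cat test.txt`.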
I jumped into a Linux class in college with only 3 weeks left in the course. I thought I would be able to catch on, and, go figure, it didn't exactly happen that way. I was given an assignment to do, and I am so far lost it isn't even funny. I need to create a directory structure, set up file security, create a step-by-step instruction manual on how to copy/delete said files, and create a guide to common Linux commands. How would I create these files as root and share them with the other users? And where can I find a list of common commands and their functions?
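For the create-and-share part, a minimal sketch (the group and path names are made up): a group-owned directory with the setgid bit keeps files accessible to everyone in the group:
Code:
groupadd students                 # made-up group name
mkdir -p /srv/classwork           # made-up shared path
chgrp students /srv/classwork
chmod 2775 /srv/classwork         # setgid bit: new files inherit the group
For the command guide, the man pages themselves (man cp, man rm, man chmod) are the canonical list of commands and their functions.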
I'm working with a dual-boot laptop running Ubuntu 10.0/Windows 7 and a Debian 5 VPS, though the OSes shouldn't have much impact on my question.
What I would like to do is create an HTML page that I can upload to my VPS which lists all of the files/folders on my local 2TB hard drive (specifically media such as movies, music, TV shows...). The media obviously will not reside on the server, but I would like to at least have a list which will allow me to select, for instance, a band/artist so that it redirects me to the albums in the directory below.
Ultimately, I'm looking for Open Directory Browsing without actually having the media on my server. I have been attempting to create something to this effect using lynx, however, I'm not sure if it can be done with this command or if it's even possible for that matter.
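Rather than lynx, the tree utility can emit a static HTML index directly; a sketch, assuming tree is installed and the media lives under /media/2TB (a placeholder path):
Code:
# -H sets the base href for the links, -L limits the depth, -o names the output file
tree -H '.' -L 3 -o index.html /media/2TB
The resulting index.html is a plain static page you can upload to the VPS; the links themselves will be dead (the media isn't on the server), but the browsable listing is exactly what you describe.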
I recently replaced my Windows fileserver with one running Ubuntu. One thing I've noticed (which is annoying) is that when I copy files between two Samba shares from my Windows machine, it copies the file through my PC to the new destination. Between Windows shares it just did some sort of local copy (i.e. it took about 2 seconds) rather than 3-4 minutes. Is this the normal behaviour, and is there any way around it on Linux?
I am running Ubuntu Lucid x64 as a fileserver that shares its files via SFTP, NFS and Samba. Currently the hard disks are configured to go to standby if they are not needed. This works perfectly as long as no one browses the shares and my HTPC is not running: that machine repeatedly looks through the shares for new music or movies. In other words, my problem is that the disks are spinning up a lot more often than they should have to, and the spin-up time delays the response while browsing. Since the machine has a lot of unused RAM, I want to tell the kernel to keep the directory structure in memory; that way the disks would not need to spin up every time someone browses through the directories.
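The closest knob is vm.vfs_cache_pressure, which biases the kernel toward keeping the dentry/inode cache (i.e. the directory metadata) in RAM; a sketch:
Code:
# Lower values make the kernel hold on to cached directory metadata longer.
# The default is 100; 0 would never reclaim it and can exhaust memory.
sysctl vm.vfs_cache_pressure=10
echo 'vm.vfs_cache_pressure = 10' >> /etc/sysctl.conf   # persist across reboots
This only helps while the cache stays warm: a large read can still evict it, and the first browse after boot will always spin the disks up.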
I have recently purchased an external hard drive in order to back up my home partition. In my PC I have a "1.5T" drive with several partitions on it, containing OSes and the home partition. The home partition is 1.3T according to df; the external drive contains one partition that spans the entire disk, and df reports it as 1.4T in size. Both partitions are ext3. When I use rsync to copy files from the home partition to the external partition, the external disk becomes full, despite the destination - supposedly - being larger than the source. I don't understand why copying files from one partition to a slightly bigger partition should need more space than on the source partition. Does anyone know what is happening?
Details: I created the partition on the external drive with gparted; gparted reported that it already had several gigabytes of used space immediately after the partition's creation - I thought at the time that this must be normal. The home partition contains many files of all sorts, including lots of big audio and video files. If you are wondering: for all my important files this external disk is only a secondary backup, as they are also backed up to the "internet".
These are the mount points:

/mnt/tmp/      : home partition, /dev/sdb6
/mnt/external/ : external partition, /dev/sdc1
I used rsync to copy the files, I know there are more efficient ways to do this, but I wanted to use the same command that I will subsequently run to sync the backup.
Next I tried adding the --sparse switch, as I was wondering if the problem might come from sparse files. I don't know, however, whether rsync would go back and shrink an already-copied sparse file just because the switch was added and the command re-run. I also added --one-file-system, for good measure. Here is what I ran next:
rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
rsync: write failed on "abcd.avi": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(302) [receiver=3.0.6]
Looking at the destination after a partial copy seems to indicate that the problem is not symbolic links being "expanded". I have not checked the source filesystem for sparse files, nor the destination to see if these files could be larger there, as this does not seem trivial.
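Checking for sparse files is less painful than it sounds: GNU find can print each file's "sparseness" (allocated blocks versus apparent size), so a sketch against the source would be:
Code:
# Sparseness below 1.0 means fewer blocks are allocated than the file size implies
find /mnt/tmp -type f -printf '%S %p\n' | awk '$1 < 1.0' | head
Files listed there would genuinely need more space on a destination that materialises the holes, which is what --sparse on a fresh copy avoids.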
Here is some additional info:

$ df /mnt/tmp/
Filesystem     1K-blocks       Used Available Use% Mounted on
/dev/sdb6     1415342836 1414173740    369096 100% /mnt/tmp
I've been looking high and low for a utility program or perl script or something that can take a linux directory structure as input and convert it to MS-DOS 8.3 directory structure.
The purpose of this is to conform to the path format that is expected on my rather old Creative Zen Neeon MP3 player for m3u play lists.
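As a starting point before reaching for Perl, here is a rough bash sketch of the name-mangling step only: uppercase, strip characters 8.3 disallows, truncate. It assumes every name has an extension and does no collision handling (real 8.3 generation appends ~1, ~2, ... to clashing names):
Code:
to83() {
    name=${1%.*}; ext=${1##*.}
    name=$(printf '%s' "$name" | tr 'a-z' 'A-Z' | tr -cd 'A-Z0-9' | cut -c1-8)
    ext=$(printf '%s' "$ext" | tr 'a-z' 'A-Z' | tr -cd 'A-Z0-9' | cut -c1-3)
    printf '%s.%s\n' "$name" "$ext"
}
to83 'my favourite song.mp3'    # prints MYFAVOUR.MP3
Applied to each component of a path in turn, this gives the short names an 8.3-only player expects in its .m3u files.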
What is the correct way to copy a file or directory to another directory? In the past I was able to press the left and right mouse buttons together, but it didn't work all the time; it is hard to press the two buttons at the same time. With Fedora 14 it does not work at all.
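From a terminal the reliable way is plain cp; a quick sketch (file and directory names are just examples):
Code:
cp somefile.txt /path/to/destination/       # copy a single file
cp -r somedirectory /path/to/destination/   # copy a directory and its contents
cp -i somefile.txt /path/to/destination/    # -i asks before overwriting anything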
I'd like to copy a file, say widgets/water.txt, to all subfolders in the folder widgets using a single command. So if the folder widgets has 10 subfolders like widgets/blue, widgets/green, etc. I'd like to copy water.txt to all of them with one command.
I tried the commands
Code:
cp water.txt ./*/water.txt
cp water.txt ./*/
However these don't seem to work. The latter gives 'cp: omitting directory' errors.
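cp accepts only one destination (the last argument), which is why both attempts fail: in the second command every expanded directory except the last is treated as a source, hence the 'omitting directory' errors. A loop does what you want:
Code:
# Run from inside the widgets folder; */ matches only subdirectories
for d in */ ; do
    cp water.txt "$d"
done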
I need to recreate in a local folder called /distro/fedora/ the full directory tree (including any hidden files, symlinks, etc.) contained in the .iso file I just downloaded (Fedora-15-i686-Live-Desktop.iso).
I understand I can mount the ISO image using something like this:
mount -ro loop /path/to/image.iso /mnt
but then, what would be the best way to get an exact copy of what I see underneath /mnt into /distro/fedora?
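One sketch: cp -a (archive mode) preserves symlinks, permissions and timestamps, and the trailing /. makes it pick up hidden files at the top level as well:
Code:
mount -o loop,ro Fedora-15-i686-Live-Desktop.iso /mnt
cp -a /mnt/. /distro/fedora/
umount /mnt
rsync -a /mnt/ /distro/fedora/ would do the same job and is handy if the copy is ever repeated.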
Create a copy of the file above and call it commands.sorted. Use the vi command to manually sort this file, i.e. use yy to copy a line, P or p to paste a line, and dd to delete a line. Order the commands with the two lines starting with double quotes first; then list the rest of the commands in alphabetical order.
Anyone have any ideas what he's talking about? Can I copy a file and rename it at the same time, while copying it into the same exact directory again? Not sure what the "two lines" thing means either. I have an email out to him, but it usually takes a long time for him to answer me. I've got a lot of work to do, so every time I get hung up it kills me.
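On the copy-and-rename part: yes, cp copies and renames in one step when you give the target a new name, even within the same directory. A minimal sketch, assuming the original file is simply called commands (the assignment doesn't show its real name):
Code:
cp commands commands.sorted   # copy and rename in one step
vi commands.sorted            # reorder lines with yy, p/P and dd
The "two lines" presumably refers to the two commands in the file whose lines start with double quotes; they go first, and everything else follows alphabetically.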
I've the following file structure that I would like to add to git.
Code:
These are big directories and I don't need them all checked out; I only need the src directory. After I commit the files in /app/src, they must be pushed to a remote site.
If I only want to check out the src directory to work on, is it important to create a special file structure in git? For example, instead of doing git init on the top-level app directory, should I do git init in each subdirectory?
Is it possible to checkout only part of a file structure in git?
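Partial checkouts are possible without splitting the repository: one git init (or clone) at the top of app, plus git's sparse-checkout mechanism, which controls what is materialised in the working tree without affecting what is committed or pushed. A sketch using the classic configuration (available since Git 1.7):
Code:
git config core.sparseCheckout true
echo 'src/' >> .git/info/sparse-checkout
git read-tree -mu HEAD     # re-populate the working tree from the sparse list
Per-subdirectory git init repositories would give you separate histories and make the push-to-remote step harder, so the single top-level repository is usually the better structure.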
I am performing a dry run using rsync between 2 different boxes. While doing that, I want a specific directory x under the destination directory to be ignored for the sync. Please let me know the exact pattern to ignore the directory. The current command I'm using is:

rsync -avnc --delete $LOCAL_DIR $USERNAME@$DESTINATION_IP:$REMOTE_DIR

Under DESTINATION_IP, I want to ignore a particular directory under REMOTE_DIR.
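A sketch using --exclude, where x/ is the directory to skip, given relative to the top of the transfer; excluded paths are neither compared nor deleted by --delete (unless --delete-excluded is also passed):
Code:
rsync -avnc --delete --exclude='x/' "$LOCAL_DIR" "$USERNAME@$DESTINATION_IP:$REMOTE_DIR"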