I am working through many thousands of photographs in many collections, spanning decades. One of the folders is a "Selected_images" folder which should only contain copies of images from the collections.
How can I check that the files in "Selected_images" are all indeed copies and have an identical counterpart somewhere in the collections hierarchy, not necessarily with the same name?
At present I am scanning the output of fdupes -r collection, which is tedious (although, in fact, no photographs should ever be duplicated except in "Selected_images").
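One possible check (just a sketch, assuming GNU coreutils, that the tree is called collection/ as in the fdupes command above, and that Selected_images sits somewhere inside it) is to hash everything outside Selected_images once, then test each selected image against that list:
Code:
#!/bin/bash
# Sketch: report any file in Selected_images that has no content-identical
# counterpart elsewhere in the collections hierarchy. Paths are placeholders.
find collection -type d -name Selected_images -prune -o -type f -print0 \
    | xargs -0 md5sum | awk '{print $1}' | sort -u > /tmp/collection_sums.txt

find collection/Selected_images -type f -print0 | while IFS= read -r -d '' f; do
    sum=$(md5sum "$f" | awk '{print $1}')
    grep -qx "$sum" /tmp/collection_sums.txt || echo "no counterpart found: $f"
done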
What methods are there for finding duplicate files on Linux?
1. How can I find them using just the file name? I've noticed that people often copy their files to another directory, and I want to find out whether there are any files with the same name anywhere on the Linux box.
2. What if I want to find duplicate files based on the contents of the file? For example, with pictures: people first save the files from a digital camera under the default file names, but when they want to give a picture to someone else they rename it. I've used MD5 for this in a Python script, but it takes a long time.
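For the content-based case, one way to cut the time spent hashing (a sketch, assuming GNU find/coreutils and file names without newlines) is to hash only files whose size is shared with at least one other file, since files of different sizes can never be identical:
Code:
#!/bin/bash
# Sketch: content-based duplicate search; only files with a non-unique size
# get hashed, which is usually much faster than hashing every file.
find . -type f -printf '%s\t%p\n' > /tmp/sizes.txt

awk -F'\t' 'NR==FNR {count[$1]++; next} count[$1] > 1 {print $2}' \
    /tmp/sizes.txt /tmp/sizes.txt |
    xargs -r -d '\n' md5sum | sort | uniq -w32 --all-repeated=separate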
I'm asking because I use bash scripts a lot at work and I want to test out fdupes at home. Does fdupes use a similar MD5 scan to find duplicate files?
Is there a way to remove duplicate files from a specific folder through SSH? I've uploaded a lot of flash games to my server and I can see in Webmin's file manager that I have many duplicates. Their names are different, of course.
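If fdupes is installed on the server (or can be installed), it will both find and delete content-identical files with different names over a plain SSH session; these are standard fdupes flags, but test on a copy of the folder first:
Code:
# list duplicate sets (same content, any names) under the games folder
fdupes -r /path/to/flashgames

# interactively choose which copy to keep in each set
fdupes -rd /path/to/flashgames

# non-interactive: keep the first file of each set, delete the rest
fdupes -rdN /path/to/flashgames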
We have a huge number of duplicate files in a folder and I would like some pointers on writing a bash script to create a list of the duplicate files. I've seen examples that check the md5 sum of files... but I don't need that; the file name is enough.
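Since matching on the file name alone is enough here, a minimal sketch (assuming GNU find and names without newlines; /path/to/folder is a placeholder) is to list the base names, sort them, and keep the repeats:
Code:
#!/bin/bash
# Sketch: list file names that occur more than once anywhere under the folder.
find /path/to/folder -type f -printf '%f\n' | sort | uniq -d > duplicate_names.txt

# optionally, show every path carrying one of those repeated names
while IFS= read -r name; do
    find /path/to/folder -type f -name "$name"
    echo
done < duplicate_names.txt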
I have two directories; I want to know which files in the second directory also appear in the first, and delete the duplicates in the second directory. Filenames might be different (so that rules out diff).
My problem is that various programs (such as fdupes and freedup) are very capable of finding duplicate files, but they delete (or link) files from either the first or the second directory seemingly at random.
Here is an example with fdupes:
Code:
As you can see, the file in the third pair is removed from dir1 instead of from dir2. My aim is to have files deleted only from dir2. I know that fdupes can't do this, as I found out when I emailed the author.
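One way to get that behaviour without relying on fdupes' ordering (a sketch, assuming bash 4+ for associative arrays and using dir1/dir2 as placeholder names) is to hash dir1 once and then delete only files under dir2 whose hash already appears in dir1:
Code:
#!/bin/bash
# Sketch: delete files in dir2 whose contents match some file in dir1.
# Nothing in dir1 is touched; rm stays commented out until the list looks right.
declare -A in_dir1

while IFS= read -r -d '' f; do
    sum=$(md5sum "$f" | cut -d' ' -f1)
    in_dir1["$sum"]=1
done < <(find dir1 -type f -print0)

while IFS= read -r -d '' f; do
    sum=$(md5sum "$f" | cut -d' ' -f1)
    if [[ -n "${in_dir1[$sum]:-}" ]]; then
        echo "would delete: $f"
        # rm -- "$f"
    fi
done < <(find dir2 -type f -print0)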
I am using my Ubuntu machine as a media server and network storage. The problem I have is that iTunes on my desktop managed to make 2 copies of every song on the machine, so instead of the 30 GB I actually have, it's up to almost 100 GB. I was wondering if there was a way to write a script to go through and delete the duplicates. The duplicates have the same filename as the original except for a 1 or 2 appended. I wasn't looking forward to deleting 12,000 files by hand.
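A possible sketch, assuming the copies really do end in " 1" or " 2" just before the extension and that /path/to/music is the library root; it only deletes a copy when the original exists and the contents are byte-identical, and rm stays commented out until the listing has been checked:
Code:
#!/bin/bash
# Sketch: remove iTunes-style copies such as "Track 1.mp3" or "Track 2.m4a"
# when the original "Track.mp3" / "Track.m4a" still exists with the same contents.
find /path/to/music -regextype posix-extended -type f -regex '.* [12]\.[^/]+' -print0 |
while IFS= read -r -d '' dup; do
    orig=$(printf '%s' "$dup" | sed -E 's@ [12](\.[^./]+)$@\1@')
    if [[ -f "$orig" ]] && cmp -s "$dup" "$orig"; then
        echo "would delete: $dup  (keeping: $orig)"
        # rm -- "$dup"
    fi
done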
What I am trying to do is check for file duplication in a folder and remove a file if it is a duplicate of another file, i.e. the contents are identical, regardless of whether the names match.
Basically I am using md5sum to calculate the md5sum value of each file and redirecting the output to a file, and I am thinking of comparing the md5sum values. But I am finding it hard to decide how to complete the code after redirecting the md5sum output to a file.
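Once the sums are in a file, sorting brings identical hashes next to each other, and GNU uniq can compare just the 32-character hash field to pull out the repeats; a minimal sketch for a single folder:
Code:
#!/bin/bash
# Sketch: list groups of content-identical files in the current folder.
find . -maxdepth 1 -type f -exec md5sum {} + > /tmp/sums.txt
sort /tmp/sums.txt | uniq -w32 --all-repeated=separate
# each blank-line-separated group shares one hash: keep one file per group,
# the rest are the duplicates to remove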
I have two folders, abc and xyz, which contain thousands of files, a few of them having the same file names. How can I remove those duplicates from folder abc?
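A sketch for the name-based case (abc and xyz as given above; review the echoed list before uncommenting rm):
Code:
#!/bin/bash
# Sketch: delete files in abc whose base name also exists somewhere in xyz.
find xyz -type f -printf '%f\n' | sort -u > /tmp/xyz_names.txt

find abc -type f -print0 | while IFS= read -r -d '' f; do
    if grep -qxF "$(basename -- "$f")" /tmp/xyz_names.txt; then
        echo "would delete: $f"
        # rm -- "$f"
    fi
done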
I have a 1 TB drive that has MANY duplicate files all over it. Is there a good Linux tool that can find duplicate files on such a large (and almost full) drive?
I would like to check two folders for duplicate files (two pretty old backup instances).
My folders are quite alike, so I would like to spot the non-duplicated files. For that I want to be able to do some checks not only on the filename but also on the filesize (it might be the case that two files have the same name but not the same size).
The ideal would be for you to suggest such a program with a GUI, but if not I will try to run any script code that is available out there.
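If a script is acceptable, a quick sketch of the name-plus-size comparison (backup1 and backup2 are placeholders for the two folders; assumes GNU find and names without tabs or newlines):
Code:
#!/bin/bash
# Sketch: compare two backup folders by file name AND file size.
find backup1 -type f -printf '%f\t%s\n' | sort > /tmp/b1.txt
find backup2 -type f -printf '%f\t%s\n' | sort > /tmp/b2.txt

comm -12 /tmp/b1.txt /tmp/b2.txt    # name+size pairs present in both folders
comm -3  /tmp/b1.txt /tmp/b2.txt    # entries that exist on only one side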
Thanks y'all for the great script and explanation. This helped a lot in my own project. I thought I'd share the effort. The project is this: I've got lots of duplicate JPGs from all the family members who've named the same photo with different names. Since md5sum generates a "fingerprint" based on the file contents, not the name, I want to use the md5sum of each JPG to uniquely name each photo and also remove exact duplicates.
It has the following flaws: 0) it doesn't handle certain non-alphanumerics; 1) it keeps both photo-shopped and unaltered photos (different md5s); 2) it (currently) doesn't preserve descriptive filenames.
(For me, removal of duplicates is more important than keeping the filenames. I may change that to concatenate the md5 and the filename.) Please note that the commented "rename" command should be used to strip non-alphanumerics from the file names, and the script should be launched with the commented "find" command.
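The core of the md5-naming idea can be sketched like this (photos/ and deduped/ are placeholder paths, and this version drops the descriptive filenames, which is flaw 2 above):
Code:
#!/bin/bash
# Sketch: copy every JPG into one output folder under the name <md5>.jpg, so
# exact duplicates map to the same name and only one copy survives.
mkdir -p deduped
find photos -type f -iname '*.jpg' -print0 | while IFS= read -r -d '' f; do
    sum=$(md5sum "$f" | cut -d' ' -f1)
    target="deduped/${sum}.jpg"
    if [[ -e "$target" ]]; then
        echo "exact duplicate skipped: $f"
    else
        cp -- "$f" "$target"
    fi
done
Switching target to "deduped/${sum}_$(basename "$f")" would keep the descriptive name, at the cost of exact duplicates no longer colliding automatically.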
I am looking for an application (preferably a KDE one) that can search two external hard disks I have and find any duplicate files. I did some backups before to one disk, which I copied a few years ago to the other disk. Right now I would like some program to check the files and tell me whether they are the same.
I'm using a Mac, and I just transferred a bunch of photos from another computer, and as it turns out, there is a bunch of duplicates. I'm not too familiar with the Mac terminal, but if there is a solution for Linux, it will probably work for the Mac. I just need to be able to recursively scan all folders in my Pictures folder and then delete the duplicates.
I have a directory containing a ton of photos, some of which are duplicates but just with different names. Is there any way in Linux to find all the duplicates and remove all of them except the most recent version? I know that on Windows there are utilities that will do this through a GUI, but I'm using Linux through the CLI only.
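A CLI sketch of the "keep only the newest copy" rule (assuming bash 4+ and GNU stat/md5sum; run it from the photo directory, and leave rm commented out until the output looks right):
Code:
#!/bin/bash
# Sketch: among content-identical files, keep only the most recently modified copy.
declare -A keep_file keep_mtime

while IFS= read -r -d '' f; do
    sum=$(md5sum "$f" | cut -d' ' -f1)
    mtime=$(stat -c %Y "$f")
    if [[ -z "${keep_file[$sum]:-}" ]]; then
        keep_file["$sum"]=$f
        keep_mtime["$sum"]=$mtime
    elif [[ $mtime -gt ${keep_mtime[$sum]} ]]; then
        echo "would delete older copy: ${keep_file[$sum]}"
        # rm -- "${keep_file[$sum]}"
        keep_file["$sum"]=$f
        keep_mtime["$sum"]=$mtime
    else
        echo "would delete older copy: $f"
        # rm -- "$f"
    fi
done < <(find . -type f -print0)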
I copied a backup of my Windows 'My Documents' folder and all of its subfolders into my Linux (Mint Debian) Documents directory. I found that many of my files can be found in more than one directory, so what I want to do is find all the dups and deal with them. Is there a good Linux application to resolve this 'duplicates' problem? (I don't want to touch the Linux system files.)
I have just upgraded from lenny to squeeze. I didn't mean to, really, but the package manager was well into its stride by the time I realised what was happening. Mostly all went well, BUT /usr is now 100% full. I notice that there are duplicate files in /usr/lib, e.g. Oct 11 22:35 libgcj.so.10.0.0 and Sep 14 2008 libgcj.so.90.0.0 (I assume the latter has been replaced by the former?). Is it safe to remove the "outdated" lib files? Is there an elegant way of doing it?
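Before removing anything in /usr/lib by hand, it is worth asking dpkg whether any installed package still claims the older file (the file name below is just the example from above):
Code:
# which package, if any, owns the old library?
dpkg -S /usr/lib/libgcj.so.90.0.0

# if no package owns it, it is a leftover that can normally be removed;
# if a package does own it, removing or purging that package is the cleaner route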
Until now I haven't had to dabble with bash scripts.
I have a program that reads in data files. These are named datafile01_R, datafile01_G, datafile01_B, and they then increment, so datafile02_R etc.; I have about 600 of these. The program reads in 3 data sets at a time per run, so the _R, _G and _B files for 01.
The program then does its magic and outputs about 40 different files; depending on the file, they go to folders named R, G, B, psa, or tracking.
The program itself has configuration files that say where the files should go when analyzed; there are also the config files that say which data sets to read in.
At the moment I have to run one set of data, then go in and manually change the input file location, and run again. But when I do this, even though it is a different data set, the new output overwrites the old output in one of the output folders. So I need a way to increment the output filenames after they are written and before the program is run again with the new data set.
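One way to do the "stamp the outputs before the next run" step (a sketch; the folder names come from the post above, but the run counter and the stamping scheme are assumptions):
Code:
#!/bin/bash
# Sketch: after a run, rename everything in the output folders with the run
# number so that the next run cannot overwrite it.
run=01    # the data-set number that was just processed

for dir in R G B psa tracking; do
    for f in "$dir"/*; do
        [[ -f "$f" ]] || continue
        base=$(basename -- "$f")
        [[ "$base" == run*_* ]] && continue   # already stamped on an earlier pass
        mv -n -- "$f" "$dir/run${run}_${base}"
    done
done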
I currently have hundreds of files all in a single directory. What I would like to do is create 8 subdirectories and move the files into the subdirectories based on the first character of the file name. Ideally, the script would omit any 'the' or 'a' and use the second word for filing purposes. No filenames have spaces; instead they use periods. The subdirectories will be:
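Since the list of eight subdirectories isn't shown above, the sketch below (bash 4+ for the lower-casing) simply files each item under a folder named for the chosen character; the mapping onto the real eight folders would still need to be filled in:
Code:
#!/bin/bash
# Sketch: file each item by the first character of its name, skipping a
# leading "the." or "a." (words are separated by periods, not spaces).
for f in *; do
    [[ -f "$f" ]] || continue
    name=${f,,}               # lower-case copy used only for the decision
    name=${name#the.}         # drop a leading "the."
    name=${name#a.}           # drop a leading "a."
    first=${name:0:1}
    mkdir -p -- "$first"
    mv -n -- "$f" "$first/"
done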
I've got two external hard drives, a 2TB and a 320GB. I've recently come from Windows 7. On Windows 7 I wrote a batch file which checked whether both hard drives existed and then copied a couple of folders from the 2TB to the 320GB without overwriting. I've been trying to work out how to do the same under Linux without much luck. I've tried rsync but it looks like it overwrites. Does rsync overwrite?
The batch file basically checks for drive D:, checks for drive H:, and then copies the contents of the folder on drive H: to drive D:, saying no every time it asks to overwrite.
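By default rsync will replace a destination file whose source differs, but it can be told to leave existing files alone; for example (the mount points are placeholders for wherever the two drives appear):
Code:
# copy the folders across, but never touch a file already on the destination
rsync -av --ignore-existing /media/2TB/folder1 /media/2TB/folder2 /media/320GB/

# -a                 keep times, permissions and subdirectories
# -v                 show what is being copied
# --ignore-existing  skip any file that already exists on the receiving side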
On Windows you can make a file that runs its contents as if typed out by the user at the command prompt by saving the file as either .bat or .cmd. When using bash in Ubuntu, is it possible to save terminal-executable files (from a text editor, like gedit)? What file ending do they take, and can I lay them out with just commands? (What is the syntax?)
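The bash equivalent is just a plain text file of commands with a shebang on the first line, made executable; the file ending doesn't matter (.sh is only a convention). A minimal example (the commands themselves are placeholders):
Code:
#!/bin/bash
# save this as e.g. mytask.sh, then make it executable once:
#     chmod +x mytask.sh
# and run it with:
#     ./mytask.sh
echo "starting copy..."
cp -r ~/Documents/source ~/Documents/destination
echo "done."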