Programming :: Script To Remove Duplicate Jpg Files
May 25, 2010
Thanks y'all for the great script and explanation. This helped a lot in my own project, so I thought I'd share the effort. The project is this: I've got lots of duplicate JPGs from all the family members who've named the same photo with different names. Since md5sum generates a "fingerprint" based on the file contents, not the name, I want to use the md5sum of each JPG to uniquely name each photo and also remove exact duplicates.
It has the following flaws:
0) it doesn't handle certain non-alphanumerics
1) it keeps both photo-shopped and unaltered photos (different md5s)
2) it (currently) doesn't preserve descriptive filenames.
(For me, removal of duplicates is more important than keeping the filenames. I may change that to concatenate the md5 and the filename.) Please note that the commented "rename" command should be used to strip non-alphanumerics from the file names, and the script should be launched with the commented "find" command.
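The script itself isn't reproduced in this excerpt, so here is a minimal sketch of the approach described, assuming the photos live under the current directory: each JPG is renamed to its md5sum, so exact duplicates in the same directory collapse onto one name and get removed.
Code:
#!/bin/bash
# Rename every JPG to <md5sum>.jpg; a second file with identical content
# hits the same target name and is removed as an exact duplicate.
# Note: this collapses duplicates per directory, and a plain read loop
# breaks on filenames containing newlines.
find . -type f -iname '*.jpg' | while IFS= read -r f; do
    sum=$(md5sum "$f" | cut -d ' ' -f 1)
    target="$(dirname "$f")/$sum.jpg"
    [ "$f" = "$target" ] && continue    # already named by its checksum
    if [ -e "$target" ]; then
        rm -v -- "$f"                   # exact duplicate of a kept file
    else
        mv -- "$f" "$target"
    fi
done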
Bear in mind that LIBS is variable: I need to drop any duplicates and retain only the last occurrence of each distinct entry. And the order must be kept as-is; I must not sort them.
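A minimal sketch, treating LIBS as a file with one entry per line: tac reverses the file, awk keeps the first copy it meets (which is the last in the original), and the second tac restores the original order.
Code:
tac LIBS | awk '!seen[$0]++' | tac > LIBS.dedup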
I have two folders, abc and xyz, which contain thousands of files, a few of which share the same file names. How can I remove the duplicates from folder abc?
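Assuming "duplicate" here means "same file name", a sketch run from the parent of both folders, deleting the abc copy whenever xyz contains a file of the same name:
Code:
for f in abc/*; do
    [ -e "xyz/${f##*/}" ] && rm -v -- "$f"
done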
I have a huge (over 10 GB) file with a list of IPs, each followed by a corresponding number, like this:
Code:
12.32.34.23 10
143.32.34.543 11
232.32.45.65 12
54.23.5.232 13
143.32.34.43 14
and so on..
I'm trying to sort this file numerically and weed out any duplicate IP addresses. How do I do this in bash? I have come up with this, but obviously it doesn't work.
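A sketch, assuming one "IP number" pair per line in a file called ips.txt (a hypothetical name): sort orders the four octets numerically, and the awk filter drops any line whose IP (the first field) has already appeared. For a 10 GB file you may need to point sort at a roomier temp directory with -T.
Code:
sort -t . -k1,1n -k2,2n -k3,3n -k4,4n ips.txt | awk '!seen[$1]++' > ips.sorted.txt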
I have just upped from lenny to squeeze. I didn't mean to, really, but the package manager was well into its stride by the time I realised what was happening. Mostly all went well, BUT /usr is now 100% full. I notice that there are duplicate files in /usr/lib, eg Oct 11 22:35 libgcj.so.10.0.0 and Sep 14 2008 libgcj.so.90.0.0 (I assume the latter has been replaced by the former?). Is it safe to remove the "outdated" lib files? Is there an elegant way of doing it?
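Before deleting anything by hand, it may be worth asking dpkg whether any installed package still owns the old file; a library that no package claims is likelier to be a safe-to-remove leftover from lenny:
Code:
dpkg -S /usr/lib/libgcj.so.90.0.0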
I have Fedora 11 (32-bit) installed with Xfce as the desktop. When I click on the Fedora icon for the menu and select Preferences, there are two input methods listed even though I did not install any. Since there is no menu editor any more, does anybody know how to edit the menu so that I can get rid of these entries?
There is an unknown notification popup that appears at the top left of the screen, separate from the one on the panel. Does anyone have experience removing this top notification popup? This is on my root account, which I mainly use every day; if I create a new account, the top notification does not appear.
I did apt-get install qtcreator and it installed Qt 4.5.3 (qt4.5.2real); I previously had Qt 4.5.2. If I go to Applications -> Programming I see two shortcuts for Qt Creator, one of them newer. How do I remove the older one? On another note, if I want to update Qt to 4.6, what would be the steps, given that I already have Qt 4.5?
Contained within each of these 67 text files is about 1 million URLs. Yes: I have 67 text files that contain 1 million lines of URLs each, and I am sure I am swimming in duplicates. I tried opening one text file and clicking Sort -> Remove Duplicates; now Gedit is not responding, my processor is maxed out at 100%, and I think I am finally ready to delve into some command-line code. Can anyone give me idiot-proof instructions on how to sort the duplicates out of each one of these 67 text files? How about no duplicates across all 67?
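A sketch, assuming the files match a pattern like urls*.txt (hypothetical names): sort -u dedupes each file in place far faster than a GUI editor, and the final command merges all 67 into one globally unique list.
Code:
for f in urls*.txt; do
    sort -u -o "$f" "$f"    # -o lets sort safely overwrite its own input
done
sort -u urls*.txt > all_urls_unique.txt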
I would like to find a command which automatically finds and removes phrases that appear more than once in a text file, keeping only one occurrence of each.
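Assuming one phrase per line, this awk one-liner keeps the first copy of every repeated line while, unlike sort -u, preserving the original line order (file names are placeholders):
Code:
awk '!seen[$0]++' input.txt > output.txt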
I am basically trying to remove duplicate words in my <title></title> tags after I got hit by Google Panda. I have around 750 .html files and it would be difficult for me to fix them one by one. I am looking for a way to remove duplicates only from within <title></title>.
Example of a duplicate title I have:
Code:
<title>Pasta, Pasta Recipe and Pasta Guide</title>
I don't want to replace those words anywhere else in the file, only within the <title>.
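A rough sketch using a Perl one-liner from the shell: it edits the files in place (keeping .bak backups) and dedupes space-separated words only inside <title>...</title>. It assumes each title sits on a single line, and a word with punctuation attached ("Pasta," vs "Pasta") counts as a different word, so the example above becomes "Pasta, Pasta Recipe and Guide" rather than also dropping the comma form.
Code:
perl -i.bak -pe 's{<title>(.*?)</title>}{
    my %seen;
    "<title>" . join(" ", grep { !$seen{lc $_}++ } split /\s+/, $1) . "</title>"
}ge' *.html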
I have a big file of random numbers I generated at some point in time, after working with it on different things (how fun that was). I want to remove duplicate lines and I'm not sure I'm doing it right.
I was preparing a script which will remove all files from a directory that are more than 24 hours old. I tried something like this: find . ( -name 'log.*' -mtime +1 ) -exec rm {} ; but it is throwing an error like: missing argument to -exec.
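The error comes from the shell consuming the parentheses and the semicolon before find ever sees them; escaping them should fix it:
Code:
find . \( -name 'log.*' -mtime +1 \) -exec rm {} \;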
Trying to remove lines from a syslog text file that have duplicate strings
Mar 10 06:51:11[http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360]
then a few lines down
Mar 10 06:52:03 [http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360
It's the same thing in terms of the u: number, but the issue is that I need to remove the duplicates and leave just one of each, and the file has multiple duplicates of different u: numbers and is 14,000 lines long. Can anyone tell me whether I can use awk, sed, or sort for something like this, to remove lines containing a string that is a duplicate?
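A sketch with awk, assuming the tag always looks like [u:digits|digits] (adjust the regex to your real format, and the file names are placeholders): it keeps the first line carrying each u: key, drops later repeats, and passes through lines with no tag at all.
Code:
awk 'match($0, /\[u:[0-9]+\|[0-9]+\]/) {
         key = substr($0, RSTART, RLENGTH)
         if (seen[key]++) next
     }
     { print }' syslog.txt > syslog.dedup.txt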
I hope you can help. I have a collection of spreadsheets with data that needs to be imported in to SQL. The data has been manually entered although there are portions where data has been copied and pasted from the web.
When converting these sheets to a CSV I get strange characters where it looks as though data has been copied and pasted. Is it possible to write a script (AWK?) to pull out these characters?
I guess the script will need to keep alphabetic characters, spaces, numerics, and commas, but nothing else. How easy is this to do?
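Quite easy with tr: -c complements the listed set and -d deletes, so everything except letters, digits, spaces, commas, and newlines is stripped (file names are placeholders):
Code:
tr -cd 'A-Za-z0-9, \n' < sheet.csv > sheet.clean.csv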
The bad news is that active support for Mint 6 is set to end Apr. 30. The worse news is I don't know what to do about it. Complicating this is that I have about 5 drive partitions and duplicate Mint 6 operating systems, because when I had password problems I just partitioned the drive and reinstalled the OS instead of trying to fix the issue. I hear good things about Mint 8, but my 80 GB drive is getting pretty thin on partitions. I know there must be a way to safely remove the extra partitions and duplicate operating systems; I just don't know how to do it.
However, the ffmpeg command generates a temporary file blahblah.mpg.tmp of about 1 GB per hour of transcoded video. My issue is that I can't seem to delete these files automatically from any bash script. From the command line, I can cd to the directory and just rm -f *.tmp and they get deleted. However, from my script, that same command doesn't remove those files. I thought maybe the files were in use, so I put in a sleep command for about an hour before the delete happens, but it still fails. I also put rm -f /mnt/mythtv/*.tmp in a root cronjob and it still doesn't delete the files.
If I just rm *.tmp I do get a prompt: "Are you sure you want to delete this write-protected file?". But the -f switch seems to work fine as a normal user from the command line and just deletes them. Does anyone have an idea how to troubleshoot this problem? The particular filesystem the tmp files get generated on is its own xfs partition mounted as /mnt/mythtv.
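One common culprit is that a script or cron job runs with a different working directory or environment than an interactive shell, so a relative glob expands against the wrong place. A debugging sketch that uses the absolute path from the post and logs what the glob actually matched:
Code:
#!/bin/bash
shopt -s nullglob                      # an unmatched glob expands to nothing
files=(/mnt/mythtv/*.tmp)
echo "$(date): matched ${#files[@]} file(s): ${files[*]}" >> /tmp/tmp-cleanup.log
[ "${#files[@]}" -gt 0 ] && rm -f -- "${files[@]}"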
I want to delete all files within a specific folder without deleting the folder itself. What is a good bash command for this? I found this one, but encountered some errors even though I am executing it from within the specific folder:
Code:
useratdebian:/home/user/folder# find . -type f -exec rm -rf {} ;
[1] 5052
useratdebian:/home/user/folder# find: missing argument to `-exec'
[1]+  Exit 1  find . -type f -exec rm -rf
The command as it appears is:
find . -type f -exec rm -rf {} ;
How can I delete only the files contained within the folder called "folder", for example?
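The shell strips the bare ";", so find never receives the terminator for -exec, hence "missing argument". Either escape it, or with GNU find use -delete and skip -exec entirely:
Code:
find . -type f -exec rm -f {} \;
find . -type f -delete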
I want to remove duplicate or similar lines from multiple files. I.e., if I have four files, file1.txt, file2.txt, file3.txt, and file4.txt, I would like to find and remove similar lines from all of them, keeping only one copy of each. I only know that uniq can be used to remove duplicate lines from a sorted file.
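A sketch: awk keeps the first occurrence of every line across all four files in order, writing each file's surviving lines to a .dedup sibling, which then replaces the original (so later files lose lines already seen in earlier ones):
Code:
awk '!seen[$0]++ { print > (FILENAME ".dedup") }' file1.txt file2.txt file3.txt file4.txt
for f in file1.txt file2.txt file3.txt file4.txt; do
    [ -f "$f.dedup" ] || : > "$f.dedup"    # a file may lose every line
    mv -- "$f.dedup" "$f"
done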
We have a huge number of duplicate files in a folder and I would like some pointers on writing a bash script to create a list of the duplicate files. I've seen examples that check the md5 sum of files, but I don't need that; the file name is enough.
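A sketch using GNU find's -printf to compare basenames only (the path is a placeholder): the first command lists every file name that occurs more than once under the folder, and the loop prints the full paths carrying each repeated name.
Code:
find /path/to/folder -type f -printf '%f\n' | sort | uniq -d > dupnames.txt
while IFS= read -r name; do
    find /path/to/folder -type f -name "$name"
done < dupnames.txt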
I have two directories, I want to know which files in the second directory also appear in the first and delete the duplicate in the second directory. Filenames might be different (so that rules out diff).
My problem is that various programs (such as fdupes and freedup) are very capable of finding duplicate files but randomly delete (or link) files from the first or the second directory.
Here is an example with fdupes:
Code:
As you can see, the file in the third pair is removed from dir1 instead of from dir2. My aim is to have files deleted only from dir2. I know that fdupes can't do this, as I learned when I emailed the author.
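A sketch built on fdupes' -1 output (one duplicate set per line; note this breaks on filenames containing spaces): a dir2 copy is deleted only when the set also contains a copy outside dir2, so the dir1 file always survives.
Code:
fdupes -r -1 dir1 dir2 | while read -r -a set; do
    keep=0
    for f in "${set[@]}"; do [[ $f != dir2/* ]] && keep=1; done
    [ "$keep" -eq 1 ] || continue              # set lives entirely in dir2
    for f in "${set[@]}"; do [[ $f == dir2/* ]] && rm -v -- "$f"; done
done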
I am working through a collection of many thousand photographs in many collections, spanning decades. One of the folders is a "Selected_images" folder which should only contain copies of images from the collections.
How can I check that the files in "Selected_images" are all indeed copies and have an identical counterpart somewhere in the collections hierarchy, not necessarily with the same name?
At present I am scanning the output of fdupes -r collection, which is tedious (although, in fact, no photographs should ever be duplicated except in "Selected_images").
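A sketch with md5sum: hash everything outside Selected_images once, then flag any file in Selected_images whose checksum has no counterpart elsewhere in the hierarchy ("collection" and the folder name are taken from the post):
Code:
find collection -path collection/Selected_images -prune -o -type f -print0 |
    xargs -0 md5sum | awk '{print $1}' | sort -u > /tmp/known.md5
find collection/Selected_images -type f -print0 | xargs -0 md5sum |
while read -r sum path; do
    grep -qx "$sum" /tmp/known.md5 || echo "no counterpart: $path"
done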
What I am trying to do is check for file duplication in a folder and remove a file if it is a duplicate of another file, i.e. the contents are duplicates even though the names may not be the same.
Basically I am using md5sum to calculate the md5sum value of each file and redirecting the output to a file, and I am thinking of comparing the md5sum values. But I am finding it hard to decide how to complete the code after redirecting the output of the md5sum calculation to a file.
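One way to complete it, as a sketch run from inside the folder being cleaned: sort the checksum list so identical hashes are adjacent, then delete every file after the first in each group. substr($0, 35) relies on md5sum's fixed output layout (a 32-character hash plus two separator characters before the name), and the whole thing assumes filenames without newlines.
Code:
md5sum ./* | sort | awk 'seen[$1]++ { print substr($0, 35) }' |
while IFS= read -r f; do
    rm -v -- "$f"
done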
I have a 1 TB drive that has MANY duplicate files all over it. Is there a good Linux tool that can find duplicate files on such a large (and almost full) drive?
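fdupes (mentioned above) copes well with large trees; -r recurses and -m merely summarizes how much space the duplicates waste, which is a safe first look before deleting anything (the mount point is a placeholder):
Code:
fdupes -rm /mnt/bigdrive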