Is there a way to remove duplicate files from a specific folder through SSH? I've uploaded a lot of Flash games to my server, and I can see in Webmin's file manager that I have many duplicates. Their names are different, of course.
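Since the names differ, one approach is to compare file contents by checksum; a sketch assuming GNU coreutils and filenames without newlines (the path is a placeholder):

    cd /path/to/games
    md5sum * | sort | uniq -w32 --all-repeated=separate
    # -w32 compares only the 32-character hash, so each printed group
    # is one set of identical files; review the groups, then rm the extras.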
I was wondering if any Perl gurus could help me with a quick log-file adjustment. I have a text file that looks like this (tabs and newlines are shown so you can see what separates the data):
There are maybe 100 lines of text in this file at any given time. I need to delete all duplicate lines, looking only at the text before the first tab. It doesn't matter which copy gets deleted, as long as no two lines begin with the same text before the first tab. So in this example, either the first "1234" line or the last "1234" line would need to be deleted. I already have code in my script that opens the file; I just need the code that reads the text into an array, finds matches based on the above criteria, and makes the deletions.
If it would be easier, I could even do a system call and use sed (v4.1.5) and/or awk (3.1.5) instead.
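Since awk is acceptable, the whole job fits in one line: keep a line only the first time its first tab-delimited field is seen (this keeps the first "1234" line and drops later ones; logfile is a placeholder name):

    awk -F'\t' '!seen[$1]++' logfile > logfile.new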
Let's say I have 20 files named FOOXX, where XX is the number of the file, e.g. 01, 02, etc. At the moment, if I want to delete all files below number 10, this is easy: I just use a wildcard, e.g. rm FOO0*. However, if I want to delete specific files in a range, e.g. 13-15, this becomes more difficult. rm FOO[13-15] does not work and asks me if I wish to delete all files. Likewise, rm FOO1[3-5] wants to delete all files that begin with FOO1. So, what is the best way to delete ranges of files like this? I have tried with both bash and zsh, and I don't think they differ much for such a basic task.
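In both bash and zsh, brace expansion handles numeric ranges; [13-15] misbehaves because brackets are a character class matching a single character, not a range of numbers:

    echo rm FOO{13..15}   # preview: expands to rm FOO13 FOO14 FOO15
    rm FOO{13..15}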
I used awk '!x[$0]++' test.txt > file.new, but it deleted #1 also. I tried using the uniq command, but it didn't work. Can anyone please let me know if there is any way to do this using a shell script?
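If the goal is to drop every copy of a duplicated line, including the first occurrence (which !x[$0]++ deliberately keeps), a two-pass awk does it; a sketch that reads the file twice, counting on the first pass and printing only unique lines on the second:

    awk 'NR==FNR { count[$0]++; next } count[$0]==1' test.txt test.txt > file.new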
I am using my Ubuntu machine as a media server and network storage. The problem I have is that iTunes on my desktop managed to make two copies of every song on the machine, so instead of the 30 GB I actually have, it's up to almost 100 GB. I was wondering if there is a way to write a script to go through and delete the duplicates. The duplicates have the same filename as the original, except with a 1 or 2 following. I wasn't looking forward to deleting 12,000 files by hand.
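Since the copies differ only by a trailing 1 or 2 before the extension, find can isolate them; a sketch assuming names like "Song 1.mp3" and a placeholder path (check the printed list carefully before swapping -print for -delete):

    find /path/to/music -type f \( -name '* 1.mp3' -o -name '* 2.mp3' \) -print
    find /path/to/music -type f \( -name '* 1.mp3' -o -name '* 2.mp3' \) -delete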
I'm using a Mac and just transferred a bunch of photos from another computer, and as it turns out, there are a bunch of duplicates. I'm not too familiar with the Mac terminal, but if there is a solution for Linux, it will probably work on the Mac. I just need to be able to recursively scan all folders in my Pictures folder and then delete the duplicates.
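macOS ships md5 rather than md5sum, but md5 -r prints the same hash-then-filename layout, so a content-based scan works in Terminal; this sketch lists the second and later copies of each hash (the first copy of each is kept) so you can review before deleting anything:

    find ~/Pictures -type f -exec md5 -r {} + | sort | awk 'seen[$1]++'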
I have a directory containing a ton of photos, some of which are duplicates, just with different names. Is there any way in Linux to find all the duplicates and remove all of them except the most recent version? I know on Windows there are utilities that will do this through a GUI, but I'm using Linux through the CLI only.
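A minimal bash sketch, assuming GNU md5sum and filenames without newlines: hash everything, remember the newest file seen for each hash, and echo an rm for every older copy (drop the echo once the output looks right):

    #!/usr/bin/env bash
    declare -A newest
    while read -r hash path; do
        if [[ -z ${newest[$hash]} ]]; then
            newest[$hash]=$path                  # first file with this content
        elif [[ $path -nt ${newest[$hash]} ]]; then
            echo rm -- "${newest[$hash]}"        # older copy loses to this one
            newest[$hash]=$path
        else
            echo rm -- "$path"
        fi
    done < <(find . -type f -exec md5sum {} +)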
In Debian/Ubuntu I want to:
a) create a list of all the files in one directory tree,
b) do the same for a second directory tree,
c) compare the two lists so that only the file NAMES are compared (i.e. just the "file.txt" part, so that "/home/folder/file.txt" == "/home/secondfolder/folder/file.txt"), and
d) output a list of all the duplicates.
How do I do this using scripting languages or regex or something?
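With GNU find, -printf '%f\n' emits just the basename, and comm intersects the two sorted lists; a sketch using the paths from the post:

    find /home/folder -type f -printf '%f\n' | sort > /tmp/names1
    find /home/secondfolder -type f -printf '%f\n' | sort > /tmp/names2
    comm -12 /tmp/names1 /tmp/names2    # names that appear in both trees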
I want to take a graphics file and make 10 copies of it in the same directory, each with 001, 002, or some such designation at the end of the file name so they have distinct file names. Is this possible using cp?
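cp itself can't enumerate, but a shell loop around it can; in bash 4 (or zsh), a zero-padded brace range does the numbering (image.png is a placeholder name):

    for n in {001..010}; do cp image.png "image-$n.png"; done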
I just installed Picasa from Google, and it has corrupted my picture database. The good thing is that it has done it in an organized manner: it appended -1, -2, -3 and so on to the copies' file names. They look like (filename.jpg, filename-1.jpg, filename-2.jpg), the original having no numerical suffix, just (filename.jpg). How do I write a command line to remove all of the undesirables without deleting the originals?
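Since every copy ends in a dash plus a digit before .jpg and the originals don't, a find pattern isolates them; print the list first to make sure no legitimate file happens to match, then delete:

    find . -type f -name '*-[0-9].jpg' -print     # review this list first
    find . -type f -name '*-[0-9].jpg' -delete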
I want to delete all of root's mail from the command line, and I can't find how. The mail command plus "d" works fine, but I want to use it in a .sh script.
Let me explain: I use fetchmail to fetch mail from a Gmail box and use ripmime to save the attachments in a folder. These work fine, but then I want to delete those mails.
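Two non-interactive options, assuming a standard mbox spool (run as root; on some systems the spool lives under /var/spool/mail instead). The first simply truncates the spool; the second feeds the same d command you use interactively into mail, which should work on most mailx implementations:

    : > /var/mail/root          # empty root's spool outright
    echo 'd *' | mail -N        # or: delete every message via mail's command input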
I need a PHP script to delete lines matching a certain pattern from all files in a directory. The directory contains files with the extensions .js, .html, and .php. Could anybody give me a working code snippet that reads all files in a directory with the above extensions and deletes that line from the files?
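You asked for PHP, but if shelling out is acceptable, GNU sed does the whole job in place; 'pattern' below is a placeholder for your actual regex:

    cd /path/to/dir
    sed -i '/pattern/d' *.js *.html *.php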
Is there a method at the command line to copy files from one location to another and retain the source files' group and user? I'm migrating some MySQL files from one machine to another. I want to back up the original files currently in the directory. They have an owner:group of mysql, some have owner:group root:mysql, and so on. When I copy them at the CLI or in Nautilus, everything changes to root, because I execute sudo cp or gksudo nautilus and copy via the GUI.
Since it is MySQL data, I could simply do a dump of the database and restore it on the other machine. But there are about 20 databases, and I want to do this via a copy because it will be faster, at least that is what I think.
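Run as root, cp's archive mode (or rsync -a) preserves owner, group, permissions, and timestamps; a sketch with placeholder paths:

    sudo cp -a /var/lib/mysql /backup/mysql
    # or equivalently:
    sudo rsync -a /var/lib/mysql/ /backup/mysql/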
I couldn't find my previous posting about udev at boot, so I created a new posting with the same material. Subsequently, my earlier post showed up again, or I managed to find it. I can't seem to find any information about how to delete the earlier post. Supposedly we are able to notify the moderators of abusive posts; I thought I might use that to ask them to delete the earlier post, so I will try that.
I want to delete a directory with its files, and I want to do it as follows: rm -r dirToDelete. Unfortunately, I always get asked, for each single file, whether I want to delete it because it is write-protected. Is there a way to suppress this prompt so that the whole directory and its contents just disappear?
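The prompts come from rm asking about write-protected files; the -f flag suppresses them:

    rm -rf dirToDelete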
I tried to modify the bashrc and environment files in the /etc directory; now none of the commands work, such as sudo, gedit, nautilus, nano, and some others! Now I want to edit the two files and delete the inserted lines.
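If a broken PATH in those files is the cause, the commands still exist; they just aren't found. Invoking them by absolute path gets you back in (the paths below are the usual Ubuntu locations; adjust if yours differ):

    /bin/su -                           # become root without sudo
    /usr/bin/nano /etc/environment      # remove the lines you added
    /usr/bin/nano /etc/bash.bashrc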
I want to search for ~ and delete it, as well as append the entire line to the line above. For example:
1111xxxx date Sandy area is
~around this area.
3222xxx date There seems to
~left side of map, the colours are accurate (showing green areas)Even if I
~zoom in, the green parks,
xxx3258 date The dammed up
~away, the "other" body of water varies
~blackNatural gas leaching.
It must look like this:
1111xxxx date Sandy area is around this area.
3222xxx date There seems to left side of map, the colours are accurate (showing green areas)Even if I zoom in, the green parks,
xxx3258 date The dammed up away, the "other" body of water varies blackNatural gas leaching.
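Assuming each continuation line begins with ~ and belongs to the line above it, GNU sed can repeatedly pull the next line up whenever it starts with ~, replacing the newline-plus-tilde with a space; a sketch:

    sed -e ':a' -e 'N' -e 's/\n~/ /' -e 'ta' -e 'P' -e 'D' file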
Trying to find a way to generate a PDF file from a text file in command mode (X is not installed). Is there a simple way to do that? I don't need anything fancy: no special formatting and no images to include, just simple text converted to PDF format.
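No X is needed; enscript renders plain text to PostScript and Ghostscript's ps2pdf finishes the job (both are small command-line packages, so install them first if absent):

    enscript -B -o - input.txt | ps2pdf - output.pdf
    # -B omits page headers; -o - writes PostScript to stdout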
I'm looking for a way to copy files with a certain file extension over to another folder. For example:
Source folder: /home/user/downloads
File type: *.epub
Destination folder: /home/user/epubs/
The downloads folder has several folders that may go as deep as 2 or 3 levels. I tried this, but it didn't seem to work (and I'm not really sure how to modify it to get it to work):
find . -maxdepth 1 -type f -exec grep -q "pattern" '{}' ';' -exec cp '{}' /path/to/destination
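Dropping -maxdepth lets find descend every level, and -name does the matching directly, so no grep is needed; a sketch using the folders from the post:

    find /home/user/downloads -type f -name '*.epub' -exec cp -- {} /home/user/epubs/ \;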
I have a laptop with Ubuntu 10.04 which appears to be completely wrecked. Even in recovery mode I can't get any GUI. The only access I have is via the command line, which, with my limited knowledge of cd and ls, shows me that the home directory is OK and all the data files are present.
How can I get the data files off this machine, e.g. by copying them to a USB stick, or maybe by copying them to the Windows partition (which still works OK)?
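From the console you can mount the stick by hand and copy with cp -a; /dev/sdb1 below is a guess, so check sudo fdisk -l for the real device name first, and yourname is a placeholder:

    sudo mkdir -p /mnt/usb
    sudo mount /dev/sdb1 /mnt/usb
    cp -a /home/yourname /mnt/usb/
    sudo umount /mnt/usb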
How do I download or upload files to a Debian machine using only the command line? I'm well aware of how to do it in GNOME, but seeing as this is for a web server, I won't be using GNOME. I have a zip file on my personal machine that contains the website files that need to go on the Debian machine that is to be the web server, but I have no idea how to get it to that Debian machine without GNOME.
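If sshd runs on the Debian box, scp pushes the zip straight from your personal machine; otherwise wget on the server can pull it from any URL (host, user, and paths below are placeholders):

    scp website.zip user@debian-host:/var/www/
    # or, on the Debian machine itself:
    wget http://example.com/website.zip && unzip website.zip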
Some time ago I installed LAMP on my server, but now I need to execute .php files from the command line (in order to run some maintenance scripts for MediaWiki). It seems that the PHP files running on the server are run through some kind of "module" in apache2. Can I tell apache2 to run a .php file in command-line mode using that PHP module? Or should I install a fresh copy of PHP 5? Won't that interfere with Apache or mangle the system?
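You don't have to touch the Apache module; Debian and Ubuntu ship a separate command-line binary that coexists with it safely, which is exactly what MediaWiki's maintenance scripts expect (the MediaWiki path below is a placeholder):

    sudo apt-get install php5-cli
    php /var/www/mediawiki/maintenance/update.php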
From the command line: I have a server for work that I ssh into, and I need to be able to find multiple files (they have the same leading text; just the date identifier changes), zip them (with bzip2), and then finally scp (secure copy) them to another server.
These files are always in the same directory, and this is a daily task, so I just want to make it into a script that I run once I am logged into the remote server.
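A minimal sketch of such a daily script; every name, path, and host below is a placeholder, and tar's -j flag supplies the bzip2 compression:

    #!/bin/sh
    cd /path/to/files || exit 1
    day=$(date +%Y%m%d)
    tar -cjf "archive-$day.tar.bz2" prefix-"$day"*
    scp "archive-$day.tar.bz2" user@otherserver:/path/to/dest/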
I liked the idea of the "cosmos" screensaver/desktop, but wanted to add my own pictures to the application. I navigated to /usr/share/backgrounds/cosmos and tried to drag and drop. I quickly found that I did not have permission to do this.
I googled my problem and found some command-line tutorials telling me to sudo cp. My problem is that I have about 30 pics that I want to move in there, and I don't think I can just move the directory; the pictures themselves have to be in that folder.
I don't really feel like typing the cp line multiple times with multiple randomly named image files.
Is there a way to have the command line cp all of my files from one directory to another?
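One sudo cp with a glob copies all 30 in a single command; the source directory is a placeholder, and the glob can be widened (e.g. *.png as well) to match your image types:

    sudo cp ~/Pictures/wallpapers/*.jpg /usr/share/backgrounds/cosmos/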