I want to delete all of root's mail from the command line, and I can't find how... the "d" command inside mail works fine, but I want to use it in a .sh script.
Let me explain: I use fetchmail to fetch mail from a Gmail box, and ripmime to save the attachments to a folder. That works fine, but afterwards I want to delete those mails.
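A minimal sketch of the cleanup step, assuming the local mbox for root lives at /var/mail/root (the path varies by distribution): piping the delete command into mail avoids the interactive prompt.

    # delete every message non-interactively; -N suppresses the header listing
    echo 'd *' | mail -N
    # or simply truncate the mbox file (the path is an assumption)
    : > /var/mail/root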
Let's say I have 20 files named FOOXX, where XX is the number of the file, e.g. 01, 02, etc. At the moment, if I want to delete all files with numbers below 10, this is easy and I just use a wildcard, e.g. rm FOO0*. However, if I want to delete specific files in a range, e.g. 13-15, this becomes more difficult. rm FOO[13-15] does not work, and asks me if I wish to delete all files. Likewise, rm FOO1[3-5] wishes to delete all files that begin with FOO1. So, what is the best way to delete ranges of files like this? I have tried with both bash and zsh, and I don't think they differ much for such a basic task?
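The reason FOO[13-15] misbehaves is that [...] is a character class matching a single character, not a numeric range. Brace expansion, supported by both bash and zsh, generates the exact names:

    # expands to FOO13 FOO14 FOO15 before rm ever runs
    rm FOO{13..15}

(FOO1[3-5] should in fact also match only FOO13 through FOO15, since only the last character varies.)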
I've been writing a few scripts; right now I'm trying to create a restore script for files that were previously moved to a folder I selected. When they were moved, I stored each file's original path in a single text file.
I want to know if anyone can tell me how to delete a line from that file after using it to restore the file. I have used grep to search through the file "pathName" to find where the file was stored, but now I want to delete that same line.
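A sketch of that step, assuming pathName holds one absolute path per line and $f holds the name of the file being restored (both are placeholder names on my part):

    # look up the stored original path for this file (first match wins)
    orig=$(grep "/$f$" pathName | head -n 1)
    mv "$f" "$orig"
    # remove exactly that line; -x matches whole lines, -F takes it literally
    grep -vxF "$orig" pathName > pathName.tmp && mv pathName.tmp pathName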
I have a log file which is continuously being accessed. I want to delete the first line without disturbing the file. Is that possible? The issue is that the log file's first line is being filled with ^@^@^@ (NUL) characters, occupying huge amounts of space, so I need to get rid of it. I don't have time to chase the root cause; I just need a script to reclaim the space.
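There is no fully safe way to edit a file that another process holds open, but rewriting the contents in place (rather than renaming a new file over it) keeps the same inode, so the writer isn't cut off. A sketch, with the log name as a placeholder; note the race window if the writer appends mid-copy:

    tail -n +2 app.log > /tmp/trim.$$     # everything after line 1
    cat /tmp/trim.$$ > app.log            # write back over the same inode
    rm /tmp/trim.$$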
I was wondering if any Perl gurus could help me with a quick log file adjustment. I have a text file that looks like so (tabs and newlines are revealed so you can see what separates the data):
There are maybe 100 lines of text in this file at any given time. I need to delete all duplicate lines, looking only at the text before the first tab. It doesn't matter which one gets deleted, as long as no two lines begin with the same text before the first tab. So in this example, either the first line "1234" or the last line "1234" would need to be deleted. I already have code in my script that opens the file; I just need the part that reads the text into an array, finds matches based on the above criteria, and makes the deletions.
If it would be easier, I could even do a system call and use sed (v4.1.5) and/or awk (v3.1.5) instead.
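Since awk is acceptable, this is a one-pass job; the sketch below assumes the fields really are tab-separated and keeps the first occurrence of each key:

    # seen[$1]++ is 0 (false) the first time a key appears, so that line
    # prints; later lines with the same first field are suppressed
    awk -F'\t' '!seen[$1]++' logfile > logfile.deduped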
I want to search for ~, delete it, and append the entire line to the line above it. For example:
1111xxxx date Sandy area is
~around this area.
3222xxx date There seems to
~left side of map, the colours are accurate (showing green areas)Even if I
~zoom in, the green parks,
xxx3258 date The dammed up
~away, the "other" body of water varies
~blackNatural gas leaching.
IT MUST LOOK LIKE:
1111xxxx date Sandy area is around this area.
3222xxx date There seems to left side of map, the colours are accurate (showing green areas)Even if I zoom in, the green parks,
xxx3258 date The dammed up away, the "other" body of water varies blackNatural gas leaching.
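A sed sketch, assuming every ~ sits at the start of a continuation line, so each such line should be spliced onto the one above with the ~ dropped and the line break turned into a space:

    # N pulls in the next line; if it starts with ~, join it and loop back
    sed -e ':a' -e '$!N' -e 's/\n~/ /' -e 'ta' -e 'P;D' input.txt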
I need to be able to convert HTML email messages saved as text files (.eml or .msg) to PDF documents, one PDF per email, retaining formatting and images.
Are there any Linux tools that will allow me to do this from the command line (so it can be scripted)?
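One scriptable route, as a rough sketch: munpack (from the mpack package) splits each MIME message into its parts, and wkhtmltopdf renders the HTML part to PDF. The extracted part names vary by message, so "part1" below is an assumption, and inline cid: images may need extra handling:

    for f in *.eml; do
        munpack -t "$f"                     # -t also writes the text/html parts
        wkhtmltopdf part1 "${f%.eml}.pdf"   # part name is an assumption
        rm -f part*
    done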
I just installed Picasa from Google and it has corrupted my picture database. The good thing is that it has done so in an organized manner: it appended -1, -2, -3 and so on to the copies' file names. They look like (filename.jpg, filename-1.jpg, filename-2.jpg), the original having no numerical suffix, just (filename.jpg). How do I write a command line to remove all of the undesirables without deleting the originals?
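If every duplicate really ends in a single -digit before .jpg and no original does (i.e. at most nine copies per file, which is an assumption), a glob is enough; preview before deleting:

    ls *-[0-9].jpg      # review the list first
    rm *-[0-9].jpg
    # recursive variant for a whole directory tree
    find . -name '*-[0-9].jpg' -delete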
Is there a way to remove duplicate files from a specific folder through SSH? I've uploaded a lot of flash games on my server and I can see in the Webmin's file manager that I have many duplicates. Their names are different, of course.
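Since the names differ, the files have to be compared by content. fdupes does exactly that, if it can be installed on the server (the path is a placeholder):

    # -r recurse, -d delete, -N keep the first file of each duplicate set
    fdupes -rdN /path/to/games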
Let's say I have a link to a file: http://www.domain.com/dir/myfile.ext
Is there a command line tool that will let me download this file? I'm looking for something like: download <http address> ... Is there anything that simple?
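Both standard tools are that simple; wget is preinstalled on most distributions:

    wget http://www.domain.com/dir/myfile.ext
    # or with curl, where -O keeps the remote filename
    curl -O http://www.domain.com/dir/myfile.ext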
I want to download a file from the Linux command line. Basically I'm using ssh and I'm trying to download a file to my file system on my laptop. How can I do that from the command line?
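If sshd is already reachable on the remote machine, scp uses the same credentials as ssh; run it from the laptop, not inside the ssh session (host and paths are placeholders):

    scp user@remotehost:/path/to/file ~/Downloads/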
How can I create a multipart rar file in Linux using the official console rar client?

    RAR 3.90   Copyright (c) 1993-2009 Alexander Roshal   16 Aug 2009
    Shareware version         Type RAR -? for help

I want a multipart rar with each part's size being 150 MB.
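The -v<size> switch sets the volume size (archive name and path are placeholders):

    # creates archive.part1.rar, archive.part2.rar, ... at 150 MB each
    rar a -v150m archive.rar /path/to/files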
I have a jar, and I need to replace a class in it. At the moment I can only open it with Archive Manager and then drag and drop the newly compiled class into the jar, but this is really tedious. Can I do it with just a command?
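The jar tool's u (update) flag replaces entries in place, assuming the class file sits in a directory tree matching its package (the names below are placeholders):

    # replaces (or adds) com/example/Foo.class inside app.jar
    jar uf app.jar com/example/Foo.class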
I want to list all the files that don't have a copy with the same filename with -1 somewhere in it. So, in the example above, the results would be 3.png.
NB: the file and its copy with "-1" in it will be the same filesize, if that helps.
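A sketch, assuming the copy's name is the original's with -1 inserted before the extension (so 3.png is listed only if 3-1.png does not exist):

    for f in *.png; do
        [ -e "${f%.png}-1.png" ] || echo "$f"
    done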
I've got a Debian Squeeze computer on which the graphics have packed up, but the terminal in single user mode works perfectly fine.
There are a few files on this Debian computer that I want to transfer off, to a networked computer, but I have no idea how to do this.
The destination computer is a freshly set up Mandriva install, without (as yet) Samba, though I don't think that's necessary. The Mandriva install works fine, has graphics, etc., but can't see the Debian Squeeze computer on the network, possibly because the latter is in single user mode. Hence the problem: how do I transfer the files using only a command line?
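In single user mode the network is usually down, so the outline is: bring an interface up on the Debian box, make sure sshd is running on the Mandriva side, then push the files with scp. Addresses and paths below are placeholders:

    # on the Debian box, in single user mode
    ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
    scp /path/to/files/* user@192.168.1.10:/home/user/incoming/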
In Linux, I'd like to know how to find the file(s), if any, that are using a particular sector on the hard drive (ext2/3). There is a similar question here regarding Windows; however, I need a Linux command line solution (this is a headless system).
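debugfs (from e2fsprogs) can walk that chain. First convert the 512-byte sector number to a filesystem block number (with a 4096-byte block size, block = sector / 8), then (device and numbers are placeholders):

    debugfs -R 'icheck 123456' /dev/sda1    # block number -> inode number
    debugfs -R 'ncheck 7890' /dev/sda1      # inode number -> pathname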
I need to download a file from a website which has a URL formatted like:
[URL]
This redirects to a .zip file which has to be saved. There is also a need to authenticate based on username and password.
I tried to use wget, curl and lynx with no luck.
UPDATE:
wget doesn't work with the redirection; it simply downloads the webpage instead of the zip file. curl gives the error "Maximum redirection exceeded > 50". lynx also gives the same error.
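A redirect loop like that usually means the site wants a login cookie before it releases the file, so the fix is to carry cookies across requests rather than raise the redirect limit. A curl sketch; the URL and credentials are placeholders, and whether the site uses basic auth or a form login is an assumption:

    curl -L -u user:password -c cookies.txt -b cookies.txt \
         -o file.zip 'http://example.com/download?id=123'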