Ubuntu :: Find Multiple Files In The Command Line?
Mar 2, 2010
I have a server for work that I ssh into, and I need to find multiple files (they share the same leading text; only the date identifier changes), compress them with bzip, and then finally scp (secure copy) them to another server.
These files are always in the same directory, and this is a daily task, so I just want to make it into a script that I run once I am logged into the remote server.
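A minimal sketch of such a script, assuming the files are named something like report-YYYY-MM-DD.log in /var/data and the destination is user@otherserver (all of those names are placeholders):
Code:
#!/bin/bash
# collect today's files (fixed prefix, date part changes), bzip2 them into one archive, copy it off
day=$(date +%F)
cd /var/data || exit 1
tar -cjf "report-$day.tar.bz2" report-"$day"*
scp "report-$day.tar.bz2" user@otherserver:/path/to/backups/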
I'm pretty sure this is doable from the command line, but my CLI skills have degraded a lot since my pre-Y2K admin days. The goal is to search all the files in the directory for a very long string of text and replace it with another string of text. The text being searched for is my Google Adsense code (which will be stripped from my website) and it will be replaced with a placeholder so I can easily tack something else in there in the future.
Seeing how I have that long snip of code on about 100 pages, automating the process would make life easier. If I was searching for a single word, I can see ways to do this. If I paste the code I'm searching for into a text file, is there a way to: find (contents of oldstring.txt) and replace with (contents of newstring.txt)?
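One way that avoids escaping the Adsense snippet by hand is to hand the two files to perl through environment variables, something like this (assuming the pages are the *.html files in the current directory):
Code:
# \Q...\E treats the old string as literal text, -0777 slurps whole files, -i edits in place
export OLD="$(cat oldstring.txt)" NEW="$(cat newstring.txt)"
perl -0777 -pi -e 's/\Q$ENV{OLD}\E/$ENV{NEW}/g' *.html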
I liked the idea of the "cosmos" screensaver/desktop, but wanted to add my own pictures to the application. I navigated to /usr/share/backgrounds/cosmos and tried to drag and drop. I quickly found that I did not have permission to do this.
I googled my problem and found some command line tutorials telling me to sudo cp. My problem is that I have about 30 pics that I want to move in there, and I don't think I can just move a directory; the pictures themselves have to be directly in that folder.
I don't really feel like typing the cp line multiple times with multiple randomly named image files.
Is there a way to have the command line cp all of my files from one directory to another?
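A single sudo cp with a wildcard should do it, assuming the pictures sit together in one source folder (the paths here are just examples):
Code:
sudo cp ~/Pictures/wallpapers/*.jpg /usr/share/backgrounds/cosmos/
# or copy every file in the source folder, whatever the extension
sudo cp ~/Pictures/wallpapers/* /usr/share/backgrounds/cosmos/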
Is there a way to specify to find that I only want text files (and not binary files)? Grep has an option to exclude binary files, so I thought find probably has a similar feature, but I've been unable to find it.
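find itself has no text/binary test, but you can let grep make the decision for each file found; a sketch:
Code:
# grep -I treats binary files as non-matching, -l prints the file name,
# and the empty pattern '' matches every line of a text file
find . -type f -exec grep -Il '' {} +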
I am using Ubuntu and MySQL. I have a list of many .sql files, like 1.sql, 2.sql, 3.sql ... 100000.sql. I need to insert them into the database.
Code: mysql mydb < *.sql
Gives me: -bash: *.sql: ambiguous redirect
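Redirection only takes a single file, so loop over them (or cat them together), for example:
Code:
# feed every .sql file to mysql, one after the other
for f in *.sql; do mysql mydb < "$f"; done
# or, if a single stream is fine:
cat *.sql | mysql mydb
Note that the shell expands *.sql in lexical order (1.sql, 10.sql, 100.sql, ...), so if numeric order matters you may need to sort the list first.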
I want to (from the command line) be able to count lines in a bunch of files of a specific type in a folder and all its sub-folders. How would I do this?
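Something like this, assuming the file type can be matched by name (here *.txt as a placeholder):
Code:
# per-file line counts plus a grand total
find . -name '*.txt' -exec wc -l {} +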
How do I find files in openSUSE 11.2 without using the command line? I see "nepomuksearch" in Dolphin, but it doesn't work. Even on the command line you cannot whereis a file like Monday, Monday.mp3, and whereis also seems to be case sensitive.
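For what it's worth, from a terminal both find and locate can do case-insensitive searches:
Code:
find ~ -iname '*monday*'     # -iname ignores case
locate -i monday.mp3         # needs an up-to-date locate database (run updatedb first)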
I've seen a few tutorials that have commands and parameters on multiple lines, like the one below:
Code:
chkconfig --levels 235 mysqld on
/etc/init.d/mysqld start
I can copy and paste this in Putty, but what if I want to manually type it? If I press return, the first line gets processed, so how do I insert a new line?
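You can either separate the commands with a semicolon on one line, or end a line with a backslash so the shell waits for the rest before running anything; for example:
Code:
chkconfig --levels 235 mysqld on; /etc/init.d/mysqld start
# or
chkconfig --levels 235 mysqld on && \
/etc/init.d/mysqld start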
I have an older computer with Arch installed that I want to use to accomplish most of my daily tasks using the command-line (Mailgrab, IRSSI, mpg123, Elinks, Vi, etc). I realize that there are many lightweight WMs out there that support multiple monitors, but it'd be nice if I could just use Screen or something to that effect to distribute my windows across two or three displays.
Is there any way to quickly remove multiple related packages from the command line instead of having to enter the name of every single one? I am trying to remove OpenOffice from my server running 10.04. It would work nicely if I could get a list of packages without line breaks, such as the list displayed by aptitude when upgrading. That way I could just paste the package list into the terminal. However, "aptitude search 'openoffice'" dumps a long list on many lines that cannot be used that way.
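A wildcard or a dpkg-generated list both work here; a sketch (the package pattern may need adjusting for your release):
Code:
sudo apt-get remove --purge 'openoffice.org*'
# or build the one-line list yourself from the installed packages
sudo apt-get remove --purge $(dpkg -l 'openoffice*' | awk '/^ii/ {print $2}')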
Is there a method at the command line to copy files from one location to another and retain the source files' user and group? I'm migrating some MySQL files from one machine to another, and I want to back up the original files in the directory first. They have owner:group of mysql, some have owner:group root:mysql, and so on. Whether I copy them at the CLI with sudo cp or in Nautilus launched via gksudo nautilus, everything changes to root.
Since it is MySQL data I could simply do a dump of the database and restore it on the other machine. But there are about 20 databases, and I want to do this via a copy because it will be faster - at least that is what I think.
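cp can preserve ownership when run as root; a sketch (the paths are examples):
Code:
# -a = recursive + preserve owner, group, permissions and timestamps
sudo cp -a /var/lib/mysql /backup/mysql-orig
# rsync -a does the same and also works across machines
sudo rsync -a /var/lib/mysql/ root@otherbox:/var/lib/mysql/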
So, I usually write/find a test case generator for any code that I write. This type of code generally leads to some file output. To be thorough, I try and generate many different files to test my code on.
Say the command is like this:
Is there a way to automate this for many different values of the parameters and generate many different files?
I tried:
I wasn't able to use the $i in the filename, and without it the command gave me no errors, but did nothing else either. I know the Unix command line is very powerful, and I have a feeling that this should be possible, but I just don't know how to do it.
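A plain for loop should do it; the trick is quoting so $i expands inside the file name. The generator name and option below are made up, so substitute your own command:
Code:
for i in $(seq 1 50); do
    ./generator --size "$i" > "test_$i.out"   # $i is used both as a parameter and in the file name
done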
I want to take a graphics file and make 10 copies of it to the same directory, each with 001, 002, or some such designation at the end of each file name so they have discrete files names. Is this possible using cp?
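cp itself won't number the copies, but a small loop around it will (image.png is a placeholder name):
Code:
for i in $(seq 1 10); do
    cp image.png "image_$(printf '%03d' "$i").png"   # image_001.png ... image_010.png
done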
Hello, I need some help searching through multiple files, finding a line and replacing that line. The line I am searching for is:
password key ******* 1222554
Ultimately I want to be able to delete the numbers after the asterisks. My thought is to create a script that will search for that line, delete it, and then replace it with just "password key *******". My files are located in /opt and they are all .txt files.
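sed can keep the "password key *******" part and drop whatever follows it; a sketch for every .txt file under /opt (back the files up first):
Code:
# \(...\) captures the literal prefix, \1 writes it back without the trailing numbers
sudo find /opt -name '*.txt' -exec sed -i 's/^\(password key \*\{7\}\).*/\1/' {} +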
I searched the forum and didn't find any threads that seemed to answer this question. I have a large directory of files, and dozens of subdirectories on a remote box I have ssh access to. I need a subset of these files copied to another folder.
Example:
directories:
parent
 -sub1
 -sub2
 -sub3
files I want (the files are all the same format, but some have extensions and others don't):
1100
1215
1322
1442
1500
1512
Unfortunately, I need a lot of files, and plan to do this on a regular basis (the files I need will be different each time). I was thinking it would be nice to be able to put the filenames in a text file (one filename per line) and use the find command to copy the files (I don't necessarily know which subdirectory a file will be in).
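A sketch along those lines, assuming the names are in filelist.txt (one per line) and the copies should land in /path/to/dest; the trailing * catches the versions that have an extension:
Code:
while read -r name; do
    find /path/to/parent -type f -name "${name}*" -exec cp {} /path/to/dest/ \;
done < filelist.txt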
I am trying to do a find/grep/wc command to find matching files, print the filename and then the word count of a specific pattern per file. Here is my best (non-working) attempt so far:
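Whatever the attempt looked like, grep can do the per-file counting itself once find has picked the files; a sketch with a placeholder pattern and file glob:
Code:
# prints filename:count, where count is the number of lines containing PATTERN
find . -type f -name '*.txt' -exec grep -c 'PATTERN' {} +
If you need the number of occurrences rather than the number of matching lines, grep -o 'PATTERN' file | wc -l gives that per file.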
I know how to search for normal files, but can you let me know "how to search for 5 setuid files on the system. Also explain, for each file, why the setuid mechanism is necessary for the command to function properly"?
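The setuid bit can be matched directly with -perm; for example:
Code:
# -4000 is the setuid bit; 2>/dev/null hides "permission denied" noise
find / -perm -4000 -type f 2>/dev/null | head -n 5
Classic examples are passwd, su, sudo, ping and mount: they need to run with root privileges even when started by an ordinary user, e.g. passwd must update /etc/shadow, su and sudo must switch identities, ping must open a raw socket, and mount must attach filesystems.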
I am able to start up Firefox just fine from my terminal, but I have not been able to find any list of arguments that can be added on the command line. What I'm looking for is for it to start up in full screen mode right away; is there an argument that can be added to it to do that?
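I don't know of a stock Firefox switch for this, but one workaround is to let a window-manager tool push the window to full screen after launch (needs the wmctrl package):
Code:
firefox &
sleep 5                                  # crude wait for the window to appear
wmctrl -r Firefox -b add,fullscreen      # -r matches the window by title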
The find command does not seem to find all files in my directory hierarchy. My home directory is automounted from a server. The command to illustrate this is:
find | sed -e 's/^\.\///' | sed -e 's/\/.*//' | sort -u
The result misses several directories. Likewise, a find of a particular file, like:
find . -iname '*sample*' -print
where sample_file.txt resides in one of the directories that is missing from the first find command, finds nothing.
I need to find each line in a file which does NOT begin with a double quote (") and append that line to the previous line. I have been successful doing this using the following command:
cat filname.csv | sed -e :a -e '$!N;s/\n[^"]//;ta' -e 'P;D' > newfilename.csv
My issue is the substitution. As you would expect, after a line is appended to the previous line its first character is removed. I need it to not be removed. I tried:
cat filname.csv | sed -e :a -e '$!N;s/\n[^"]/&/;ta' -e 'P;D' > newfilename.csv
but it just hangs.
Goal: with the input
"line 1"
line 2
the output from the existing sed command is "line 1ine 2"; I need it to be "line 1line 2".
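Capturing the first character and writing it back with \1 should fix the substitution; a sketch of the corrected pipeline:
Code:
# \(...\) saves the character that follows the newline, \1 restores it when the two lines are joined
sed -e :a -e '$!N;s/\n\([^"]\)/\1/;ta' -e 'P;D' filname.csv > newfilename.csv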
So, in finishing my nFlux Slack current edition, I have set it up for users to do certain things in the console, and one of the things I want is a way to view slackbook-2.0 in the runlevel 3 console. I can't find a PDF reader that works in command line mode, and I can't figure out how to either convert the slackbook 2.0 PDF into html/text or find a slackbook download that is html or text. I tried converting it using pdftotext, which didn't work very well. So, I need a command line PDF viewer, or a converter that works well.
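Two things that may be worth trying: pdftotext with -layout, which usually preserves the page layout much better than the default output, and (if it's in your package set) fbgs, which renders PDFs on the framebuffer console:
Code:
pdftotext -layout slackbook-2.0.pdf slackbook-2.0.txt && less slackbook-2.0.txt
# or view the PDF directly on the console framebuffer
fbgs slackbook-2.0.pdf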
I'm trying to set up my server box. It's being set up as a web server, a file server, and for remote access (I do PC repair for Windows users and it'd be nice to know where ALL of my software tools are and get to them from there). These things are almost all set up right now, but the one thing I'm having issues with is that this box has 2 hard drives in it and I want to use both of them. I'm running straight command line and I can't find the info I need to reformat the second HDD (which is currently NTFS formatted) and use it in this system. I'm running 9.04 as a server, NO GUI INSTALLED, so I need this with straight command line. What do I need to look for to figure this out? I'm having trouble figuring this out and it's really getting annoying.
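A sketch of the usual partition/format/mount sequence, assuming the second drive shows up as /dev/sdb (double-check with fdisk -l before writing anything to it, since the data on it will be lost):
Code:
sudo fdisk -l                      # identify the second disk
sudo fdisk /dev/sdb                # delete the NTFS partition, create a new Linux one (/dev/sdb1)
sudo mkfs.ext3 /dev/sdb1           # format it
sudo mkdir /srv/storage
sudo mount /dev/sdb1 /srv/storage  # add a matching line to /etc/fstab to mount it at boot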
I would like to find all the files that contain the strings I'm searching for.
For example (it's just an example), I would like to search all the files in "/etc" that contain "eth0" and "us", no matter where the 2 strings are located in a file; the important thing is that both strings are in the files listed.
It would be something like a "grep -lr 'eth0' *" and a "grep -lr 'us' *", but in a single command, so that I don't have to compare the 2 lists of files produced by the 2 "grep" commands given above.
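find can chain two greps so the second only runs where the first matched, which gives the AND in one command:
Code:
# grep -q just sets the exit status; only files containing 'eth0' reach the second grep,
# which prints the name if 'us' is also present
find /etc -type f -exec grep -q 'eth0' {} \; -exec grep -l 'us' {} \;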