General :: Find A File In Directories Without Using Find Command?
Aug 3, 2010
I am new to Linux and trying to find a file in subdirectories using the find command as: find . -name *.jpg -type f. But I am unable to get the result, as the find command is not permitted by the server administrator. Is there any way to find files without using the find command?
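A minimal sketch of an alternative, assuming a reasonably recent bash is available: the shell's own recursive glob can stand in for find (ls -R piped through grep also works, but it drops the directory part of each name):
Code:
shopt -s globstar          # enable ** recursive matching (bash 4+)
printf '%s\n' **/*.jpg     # list every .jpg under the current directory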
I'm having problems figuring out the process to find directories that DO NOT contain a certain file. I have an mp3 collection where all the album art is named "folder.jpg". Not all the albums have images. I need a way to find the albums/directories that do not contain "folder.jpg". I can find the ones that do contain "folder.jpg" with
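The command itself was cut off above, but for the inverse search a common idiom is to make find run test against each directory and print only those where the file is absent; a sketch, assuming GNU find (which substitutes {} inside a larger argument):
Code:
find . -type d ! -exec test -e '{}/folder.jpg' ';' -print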
I have 4 Linux machines in a cluster. My goal is to find every IP address (xxx.xxx.xxx.xxx) in every file on the Linux system. Remark: I need to scan each file on the system and verify whether the file includes an IP address; if yes, I need to print the IP as the following
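A hedged sketch using GNU grep's recursive mode; the regex is a loose IPv4 match (it will also accept octets above 255), and dropping -h keeps the filename next to each hit:
Code:
grep -rEoh '([0-9]{1,3}\.){3}[0-9]{1,3}' / 2>/dev/null | sort -u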
I am trying to do a find/grep/wc command to find matching files, print the filename and then the word count of a specific pattern per file. Here is my best (non-working) attempt so far:
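Since the attempt isn't shown, here is one hedged way to print each filename with a per-file occurrence count ('pattern' is a placeholder; grep -c would count matching lines rather than occurrences):
Code:
find . -type f -exec sh -c \
  'printf "%s: %s\n" "$1" "$(grep -o "pattern" "$1" | wc -l)"' sh {} \;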
Is there a way to specify to find that I only want text files (and not binary files)? Grep has an option to exclude binary files, so I thought find probably has a similar feature, but I've been unable to find it.
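find itself has no text/binary test, but it can delegate the decision to file(1); a sketch that prints only files whose file(1) description mentions "text":
Code:
find . -type f -exec sh -c 'file -b "$1" | grep -q text' sh {} \; -print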
I know how to search for normal files, but can you let me know how to "search for 5 setuid files on the system, and explain, for each file, why the setuid mechanism is necessary for the command to function properly"?
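A sketch for the search half of that exercise; -perm -4000 matches files with the setuid bit set:
Code:
find / -type f -perm -4000 2>/dev/null | head -5
Typical hits such as /usr/bin/passwd need setuid because they must modify root-owned files (e.g. /etc/shadow) on behalf of an ordinary user.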
I want to search for a file while excluding NFS mounts. find / -mount -name 'filename' restricts the search to the root disc partition only, but the file can be in other partitions as well. Is there any way to exclude just the NFS mounts?
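GNU find can prune by filesystem type, which skips the NFS mounts while still descending into every local partition; a sketch:
Code:
find / -fstype nfs -prune -o -name 'filename' -print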
I have an SQL dump, file.sql that has many references to a particular domain, d1.com. I would like to run a command that can replace every occurrence of d1.com with d2.com. I've tried looking into sed before but the man pages are quite daunting.
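A single substitution covers this case; a sketch using GNU sed's in-place flag, with the dot escaped so it doesn't match any character:
Code:
sed -i 's/d1\.com/d2\.com/g' file.sql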
I'm relatively experienced with UNIX and Linux, but this has me thrown for quite a loop, and it seemed like such a simple question. How would I go about finding the newest file in a file system? I thought something like:
Code:
ls -ltr `find /usr -type f`
would work, but I seem to be exceeding the argument maximum for ls:
ksh: 0403-029 There is not enough memory available now
I thought something involving xargs might work, but I really suck with that command.
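One way to sidestep the argument limit entirely, assuming GNU find is available: print each file's modification time and path, sort numerically, and take the last line; no ls call is needed. (Piping find into xargs ls -lt can silently mis-sort, because xargs may split the list across several ls invocations.)
Code:
find /usr -type f -printf '%T@ %p\n' | sort -n | tail -1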
Code:
find /var/spool/mqueue -group abc -exec rm -rf {} \;
Using the above command, I delete all the files belonging to group abc. The problem I face is that this command gives errors that some files are missing. The errors occur because, after find creates its list of files, it passes that list to rm -rf, but by that time sendmail has processed the queue and some of the files have disappeared from /var/spool/mqueue.
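GNU find has a flag for exactly this race; a sketch that also silences rm's complaints about files vanishing mid-run:
Code:
find /var/spool/mqueue -group abc -ignore_readdir_race -exec rm -f {} + 2>/dev/null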
When logged in as a normal user and searching for a file passwd under /etc, I get a few "permission denied" errors. How do I ignore these permission denied errors?
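The errors go to stderr, so redirecting stream 2 hides them; a sketch:
Code:
find /etc -name passwd 2>/dev/null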
I found a script on Webmaster World that mostly does what I need it to, but I have been making modifications to tailor it to my specific needs. I know that /\/\..*\// tells awk to ignore hidden directories; how do I define more directories to ignore (i.e. temp, var, etc.)? I've tried playing with -prune before the awk command with limited success. I know that there are many ways to do the same thing, and I keep running into brick walls.
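Without seeing the script I can only sketch the usual -prune shape; temp and var below stand in for whatever directory names need skipping:
Code:
find . \( -name temp -o -name var \) -prune -o -type f -print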
I have a question which has been answered in part many times, but nothing I found relates completely to my situation. I am sure there will be people who will say RTFM, but believe me I did, and searched as well, but to no avail. I have a situation where I want to copy files created within the last hour from one directory into another. The problem is that the directories are on different levels in the directory tree, so the absolute path is different, but I want to keep the relative path the same.
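A sketch using GNU cp's --parents flag, which recreates the relative directory structure at the destination; run it from the source root so the paths find prints are relative (both paths below are placeholders):
Code:
cd /path/to/source &&
find . -type f -mmin -60 -exec cp --parents {} /path/to/destination \;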
I want to copy new files from /mnt/path_to_webdav/user to /home/user. So if there is a new file /mnt/path_to_webdav/user/doc/xy.txt, I want it copied to /home/user/doc/xy.txt. Also, if there is a new dir, say /mnt/path_to_webdav/user/newdir, I want a new dir created at /home/user/newdir with all the files in it, should there be any. I can do find with exec and copy all the files into one directory, but this is not what I want. How do I preserve the relative path and get the files copied into their corresponding directories?
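rsync does exactly this, preserving relative paths and creating new directories as it goes; a sketch (the trailing slash on the source matters):
Code:
rsync -av /mnt/path_to_webdav/user/ /home/user/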
I want to scan a particular directory recursively and run a particular command with each file as input. For this I am using "find /dir/path". I don't want to write a long script that loops over the output of "find"; I want a single command that runs a command on each file of the "find" output.
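find's own -exec is that single command; a sketch where yourcommand is a placeholder (use {} + instead of \; to batch many files per invocation):
Code:
find /dir/path -type f -exec yourcommand {} \;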
I need to strip the executable flag from all files within a certain directory and its subdirectories. Right now I'm doing it with a two-step process
Code:
find /dir/ -type f -exec chmod ugo-x {} \;
find /dir/ -type d -exec chmod ugo+rx {} \;
Is it possible to modify the first line so that I can strip the exec flag from all non-directory files? Since this needs to be done on a fairly regular basis across a lot of directories and files, I'd prefer not to use a bash script, which would slow it down.
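Two hedged variants: fix the first find to match everything that is not a directory, or collapse both steps into one chmod using the X bit (with GNU chmod, ugo-x first clears execute from every file, then ugo+X re-adds it only to directories):
Code:
find /dir/ ! -type d -exec chmod ugo-x {} +
# or, as a single recursive pass:
chmod -R ugo-x,ugo+X /dir/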
I want to move all files and directories that are 1 month old out to a separate backup folder. There will be a lot of files, and I want to make sure they copy properly. The problem I'm having is integrating an MD5SUM into it to check integrity. MD5SUM is not recursive, so I figured it would work in a loop: as it copies each individual file, I'll do an md5sum on each file and delete that md5 once it's verified it copied OK.
[Code]...
I also need some sort of error handling to output all md5s that didn't pass the hash check.
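A hedged sketch of the loop, assuming GNU find and md5sum; src, dst, and the log path are placeholders. Each file is copied, its checksum compared, and the original removed only on a match; mismatches are logged:
Code:
#!/bin/sh
src=/data/old            # placeholder source
dst=/backup/old          # placeholder destination
log=/tmp/md5_failures.log

find "$src" -type f -mtime +30 | while IFS= read -r f; do
    rel=${f#"$src"/}                       # path relative to $src
    mkdir -p "$dst/$(dirname "$rel")"      # recreate the directory tree
    cp -p "$f" "$dst/$rel"
    if [ "$(md5sum < "$f")" = "$(md5sum < "$dst/$rel")" ]; then
        rm -- "$f"                         # verified: safe to delete original
    else
        echo "md5 mismatch: $f" >> "$log"  # keep the original, record failure
    fi
done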
The find command does not seem to find all files in my directory hierarchy. My home directory is automounted from a server. The command to illustrate this is:
Code:
find | sed -e 's/^\.\///' | sed -e 's/\/.*//' | sort -u
The result misses several directories. Likewise, a find of a particular file, like:
Code:
find . -iname *sample* -print
where sample_file.txt resides in one of the directories that is missing from the first find command, finds nothing.
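Two things worth checking, as a hedged aside: autofs directories appear empty until something actually accesses them, so find will not descend into not-yet-mounted paths; and an unquoted *sample* is expanded by the shell before find ever sees it, so the pattern should be quoted:
Code:
find . -iname '*sample*' -print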
I'm having problems with Tomboy. I have a few hundred note files and I need to go through all of them and replace all instances of "<link:broken>a</link:broken>" with "a". Is there a bash command I can use to do this?
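A hedged sketch with sed; the note location (~/.local/share/tomboy) is an assumption that varies by Tomboy version, and the capture group keeps whatever text sits between the tags:
Code:
sed -i.bak 's|<link:broken>\([^<]*\)</link:broken>|\1|g' ~/.local/share/tomboy/*.note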
I need to know a command that can find the height and width of a video. I know in VLC you can go into the codec details, but I need something on the command line because I have hundreds of files that I need to organize. I could not find any VLC command that does what I want to do.
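ffprobe (part of FFmpeg) can report just the dimensions, which makes it easy to loop over hundreds of files; a sketch (video.mp4 is a placeholder):
Code:
ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height -of csv=s=x:p=0 video.mp4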
I have a file with tens of thousands of lines. I need to remove a specific string of letters, e.g. eggs, from every line that has those letters in it. Is there a command which can help me do that easily?
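sed handles this in place; a sketch that deletes every occurrence of eggs on every line (file.txt is a placeholder, and -i.bak keeps a backup):
Code:
sed -i.bak 's/eggs//g' file.txt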
I wanted to find and replace a string in a perl file. I have written a script in bash which runs the following command.
Code:
perl -pi -e "s/$findstring/$replacestring/" testfile
where $findstring is: print F_WC_TMP"$line "; and $replacestring is: $line = join ' ', split ' ', $line; print F_WC_TMP"$line ";
But when I run the above command, I think it is replacing $findstring with the string mentioned above; since that string contains $line, it looks for the variable $line and does not find the exact string. I am confused about how to search for a string that contains a $ in it and replace it with another string containing $.
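A hedged sketch of one way around it: pass both strings through the environment so the shell never interpolates them, and wrap the pattern in \Q...\E so perl treats every character literally:
Code:
export FINDSTR='print F_WC_TMP"$line ";'
export REPLSTR='$line = join '\'' '\'', split '\'' '\'', $line; print F_WC_TMP"$line ";'
perl -pi -e 's/\Q$ENV{FINDSTR}\E/$ENV{REPLSTR}/' testfile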