General :: Searching A Directory And Pulling Out Filenames With A Certain Pattern?
May 17, 2010
I would like to search a specific directory and pull out filenames that have this pattern: "_bsc_". Then I want to do some processing and move each file to another directory.
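A minimal sketch of that loop, assuming the pattern can appear anywhere in the name (the source and destination paths are placeholders):
Code:
#!/bin/bash
src=/path/to/source        # placeholder source directory
dest=/path/to/destination  # placeholder target directory

for f in "$src"/*_bsc_*; do
    [ -e "$f" ] || continue        # nothing matched the pattern
    # ... do whatever processing is needed on "$f" here ...
    mv -- "$f" "$dest"/
done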
I have tried using Likewise, but I came across this yesterday: when you install Likewise only on a Linux, Unix, or Mac computer and not on Active Directory, you cannot associate a Likewise cell with an organizational unit, and thus you have no way to define a home directory shell in Active Directory for users who log on to the computer with their domain credentials. I am trying to pull attributes from Active Directory, namely the homeDirectory.
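For what it's worth, the attribute can also be queried straight from AD with the OpenLDAP client, bypassing Likewise; the server, bind account, base DN and username below are all placeholders:
Code:
# look up a user's homeDirectory attribute in Active Directory (hypothetical names)
ldapsearch -LLL -H ldap://dc.example.com \
    -D 'binduser@example.com' -W \
    -b 'dc=example,dc=com' \
    '(sAMAccountName=jdoe)' homeDirectory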
I'm having a bit of an issue with Lucid installed via Wubi. I stuck the OS on its own partition (30 GB in size), and don't store any large files in the Ubuntu file system (when I download something large I move it to another hard drive.) I don't have anything wacky or esoteric installed on my system.
I've been consistently having a problem where, after a few hours or a few days of being booted up, Ubuntu warns me that my available HD space is dangerously small. The amount of available HD space Ubuntu sees then shrinks from a few GB to nothing within a few minutes, and the only way I can seem to solve this is to reboot. Taking a closer look at what's happening, my Home folder balloons in size until there's no more writable space recognized. But there are no files being created or added to, so it looks like there's a bug of some sort. This SEEMS to be correlated with watching videos (or maybe it's the pulling of large files from a mounted directory into RAM? My videos are all on another HD, as mentioned before). I can generally go a few days without getting the "low space" message, but I can't seem to make it through a full 2-hour movie without getting the error.
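While the warning is on screen, a few standard commands can show what is actually growing and whether deleted-but-still-open files are holding the space:
Code:
df -h /                            # free space on the root filesystem
du -xh --max-depth=1 ~ | sort -h   # which directory under $HOME is ballooning
lsof +L1                           # files that are deleted but still held open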
I have a directory that has a large number of files, around 1.5 million at this point. If I go to the directory and type in "ls filename" for a filename that I know exists, ls just hangs. I have let it run for over 20 minutes and it never does anything. Up until yesterday the directory was serving files fine through Samba, but now it doesn't return anything. How should I proceed from here?
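A couple of checks that avoid reading and sorting the whole directory, which is usually what makes ls appear to hang at that size; the path is a placeholder:
Code:
stat /path/to/dir/filename     # stat one known entry without listing the directory
ls -f /path/to/dir | head      # -f skips sorting, so output starts immediately
dmesg | tail                   # any I/O or filesystem errors since yesterday?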
I was trying to develop a script which needs to check the count of files on an hourly basis and, if it finds any additions, sftp them and send an email on the status with the filenames and number of files copied via sftp. I will put it in cron to run every hour.
I'll use ls /abc | wc -l to count the number of files the first time; from then on, whenever a new file is inserted it will copy that file to another location, or I'll take the dates of the files and copy whichever has a newer date to another location.
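A rough outline of such a cron job, assuming batch-mode sftp and a local mail command are available; the host, remote path and mail address are placeholders:
Code:
#!/bin/bash
# run hourly from cron: copy files added since the last run and mail a report
dir=/abc
marker=/var/tmp/last_sftp_run

new=$(find "$dir" -maxdepth 1 -type f -newer "$marker" 2>/dev/null)
if [ -n "$new" ]; then
    count=$(printf '%s\n' "$new" | wc -l)
    # batch-mode sftp: one 'put' per new file (names with spaces would need extra quoting)
    { echo "cd /remote/incoming"
      printf '%s\n' "$new" | sed 's/^/put /'
    } | sftp -b - user@remote.example.com
    printf 'Copied %s file(s):\n%s\n' "$count" "$new" \
        | mail -s "hourly sftp report" admin@example.com
fi
touch "$marker"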
I am writing a shell script that finds all files named <myFile> in a directory <dir> or any of its subdirectories, recursively. I also need to take care of symbolic links that may form cycles, to avoid infinite loops. I am not supposed to use the find command for this.
I started writing the code but got stuck. I thought using recursion might be a smart way, but it's not working.
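One possible shape for it, without find: recurse manually and remember each directory's resolved physical path so a symlink cycle is only entered once. This is a sketch, not tested against every corner case:
Code:
#!/bin/bash
# usage: ./findfile.sh <dir> <myFile>
target=$2
declare -A visited                    # physical paths already searched

search() {
    local real entry
    real=$(cd -P -- "$1" 2>/dev/null && pwd) || return
    [ -n "${visited[$real]}" ] && return   # already been here: symlink cycle
    visited[$real]=1
    for entry in "$1"/* "$1"/.*; do
        case ${entry##*/} in .|..) continue ;; esac
        [ "${entry##*/}" = "$target" ] && [ -f "$entry" ] && printf '%s\n' "$entry"
        [ -d "$entry" ] && search "$entry"
    done
}

search "$1"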
I am a member of a group which has written a program whose source code is being held in a specific directory (~cs252/Assignments/basicAsst/project) and we want to go through and change the parameters for the function "sequentialInsert." My job is to find all occurrences of the function call to "sequentialInsert" and also to list the files the calls came from. Also, I have to be in the commandsAsst directory when I do this. I have tried grep and find combined together, and I am at a loss.
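grep alone can handle this, run from the commandsAsst directory:
Code:
# every file under the project tree that mentions sequentialInsert, names only
grep -rl 'sequentialInsert' ~cs252/Assignments/basicAsst/project
# or with the file name, line number and matching line
grep -rn 'sequentialInsert' ~cs252/Assignments/basicAsst/project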
I want to copy all files with the name XYZ* into one folder. The problem is that the files are in different subfolders and that not even the depth of the folder structure is the same for all files. Luckily, at least each file has a unique name.
Of course, I thought about the cp command but I guess the depth of the folder structure needs to be the same for this to work.
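find does not care about the depth, so pairing it with cp should do it; the source and target paths are placeholders:
Code:
# copy every XYZ* file, wherever it sits in the tree, into one target folder
find /path/to/source -type f -name 'XYZ*' -exec cp -- {} /path/to/target/ \;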
I want to go through a log file and find pattern1, and then pattern2 only after pattern1. So, for example, I want to know what howManyRecords was at 13:30. I figured I would grep for "start time for the job" and then, only after that (and before the next occurrence of it), grep for "howManyRecords". Is this a sane way?
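That works, but awk can do it in a single pass by remembering whether the first pattern has been seen; the literal strings are the ones described above:
Code:
# print each "start time for the job" line, then the first howManyRecords line after it
awk '/start time for the job/ { print; seen = 1; next }
     seen && /howManyRecords/ { print; seen = 0 }' logfile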
The (WD 320GB) drive has a single ext3 FS on it. It has had some problems in the past, but all were fixed with fsck -y. Now there are several directories with duplicate filenames. The files with duplicated names are hard links of each other, and the names are identical. I've run several diagnostics over them, looking for, e.g., non-printing characters in the names, but they are completely identical. Here are some examples:
[code]....
These are (obviously) from a directory of mp3s, but similar duplications occur throughout the fs - there are several thousand files affected. Some of the diagnostics were programmes I wrote that accessed the directory itself (through the dirent structure). I always thought duplicate filenames in the same directory were impossible in unix/linux; this appears to prove me wrong. Am I missing something? (Kernel version 2.4.20 with xfs extensions. The installation was originally Red Hat 7, but I've changed almost everything, so it's probably more accurate to call it a custom distro.)
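For comparison, these are the kinds of checks that usually expose look-alike names; if they all come back clean, the duplicates really are identical directory entries:
Code:
ls -lib /path/to/dir              # -i shows inode numbers, -b escapes non-printing characters
ls /path/to/dir | sort | uniq -d  # names that occur more than once
ls /path/to/dir | cat -A          # make control characters in names visible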
I am trying to search a particular directory which has files with extensions like .html, .mp3, .xml, etc. I have a list of such files. What I am doing in my script is:
for file_name in `find /home/ -name index.html -o -name song.mp3 -o -name help.xml`; do if [ $file!='' ] then
[code]....
I have around 100+ files with a particular extension. This code works fine if the directory name does not have any special character in it, like " " (whitespace).
It is failing to give the output. If I run the find command on the console, I get the correct file name with its location.
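The word-splitting on the backtick substitution is what breaks when a path contains whitespace; feeding find's output through a null-delimited read avoids it. A sketch of the fixed loop:
Code:
# null-delimited paths survive spaces and other special characters
find /home/ \( -name index.html -o -name song.mp3 -o -name help.xml \) -print0 |
while IFS= read -r -d '' file_name; do
    printf 'found: %s\n' "$file_name"
    # ... further processing on "$file_name" ...
done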
I want to search a file for a particular pattern and, if the pattern is found, replace the line with new text. I am using awk 'match($0,"pattern") != 0 {print $0}' filename to check whether the pattern exists. How do I get the line number of the pattern, delete that line, and replace it with my new text?
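sed can do the match, the delete and the replace in one step, and awk's NR gives the line number if it is needed separately; "pattern" and the replacement text below are placeholders:
Code:
# report the matching line numbers
awk '/pattern/ { print NR ": " $0 }' filename

# replace every matching line with new text, editing the file in place (GNU sed)
sed -i '/pattern/c\
my new text' filename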
I have to enhance the behaviour of a backup script written in perl. I don't need to change it; what I need to do is create a bash script that does some checks, like file name and file size, executes the backup script, and then checks whether the backup files match the original files. Here's how I try to do it:
- read the files from the original files folder
- store them in an array
- search the array for the files that have a specific file extension
- store the file names that match the search pattern (I know the backup script skips some files, so I can hardcode the search pattern)
- run the backup script
- read the files from the backup folder
- store them in an array
- compare the original file names and sizes stored in an array with those from the backup folder
- send a report email
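A skeleton of that wrapper, with the folder paths, extension and the name of the perl script all as placeholders:
Code:
#!/bin/bash
src=/data/original              # original files folder (placeholder)
bak=/data/backup                # backup folder (placeholder)
pattern='*.dat'                 # hardcoded search pattern (placeholder)

# originals that the backup is expected to contain
mapfile -t originals < <(find "$src" -maxdepth 1 -type f -name "$pattern")

/usr/local/bin/backup.pl        # run the existing perl backup script (placeholder path)

report=''
for f in "${originals[@]}"; do
    b="$bak/$(basename "$f")"
    if [ ! -e "$b" ] || [ "$(stat -c %s "$f")" != "$(stat -c %s "$b")" ]; then
        report+="missing or size mismatch: $f"$'\n'
    fi
done
printf '%s\n' "${report:-all backup files match}" | mail -s "backup check" admin@example.com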
I would like to write a script that pulls the last line of data out of a txt file and then saves it to another txt file. The txt file that I am looking at resides here: [URL]
I know I can grab that file using wget. I've done a little scripting but nothing major.
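Those two pieces are really all it takes; the URL here just stands in for the real one:
Code:
# fetch the file and append only its last line to a local file
wget -qO- 'http://example.com/data.txt' | tail -n 1 >> lastline.txt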
I have several (vhdl) files containing a pattern with newline characters that I need to replace by another pattern that also contains newline characters.
I start with something like:
Code:
I want to replace it by something like:
Code:
(I need to paste some lines)
As I need to do this (very) often, I want to use a shell script.
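Line-oriented sed is awkward for patterns that span newlines, but perl can slurp each file whole; the two fragments below are placeholders for the real VHDL patterns:
Code:
# -0777 reads each file as one string, -i.bak edits in place keeping a backup, -p writes it back
perl -0777 -pi.bak -e 's/old line one\nold line two/new line one\nnew line two\nnew line three/g' *.vhd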
I was told you can list filenames one-per-line, in BASH, without including directories. I think he was either wrong or making that up. There is a way to list just the names, one per line, but there aren't any arguments I can find that can be used to exclude directories.
Code:
IFS=', '; files=`ls -m`; for i in $files; do if [ -f $i ]; then echo $i; fi; done
That does only use ls as a command; however, he said his GSI thought he could do it without all that...
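For reference, two shorter ways to get plain filenames one per line without directories, though whether either counts as "just ls" is debatable:
Code:
ls -p | grep -v /                          # -p appends / to directories; grep drops them
find . -maxdepth 1 -type f -printf '%f\n'  # GNU find: bare filenames, no directories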
I am using Red Hat Linux. I just wanted to know, is it possible to arrange or sort filenames numerically? I have saved several files with the following names: 1.png, 2.png, 3.png, 4.png, ... 11.png, 12.png, and so on, but the containing folder sorts these alphabetically, in the following manner: 11, 12, 13, ... 1, 2, 3, and so on.
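From the command line, GNU ls can already do this:
Code:
ls -v          # "version" sort: 1.png, 2.png, ... 10.png, 11.png, 12.png
ls | sort -n   # numeric sort on the leading digits gives the same order here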
I am trying to synchronize the content of the directory my_dir/ from /home to /backup. This directory contains a file whose name has a double quote in it, such as to"to. Here is my rsync command: rsync -Cazh /home/my_dir/ /backup/my_dir/
And I get the following message: rsync: mkstemp "/backup/my_dir/.to"to.d93PZr" failed: Invalid argument (22) For info, rsync works well when the synchronized filenames contain single quotes, parentheses and spaces. So why is it choking on a double quote?
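The failing call is rsync trying to create its temporary file on the destination, so one quick check is whether the backup filesystem accepts a double quote in a file name at all:
Code:
df -T /backup                        # what filesystem type is the destination?
touch '/backup/my_dir/test"file'     # does it allow a double quote in a name?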
I have a ton of files that are timestamped directories. These all look like 2011-06-24_13.53.36 (a directory name for June 24th, 1:53:36 pm). I have thousands of these directories. I want to do operations on some of the older ones. Let's say I give it a string for date/time that matches that exact format, like 2011-06-25_00.00.00 (June 25th, 12 am). I want to find all the directories BEFORE my time. So if I give the string for 12 am on June 25th, I want to find all the directories before then. If not, I can find EVERY directory I have like this and then filter afterwards. The created/modified dates are not tied to the actual timestamp I'm looking for (that would make this easier).
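Because the names are zero-padded from year down to seconds, plain string comparison already orders them chronologically, so the cutoff can be compared directly; the cutoff value below is just the example above:
Code:
#!/bin/bash
cutoff=2011-06-25_00.00.00
for d in 20??-??-??_??.??.??; do
    [ -d "$d" ] || continue
    if [[ $d < $cutoff ]]; then
        printf '%s\n' "$d"      # ... or run the operation on "$d" here ...
    fi
done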
I tried to move a file from my desktop to another folder; moving it was not allowed, for some reason. Neither was opening it and saving a new copy in the target folder. Would that be because the filename contains double (") quotation marks? Are they not allowed? The filename is Edit of Bob's "Lady Liberty" Article.doc. [Filename not enclosed in quotation marks here, to avoid confusion.] I just changed the double quotation marks to single quotation marks; that solved everything.
I've been surfing and searching the net for quite a while now to make my own script, but I haven't been really successful. I want to make a script which can remove strings from my mp3 collection (file names).
For example:
Code:
101-bob_sinclar_feat_sean_paul-tik_tok_(radio_edit).mp3 --> bob_sinclar_feat_sean_paul-tik_tok_(radio_edit).mp3
10-Young Jeezy-Lose My Mind (78 Bpm) (Repack).mp3 --> young_jeezy-lose_my_mind.mp3
Now the problem is: how can I remove strings like 101 and 10 (dynamic), (%%% Bpm) (dynamic), and (Repack) (static)?
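A sketch of one way to do the renaming; the sed expressions only cover the cases in the two examples (a leading track number, a "(... Bpm)" tag and a "(Repack)" tag) and would need tuning for the rest of the collection:
Code:
#!/bin/bash
for f in *.mp3; do
    new=$(printf '%s' "$f" \
        | sed -E 's/^[0-9]+-//; s/ *\([0-9]+ [Bb]pm\)//; s/ *\(Repack\)//' \
        | tr 'A-Z ' 'a-z_')            # lower-case and turn spaces into underscores
    [ "$new" != "$f" ] && mv -v -- "$f" "$new"
done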
I have a file that contains "ls -la" output. I would like to display only the filenames, none of the other information before them such as permissions, ownership, size, and date. Would the cut command be the best way to handle this, or should I use Vim or sed?
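awk handles it if you print from the ninth field onwards, which is where the name starts in standard "ls -la" output, and it keeps names that contain spaces:
Code:
# fields 1-8 are permissions, link count, owner, group, size and the date; the rest is the name
awk 'NF >= 9 { for (i = 9; i <= NF; i++) printf "%s%s", $i, (i < NF ? OFS : ORS) }' listing.txt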
Having problems displaying French chars. They are dumped onto an NFS share by a Windows/CIFS configuration, which has been blamed for this unwanted behavior, but when I transfer a file containing é via WinSCP to the Red Hat box, instead of getting the filename Response.txt I see R?sponse.txt. When I refresh WinSCP to view the file, it displays it fine.
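If the name really was written in a different encoding than the Red Hat locale expects (Latin-1 from the Windows side versus UTF-8 locally), convmv can show and fix it, assuming it is installed; checking the locale first is worthwhile. The share path is a placeholder:
Code:
locale                                                     # is the shell using a UTF-8 locale?
convmv -f iso-8859-1 -t utf-8 -r /path/to/share            # dry run: shows proposed renames
convmv -f iso-8859-1 -t utf-8 -r --notest /path/to/share   # actually rename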