I currently have hundreds of files all in a single directory. What I would like to do is create 8 subdirectories and move the files into the subdirectories based on the first character of the file name. Ideally, the script would omit any leading 'the' or 'a' and use the second word for filing purposes. No filenames have spaces; they use periods instead. The subdirectories will be:
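The list of subdirectories was cut off above, but here is a minimal sketch of the mechanism, assuming bash 4; the case-statement ranges are placeholders you would adjust to match your 8 actual groups:
Code:
#!/bin/bash
# Sketch: file each file by the first character of its "filing word".
# Filenames use periods as separators, e.g. "the.big.book.txt".
for f in *; do
    [ -f "$f" ] || continue          # skip directories
    name=${f,,}                      # lowercase copy for matching (bash 4)
    # Drop a leading "the." or "a." so we file on the next word
    case "$name" in
        the.*) name=${name#the.} ;;
        a.*)   name=${name#a.} ;;
    esac
    first=${name:0:1}                # first character of the filing word
    # Map the character to one of 8 buckets -- these ranges are placeholders
    case "$first" in
        [a-c]) dir=a-c ;;
        [d-f]) dir=d-f ;;
        [g-i]) dir=g-i ;;
        [j-l]) dir=j-l ;;
        [m-o]) dir=m-o ;;
        [p-r]) dir=p-r ;;
        [s-u]) dir=s-u ;;
        *)     dir=v-z ;;
    esac
    mkdir -p "$dir"
    mv -- "$f" "$dir/"
done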
I have very little Linux experience and need some help with a bash script. I need a script that I can set cron to run, to sort files out of a holding folder into final folders. It doesn't necessarily have to be bash, but I think bash would be sufficient for this. File names are formatted as such when created: Dest-Date-Time-CID-Destination# I want the files to be moved from an all-in-one holding folder to a folder structure like this.
So the script will need to make directories based on information in the file name which is delimited by single dashes. Then move files from the holding folder to the newly created "sorted" folders.
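Since the target folder structure was cut off above, this is only a sketch; the paths and the choice of which fields become directory levels are assumptions to adjust:
Code:
#!/bin/bash
# Sketch: sort Dest-Date-Time-CID-Destination# files out of a holding folder.
# /path/to/holding and /path/to/sorted are placeholders.
holding=/path/to/holding
sorted=/path/to/sorted

for f in "$holding"/*; do
    [ -f "$f" ] || continue
    base=$(basename "$f")
    # Split the name on single dashes into its five fields
    IFS=- read -r dest date time cid destnum <<< "$base"
    # Assumed layout: sorted/Dest/Date -- change to match your real structure
    mkdir -p "$sorted/$dest/$date"
    mv -- "$f" "$sorted/$dest/$date/"
done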
What options should I use with the sort command to show the top 5 CPU processes (ps -eo user,pid,ppid,%cpu,%mem,fname | sort ??? | head -5), from max to min usage?
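For what it's worth, sorting numerically on the %cpu column (field 4 here) in reverse gives max-to-min:
Code:
# Sort numerically (-n) on the 4th field (%cpu), reversed (-r) for max-to-min
ps -eo user,pid,ppid,%cpu,%mem,fname | sort -k4,4 -nr | head -5

# GNU ps can also do the sorting itself, which keeps the header on top:
ps -eo user,pid,ppid,%cpu,%mem,fname --sort=-%cpu | head -6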
I am using the diff command with the -r option to compare a large number of files and files in subdirectories. My main interest is to find out which files have been changed, not what the actual changes are, and since a lot of files have been changed, it would be a lot easier to view the file names only. Is there an option for diff that might do this, or does there exist a similar tool/command that could do the job?
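diff does have a brief mode that prints only which files differ:
Code:
# -q reports only whether files differ; -r recurses as before
diff -rq dir1/ dir2/

# Sample output lines look like:
#   Files dir1/a.txt and dir2/a.txt differ
#   Only in dir2: newfile.txt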
I have a large directory tree with my ebooks, and some of these files are zipped. I would like to move all of the zip files to another directory so I can manipulate them. Since they are all scattered inside the tree, I would like to do it quickly and painlessly with the CLI. How should I proceed?
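A one-liner sketch; the two paths are placeholders, and note that mv will overwrite on name collisions (GNU mv accepts -n to skip instead):
Code:
# Move every .zip anywhere under the ebooks tree into one target directory
mkdir -p /path/to/zips
find /path/to/ebooks -type f -iname '*.zip' -exec mv -- {} /path/to/zips/ \;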
I've found several posts discussing how to do this with the terminal, but none exactly fit what I am trying to do. And since I'm still very new, I was hoping for some help.
I have a parent directory called "Music." The subdirectories all start with "artist", and some go further, as in "artist/album/cd1". So right now the structure varies in the following ways: code...
How can I move all the files (or the file types that I choose) to the parent directory "Music"?
(By the way, for anyone who is interested, this is so that I can use an external HD with a PS3 -- "PlayStation 3", for anyone who was in my predicament searching the threads.)
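A hedged sketch of one way, assuming GNU find and mv; -n keeps the first of any duplicate names and skips the rest:
Code:
# Move every file up into the Music parent directory
cd ~/Music || exit 1
find . -mindepth 2 -type f -exec mv -n -- {} . \;

# Only selected types instead:
find . -mindepth 2 -type f \( -iname '*.mp3' -o -iname '*.flac' \) -exec mv -n -- {} . \;

# Then remove the now-empty artist/album directories
find . -mindepth 1 -type d -empty -delete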
Two machines -- both 64-bit, both with the latest TrueCrypt installed, one Windows, one Linux. I created an encrypted pendrive (using partition mode, not a file container on a regular partition) with a FAT filesystem on it. I copied several files/directories there -- using Windows. Then I unmounted it and mounted it in Linux, but when I checked if everything was OK, it appeared that only the highest level of the file hierarchy was visible -- i.e. only files and directories at the mount point. Directories are seen as empty (in fact they are NOT empty). When I unmount the pendrive and mount it again on Windows, all files/subdirectories are visible again. It is not an issue of an older version of TC on Linux, because the version is the same. The question: how do I make the files in subdirectories visible on Linux?
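Without seeing the mount commands it is hard to say, but a first diagnostic step (an assumption on my part, not a known fix) is to check how the TrueCrypt volume actually got mounted and try remounting the mapped device explicitly as vfat; the device path below is a placeholder:
Code:
# See what filesystem type Linux actually used for the mapped volume
mount | grep truecrypt

# Kernel messages sometimes show FAT errors right after mounting
dmesg | tail -20

# Mount the mapped device by hand, forcing vfat
sudo mount -t vfat -o utf8 /dev/mapper/truecrypt1 /mnt/tc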
In reading the rsync man page and browsing a lot of websites, I ended up a bit confused, or maybe it was just too much eggnog. Anyway: how do I exclude a directory "videos" with everything in it, which is /home/user1/camera/videos, when I'm rsyncing the whole user1 directory to an external drive?
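In case it helps: --exclude patterns are matched relative to the transfer root, so with user1 as the source you exclude camera/videos, not the absolute path. A sketch with a placeholder destination:
Code:
# Source is /home/user1/, so the exclude pattern is relative to user1/
rsync -av --exclude='camera/videos/' /home/user1/ /mnt/external/user1/

# The trailing slash on the pattern limits the match to a directory.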
I'm able to use the following to remove the target directory and, recursively, all of its subdirectories and contents: find '/target/directory/' -type d -name '*' -print0 | xargs -0 rm -rf
However, I do not want the target directory to be removed. How can I remove just the files in the target, the subdirectories, and their contents?
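One approach, assuming GNU find: -mindepth 1 skips the starting directory itself, so everything inside goes but the target stays:
Code:
# Delete contents (files and subdirectories) but keep /target/directory itself
find /target/directory/ -mindepth 1 -delete

# Equivalent in the xargs style from above
find /target/directory/ -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -rf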
I need to copy all subdirectories and files from one directory to another every 5 minutes or so, with the old data automatically being overwritten with the new data. I'd also like this to run at startup. Is there any way this can be done? If so, what program would I need to schedule the automation, and what is the command line I would need?
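cron plus rsync covers this without any extra program; a minimal sketch with placeholder paths (drop --delete if you only want overwrites, not deletion of files removed from the source):
Code:
# Crontab entries (edit with: crontab -e)
# Every 5 minutes: mirror source into destination, overwriting older copies
*/5 * * * * rsync -a --delete /path/to/source/ /path/to/destination/

# Also run once at boot
@reboot rsync -a --delete /path/to/source/ /path/to/destination/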
I'm trying to set up an Apache server on my computer which will allow browsing of files in a specific directory and its subdirectories, without needing any sort of authentication.
I've got the Apache2 server up and running through YaST, and everything works fine as long as I point it at the /www/htdocs folder. However, I want to point it at another folder, which is on another partition. This partition is formatted as NTFS, if that matters at all (here's some background on some permissions issues I had with the NTFS partitions recently).
When I change the "Directory" setting in the YaST HTTP server configuration utility to the directory on the NTFS partition I wish to use, attempting to access the server results in the following error:
Code:
Access Forbidden: You don't have permission to access the requested directory. There is either no index document or the directory is read-protected. If you think this is a server error, please contact the webmaster.
Error 403
192.168.1.100
Mon Jun 13 23:43:29 2011
Apache/2.2.17 (Linux/SUSE)
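Two things usually have to line up here, and both parts below are assumptions about your setup: Apache 2.2 needs an explicit allow (and Indexes, since there is no index document) for the new directory, and an NTFS partition mounted via ntfs-3g needs mount options that make the files readable to the Apache user, because NTFS ignores chmod. The paths and device are placeholders:
Code:
# Apache 2.2 config: allow access and directory listings for the new root
<Directory "/mnt/ntfs/share">
    Options Indexes FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>

# /etc/fstab: mount the NTFS partition world-readable
/dev/sda3  /mnt/ntfs  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0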
Say the file is named file1.txt. After I do the following:
sort file1.txt | uniq >> file2.txt
I expect that the letter 'a' would only appear once, not in two rows; the word 'about' would also appear only once. However, I can't seem to get that result using this. I also tried sort -f file1.txt | uniq >> file2.txt, but to no avail. I actually got file1.txt from a messier file using the -f option of the sort command.
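One hedged guess: duplicates that look identical often differ by trailing whitespace or case, which uniq treats as distinct lines. Stripping whitespace first, or deduplicating case-insensitively, may get the result:
Code:
# Strip trailing whitespace, then sort and deduplicate in one step
sed 's/[[:space:]]*$//' file1.txt | sort -u > file2.txt

# Case-insensitive dedupe: -f folds case for sort, -i for uniq
sort -f file1.txt | uniq -i >> file2.txt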
How can I sort those IP addresses? I want to sort them using the first 3 numbers. I also want to count the number of times each address is repeated. This is for a batch program.
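A sketch assuming the addresses sit one per line in a file (ips.txt is a placeholder name):
Code:
# Sort numerically octet by octet (-t. splits fields on the dots),
# then count how many times each address repeats
sort -t. -k1,1n -k2,2n -k3,3n -k4,4n ips.txt | uniq -c

# To group by the first 3 octets only, cut the 4th off before counting
cut -d. -f1-3 ips.txt | sort -t. -k1,1n -k2,2n -k3,3n | uniq -c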
I've been using Linux for a long time, and I just ran into a problem that has me stumped. Any time I mistype a command, it says "Command not found."... yeah, I know that's normal. But it doesn't return me to my # prompt. I have to press Ctrl+C to get back. code...
I know I do have one issue with this computer: I have 2 blown caps on my motherboard. This was a dual-boot system, but after a virus with winblows, I decided to switch it to strictly Linux. (Roommates... *grumble*) I think I was running FC10 before I wiped the HD and installed FC12. FC12 does seem to be running slower, and I still haven't got my sound card working properly... but that issue is for another topic... -YungBlood Reborn
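Since the actual output got cut off above, this is only a guess, but a shell that hangs after "Command not found" sometimes has a command_not_found_handle function (e.g. a PackageKit hook) that blocks; these checks are safe to try:
Code:
# See whether bash has a not-found handler defined, and what it runs
type command_not_found_handle

# Look for a hook sourced from the global shell startup files
grep -rn command_not_found /etc/bash* /etc/profile.d/ 2>/dev/null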
I have multiple arrays (e.g., say two: firstLIST=(0 1 2) and secondLIST=(2 3)) and want to create a single array composed of their unique sorted elements. For the sample arrays above, I'd like to build masterLIST=(0 1 2 3). I suppose I could write the elements of firstLIST and secondLIST to files
as this gives me a file populated with the elements I'm after, but I'm not sure how to read the elements back into masterLIST... and it doesn't seem "right" to create files to accomplish this. Is there a way to do this by manipulating ${firstLIST[@]} and ${secondLIST[@]} directly? The closest I've come (not close at all) is
Code:
masterLIST=${firstLIST[@]}" "${secondLIST[@]}
but masterLIST built this way has only one element
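A sketch that stays in memory, assuming the elements contain no whitespace: print both arrays one element per line, pass them through sort, and capture the result back into an array:
Code:
firstLIST=(0 1 2)
secondLIST=(2 3)
# One element per line -> unique numeric sort -> back into an array
# (drop the n from -nu if the elements aren't numbers)
masterLIST=($(printf '%s\n' "${firstLIST[@]}" "${secondLIST[@]}" | sort -nu))
echo "${masterLIST[@]}"   # prints: 0 1 2 3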
I need a script that will take all the files in a given directory, create new monthly sub-directories, and sort all the files into the appropriate directory based on the creation date. For example, all files created between 01/01/09 and 01/31/09 will be placed in 'JAN-2009'.
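One caveat: most Linux filesystems don't record a creation time, so this sketch uses the modification time as a stand-in; the 'JAN-2009' names come from date's %b and %Y fields, uppercased (GNU date assumed):
Code:
#!/bin/bash
# Sketch: file everything in the current directory into MON-YYYY subfolders
for f in *; do
    [ -f "$f" ] || continue
    # Month abbreviation and year from the file's modification time
    dir=$(date -r "$f" +%b-%Y | tr '[:lower:]' '[:upper:]')
    mkdir -p "$dir"
    mv -- "$f" "$dir/"
done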
I have some files, named E000.svg, E001.svg, E002.svg, ... E07D.svg, E07E.svg, E07F.svg. When I open them in Nautilus and choose the option Sort Alphabetically, they are sorted:
E000.svg E00A.svg E00B.svg E00C.svg E00D.svg E00E.svg E00F.svg E001.svg E01A.svg E01B.svg and so on.
This is totally useless for me, so I searched for a solution in Google and found lots of bug reports. It looks like it's a 'feature' for sorting 10 after 9; but clearly they have forgotten that many people don't like or don't need that option. Thinking it was a Nautilus thing, I installed Thunar, but it has the exact same useless algorithm. And then I installed PCManFM, but it's also the same. Is there a file manager I can install that sorts my files correctly? Or will I need to switch to Windows to have it fixed? (Not joking here; Windows Explorer doesn't suffer from this feature.)
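Not a file-manager fix, but for what it's worth the shell can show both behaviors, which at least confirms the data sorts fine outside the GUI:
Code:
# Plain byte-order (hex-friendly) listing: digits sort before letters
LC_COLLATE=C ls

# GNU ls -v reproduces the "version sort" the file managers are doing
ls -v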
Until now I haven't had to dabble in bash scripts.
I have a program that reads in data files. These are named datafile01_R, datafile01_G, datafile01_B; they then increment, so datafile02_R, etc. I have about 600 of these. The program reads in 3 data sets at a time for each run, so files 01 R, G, and B.
The program then does its magic and outputs about 40 different files which, depending on the file, go to folders named R, G, B, psa, or tracking.
The program itself has configuration files that say where the files should go when analyzed; there are also the config files that read in the data sets.
At the moment I have to run one set of data, then go in and manually change the input file location, and run again. But doing this, even though it's a different data set, the new set overwrites the old set in one of the output folders. So I need a way to increment the output filenames after they are written and before the program is run again with the new data set.
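One hedged approach: wrap the program in a driver script that stamps each run's outputs with the data-set number before the next run starts. The folder names R, G, B, psa, and tracking come from your description; "analyse" and "$n" are placeholders for your program and the current run number:
Code:
#!/bin/bash
# Sketch: run once per data set, then stamp that run's outputs so the
# next run cannot overwrite them.
n=01
./analyse
for dir in R G B psa tracking; do
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        case "$f" in *.run*) continue ;; esac   # skip already-stamped files
        mv -- "$f" "$f.run$n"
    done
done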
I recovered some 60,000 files with PhotoRec and need a script to sort them into individual folders based on extension. I was able to do this once before but cannot find the script again (sad thing is that I probably saved it on another HD that I'm having partition issues with, but that's another story....). I found this script:
Code:
#!/bin/dash
mkdir "$1"
for file in *.$1; do
    mv "$file" "$1"
done
While it does work, I am not looking forward to going through all 132 folders and typing in each extension. The last time I did it, the folder was automatically created based on the extension(s) found.
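A sketch that derives the folder names from whatever extensions it finds, instead of taking them as arguments; files with no extension are left alone:
Code:
#!/bin/bash
# Sketch: create one folder per extension found and move files into them
for file in *; do
    [ -f "$file" ] || continue
    case "$file" in
        *.*) ext=${file##*.} ;;   # text after the last dot
        *)   continue ;;          # no extension: leave in place
    esac
    mkdir -p "$ext"
    mv -- "$file" "$ext/"
done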
What I wanted to do was find all the files with a specific name in a tree, sort them by modification time, and have their directory appended to them so that I knew where they were (because they all have the same name). I tried a whole bunch of different things and finally did this:
This did the trick pretty well, but as you can see it is far from elegant, and I think I'm doing some things wrong and kludgy.
The first thing I tried was "ls -lRt | grep world.sav", which worked except I couldn't distinguish the files because there were no directories. That took a lot of looking till I accepted I couldn't make ls print directories as well and append them to the files somehow so that their relationship would be clear. I tried piping ls to find, doing it in reverse, passing them from grep, etc., until I read some more stuff online that got me using gawk and sort. The questions:
1. Is there some other, more elegant and simple way to do this kind of detection and sorting?
2. Is there any way to use a pipe after using exec? The semicolon seems to prevent this entirely, forcing me to use an intermediate file as above. I could just remove it later, but I'd prefer straight piping.
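On question 1, GNU find can print the timestamp and the full path itself, which makes the sorting and the "where is it" part one pipeline; world.sav is taken from your example:
Code:
# Epoch mtime + full path, oldest first; strip the sort key afterwards
find . -name world.sav -printf '%T@ %p\n' | sort -n | cut -d' ' -f2-

# Or keep human-readable timestamps in the output
find . -name world.sav -printf '%TY-%Tm-%Td %TH:%TM %p\n' | sort
On question 2: the semicolon only terminates the -exec clause, so you can still pipe find's combined output as a whole, e.g. find . -name world.sav -exec ls -l {} \; | sort -- no intermediate file needed.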
I'm trying to make another file annotation script a little speedier than it has been, by the up-until-now proven method of checking the last four characters in a filename before the "dot" (e.g. .jpg, .psd) against a list of known IPTC categories and Exiv2 command files. It occurred to me that if one script generated a list of files in directory foo, and the same or another script sorted that list by that four-letter tag, then that list could be used (instead of a for/do/done loop on the real files in the folder) by the command-file-matching script to "vomit out" which annotator file would go with file nastynewfile.jpg, f'r'instance. The script I had been using for this task looks like this:
Code:
while read 'line'; do
    sp=$(echo $line)
    vc=$(echo $sp | cut -d"," -f1)
    cv=$(echo $sp | cut -d"," -f2)
...
Where I seem to be stuck is with how to sort the lines in templist, which may be any number of different lengths, from back to front. sort -k looked promising, except it seems only to work the other way round. I thought of invoking a
Code:
q=$(expr length $line); echo $q
n=$[q-8]; echo $n
kind of thing, but that presented the problems of how to sort by those, how to tell sort where to find them (grep?) and how to "stitch them back in" to the original list, which is what I want to sort in the first place.
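One trick that may fit: rev reverses each line, which turns "sort by the end of the line" into a plain front-to-back sort, regardless of line length; a sketch on the templist file from above:
Code:
# Sort lines by their endings: reverse, sort, reverse back
rev templist | sort | rev > templist.sorted

# Or sort on just the extension field, keeping lines intact:
# awk prepends the text after the last dot as a key, sort uses it, cut drops it
awk -F. '{print $NF, $0}' templist | sort | cut -d' ' -f2- > templist.sorted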
I have a hard drive with several thousand photos. These photos are in different formats; some are tif, some jpg, some raw (cr2). These files are in dozens of directories. What I want to do is produce a list of all the files, in all of the directories, sorted by the file name (not sorting on the path), listing the location, file name, size, and date created. For instance, I may have a file called photo1.jpg in /photos/pics/; I may also have a file called photo1.cr2 in /photos/misc/ and a file called photo1.tif in /photos/processed/summer/.
I would like a text file that would look like this:
/photos/misc/photo1.cr2 2536658 2010-07-09 13:17
/photos/pics/photo1.jpg 320046 2010-07-07 14:47
/photos/processed/summer/photo1.tif 234456689 2010-07-10 09:22
Of course I want it to do this for all of the photos. I'm pretty sure that there is a way to do this with a minimum amount of work. I have no problem with using the command line.
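A sketch assuming GNU find; it prints the basename first as a sort key, sorts on it, then drops the key, leaving path, size, and date (modification time stands in for the creation date, which most Linux filesystems don't record):
Code:
# %f = basename (sort key), %p = full path, %s = size, %TY... = mtime
find /photos -type f -printf '%f\t%p %s %TY-%Tm-%Td %TH:%TM\n' \
    | sort -f | cut -f2- > photolist.txt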