Programming :: List Files In Bash + Add Url Before Them?
Dec 8, 2010
I am an uploader to various hosts, so this tiny script would save me a lot of time. I make a rar archive and split it into 100 MB parts. I can end up with 3-4 or even 76 rar parts, and it takes me some time to paste all of those URLs into the remote-upload function of the filehosting sites. For example:
Code:
server:/home/cober/downloads/teevee# ls -al
total 358784
drwxrwxrwx 2 root root 4096 Dec 8 19:38 .
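A minimal sketch of what I mean, assuming the rar parts sit in the current directory; the base URL below is a placeholder for the real host path:

Code:
#!/bin/bash
# Print each rar part prefixed with the download URL of the host.
# The URL is a placeholder; substitute your own.
base="http://example.com/downloads/teevee"
for f in *.rar *.r[0-9][0-9]; do
    [ -e "$f" ] || continue    # skip unmatched glob patterns
    printf '%s/%s\n' "$base" "$f"
done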
How to build a list of files under a directory that may have any permissible characters in the name, that is anything except NUL? The only possible (?) bash data structure to contain a list of such names is an array because NUL cannot be used as a list item separator so no X-separated list can safely be used; there is no "X" that might not be part of a file name. OK -- but how to populate such an array? Here's what I've tried.
Code:
#!/bin/bash
# Set up test files
dir=$(mktemp -d "/tmp/${0##*/}.XXXXXX")
touch "$dir/foo" "$dir/bar"
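One NUL-safe way to populate the array, sketched under the assumption that GNU find is available:

Code:
# Fill an array with arbitrary file names, using NUL as the separator.
files=()
while IFS= read -r -d '' f; do
    files+=("$f")
done < <(find "$dir" -maxdepth 1 -type f -print0)
printf 'got %d files\n' "${#files[@]}"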
AKA "zipping on the fly .. the slow-as-molasses way." The list includes full pathnames to each file, and they're all in subfolders of the same parent folder (which, unfortunately, is not the root folder of the drive or system on which the files reside). A cleaned-up and radio-ready portion of the list looks like
What I'd like to be able to do is zip all the files in the list into a single archive, to avoid the step of having to copy them to the same location (presumably another folder on the HD) and then zip that folder. I'd rather deal with extracting to a single folder at some other time. Is this possible in bash, or would I have to consider a faster, more robust scripting language such as Python or Perl?
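If the list is a plain text file of pathnames, Info-ZIP's zip can read the names from stdin, which avoids copying anything first; a sketch, with archive.zip and filelist.txt as placeholder names:

Code:
# Zip every file named in the list without moving any of them.
zip archive.zip -@ < filelist.txt
# Add -j to junk the leading paths so everything extracts to one folder:
# zip -j archive.zip -@ < filelist.txt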
I am trying to get this script to work. The purpose is to download a list of modules from slax.org; the list consists of module numbers. What I am trying to do is download the file corresponding to each number in the list. The list is comma-delimited. This is what I have done so far, and I am at a standstill.
#!/bin/sh
# Wget script to retrieve modules from slax.org modules
#
# ---- Begin of user defined values -----
# Path to wget
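A sketch of the downloading loop itself, assuming the numbers live on one comma-delimited line in modules.txt; the URL pattern below is a placeholder, not slax.org's real layout:

Code:
#!/bin/sh
# Fetch one module per number in the comma-delimited list.
list=$(cat modules.txt)
old_ifs=$IFS
IFS=','
for num in $list; do
    num=$(echo "$num" | tr -d ' \n')    # strip stray spaces/newlines
    wget "http://www.slax.org/get_module.php?id=${num}"
done
IFS=$old_ifs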
I have a file which I need to list 10 lines at a time, with something like "press Enter to go on" in between. The problem is that I have absolutely no idea how to implement this.
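A minimal sketch; the pause reads from /dev/tty because stdin is busy feeding the loop:

Code:
#!/bin/bash
# Page through "$1" ten lines at a time.
count=0
while IFS= read -r line; do
    echo "$line"
    count=$((count + 1))
    if [ $((count % 10)) -eq 0 ]; then
        read -r -p "Press Enter to go on..." < /dev/tty
    fi
done < "$1"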
I have a file with wildcard patterns: ./include/* ./src/* etc. Starting from the current directory, I would like to recursively get the list of files that do not match these patterns.
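One sketch, assuming the patterns live one per line in exclude.txt; the sed call crudely turns the globs into regexes for grep:

Code:
# List all files, then drop the ones matching any excluded pattern.
find . -type f | grep -v -f <(sed 's/[.]/\\./g; s/\*/.*/g' exclude.txt)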
I'm writing a bash script to copy a list of files and do some stuff to them. Basically, I have the code written that does what it needs to do, but I can't quite understand why it works. I was hoping someone could clear up my understanding a bit.
Code:
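(The original snippet was lost in posting; this is a reconstruction from the description below, so treat the exact commands as assumptions.)

# Wrap each file name in quotes, one per line.
files=$(ls | sed 's/.*/"&"/')
# Set IFS to a newline; $(...) strips trailing newlines, so the \b
# is appended purely to protect the \n from being stripped.
IFS=$(echo -en "\n\b")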
The first line generates a list of files. I wrap each line in quotes because they usually have spaces in the directory names.
The second line changes IFS, and I understand what IFS itself does. What I don't quite get is what the separator becomes with that echo statement. If I'm reading that correctly, the backspace will remove the newline and essentially the result is nothing? I found this solution on a web page somewhere, but it was years old and there was no real explanation.
I need a command or script to list all files recursively, without directories, one line per file and no extra lines (unlike ls -AR1). It should print file size and name, e.g.:
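A sketch using GNU find's -printf, which prints exactly one "size name" line per file:

Code:
find . -type f -printf '%s %p\n'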
I'm trying to write a bash script that gets the list of files in a directory and puts them into a variable, then checks each entry and outputs them as follows:
item1 is a FILE
item2 is a DIR
item3 is a DIR
etc etc.
I am able to get the list of files into a variable, but unsure how to get the output I want.
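A sketch of the check, iterating over the directory directly rather than through a variable (which is safer when names contain spaces):

Code:
#!/bin/bash
# Report whether each entry in the target directory is a file or a dir.
for item in *; do
    if [ -d "$item" ]; then
        echo "$item is a DIR"
    elif [ -f "$item" ]; then
        echo "$item is a FILE"
    fi
done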
I'm trying to use zenity in a bash script to display a .csv file using '--list', to allow the user to edit some of the values. I can display it fine, but I'm unsure how to edit the data; all I can get is whichever line is highlighted when hitting OK on the zenity dialog to print. The data in the CSV is arranged:
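A sketch, assuming a two-column CSV where every line has exactly one comma; zenity's --editable flag lets the selected entry be changed before OK:

Code:
#!/bin/bash
# Load the CSV cells in row order, show an editable list dialog,
# and capture whatever the user selects/edits.
mapfile -t cells < <(tr ',' '\n' < data.csv)
choice=$(zenity --list --editable \
                --column "Field1" --column "Field2" \
                "${cells[@]}")
echo "result: $choice"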
I'm not sure if this is best done in Perl or bash. I'm thinking surely someone else has created something close to what I'm looking for. The idea is that someone would kick off "linux_hosts.sh" or whatever you want to call it, see a top "folder" of options (with hosts contained within each of these top menu choices), and then, based on which number they pick at that top level, be presented with the set of linux hosts relevant to that top-level name. Example:
$ linux_hosts.sh
1. VMware hosts    4. Private Domain
2. ESX servers     5. Red Hat boxes
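A sketch of the two-level menu using bash's select builtin; the category names and hosts below are placeholders:

Code:
#!/bin/bash
PS3="Choose a category: "
select category in "VMware hosts" "ESX servers" "Private Domain" "Red Hat boxes"; do
    case $category in
        "VMware hosts") hosts=(vmhost1 vmhost2) ;;
        "ESX servers")  hosts=(esx1 esx2) ;;
        *)              hosts=(host1 host2) ;;
    esac
    printf '%s\n' "${hosts[@]}"   # second level: pick a host from these
    break
done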
I need to rename the files found by a loop. I have the following code:
find . -name 'DOC*' | while read -r i
do
    find "$i" -type f -name '*.txt'
done
Basically, I am searching for all txt files inside any folder whose name starts with DOC. This code is working fine for me. I need to rename those .txt files to .txtOLD. OS: Ubuntu 10.04, bash shell.
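A sketch of the rename step added to the same loop:

Code:
# Append OLD to every .txt found under the DOC* folders.
find . -type d -name 'DOC*' | while read -r dir; do
    find "$dir" -type f -name '*.txt' | while read -r f; do
        mv "$f" "${f}OLD"    # file.txt -> file.txtOLD
    done
done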
Is there a way, preferably in Python or bash, to rename files from a list? For instance, track1.mp3, track2.mp3 should be renamed to the names stored in a file listing song names. I have tried to loop a variable through the directory listing and rename them, only to find that filenames with spaces can't be assigned to a variable as a whole. To solve that, I tried the read command in bash, which lets the program read line by line from a list, but it failed when I piped the results of the directory listing to the read command.
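A sketch, assuming names.txt lists the new song names one per line, in track order:

Code:
#!/bin/bash
# Rename track1.mp3, track2.mp3, ... to the names read from the list.
# read takes whole lines, so spaces in the new names are safe.
n=1
while IFS= read -r name; do
    mv "track${n}.mp3" "${name}.mp3"
    n=$((n + 1))
done < names.txt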
I need a bash script that goes through a given directory (given as argument 1) and lists the relative path (including $1) of each subdirectory which contains files. Directories which only contain . .. and possibly other subdirectories SHALL NOT be listed. It is this last requirement that makes it difficult for me.
I have been using the tree command so far, but I have not found an easy way to ignore paths to directories which only contain other subdirs or nothing at all. I could of course test each directory after it is listed, but that adds an extra loop to go through, and I believe it should be possible to do it directly when creating the list. I guess it should be possible using find or ls in conjunction with the tree command, or by themselves, but I am not too conversant with nested script commands.
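A sketch with GNU find: %h prints the directory part of each file's path, so sorting the output uniquely yields exactly the directories that directly contain files:

Code:
#!/bin/bash
# List every directory under "$1" that directly contains at least one file.
find "$1" -type f -printf '%h\n' | sort -u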
I'm trying to make another file annotation script a little speedier than the up-until-now proven method of checking the last four characters in a filename before the dot (e.g. .jpg, .psd) against a list of known IPTC categories and Exiv2 command files. It occurred to me that if one script generated a list of files in directory foo, and the same or another script sorted that list by that four-letter tag, then that list could be used (instead of a for/do/done loop on the real files in the folder) by the command-file-matching script to "vomit out" which annotator file would go with file nastynewfile.jpg, for instance. The script I had been using for this task looks like this:
Code:
while read -r line; do
    sp=$(echo "$line")
    vc=$(echo "$sp" | cut -d"," -f1)
    cv=$(echo "$sp" | cut -d"," -f2)
[code]....
Where I seem to be stuck is with how to sort the lines in templist, which may be any number of different lengths, from back to front. sort -k looked promising, except it seems only to work the other way round. I thought of invoking a
Code:
q=$(expr length "$line"); echo "$q"
n=$((q - 8)); echo "$n"
kind of thing, but that presented the problems of how to sort by those, how to tell sort where to find them (grep?) and how to "stitch them back in" to the original list, which is what I want to sort in the first place.
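One sketch that sidesteps the arithmetic entirely: reverse each line, sort, and reverse back, so the list is effectively sorted from the end of each name:

Code:
# Sort templist "back to front" without computing any offsets.
rev templist | sort | rev > templist.sorted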
I've got a problem with a piece of code. Basically, I use my listRegularFiles function in two separate places in my code. The first time I run it, it appears to work perfectly well. If I use it a second time, however, it blows a gasket. I'll post the code for listRegularFiles below, in case anybody has any ideas:
I want to make a program that maintains a list of tags that can be attached to a set of files. Option one: store the tags in the files themselves. The main problem is that there is no way to get a list of all the tags without reading each and every file; and what about an unused tag? Option two: have a file that maps tag "keys" to file-list "values". This seems like it would be fast and effective, but what if one of the files gets renamed?
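A sketch of the second option with a hypothetical tags.db format (tag, then its files, all tab-separated); stale entries from renames can at least be detected cheaply:

Code:
#!/bin/bash
# tags.db line format (hypothetical):  tag<TAB>file1<TAB>file2...
lookup_tag() {
    awk -F'\t' -v tag="$1" '$1 == tag { for (i = 2; i <= NF; i++) print $i }' tags.db
}
# Flag index entries whose files were renamed or deleted.
lookup_tag vacation | while read -r f; do
    [ -e "$f" ] || echo "stale entry: $f"
done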
What I want to do is create a script that will interpret the following string and save parts of its name into variables:
m02_+1+7_London_0000$01.cfg
------X-Y--City---------
X=1  Y=7  City=London
[code]....
Then I want to copy all the files with the same City, X and Y into the same subfolder City/MX.Y. I will need some help to start doing that, and I think the first step would be to get the parts of the filename strings into variables.
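A sketch of the variable extraction, splitting on the underscores; the single-digit coordinates are an assumption taken from the example name:

Code:
#!/bin/bash
# Pull X, Y and City out of names like m02_+1+7_London_0000$01.cfg,
# then copy each file into City/MX.Y.
for f in m*_*_*_*.cfg; do
    IFS='_' read -r _ xy city _ <<< "$f"   # xy="+1+7", city="London"
    x=${xy:1:1}                            # assumes single-digit X
    y=${xy:3:1}                            # assumes single-digit Y
    mkdir -p "$city/M$x.$y"
    cp "$f" "$city/M$x.$y/"
done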
I am trying to check whether a nightly backup was successfully copied over, then rename it and curl, but the check always passes even if the file is older than specified. From the command line it does what it should. Example:
Code:
find /backup -type f -mmin +4440 -exec echo "found" {} \;

Nothing is returned (good). Then I change the time
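A sketch of why a script check can always pass: find exits 0 whether or not it matched anything, so test its output instead of its exit status (the threshold below is taken from the example):

Code:
#!/bin/bash
# Succeed only if find actually printed a match.
old=$(find /backup -type f -mmin +4440)
if [ -n "$old" ]; then
    echo "backup is stale:"
    printf '%s\n' "$old"
else
    echo "backup is fresh"
fi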
I'm making a small script for searching and doing some operations with photos, but I'm kinda stuck on this little function:
Code:
function findallformat {
    prefix=""
    if [ "$1" = "-pre" ]
    then
[code]....
That function should find every file of a given type, and you can specify a prefix using "-pre" followed by the prefix that you want to search for. The types should be "stackable", so you can pass as many as you want without repeating the same function in the code.
Example: findallformat -pre IMG_ .JPG .CR2 #That should find files that start with "IMG_" and finish with .JPG or .CR2. My problem is that when I try to use it in the script, it says "bash: syntax error near unexpected token `}'"
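A sketch of a version that handles the stacking, building one find command with -o between the name patterns:

Code:
#!/bin/bash
# Usage: findallformat [-pre PREFIX] .EXT [.EXT ...]
findallformat() {
    local prefix=""
    if [ "$1" = "-pre" ]; then
        prefix=$2
        shift 2
    fi
    local args=()
    for ext in "$@"; do
        [ ${#args[@]} -gt 0 ] && args+=(-o)
        args+=(-name "${prefix}*${ext}")
    done
    find . -type f \( "${args[@]}" \)
}
findallformat -pre IMG_ .JPG .CR2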
I'm trying to rename a lot of files to get rid of the spaces in the names. For that purpose I wrote this very simple bash script, but for some reason it is not working.
Code:
for i in "$(ls)"
do
    j=$(echo "$i" | sed 's/ /_/g')
    mv "$i" "$j"
done

But what I get in return for each line is just one long file name with all the file names concatenated. I've tried with echo -e "$i" as well, with no results. This has to be something really simple that I'm missing, but I just can't see it.
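The culprit is the quoted "$(ls)", which hands the loop the entire listing as a single word; a sketch of the fix using a glob instead:

Code:
# Rename only the names that actually contain a space.
for i in *\ *; do
    [ -e "$i" ] || continue    # nothing matched the glob
    mv "$i" "${i// /_}"        # bash expansion: spaces -> underscores
done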
I am trying to write a simple script to list all the files in a directory. The script I wrote is below, where pdb_files is the directory containing all the files which I want to list.
Code:
files=$(ls -F pdb_files/*THERMO*)
for inFiles in $files
do
    echo $inFiles
done
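A more robust sketch lets the shell expand the glob directly instead of parsing ls output:

Code:
for inFile in pdb_files/*THERMO*; do
    echo "$inFile"
done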
I have around 600 empty text files and I need to add the name of each file as part of its data. I mean files from "file1.txt" to "file599.txt", all of them empty, and I need to get the name inside the file, so when I open "file1.txt" it shows "file1" as part of the data. These files were created on my web site. I am thinking of a small bash script.
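A minimal sketch:

Code:
#!/bin/bash
# Write each file's own base name (without .txt) into the file.
for f in file*.txt; do
    echo "${f%.txt}" > "$f"
done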