I need a command or script to list all files recursively, without directories, one line per file and with no extra lines (ls -AR1 prints directory headers and blank lines). It should print the file size and name, e.g.:
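A minimal sketch of one way to do this, assuming GNU find (its -printf can emit the size in bytes followed by the path):
Code:
# one line per regular file: size in bytes, then the path;
# directories are excluded by -type f, and nothing else is printed
find . -type f -printf '%s %p\n'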
I am an uploader to various hosts, so this tiny script helps me a lot. I make a rar archive and split it into 100 MB parts. I could get 3-4 or even 76 rar parts, and it would take me some time to paste all those URLs into the remote-upload function of the file-hosting sites. For example:
Code:
server:/home/cober/downloads/teevee# ls -al
total 358784
drwxrwxrwx 2 root root 4096 Dec 8 19:38 .
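A minimal sketch of how such a URL list might be generated, assuming the parts sit in the current directory and are reachable under a known base URL (the URL here is a placeholder, not the real one):
Code:
#!/bin/bash
# assumption: the uploaded parts end up under this base URL
BASE="http://example.com/downloads/teevee"
for f in *.rar; do
    echo "$BASE/$f"
done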
I have a file with wildcard patterns: ./include/* ./src/* etc. Starting from the current directory, I would like to recursively get the list of files that do not match any of these patterns.
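A minimal sketch of one way to do it, assuming the patterns live one per line in a file called patterns.txt (a hypothetical name). The sed converts each shell glob into an anchored regular expression, and grep -v -f drops every matching path:
Code:
# escape dots, turn * into .*, anchor at the start of the line,
# then filter the find output through the converted patterns
find . -type f | grep -v -E -f <(sed -e 's/\./\\./g' -e 's/\*/.*/g' -e 's/^/^/' patterns.txt)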
I'm writing a bash script to copy a list of files and do some stuff to them. Basically, I have the code written that does what it needs to do, but I can't quite understand why it works. I was hoping someone could clear up my understanding a bit.
Code:
The first line generates a list of files. I wrap each line in quotes because they usually have spaces in the directory names.
The second line changes IFS, and I understand what IFS itself does. What I don't quite get is what the separator becomes with that echo statement. If I'm reading that correctly, the backspace will remove the newline and essentially the result is nothing? I found this solution on a web page somewhere, but it was years old and there was no real explanation.
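A minimal sketch of the idiom in question (the variable and loop names are assumptions, since the original code block was not shown). The point of the backspace is that command substitution strips trailing newlines, so $(echo -en "\n") alone would set IFS to an empty string; the trailing \b keeps the newline from being the last character:
Code:
#!/bin/bash
# IFS ends up containing a newline plus a backspace. Backspaces almost
# never occur in file names, so in practice the separator is
# "one entry per line" rather than nothing at all.
IFS=$(echo -en "\n\b")

for f in $(find . -type f); do
    echo "processing: $f"
done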
I'm trying to write a bash script that gets the list of files in a directory and puts them into a variable, then checks each entry and outputs them as follows:
item1 is a FILE
item2 is a DIR
item3 is a DIR
etc etc.
I am able to get the list of files into a variable, but I am unsure how to get the output I want.
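A minimal sketch of one way to produce that output, assuming the directory of interest is the current one. The -d and -f tests tell directories and regular files apart:
Code:
#!/bin/bash
for item in *; do
    if [ -d "$item" ]; then
        echo "$item is a DIR"
    elif [ -f "$item" ]; then
        echo "$item is a FILE"
    fi
done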
How can I build a list of files under a directory that may have any permissible characters in their names, i.e. anything except NUL? The only possible (?) bash data structure that can contain a list of such names is an array, because NUL cannot be used as a list-item separator, so no X-separated list can safely be used; there is no "X" that might not be part of a file name. OK -- but how do I populate such an array? Here's what I've tried.
Code:
#!/bin/bash
# Set up test files
dir=$(mktemp -d "/tmp/${0##*/}.XXXXXX")
touch "$dir/foo" "$dir/bar"
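A minimal sketch of one way to populate such an array, using NUL as the delimiter so any legal file name survives intact (this continues from the test setup above):
Code:
# find -print0 emits NUL-terminated names; read -d '' consumes them
# one at a time, and the empty IFS keeps leading/trailing blanks intact
files=()
while IFS= read -r -d '' f; do
    files+=("$f")
done < <(find "$dir" -type f -print0)

printf 'found: %q\n' "${files[@]}"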
AKA "zipping on the fly .. the slow-as-molasses way." The list includes full pathnames to each file, and they're all in subfolders of the same parent folder (which, unfortunately, is not the root folder of the drive or system on which the files reside). A cleaned-up and radio-ready portion of the list looks like
What I'd like to be able to do is zip all the files in the list into a single archive, to avoid having to copy them to the same location (presumably another folder on the HD) and then zip that folder. I'd rather worry about extracting them to a single folder some other time. Is this possible in bash, or would I have to turn to a faster, more robust scripting language such as Python or Perl?
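A minimal sketch, assuming the list lives in filelist.txt (a hypothetical name) with one full pathname per line. zip's -@ option reads the names to archive from standard input, so no staging folder is needed:
Code:
# archive every path named in the list, preserving the paths
zip archive.zip -@ < filelist.txt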
So I was wondering: if I capture this output into a file (i.e. one file name per line), can anyone help me write a command that iterates through the file and moves the files one by one to a specified directory?
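A minimal sketch, assuming the captured list is in filelist.txt and the destination is /path/to/dest (both hypothetical names):
Code:
# read -r keeps backslashes intact; the quoting handles names with
# spaces; -- stops mv treating names starting with - as options
while IFS= read -r f; do
    mv -- "$f" /path/to/dest/
done < filelist.txt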
I am doing a spreadsheet for work (for importing data into a new database) and I have hundreds of image files which need to have just their file names in one column called Product Code. Is it possible to use the ls command to list the contents of a directory in one single column so that I can copy and paste it into the spreadsheet?
Also, is there a single command to remove the file extensions for a batch of files? Bulk renaming is what I need, I guess, but just to remove the file extension (.jpg on all of them). The normal use of ls lists them in multiple columns, and when I copy and paste those it will not let me copy just one column at a time. The spreadsheet only has three columns:
New Product Code, Old Product Code, Pictures. New Product Code will be left blank, and Old Product Code is just the image names. The Pictures column will be the path-to-the-image for each image. I am not sure that is even possible in a spreadsheet.
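A minimal sketch covering both parts, assuming the images are all .jpg files in the current directory. ls -1 forces one name per line, and the loop strips the extension with parameter expansion:
Code:
# one file name per line, ready to paste into a spreadsheet column
ls -1 *.jpg

# bulk rename: strip the .jpg extension from each file
for f in *.jpg; do
    mv -- "$f" "${f%.jpg}"
done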
I want to delete all files within a specific folder without actually deleting the folder itself; what is a good bash command for this? I found this one, but encountered some errors even though I am executing it from within the specific folder:
useratdebian:/home/user/folder# find . -type f -exec rm -rf {} ;
[1] 5052
useratdebian:/home/user/folder# find: missing argument to `-exec'
[1]+  Exit 1    find . -type f -exec rm -rf
The command as it appears is:
find . -type f -exec rm -rf {} ;
How do I delete only the files contained within the folder called "folder", for example?
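A minimal sketch of the likely fix: the semicolon that terminates -exec must be escaped, otherwise the shell eats it before find ever sees it (which is exactly the "missing argument to `-exec'" error above):
Code:
# the escaped \; is passed through to find intact
find . -type f -exec rm -f {} \;

# equivalent, and faster, because + batches many names per rm call
find . -type f -exec rm -f {} +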
I am new to Linux and I am trying to convert image files to an MPEG video. I tried using this command:
images2mpg -o Vorticity.mpeg -i Vorticity-Magnitude0%d.jpeg
and I get the error below:
bash: images2mpg: command not found
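That error just means the shell cannot find images2mpg on your PATH, so it is probably not installed. As a hedged alternative (a different tool, not the one you tried), ffmpeg can do the same conversion if it is installed; it recognizes %d sequence patterns in the input name:
Code:
# read Vorticity-Magnitude01.jpeg, 02.jpeg, ... and encode an MPEG
ffmpeg -i Vorticity-Magnitude0%d.jpeg Vorticity.mpeg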
I've been using Linux for a while, but I am just getting into shell scripting. I'm currently trying to get a simple script working for finding and copying files, powered by the command:
Code:
This works fine from the command line but when put in a script such as:
Code:
Code:
with the keyboard inputs for $fc1 and $fc2 being *.doc and ~/test respectively. The only problem I can see is the xargs -ivar "var" part, which possibly needs $var to be defined?
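A minimal sketch of what such a script might look like (the original code blocks were not shown, so the prompts and structure are assumptions). The key point is that the "var" in -ivar is xargs's replacement string, not a shell variable, so it never needs to be defined:
Code:
#!/bin/bash
read -p "pattern to find: " fc1
read -p "destination dir: " fc2

# quoting $fc1 stops the shell expanding *.doc before find sees it;
# xargs substitutes each incoming file name wherever "var" appears
find . -name "$fc1" | xargs -ivar cp var "$fc2"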
I am trying to get this script to work. The purpose is to download a list of modules from slax.org; the list consists of module numbers. What I am trying to do is download the file, or the file name, corresponding to each number in the list. The list is comma-delimited. This is what I have done so far, and I am at a standstill.
#!/bin/sh
# Wget script to retrieve modules from slax.org modules
#
# ----Begin of user defined values -----
# Path to wget
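A minimal sketch of how the loop might look, assuming the comma-delimited numbers live in modules.txt and that downloads follow a simple number-based URL (the base URL here is a placeholder; the real slax.org URL scheme would need to be filled in):
Code:
#!/bin/sh
WGET=/usr/bin/wget
BASEURL="http://www.slax.org/modules"   # assumption: adjust to the real scheme

# turn the comma-separated list into one number per line, then fetch each
tr ',' '\n' < modules.txt | while read -r num; do
    "$WGET" "$BASEURL/$num"
done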
I have a bash script that writes another bash file. But in the generated bash file I want to write a bash command into the file, not execute it. Here's my bash file:
Code:
#!/bin/bash
cat > ~/generateGridmix2data.sh << END
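A minimal sketch of the usual fix: quoting the here-document delimiter ('END' instead of END) stops the shell from expanding $variables and $(commands) inside it, so they are written to the generated file literally and only run when that file is executed:
Code:
#!/bin/bash
cat > ~/generateGridmix2data.sh << 'END'
# written verbatim; $(date) is NOT expanded while generating the file
echo "started at $(date)"
END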
I just recently learned about the wonderful little lpr command, and about using man -t to beautifully print man pages for reference. But is there a way to print on both sides of the paper, using a printer so equipped?
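A minimal sketch, assuming a CUPS system: lpr accepts a sides job option for duplex printing, provided the printer and its driver support it:
Code:
# print the bash man page double-sided, flipping on the long edge
man -t bash | lpr -o sides=two-sided-long-edge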
Bash's command history is great; it is especially useful when adding the history -a command to PROMPT_COMMAND. However, I'm wondering if there is a way to log the commands to a file as soon as the Return key is pressed, i.e. before the command starts and not on its completion (the PROMPT_COMMAND option only saves the command once the prompt is there again).
I read about auditing programs like snoopy and session recorders like script, but I thought they were already too complex for the simple question I have. I guess that the way script logs all the output of a command would already lead in the right direction, but isn't there a quicker way to solve that problem?
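A minimal sketch of one lightweight approach (the log file name is a hypothetical choice): a DEBUG trap fires just before each command executes, so the command can be appended to a log at the moment Return is pressed rather than when the prompt comes back:
Code:
# $BASH_COMMAND holds the command about to be executed;
# put this in ~/.bashrc to log every interactive command up front
trap 'echo "$(date "+%F %T") $BASH_COMMAND" >> ~/.bash_prelog' DEBUG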
I have a file which I need to list 10 lines at a time, with something like "press enter to go on" in between. Well, the problem is that I have absolutely no idea how to implement this.
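A minimal sketch, assuming the file to page through is passed as the first argument. The prompt reads from /dev/tty because the loop's standard input is already redirected from the file:
Code:
#!/bin/bash
n=0
while IFS= read -r line; do
    echo "$line"
    # pause after every 10th line
    if (( ++n % 10 == 0 )); then
        read -rp "press enter to go on" < /dev/tty
    fi
done < "$1"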
I have a big bash script whose goal is to download movies one by one. But I often run into a problem: when this script is executed from cron, it often does not completely download the movies. I often find that the movies it downloaded are several KB while the movie is actually 20 MB. So I think it is because it did not wait for one task to finish before jumping to the next download. So I want to know: is there a way to force the bash script to wait until one movie has downloaded completely before starting to download the next one?
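A minimal sketch, assuming the movie URLs live one per line in urls.txt (a hypothetical name). wget runs in the foreground here, so the loop only moves on once each download has finished; as long as nothing in the script backgrounds the downloads with &, they run strictly one after another:
Code:
#!/bin/bash
# -c resumes a partial file if a previous cron run was cut short
while IFS= read -r url; do
    wget -c "$url"
done < urls.txt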