General :: Printing From Bash Shell / Concatenate Files Into One File With File Names Included?
May 11, 2011
I am supposed to take several small files and print them to a specific printer, with the small files concatenated into one file. Each file's name has to be included in the output that gets printed.
Should I be looking to concatenate the files into one file with the file names included, and then print them?
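If concatenating first is the way to go, one hedged option is to let tail generate the headers: given multiple file arguments, GNU tail prints a "==> filename <==" banner before each one. A minimal sketch (the printer name "myprinter" is an assumption):
Code:
# tail -n +1 prints each file from its first line and, with several
# files, precedes each with a "==> filename <==" header; lpr prints it
tail -n +1 file1.txt file2.txt file3.txt | lpr -P myprinter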
In my script, I would like to concatenate two variable names to give me the real variable. I have 3 variables, X1, X2 and X3, and I invoke them inside a for loop.
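Bash can do this with indirect expansion, ${!name}: build the variable's name as a string, then expand it. A minimal sketch:
Code:
#!/bin/bash
X1="first" X2="second" X3="third"

for i in 1 2 3; do
    name="X$i"         # concatenate the pieces of the variable name
    echo "${!name}"    # indirect expansion yields $X1, $X2, $X3 in turn
done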
I just installed Fedora 12 on a Core i3 machine... everything looks fine, but I have a huge problem... every time I upload a file (using ftp or sftp) some weird characters are included inside the file... for example.
I understand that a tilde (~) at the end of a file name displayed in bash marks a backup file in the Linux file system. Is there a way to keep these hidden when listing the contents of a directory?
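GNU ls has a switch for exactly this, and an alias makes it the default. A quick sketch:
Code:
ls -B              # -B / --ignore-backups omits entries ending in ~

# to make it permanent, add this to ~/.bashrc
alias ls='ls -B'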
I'm trying to compare two files, and I only want to display the user names that are in the first file and not the second.
So I have one file named final.txt (which contains every user name, and only the user names, in a list with no other information).
Then I have another file, Over1.txt, which only contains certain users that have different permissions. This file is also set up differently, with the user name followed by some information about the user.
I need a way to compare final.txt to Over1.txt so that I only display the names that are in final.txt but not in Over1.txt.
I've tried using diff and comm but just can't seem to get it to work correctly. I'm not sure if I'm missing an option or what.
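Because Over1.txt carries extra information after each user name, comm needs both inputs reduced to sorted lists of bare names first. A hedged sketch, assuming the user name is the first whitespace-separated field in Over1.txt:
Code:
# comm -23 keeps only the lines unique to the first (sorted) input
comm -23 <(sort final.txt) <(awk '{print $1}' Over1.txt | sort)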
I have a laptop that I am logged into through SSH. The laptop does not have an X Window System, so I am using the program fbi to open an image on the laptop's screen from my SSH connection:
fbi -T 8 picture.jpg    # this opens the image on the laptop's tty8 terminal
I've found that a for loop does not work with files that contain a space in the name. It is something to do with a bug that they call a "feature": word splitting stops the first variable at the first whitespace.
Using a "while" loop is not exactly what I require either, seeing as I want to be able to view each image in the directory on screen and tag it accordingly before it jumps off to the next image, and I'm not sure how to add a pause to a while loop.
How do I make a bash script and its loop variables handle files like "files that contain spaces.jpg"?
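The "feature" is word splitting: unquoted expansions are split at whitespace. Globbing in the loop header and quoting the variable sidestep it, and read supplies the pause between images. A minimal sketch, reusing the fbi invocation from above (the tagging step and tags.txt are illustrative assumptions):
Code:
#!/bin/bash
# loop with a glob, not `ls` output, so spaces in names survive
for img in *.jpg; do
    fbi -T 8 "$img"     # quoting "$img" prevents word splitting
    read -r -p "Tag for '$img' (Enter to skip): " tag
    [ -n "$tag" ] && echo "$img: $tag" >> tags.txt
done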
I have a bash script for converting files, and I have a problem: if a file name is "corrupted", the mv command for that file will not work. For example, a file with "-" in front of the name.
Is there a way to check whether all the files in some folder (and its subfolders) have correct file names or not?
If they are all correct -> OK, proceed with execution of the script!
If they are not all correct -> NOT OK, stop execution of the script!
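A leading "-" makes mv treat the name as an option. The "--" end-of-options marker or a "./" prefix cures that, and a find-based pre-check can gate the script as described. A hedged sketch (/some/folder and the sample file name are placeholders):
Code:
# either form keeps mv from parsing "-badname.avi" as options
mv -- "-badname.avi" "badname.avi"
mv ./-badname.avi badname.avi

# gate the script: stop if any name in the folder tree starts with "-"
if find /some/folder -name '-*' | grep -q .; then
    echo "NOT OK: file names with a leading '-' found, stopping." >&2
    exit 1
fi
echo "OK: proceeding with execution."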
I'm writing my first bash script. Its function is to move files to the trash can and write a log file in the same format that the system does, to allow for file restoration. The problem is that from bash everything works fine, but in the OpenBox window session the files are named after the source directory, not their original names. Here's the script:
Code:
#!/bin/bash
# trash - Script to move a file or folder to the trash can and create a log file

##### Functions #####

err_output ()   # Writes error message
{
    echo "$0: cannot stat \`$1': No such file or directory"
    echo "USAGE: $0 SOURCE DEST"
    exit 1
} >&2
Code:
if [ -e "${DEST}/${FILE}" ]; then
    max=0
    DIR="$(pwd)"
    cd "$DEST"
    shopt -s nullglob
    for backup in "${FILE}".*; do
        nr=${backup#${FILE}.}
        if [[ "$nr" =~ ^[0-9]+$ ]]; then
            if (( nr > max )); then
                max="$nr"
            fi
        fi
    done
    cd "$DIR"
    max=$(( max + 1 ))
    write_log_numbered
    mv -- "$SOURCE" "${DEST}/${FILE}.$max"
else
    write_log_unique
    mv -- "$SOURCE" "$DEST/${FILE}"
fi
So I run the script with the test file "Junk". In bash, it moves over and is named correctly.
Code:
~/.local/share/Trash/files$ ls
file  file.1  Files  Files.1  Junk
The log file is also named correctly:
Code:
~/.local/share/Trash/info$ ls
file.1.trashinfo  Files.1.trashinfo  Files.trashinfo  file.trashinfo  Junk.trashinfo
But when I go to view the trash can in the file manager in OpenBox, the file is called "Testing", which is the name of the source directory. However, if I go to the trash can via its full path (going to .local/, then share/), all the files are named correctly. What's going on here? Is there some way to get the trash can to read the correct file name?
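One thing worth checking, purely as a guess from the symptom: the freedesktop.org trash format stores the original location in each .trashinfo file, and file managers display the basename of that Path key. If the log-writing function records the source directory rather than the full file path in Path, the manager would show the directory's name. A correct entry looks roughly like:
Code:
[Trash Info]
Path=/home/user/Testing/Junk
DeletionDate=2011-05-11T14:30:00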
1. Every Sunday
2. Find all files older than 1 day
3. Gzip these files
4. Tar up the gzipped files into one tar file.
5. Name the tarball with a date stamp indicating what day it was created, so we know that week's files are in the file.
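A hedged sketch of steps 2-5 (the log directory and tarball names are assumptions), with step 1 handled by cron:
Code:
#!/bin/bash
# gzip everything older than a day, then bundle the results
cd /path/to/logs || exit 1
find . -maxdepth 1 -type f -mtime +1 ! -name '*.gz' -exec gzip {} +
tar -cf "weekly-$(date +%Y%m%d).tar" ./*.gz && rm -f ./*.gz

# step 1: crontab entry to run it every Sunday at 01:00
# 0 1 * * 0 /usr/local/bin/weekly-archive.sh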
Every once in a while on a computer I'm ssh'd into, I will accidentally type "cat largefile.txt" and my screen will start rushing with text for the next 10 minutes. I'm always working in a screen session, so my current solution is to just log out and then log back in; since the output can go 100x faster when I'm logged out, it finishes in the short time it takes me to type my password again. Is there a better way, either involving the fact that I'm in a screen session, or a way to do this within SSH? What doesn't work: detaching from the screen session (doesn't respond until the file is done outputting); the command to move to a different window in the screen session (also doesn't respond); typing Ctrl+C to kill the cat command (also doesn't respond, probably because the command has already finished and the buffers just have to catch up).
I have a considerable number of files in a subdirectory (some fascinating old military clips from archive.org - search on Big Picture if interested). Anyhow, I am downloading them using Internet Download Manager running in an XP virtual machine in VMWare on my Ubuntu 10.04 PC (due to the queuing, restart and speed capabilities of IDM). But I digress - the files are being saved on the host (Samba share) without a file extension. So I have a collection of files with names like
Quote:
The Douglas MacArthur Story THEY WERE THERE (1960)
I wish to add the extension ".mp4". In Windows this is simply done with the command
Quote:
rename *. *.mp4
This of course does not work in Linux. I have researched the Linux rename command and reviewed a lot of examples. However, I have not found a way to add an extension to a batch of files which are named with no extension to start with, and the spaces in the file names also seem to present an issue. At the moment I am renaming them from the Windows VM while they are sitting on the Samba share, using the ancient File Manager program from Windows NT, which works great on XP. I have experimented with the file rename facility in Gnome Commander; however, it does not seem to want to do something so simple.
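A plain bash loop handles both issues at once, since quoting preserves the spaces; with the Perl-style rename shipped on Debian/Ubuntu it is a one-liner. A sketch, run inside the directory holding the clips:
Code:
for f in *; do
    [ -f "$f" ] || continue    # skip any subdirectories
    mv -- "$f" "$f.mp4"        # quotes keep spaced names intact
done

# equivalent with Perl-style rename: append .mp4 to every name
rename 's/$/.mp4/' *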
Has anyone used a software tool for copying a file from one location to another (I mean local files, for example from one folder to another) which prompts if you already have this file, or a similar one? I'm going to use it for my file archive, mostly for my MP3s. For example, I might have the folder /home/user/MP3/Heavy Metal/Old/Downloaded/Metallica but have forgotten that I already have Metallica in this folder, and now I want to copy my new music collection to my archive folder, which contains for example this folder: MP3/Rock/MetAllicA-Full-Discography. I need copying software which will tell me: "You already have a folder with a name similar to 'MetAllicA-Full-Discography', called 'Metallica'; do you want to skip this folder, or copy it to location 'MP3/Heavy Metal/Old/Downloaded/'?" This way I will reduce the file redundancy in my archive, or at least keep similar items close to each other.
I have a script almost working except for one thing. What I'm trying to do is read a file that lists the files that need to be FTP'd, using a bash script. I have everything working except the reading of the file: it works outside of the FTP script I've written, but once I put it in the FTP script it doesn't.
Here's the Script:
#Here's where the problem is that I know of
I've been playing with the exclamation points to see if that could be the problem, but so far no luck.
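The script itself isn't shown above, but a common pitfall is putting the read loop inside the ftp here-document, where bash no longer interprets it. One working pattern, sketched with an assumed host, credentials, and list file, builds the command batch before opening the connection:
Code:
#!/bin/bash
# build one "put" command per file named in list.txt
puts=$(while IFS= read -r file; do echo "put $file"; done < list.txt)

# -i no prompting, -n no auto-login, -v verbose
ftp -inv ftp.example.com <<END_FTP
user myuser mypass
$puts
bye
END_FTP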
I am looking for a script, or advice and guidance on how to write one, so that when I use a 'del' command it removes/sends the files or folders to a location I specify, for example a 'dustbin'.
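A small shell function in ~/.bashrc is enough for this. A minimal sketch, with ~/dustbin as the assumed destination:
Code:
# usage: del somefile some_folder ...
del () {
    local bin="$HOME/dustbin"
    mkdir -p "$bin"
    mv -- "$@" "$bin"/
}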
I have a file like the one below. For all the lines (except the ones listed as 'Unknown Owner' and 'N/A') I would like to change the names to lower case and concatenate the first and last names. Before:
Code:
aaa.bbb.ccc.ddd,Unknown Owner
ddd.eee.fff.ggg,N/A
hhh.iii.jjj.kkk,John Doe
aaa.bbb.ccc.ddd,Mary Jane
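A hedged awk sketch of one way to do it, leaving the two exception values untouched and lower-casing/joining everything else (output such as hhh.iii.jjj.kkk,johndoe is the assumed target):
Code:
awk -F, 'BEGIN { OFS = "," }
    $2 == "Unknown Owner" || $2 == "N/A" { print; next }   # leave as-is
    {
        gsub(/ /, "", $2)         # concatenate first and last names
        print $1, tolower($2)     # and fold to lower case
    }' input.txt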
I have a bunch of .7z files in a directory, and I need to put each one of them into a separate directory named after the file (without the extension). The command line I use:
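(The command line itself didn't survive above.) A sketch of the usual loop, deriving each directory name by stripping the extension:
Code:
# make a matching directory for every archive and move it inside
for f in *.7z; do
    dir="${f%.7z}"        # file name without the .7z extension
    mkdir -p -- "$dir"
    mv -- "$f" "$dir"/
done

# variant: extract each archive into its directory instead
# for f in *.7z; do 7z x -o"${f%.7z}" "$f"; done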
I need a shell script that will add the user's name and the date to a file when the user has modified it. These files belong to a group and are only accessible to that group, but we need a way for people in the group to know who last modified a file, and when.
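Since the kernel won't tell a script after the fact which user wrote a file, one hedged approach is a wrapper the group agrees to edit through, stamping a sidecar log after each change (the script name and the .log suffix are assumptions):
Code:
#!/bin/bash
# edit-and-stamp: open the shared file, then record who changed it and when
# usage: edit-and-stamp shared-doc.txt
file="$1"
"${EDITOR:-vi}" "$file"
echo "Last modified by $USER on $(date)" >> "${file}.log"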
I have text files whose file names contain the date of creation (e.g. 2010.05.02.log). I would like to create a script that:
- asks for a start date
- asks for an end date
- concatenates all files in the requested period, in date order.
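Because YYYY.MM.DD names sort the same way alphabetically and chronologically, a plain glob (which bash expands in sorted order) plus two string comparisons is enough. A minimal sketch, writing to an assumed combined.log:
Code:
#!/bin/bash
read -p "Start date (YYYY.MM.DD): " start
read -p "End date   (YYYY.MM.DD): " end

for f in *.log; do
    d="${f%.log}"                                # date part of the name
    [[ "$d" < "$start" || "$d" > "$end" ]] && continue
    cat "$f"
done > combined.log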
How come I can create a shell script file with two functions and execute the file, but when running declare -f the functions are not in memory, and when invoking a function bash returns "invalid"? On the other hand, I can copy and paste the two functions at the end of my /etc/bashrc file... then I can call each function by name... and the commands within that function run in my session. Here is a print of all my bash packages:
[Code]....
Does Fedora have restrictions on shell scripting? I haven't touched bash in seven years, so if things have changed I'm behind on it, and sorry for my ignorance.
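This isn't a Fedora restriction: executing a script runs it in a child shell, and the function definitions die with that shell, which is also why pasting them into /etc/bashrc works (that file is sourced by every new shell). A quick demonstration, assuming the functions live in myfuncs.sh:
Code:
./myfuncs.sh          # runs in a subshell; definitions vanish on exit
declare -f myfunc     # prints nothing

source ./myfuncs.sh   # runs in the current shell ("." is a synonym)
declare -f myfunc     # now shows the function body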
I am looking for an application that will read the file names in a folder and generate a comma-delimited file. I then want to import the comma-delimited file's contents into a spreadsheet such as OpenOffice. I have a number of PDF files generated from a scanner, each file with its own scanner-generated file name. I want to put these into a database so I can add the title and other reference information.
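No extra application is strictly needed; a short loop can emit one CSV row per file, with empty columns left for the title and reference data to be filled in from the spreadsheet (files.csv and the column layout are assumptions):
Code:
# one row per PDF: "file name",title,reference (last two left blank)
for f in *.pdf; do
    printf '"%s",,\n' "$f"
done > files.csv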
I recently upgraded to Ubuntu 11 and a few days later my ecryptfs filesystem began misbehaving in a weird way. In my home directory, many subdirectory names are duplicated verbatim. Here's an ls -F excerpt:
I can no longer access files in those directories (if I ls the directory, it appears empty; I can cd to it, but there's nothing inside). Not all of the directories are duplicated/damaged like this, but most are. A few non-directory files are also duplicated in this fashion, so for example:
I know that ImageMagick's convert program can be used as follows to convert a collection of images -- say, in PNG format -- to a PDF file:
convert *png output.pdf
The problem with this is that each image is then stretched to fit on one page, whereas I would like to keep the original dimensions of the images and put as many as possible on one page in the PDF file before moving on to another page.
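ImageMagick's montage tool does this tiling. A hedged sketch (the 2x3 grid and 5-pixel spacing are assumptions to adjust; a geometry with offsets but no size keeps each image at its original dimensions):
Code:
# tile up to six unscaled PNGs per PDF page, adding pages as needed
montage *.png -tile 2x3 -geometry +5+5 output.pdf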
In the case that the NFS client on Fedora is set to cache NFS file handles, where is the cache stored, for example /var/lib/nfs? If it's stored in memory instead of in a directory tree, what command can I use to see the mapping between file names and file handles? OS in my test environment: Fedora Core 6.
I'm using Grsync and I want to be able to plug any drive into my laptop and run rsync on it to back up all the user documents on there to another external HDD, excluding everything else. Working on the principle that user documents don't always appear where we'd expect, I want rsync to look through the whole drive and filter what it backs up by file type. I am only having partial success, however.
I am using the 'filter' option in the 'additional options' box. I am using the command Code: filter='merge /home/tim/Desktop/filter' and I am attaching the filter file I have written. (I have added the .txt extension to upload it.)
I have tested this script on my home folder, and here's what's going wrong: rsync copies the entire directory structure regardless of whether there are any files to be copied in those directories. I am also getting only some file types included and not others; .odt and .ods files are copied, for instance, but not .doc or .rtf.
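Without seeing the attached filter file, two hedged guesses: the empty directories appear because rsync needs --prune-empty-dirs (-m) to drop directories whose contents were all filtered out, and .doc/.rtf may be missed by case-sensitive patterns if the files are really .DOC or .RTF. An equivalent command-line sketch (both paths are placeholders):
Code:
# -m prunes directories left with nothing to transfer;
# '*/' must be included first so rsync can recurse at all
rsync -avm \
    --include='*/' \
    --include='*.odt' --include='*.ods' \
    --include='*.doc' --include='*.rtf' \
    --exclude='*' \
    /media/anydrive/ /media/backuphdd/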