Programming :: Bash - Redirect All Subsequent Std Output To File?
Feb 11, 2010
I have got a script with an outer and an inner loop. The inner loop issues loads of echoes which need to be redirected to a log file determined by the outer loop. The obvious solution is to redirect every echo with >$LOG and set LOG in the outer loop.
Code:
for f in $FILES ; do
    LOG=<logfile>
    for l in $LINES ; do
        ...
    done
done
Is it possible to map stdout to $LOG in the outer loop, without having to redirect every subsequent individual command's output?
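One way this could work (a minimal sketch; the log path is hypothetical) is to redirect the inner loop as a whole, once per outer iteration, so every echo inside it inherits the redirection:
Code:
for f in $FILES ; do
    LOG="/var/log/myjob/$f.log"    # hypothetical path
    for l in $LINES ; do
        echo "processing $l in $f"
    done > "$LOG"
done
Alternatively, exec > "$LOG" at the top of the outer loop body rebinds stdout for everything that follows it.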
I am trying to grep multiple numbers from a file; grep does have the -f option for that.
Code: grep -f <(seq 500 520) /etc/passwd
I know this could be done with
Code: for i in `seq 500 520`; do grep "$i" /etc/passwd; done
But my question goes far beyond this example. Is it possible to redirect one command's output so that it is treated as the content of a file by another command?
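Yes; as a minimal sketch, bash's process substitution presents a command's output to another command as if it were a file:
Code:
# <(...) expands to a file-like path whose contents are seq's output
grep -f <(seq 500 520) /etc/passwd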
I have a set of bash scripts that I'm running that automatically build a set of packages for me and redirect their output into logs. Basically, I have a bunch of lines that are something like this: ${CONFIGURE_DIR}/configure &> ${LOG_DIR}/log or cd ${CONFIGURE_DIR} && make &> ${LOG_DIR}/log, etc.
This is supposed to make the entire process silent. However, sometimes with some packages some output leaks to my console (either stdout or stderr). I'm thinking that maybe the configure scripts/make are executing commands within new shell instances that don't inherit my redirect, or something to that effect.
Another reason for thinking this is that in another part of my script I detect errors when running make by testing with "if [ $? -ne 0 ]". When output leaks to my console and indicates that the build failed ("make: Error" and so on), my $? test still passes (i.e., it sees $? == 0, whereas a failed make should return a non-zero value). It's as if my original script can't "see" the results of child commands executed from later scripts.
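A minimal sketch of one way to make the status check more robust (paths as in the original post): test the command directly, so nothing can run between make and the check:
Code:
# the if tests make's own exit status; a failed build takes the error branch
if ! (cd "${CONFIGURE_DIR}" && make &> "${LOG_DIR}/log"); then
    echo "make failed; see ${LOG_DIR}/log" >&2
    exit 1
fi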
I am again struggling to make a script work, but hey, it is fun, I am learning new things. I discovered the set -x option which was, for me, like the second coming. Still, what I am not able to do is redirect ALL output to a (log) file, including what is produced by the -x setting. Let's assume a very simple script:
Code:
#!/bin/bash
set -x
source="/home/atelier/Bureau/"
ls -la $source
and I am running it as . test.sh >> /var/log/test.rmcb.log
The result of ls goes indeed into the log file, but the rest still shows on the console where I am running the script:
Code:
++ source=/home/atelier/Bureau/
++ ls --color=auto -la /home/atelier/Bureau/
Is there a way to redirect EVERYTHING to the log file?
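A minimal sketch of one fix: the set -x trace goes to stderr, so duplicate stderr onto the redirected stdout, either inside the script or at the call site:
Code:
#!/bin/bash
# rebind this shell's stdout and stderr before any traced command runs
exec >> /var/log/test.rmcb.log 2>&1
set -x
source="/home/atelier/Bureau/"
ls -la "$source"
The same effect at the call site would be . test.sh >> /var/log/test.rmcb.log 2>&1.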
Apologies if I'm posting to the wrong forum; be so kind as to tell me where to better ask this question, as I'm really not finding the right words to google for. So, I have a shell application (fdb) which is a Flash debugger. I want to run it using a bash script, capture its output and pass it commands (it can read from STDIN). The reason I want to do so is that Flash Builder (the IDE for Flash development) is plain stupid when it comes to compilation, and it won't allow me to compile any file in the project... so, I found out that I can make Eclipse run an external tool. This external tool is my *.sh file which launches the compiler, and then it launches the debugger. The Eclipse console can display the compilation results, or errors. When I run the debugger it can even pass the input from the Eclipse console to the debugger; however, the output from the debugger isn't shown.
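A minimal sketch of one way to drive such a tool, assuming fdb is on the PATH and talks over plain stdin/stdout (coproc needs bash 4+):
Code:
#!/bin/bash
# start fdb as a coprocess; FDB[1] is its stdin, FDB[0] its stdout
coproc FDB { fdb 2>&1; }
echo "run" >&"${FDB[1]}"                  # send a debugger command
while IFS= read -r line <&"${FDB[0]}"; do
    echo "fdb: $line"                     # forward output so the console shows it
done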
I would really like to capture the output of scp and my file's progress. scp updates the transfer rate every second, and I would like to save the transfer rate at every update. So, for example, if the file transfer takes 30 seconds, I would like 30 reports of the transfer rate.
The output looks like: Code: file.dat 1% 3664KB 938.5KB/s 05:48
Whenever I try a simple redirect like: Code: scp file.dat 192.168.1.100:~/ &> output ... it does not save the rate at every update, it only shows the final rate.
If I try using typescript by starting "script" ... it's the same deal.
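A minimal sketch of one thing to try: keep scp attached to a pseudo-terminal via script(1) and convert the progress meter's carriage returns into newlines, so each update survives as its own line (behaviour may depend on the script(1) implementation):
Code:
script -q -c "scp file.dat 192.168.1.100:~/" /dev/null | tr '\r' '\n' > output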
I want to redirect the output of a command to a file, but not at the end of the file: after a given line instead. Do you know how I can do it?
Something like:
cat file_a | grep some_text >> resulting_file
# in this file I need to place the output from grep, but not at the bottom of resulting_file, like it would normally happen, but after line 3, for example
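A minimal sketch with GNU sed, whose r command inserts a file's contents after a given line (the temp path is hypothetical):
Code:
grep some_text file_a > /tmp/grep_out    # stage the grep output in a file
sed -i '3r /tmp/grep_out' resulting_file # insert it after line 3, in place
rm -f /tmp/grep_out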
I'd like to redirect the output to a file and to the console. I know about tee, but the issue is that it waits until the first process finishes, e.g. echo "hello world" | tee test.txt first calls echo and then tee. Is there a way to redirect "on the fly"?
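If the delay comes from stdio block buffering in the producer, a minimal sketch is to force line buffering so tee writes each line as it appears (the command name is hypothetical):
Code:
stdbuf -oL ./long_running_command | tee test.txt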
I was trying to redirect the output of two variables to different columns of a .csv file in MS excel like this,
Code: echo "$a $b" > abc.csv
But I am getting both $a and $b in the same column. Is there anything I can use instead of the space to move the value of $b to the next column? Or is there a good different approach to do it?
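A minimal sketch: CSV columns are separated by commas, so join the two values with one:
Code:
echo "$a,$b" > abc.csv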
I booted to command line only and entered the following command: sudo Xorg -configure > xorglog.txt
The command seems to run just fine and does create a new xorg.conf.new file, but I would like to see all the output of the Xorg -configure command; it just scrolls by too fast and I can't go back to see it. That is why I'm trying the >. It seems to ignore the >.
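A minimal sketch of the likely fix: Xorg writes its diagnostics to stderr, which > alone does not capture, so duplicate stderr onto stdout as well:
Code:
sudo Xorg -configure > xorglog.txt 2>&1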
I have a python script that when run outputs to screen.
e.g.
Code:
./international_sms_check.py 0403000511 919227434827
TS 21 check ok
TS 22 check ok
sms successfully delivered from 61403000511 to 919227434827
But when I try: ./international_sms_check.py 0403000511 919227434827 > test
The file test is created but there is nothing in it. If I try ls > test, this works fine, with the output of ls redirected to the file test.
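A minimal sketch of one likely cause: the script may print to stderr rather than stdout, in which case capturing both streams should fill the file:
Code:
./international_sms_check.py 0403000511 919227434827 > test 2>&1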
I'm working on some scheduled task script files to keep nightly backups of some of our database information in place, and it's a bit annoying when they blow up. I know how to redirect stdout and stderr to a flat file I can view when I come in, and I know that 2>&1 maps them both to the same file (whatever was named in 1). However, I'm running into some cron-time situations where it's easier to have the two streams together, and other cron-time situations where it's easier to have them separated. I can't really tell which is going to happen; is there some way I could create both kinds of output file for my scripts, so that I've got a std_err only file and an interleaved std_out/std_err file?
Note: I've looked at the 'tee' command, but I don't think it will work for what I'm after. 'tee' appears to only work with stdout; I'm trying to work with stderr.
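A minimal sketch of one way to get both files, assuming the job is ./backup.sh (a hypothetical name): stderr passes through tee inside a process substitution, which keeps a stderr-only copy and forwards the stream to fd 2, so it also lands in the combined log (exact interleaving between the two streams is not guaranteed):
Code:
#!/bin/bash
{
    ./backup.sh 2> >(tee err_only.log >&2)
} > combined.log 2>&1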
Is there one command that will let me record an entire terminal session (with any possible errors) to a text file while also seeing all output on screen too? I know it can be done for individual commands, but I'm looking to do this for an entire session where the individual commands will be normal (i.e., not piped into tee, etc.). It would be even better if the command prompt is captured too. The obvious utility of this makes me think someone surely has come up with a solution long ago (probably in the 60's).(I'm sure it goes without saying, but subsequent output in that session should be appended to the file. The file should contain the full history, with all output and errors, of the session.)
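A minimal sketch with script(1), which records the whole session, prompts and stderr included, while everything still shows on screen; -a appends to an existing log:
Code:
script -a session.log
# ... work normally; type exit to stop recording ...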
I have this script from the past for csh: Code: ./a.out |& tee prints.txt which will redirect all printfs in the C program to the prints.txt file and at the same time show them in the console. How do you do this in bash? I have seen this, [URL], but it does not work in my bash and sh shells. It says:
Code: -bash: syntax error near unexpected token `&' and Code: -sh: syntax error: unexpected "&"
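A minimal sketch of the bash/sh equivalent: csh's |& becomes an explicit 2>&1 placed before the pipe:
Code:
./a.out 2>&1 | tee prints.txt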
I have a script that generates a bunch of output, including the expansion details provided by set -v -x. I am trying to pipe everything that is displayed to a file, in addition to displaying it on the screen. I've managed to get stderr and stdout into the file, but the expansions are only printed to the screen. Here is what I have so far: sudo -u <user> source my_job.sh | tee my_log.txt 2>&1
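A minimal sketch of a likely fix: the set -v -x traces go to stderr, so 2>&1 has to come before the pipe to reach tee, and the script should be run with bash rather than sourced, since source is a shell builtin that sudo cannot execute:
Code:
sudo -u <user> bash my_job.sh 2>&1 | tee my_log.txt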
So that when I grep on the local file again later, it can be printed out with the original log lines. Otherwise, the log lines are dropped and the lines become concatenated into a single line; e.g., if I rewrite the script in this way, echoing the $result is not a good idea.
Is there some workaround so that I can save it to a variable rather than a file but still keep the EOLs? That would simplify my script and avoid all those I/Os!
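A minimal sketch ($pattern and logfile are hypothetical): command substitution keeps the embedded newlines, as long as the variable is double-quoted whenever it is expanded:
Code:
result=$(grep "$pattern" logfile)
echo "$result"    # quoted: original lines and EOLs preserved
echo $result      # unquoted: the lines collapse into one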
I have a bash script that calls a Java class method. The method returns a string to the Linux console when run independently. How can I assign the value from the Java method to a variable in a bash script? Running the script: java -cp /opt/my_dir/class.method [parameter]
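A minimal sketch (MyClass and $param are hypothetical stand-ins for the post's invocation): $( ) captures whatever the Java program writes to stdout:
Code:
result=$(java -cp /opt/my_dir MyClass "$param")
echo "the method returned: $result"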
Code:
PU12829,24869;PD15733,24869;PD15733,19785;PD12829,19785;PD12829,24869;
PU4599,20915;PD9924,20915;PD9924,18898;PD4599,18898;PD4599,20915;
PU12829,24869;PD15733,24869;PD15733,19785;PD12829,19785;PD12829,24869;
PU4599,20915;PD9924,20915;PD9924,18898;PD4599,18898;PD4599,20915;
PU1723,3423; # this line is ignored, too short
[Code]...
What I'm trying to do is, while true, cut each line from the file that begins with PU and that is longer than 12 characters, and write each one to a separately numbered file, starting with object1, etc.
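A minimal sketch, assuming the data is in plotter.dat (a hypothetical name):
Code:
#!/bin/bash
n=1
while IFS= read -r line; do
    # keep only PU lines longer than 12 characters
    if [[ $line == PU* && ${#line} -gt 12 ]]; then
        printf '%s\n' "$line" > "object$n"   # one file per matching line
        n=$((n + 1))
    fi
done < plotter.dat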
I am trying to process a column-separated data file with a few bash commands. For example, I have
Code:
file1 aaaa yes
file2 aaaa no
file3 bbbb yes
Let's say I want to create a new file from the output of the first column and do something else with the output of the 3rd column. Of course there are many ways to process this data file, but I wish to know how I could do it using awk. I'm trying:
Code:
awk '{system("touch $1")}' datafile
but the shell command is not able to get the awk $1 output. How do I get this done? And another question: if the data file contains the name of a shell variable, how could I make use of it in the awk output? For example, I have a datafile1:
Code:
server1 yes
server2 no
And in another server declaration data file, I got this datafile2:
Code:
server1=xxx1
server2=yyy1
And in my awk script, I want to achieve something like this (the syntax is definitely wrong, just to demonstrate what I imagine it would look like):
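A minimal sketch of both parts: concatenate awk's $1 into the system() command instead of hiding it inside the single-quoted string, and source datafile2 so its assignments become shell variables that bash can expand indirectly:
Code:
awk '{ system("touch " $1) }' datafile   # $1 is joined into the command string

source datafile2                         # defines server1=xxx1, server2=yyy1
while read -r name status; do
    echo "$name -> ${!name} ($status)"   # ${!name} expands the named variable
done < datafile1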
I have a somewhat complex Makefile system. A parent Makefile calls dozens of Makefiles in subdirectories, and the subdirectory Makefiles call shell scripts to do the real building. I want to grab all the output this Makefile system generates, so I employ "make 2>&1 > make.log", but not all output messages end up in make.log. The messages generated by the shell scripts called from the sub-Makefiles are not recorded in make.log. Another curious thing is that if I launch "make 2>&1 > make.log" from a Perl script, all output does end up in make.log.
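A minimal sketch of the likely fix: redirection order matters; 2>&1 written before the file redirect duplicates stderr onto the terminal, not onto the log, so the file redirect has to come first:
Code:
make > make.log 2>&1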
I am not sure if that subject really explains it. Basically, I have a script that executes a CLI Java applet that requires a passphrase from the user. I can easily execute this by issuing the -p argument followed by the passphrase; however, that shows up in possible logs, or at least in the results of the ps command. If you do not supply the -p argument, it prints "Enter Passphrase:" on a new line and asks for input.
How can I provide input for the passphrase request, and is it still possible to throw this application into the background with & following the command? I have seen a few examples that use /bin/expect, which expects a prompt and sends a response, but I would like to refrain from any extra dependencies. Example of regular execution of the application:
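A minimal sketch, assuming the applet reads the passphrase from stdin rather than from /dev/tty (applet.jar and PASSPHRASE are hypothetical names):
Code:
# the pipeline as a whole can still be backgrounded with &
printf '%s\n' "$PASSPHRASE" | java -jar applet.jar &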