Software :: Bash Output Stderr To 2 Files And Stdout?
Apr 5, 2011
I want to output both stdout and stderr to a logfile; moreover, I also want to log stderr to a separate logfile, and print stderr to the screen. I searched around and tried:
Code:
$ command 2>&1 > log | tee -a log log.err
But then in log all of the stdout appears first, and only then the stderr.
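One way to hit all three targets at once is bash process substitution; a hedged sketch, reusing log and log.err from the attempt above (note that the interleaving of the two streams inside log depends on buffering and is not guaranteed):
Code:
# stdout is appended to log; stderr is appended to both log and log.err,
# and tee's pass-through copy is sent back to the screen via stderr
command >> log 2> >(tee -a log.err log >&2)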
I'm having issues getting the output from a script to be logged in a file. I need the script to output both stderr and stdout to the same text file.
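For just merging both streams into one file, a minimal sketch (script and file names are illustrative):
Code:
# redirect stdout to the file, then point stderr at the same place
./myscript.sh > output.txt 2>&1
# bash shorthand for the same thing
./myscript.sh &> output.txt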
I'm using libxml2 to handle/manipulate some XML files. In order to check the consistency of an XML file, I have a DTD and I'm using the xmlValidateDtd method to perform the check.
However, when an error occurs during the check (for example, an attribute is missing from an XML tag), libxml2 writes the error to stdout/stderr. For example:
Code:
/home/XML/FreeFour.xml:18: element CA: validity error : Element CA does not carry attribute maxlength
The method returns the right result (true or false depending on the check result), but the errors that occur are written to stdout/stderr, and I actually don't want that.
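Within the C API, libxml2 allows installing a custom error handler so these messages never reach the terminal. If the check can instead be driven from the shell with xmllint (a different route than xmlValidateDtd, offered here only as a sketch; the DTD file name is an assumption), the diagnostics can simply be redirected away:
Code:
# validity errors go to stderr, so a redirect silences them;
# the exit status still reports whether the document was valid
xmllint --noout --dtdvalid FreeFour.dtd /home/XML/FreeFour.xml 2>/dev/null
echo $?    # 0 if valid, nonzero otherwise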
I'm working on an application used for backup/archiving. It can archive contents on block devices and tapes, as well as regular files. The application stores data in hard-packed, low-redundancy heaps, with multiple indexes pointing out uniquely stored (shared) fractions in the heap.
The application also supports taking, and reverting to, snapshots of the total storage on several computers running different OSes, as well as simply archiving single files. It uses Hamming-code diversity to defeat disk rot, instead of using RAID arrays, which have proven to become pretty much useless once the arrays climb over some terabytes in size. It is intended to be a distributed CMS (content management system) for a diversity of platforms, with a focus on secure storage/archiving. I have a Unix shell tool that acts like gzip, cat, dd etc. in being able to pipe data between applications.
Example:
dd if=/dev/sda bs=1b | gzip -cq > my.sda.raw.gz
The tool can handle different files in a struct array, like:
Is there a better way of getting the file name of the redirected file (respecting the fact that there may not always be such a thing as a file name for a redirection pipe)? Should I work with inodes instead, and then take a completely different approach when porting to non-Unix platforms? Why isn't there a system call like get_filename(stdin)?
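On Linux specifically, a name for a redirected descriptor can often be recovered through /proc; a hedged sketch (Linux-only, and a pipe will show up as pipe:[inode] rather than a file name, which matches the caveat above; the script name is hypothetical):
Code:
# run as:  ./showstdin.sh < /tmp/somefile
readlink /proc/$$/fd/0    # prints /tmp/somefile, or pipe:[...] for a pipe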
If you have any input on this, or some questions, please don't hesitate to post in this thread. To add some offtopic to the thread, here is a performance tip: when doing data shuffling on streams, avoid just using some arbitrary record length (like 512 bytes). Use stat() to get the recommended block size from stat.st_blksize, and use copy buffers of that size to get optimal throughput in your programs.
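The same tip is reachable from the shell; a sketch using GNU stat, whose %o format prints the st_blksize hint (device and file names reuse the dd example above):
Code:
bs=$(stat -c %o /dev/sda)                      # recommended I/O block size
dd if=/dev/sda bs="$bs" | gzip -cq > my.sda.raw.gz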
I have a somewhat complex Makefile system. A parent Makefile calls dozens of Makefiles in subdirectories, and the subdirectory Makefiles call shell scripts to do the real building. I want to grab all the output this Makefile system generates, so I employ "make 2>&1 > make.log", but not all output messages end up in make.log. The messages generated by the shell scripts called from the sub-Makefiles are not recorded in make.log. Another curious thing is that if I launch "make 2>&1 > make.log" from a Perl script, all the output does get sent to make.log.
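The order of the redirections is the likely culprit: written as 2>&1 > make.log, stderr is duplicated to the terminal first, and only stdout is then sent to the file. A minimal sketch of the usual fix:
Code:
# redirect stdout to the file first, then point stderr at the same place
make > make.log 2>&1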
I have several commands in a bash script, and in the middle of the script there are several commands whose output and error streams I want to redirect to a file. I think I could simply add '>> myfile.txt' to the end of every command, but is there a way to set it before that block of commands, then reset the streams to their original state at the end of that block?
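Two hedged sketches of exactly that: a brace group that scopes the redirection to the block, and an exec pair that saves and later restores the original streams (myfile.txt from the question; command1/command2 are placeholders):
Code:
# option 1: group the block and redirect it as a whole
{
    command1
    command2
} >> myfile.txt 2>&1

# option 2: save the streams, redirect, then restore
exec 3>&1 4>&2              # save original stdout/stderr on fds 3 and 4
exec >> myfile.txt 2>&1     # redirect everything that follows
command1
command2
exec 1>&3 2>&4 3>&- 4>&-    # restore the originals and close the saves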
I want to have the output of a program go to 2 different files but not to standard out. Is there a way to do this in bash? I know that in Z shell it's really easy; something like:
Code:
echo "test" >> file1 >> file2
would work. But in Bash it doesn't seem that easy. I know that tee will send the output to 2 files, but it also sends it to STDOUT. Something like:
Code:
echo "test" | tee -a file1 file2
would put the word "test" in file1, file2, and STDOUT. Is there a way to just send the output to file1 and file2?
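A minimal sketch: let tee write the two files and discard the copy it sends to stdout:
Code:
echo "test" | tee -a file1 file2 > /dev/null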
I'm trying to write a program that will fork a series of FTP sessions. For each session, there should be separate input and output files associated with stdin and stdout/stderr. I keep reading that I should be able to do that with dup2() in the child process before the execl(), but it's not working for me. Could someone please explain what I've done wrong? The program also has a 30-second alarm, used for testing and for killing FTP sessions that go dormant for too long.
The code: (ftpmon.c)
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
[code]....
The output:
Code:
$ ftpmon
Connected to gila-crstest.gilacorp.com (172.16.20.8).
220 (vsFTPd 2.0.1)
ftp> waitpid(): Interrupted system call
Why am I getting the ftp> prompt? If the dup2() calls work, shouldn't it be taking input from my script and not my terminal? Instead, it does nothing, and winds up getting killed after 30 seconds. The log file is created, but it's empty after the run.
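As a shell-level cross-check (not a fix for the C code), the intended plumbing can be reproduced in bash to confirm that the command file itself drives ftp correctly; the host and file names here are hypothetical:
Code:
# -n suppresses auto-login so the command file controls the whole session
ftp -n target.host < ftp_cmds.txt > ftp_out.log 2>&1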
I have a script where I want to redirect stdout to the terminal and also to a log file, as well as redirecting stderr to the same log file but not the terminal. I have the following code, which I found on the net, which redirects both stderr and stdout to a file and the logfile:
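A hedged sketch of that exact split (script and log file names are illustrative; the relative ordering of the two streams inside the file depends on buffering):
Code:
# stderr goes straight to the log; stdout goes through tee to screen and log
./myscript.sh 2>> log.txt | tee -a log.txt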
I have seen a post where someone explained the virtuality of stdout and stderr, and that they can be redirected with e.g. 2>file.txt, but this apparently is not working for me! I have a CUPS filter with fprintf(stderr,...).
Until now I haven't had to dabble with bash scripts.
I have a program that reads in data files. These are named datafile01_R, datafile01_G, datafile01_B, and they then increment: datafile02_R, etc. I have about 600 of these. The program reads in 3 data sets at a time from each run, so files 01 R, G, and B.
The program then does its magic and outputs about 40 different files; depending on the file, they go to folders named R, G, B, psa, or tracking.
The program itself has configuration files to say where the files should go when analyzed; there are also the config files that read in the data sets.
At the moment I have to run one set of data, then go in and manually change the input file location, and run again. But in doing this, even though it is a different data set, the new set overwrites the old set in one of the output folders. So I need a way to increment the output filenames after they are written and before the program is run again with the new data set.
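One hedged approach is to sweep each output folder into a run-numbered subfolder between runs, so nothing gets overwritten; the folder names are taken from the description above, and the run counter is managed by hand here:
Code:
run=02    # increment per run, or derive it from the input file names
for dir in R G B psa tracking; do
    mkdir -p "$dir/run_$run"
    # move only regular files, leaving earlier run_* subfolders alone
    find "$dir" -maxdepth 1 -type f -exec mv {} "$dir/run_$run/" \;
done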
Instead of a steady output of lines to the terminal, output only appears after a few seconds, between 6 and 12. This happens whether the input is from mplayer or avconv/ffmpeg. This never used to happen (a few years ago), so I wondered whether an awk update caused it.
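This looks more like ordinary pipe buffering than an awk change: a program whose stdout is a pipe usually switches to block buffering, so output arrives in bursts. A hedged sketch of the usual workarounds ('producer' stands in for the mplayer/avconv command line):
Code:
# force line buffering on the producer's streams (GNU coreutils stdbuf),
# and flush awk's own output after every line for good measure
stdbuf -oL -eL producer 2>&1 | awk '{ print; fflush() }'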
I have written a 1-line command that parses a file, locates the IP address in the file, trims the output the way I want it, then sorts numerically and by uniqueness, and then >> appends to output.txt.
I can get all the IPs into one file, "output.txt", but what I am really looking for is some way to create a text file for each IP it finds, named xxx.xxx.xxx.xxx.txt, and also put that IP address into that file.
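A minimal sketch that picks up where the one-liner leaves off, assuming output.txt already holds one IP per line:
Code:
# create one file per IP, named after the IP, containing that IP
while read -r ip; do
    echo "$ip" > "$ip.txt"
done < output.txt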
I am writing a script that calls a program which writes a lot of lines to stdout continuously. If the last line in stdout matches some regex, THEN certain variables are updated. My problem is that I don't know how to do that.
A simplified example (it's not my exact case, but I write it here to clarify): suppose I issue a ping command (which writes output to stdout continuously). Every time the response time is t=0.025 ms, THEN VARIABLE1=(column 1 of that line) and VARIABLE2=(column 2 of that line).
I think the following code would work in awk (however, I want the variables in bash and I don't know how to export them):
In the previous code, awk analyzes each line of the output of the ping command as soon as it is created, so the variables $var1, $var2, ... are updated at the appropriate time. But I need the "real-time" updated values of $var1, $var2 in bash, for later use in the script.
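A hedged sketch that keeps the loop in the current shell so the variables survive for later use in the script (the match pattern and host are illustrative):
Code:
# feeding the loop via process substitution avoids a subshell,
# so VARIABLE1/VARIABLE2 persist after the loop ends
while read -r line; do
    if [[ $line == *"time=0.025 ms"* ]]; then
        VARIABLE1=$(awk '{ print $1 }' <<< "$line")
        VARIABLE2=$(awk '{ print $2 }' <<< "$line")
    fi
done < <(ping -c 20 example.com)
echo "last matching line gave: $VARIABLE1 $VARIABLE2"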
I'm writing a script to execute bash commands in the PHP CLI. I would like to suppress errors from bash and write my own error message if an error occurs. So far I have this (assuming log.txt doesn't exist!):
Code:
tac log.txt 2>/dev/null
Which works as expected: tac kicks up an error, but the error is suppressed. However, when I use this:
Code:
tac < log.txt 2>/dev/null
I get:
Code:
bash: log.txt: No such file or directory
The tac error is suppressed but bash still gives me a dirty error.
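The second message comes from bash itself: with tac < log.txt, the shell tries to open log.txt before tac ever runs, so the redirect on tac's stderr can't catch it. A minimal sketch of the usual workaround:
Code:
# perform the redirection inside a subshell and silence that shell's stderr
( tac < log.txt ) 2>/dev/null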
I am working on a script that allows me to convert an IP address to a country name. I have 2 files: one has text like PORT.80 TCP SRC=x.x.x.x, and the other has x.x.x.x United States. How can I combine these files so that the output is PORT.80 TCP SRC=x.x.x.x United States?
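A hedged sketch using awk to join the two files on the IP; the file names and the exact layouts (one PORT.80 TCP SRC=x.x.x.x per line in one file, one x.x.x.x United States per line in the other) are assumptions from the description:
Code:
# first pass stores the country per IP; second pass appends it to each SRC= line
awk 'NR==FNR { ip=$1; $1=""; country[ip]=substr($0,2); next }
     { split($3, a, "="); print $0, country[a[2]] }' countries.txt ports.txt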
I use command "find" in my bash script: if the filename exist command find work quiet, and if the filename not exist I see the message "find: /tmp/filename: No such file or directory". My problem is following, i want to have in my script something like this:
find "/tmp/filename" -type f -delete | "if no_any_errors execute command1" , if file_not_found execute command2"
I have a script that generates a bunch of output, including the expansion details provided by set -v -x. I am trying to pipe everything that is displayed to a file, in addition to displaying it on the screen. I've managed to get stderr and stdout into the file, but the expansions are only printed to the screen. Here is what I have so far: sudo -u <user> source my_job.sh | tee my_log.txt 2>&1
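Two things stand out: the set -x trace goes to stderr, which here is only redirected after tee has already read its input, and sudo cannot run the shell builtin source. A hedged sketch of a likely fix (user and file names as in the question):
Code:
# merge stderr into the pipe *before* tee sees it, and run the
# script with a real shell instead of the 'source' builtin
sudo -u <user> bash my_job.sh 2>&1 | tee my_log.txt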
What does the following shell program do?
Code:
:(){ :|:& };:
Warning: my computer got hung when I tried to execute this. Mod edit: THIS IS DANGEROUS CODE, DON'T TRY IT OUT UNLESS YOU WANT TO FRY YOUR MACHINE!
Now, I have one script called "defcon". defcon gets the current DEFCON level and outputs it using echo.
Code:
#!/bin/bash
DEFCON=`curl -s http://members.tripod.com/~Swat_25/defcon.html | sed -n '/^$/!{s/<[^>]*>//g;p;}' | sed '/^$/d' | grep '[12345]$'`
echo "The current DEFCON level is $DEFCON"
The second script ("tweet") updates my twitter account.
What I want to do is be able to update my twitter account with the current DEFCON status (this is really more of a learning thing than something I actually want to be doing). The original script for tweet used $1 in place of $@, but if I use:
tweet `defcon`
it only uses the first word in the string, and similarly if I use $2 or $3. So I changed it to $@. The normal function still works, but typing:
tweet `defcon`
updates twitter with nothing.
EDIT: I should mention the /dev/null is there to catch the output of curl, otherwise it won't run silently. It still updates twitter normally with the send to /dev/null.
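A hedged guess at the fix: unquoted backticks split defcon's sentence into separate words before tweet ever sees it, so quote the substitution (and have tweet treat its single quoted argument as the message):
Code:
# the quotes deliver the whole status line as one argument
tweet "$(defcon)"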
Apologies if I'm posting to the wrong forum; be so kind as to tell me where to better ask this question, as I'm really not finding the right words to google for. So, I have a shell application (fdb) which is a Flash debugger. I want to run it using a bash script, capture its output, and pass it commands (it can read from STDIN). The reason I want to do so is that Flash Builder (the IDE for Flash development) is plain stupid when it comes to compilation, and it won't allow me to compile an arbitrary file in the project... so, I found out that I can make Eclipse run an external tool. This external tool is my *.sh file, which launches the compiler and then launches the debugger. The Eclipse console can display the compilation results, or errors. When I run the debugger, Eclipse can even pass input from its console to the debugger; however, the output from the debugger isn't shown.
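One hedged way to hold both directions open from a script is a bash coprocess (bash 4+); fdb is from the question, the command strings are illustrative, and if fdb block-buffers its output when not on a terminal, something like stdbuf may be needed on top of this:
Code:
# start fdb with stderr folded into stdout, keeping read/write fds to it
coproc FDB { fdb 2>&1; }
echo "run" >&"${FDB[1]}"            # send a command to fdb's stdin
while read -r line <&"${FDB[0]}"; do
    echo "fdb: $line"               # relay the debugger's output
done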
I was trying to redirect command output to a variable and realized that all the lines were joined. I tested this additionally with an example:
Code: echo "The contents of this directory are " `ls -l` > dir.txt and all the lines were joined in the resulting file. What can I do to preserve separate lines?
I would like to compare the (screen) output of one bash script with the (screen) output of another bash script to ensure the output is exactly the same. The reason for this is that I am receiving a consolidated data feed from an IP address and have moved some of the data feed to a 'new' source IP address. I will turn off the feed from the original source once satisfied that the new one is receiving the same data. The format of the output from the scripts is exactly the same.
Tried so far:
Code:
./IDCGRE.sh | grep FX.CK | diff < ./IDCGRE2.sh
./IDCGRE.sh | grep FX.CK | ./IDCGRE2.sh | diff
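diff wants two inputs, which process substitution can hand it directly; a minimal sketch using the script names from the question:
Code:
diff <(./IDCGRE.sh | grep FX.CK) <(./IDCGRE2.sh | grep FX.CK)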
I have taken into account spacing of functions as a reason for this not working. Can you get this function to work on your machine? quickfind () { find . -maxdepth 2 -iname "*$1*" }
It does not print the desired output, but find . -maxdepth 2 -iname "*$1*" on its own does work. What is wrong? quickfind () { find . -maxdepth 2 -iname "*$1*" ; }
If I run this from the command line I don't get an error, but no output either. I am not running this inside a script but from the command line. I want to be able to run any function () from the command line. I have more functions that I can't get to work: tt () { tree -pFCfa . | grep "$1" | less -RgIKNs -P "H >>> " }
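When the body sits on one line, bash needs a command separator before the closing brace, and a function defined in a file only exists in an interactive shell after that file has been sourced (e.g. from ~/.bashrc). A minimal sketch fixing both functions from the question:
Code:
quickfind () { find . -maxdepth 2 -iname "*$1*" ; }
tt () { tree -pFCfa . | grep "$1" | less -RgIKNs -P "H >>> " ; }
# usage, assuming these lines have been sourced into the current shell
quickfind notes
tt src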