Software :: Qt Creator 4 - C++ & Stderr - Use qDebug() To Print Out Stderr Messages?
Jul 27, 2010

Can I use qDebug() to print out stderr messages? If I just use qDebug() << stderr; I get hex output.
This might be an interesting one for the bash scripting gurus. I seem to break my teeth on it. The mission:

- do a dd over ssh to transfer an image to another host
- capture the dd PID on the other side
- send a USR1 kill signal to it
- capture that output on the original host

It goes wrong on the last part. This is what I have before that step:

Code:
dd if=image.gz | gzip -d | ssh host2 "dd of=/dev/vg1/lv1" &
PID=`ssh host2 ps aux | grep dd | grep lx05 | awk '{ print $2 }'`

When I do "ssh host2 kill -USR1 $PID" I get nice output to the screen. When I replace the first line with:

Code:
dd if=image.gz | gzip -d | ssh host2 "dd of=/dev/vg1/l01 2>/tmp/output.txt" &

the dd command seems to die. I suspect a problem with the pipe, since this does work when executing locally on a host without piping.
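A minimal sketch of a more robust version of the remote-PID dance, assuming pgrep is available on host2 (host name and paths are taken from the post; dd prints its I/O statistics to stderr on SIGUSR1, so parking its stderr in a remote file lets the first host read it back):

Code:
# start the remote dd, keeping its stderr in a file on host2
dd if=image.gz | gzip -d | ssh host2 'dd of=/dev/vg1/lv1 2>/tmp/dd.err' &

# ask host2 for the PID of its newest dd process
PID=$(ssh host2 'pgrep -n -x dd')

# signal it, then pull back the progress output it wrote
ssh host2 "kill -USR1 $PID"
ssh host2 'cat /tmp/dd.err'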
I have seen a post where someone was explaining the virtuality of stdout and stderr and that they can be redirected with e.g. 2>file.txt, but this apparently is not working for me! I have a CUPS filter with fprintf(stderr,...).
I have this error when I tried to compile a program:
[Code]...
Is it possible to redirect stdout and stderr from one terminal, say /dev/pts/2, to another, /dev/pts/3?
I tried the following:
Code:
/dev/pts/2 2>&1 /dev/pts/3&
Then when I run a command the process stops.
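For reference, a minimal sketch of the usual way to push output at another terminal (the pts numbers are the ones from the post; you need write access to the target device, and the target shell will display the text but not treat it as input):

Code:
# send one command's stdout and stderr to the other terminal
ls -l > /dev/pts/3 2>&1

# or redirect the current shell session there until it exits
exec > /dev/pts/3 2>&1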
I have a somewhat complex Makefile system: a parent Makefile calls dozens of Makefiles in subdirectories, and the subdirectory Makefiles call shell scripts to do the real building. I want to capture all the output this Makefile system generates, so I employ "make 2>&1 > make.log", but not all output messages end up in make.log. The messages generated by the shell scripts called from the sub-Makefiles are not recorded in make.log. Another curious thing: if I launch "make 2>&1 > make.log" from a perl script, all output does end up in make.log.
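One likely culprit is the order of the redirections: in "make 2>&1 > make.log", stderr is duplicated onto the terminal (the old stdout) before stdout is moved to the file, so anything written to stderr still reaches the screen. A sketch of the usual fix:

Code:
# point stdout at the log first, then send stderr to the same place
make > make.log 2>&1

# bash 4+ shorthand for the same thing
make &> make.log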
Coming from Debian Sid and KDE, I am used to K3b and the problems during verify. Now with Fedora 11 and Brasero I'm also getting errors:
BraseroReadom stderr: Error trying to open /dev/sr0 exclusively (Device or resource busy)... giving up.
BraseroReadom stderr: WARNING: /dev/sr0 seems to be mounted!
BraseroReadom stderr: readom: Device or resource busy. Cannot open '/dev/sr0'. Cannot open SCSI driver.
[code].....
After going back and adding my user to cdrom and other groups, changing authorizations et al., I am still getting errors that I don't have permission to use the drive. I've read through forums and bug reports and found that my problem isn't unique. As with K3b, does the failure during verify usually leave me with a good burn anyway? Does anyone know of a GUI burner that doesn't have the verify problem? Or should I go back to burning on the CLI with wodim?
I want to send stdout and stderr to a logfile; moreover, I want to log stderr to a separate logfile as well, and print stderr to the screen. I searched around and tried:
Code:
$ command 2>&1 > log | tee -a log log.err
But then in the log, all of stdout appears first, and only then stderr.
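A sketch of one common fix using bash process substitution (the file names log and log.err are from the post): append stdout straight to the log, and fan stderr out to the screen, the log, and log.err:

Code:
: > log    # start with an empty log file

# stdout -> log only; stderr -> screen, log, and log.err
command >> log 2> >(tee -a log log.err >&2)

One caveat: the two streams reach the log through different paths, so their relative ordering there is still not guaranteed.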
Code:
MY_STDOUT=`my-command`
MY_STDERR=`my-command >&2`
That is, I want to run my-command only once and get the same result. I've tried this:
Code:
YYY=$(XXX=`{ echo -n 111; echo 222 >&2; }` 2>&1); echo $XXX $YYY
where "{ echo -n 111; echo 222 >&2; }" is my-command. But it gives this output:
Code:
222
111
instead of "111 222". What's wrong in my script?
I am having issues getting the output from a script logged to a file. I need the script to send both stderr and stdout to the same text file.
At present I have the following script:
Code:
#!/bin/bash
echo TR3_1 > printers.txt
snmpget -v 1 -c public 10.168.**.* SNMPv2-SMI::mib-2.43.10.2.1.4.1.1 &>> printers.txt
[Code].....
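For what it's worth, the "&>>" operator only exists in bash 4 and later and is not portable to /bin/sh. A sketch of the traditional equivalent (the snmpget line is the one from the post, with the masked address replaced by a hypothetical $PRINTER_IP placeholder):

Code:
#!/bin/bash
echo TR3_1 > printers.txt

# append stdout to the file, then send stderr to the same place
snmpget -v 1 -c public "$PRINTER_IP" SNMPv2-SMI::mib-2.43.10.2.1.4.1.1 >> printers.txt 2>&1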
I have several commands in a bash script, and in the middle of the script there are several commands whose output and error streams I want to redirect to a file. I think I could simply add '>> myfile.txt' to the end of every command, but is there a way to set it before that block of commands, then reset the streams to their original state at the end of that block?
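A sketch of the usual idiom for exactly this: save the original descriptors on spare fd numbers, redirect, then restore (the choice of fds 3 and 4 is arbitrary):

Code:
# save stdout and stderr, then point both at the file
exec 3>&1 4>&2
exec >> myfile.txt 2>&1

command1
command2

# restore the original streams and close the spares
exec 1>&3 2>&4 3>&- 4>&-

Alternatively, grouping the block with { command1; command2; } >> myfile.txt 2>&1 achieves the same effect without touching the shell's own descriptors.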
I'm trying to write a program that will fork a series of FTP sessions. For each session, there should be separate input and output files associated with stdin and stdout/stderr.
I keep reading how I should be able to do that with dup2() in the child process before the execl(), but it's not working for me. Could someone please explain what I've done wrong? The program also has a 30-second sniper alarm for testing and killing of FTPs that go dormant for too long.
The code: (ftpmon.c)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
[code]....
The output:
$ ftpmon
Connected to gila-crstest.gilacorp.com (172.16.20.8).
220 (vsFTPd 2.0.1)
ftp> waitpid(): Interrupted system call
Why am I getting the ftp> prompt? If the dup2() works, shouldn't it be taking input from my script and not my terminal? Instead, it does nothing and winds up getting killed after 30 seconds. The log file is created, but it's empty after the run.
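Not a fix, but a way to isolate the problem: below is the shell analogue of the intended dup2-before-execl plumbing (the input and log file names are hypothetical). If ftp shows the same prompt under this, the client itself is reopening the controlling terminal and the dup2() calls are not at fault:

Code:
# shell equivalent of: dup2(in_fd, 0); dup2(out_fd, 1); dup2(out_fd, 2); execl("/usr/bin/ftp", ...)
(
    exec ftp -n gila-crstest.gilacorp.com < ftp_cmds.txt > ftp_session.log 2>&1
)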
I'm working on an application used for backup/archiving. It can archive contents on block devices and tapes, as well as regular files. The application stores data in hard-packed, low-redundancy heaps with multiple indexes pointing out uniquely stored (shared) fractions in the heap.
The application also supports taking, and reverting to, snapshots of total storage on several computers running different OSes, as well as simply taking on the archiving of single files. It uses Hamming-code diversity to defeat disk rot, instead of using RAID arrays, which have proven to become pretty much useless once they climb over some terabytes in size. It is intended to be a distributed CMS (content management system) for a diversity of platforms, with a focus on secure storage/archiving. I have a unix shell tool that acts like gzip, cat, dd etc. in being able to pipe data between applications.
Example:
dd if=/dev/sda bs=1b | gzip -cq > my.sda.raw.gz
the tool can handle different files in a struct array, like:
Code:
enum FilesOpenStatusValue {
    FileIsClosed = 0,
    FileIsOpen,
[code]....
Is there a better way of getting the file name of the redirected file (respecting the fact that there may not always exist such a thing as a file name for a redirection pipe)?
Should I work with inodes instead, and then take a completely different approach when porting to non-unix platforms? Why isn't there a system call like get_filename(stdin); ?
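On Linux, at least, /proc can answer this from the shell (or via readlink() in C): every open descriptor of a process appears as a symlink under /proc/<pid>/fd whose target is the backing file, or a pipe:[inode] / socket:[inode] pseudo-name when no file name exists. A sketch:

Code:
# what is stdin of this readlink process connected to?
readlink /proc/self/fd/0 < /etc/hostname    # prints /etc/hostname
echo hi | readlink /proc/self/fd/0          # prints something like pipe:[123456]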
If you have any input on this, or some questions, then please don't hesitate to post in this thread. To add some offtopic to the thread, here is a performance tip: when doing data shuffling on streams, avoid just using some arbitrary record length (like 512 bytes). Use stat() to get the recommended block size from stat.st_blksize and use copy buffers of that size to get optimal throughput in your programs.
I'm using libxml2 to handle/manipulate some XML files. In order to check the consistency of an XML file, I have a DTD and I'm using the xmlValidateDtd method to perform the check.
However, when an error occurs during the check (for example, an attribute is missing from an XML tag), libxml2 writes the error to stdout/stderr. For example:
Code:
/home/XML/FreeFour.xml:18: element CA: validity error : Element CA does not carry attribute maxlength
The method returns the right result (true or false depending on the check), but the errors that occur are written to stdout/stderr, and I actually don't want that.
I use command "find" in my bash script: if the filename exist command find work quiet, and if the filename not exist I see the message "find: /tmp/filename: No such file or directory". My problem is following, i want to have in my script something like this:
find "/tmp/filename" -type f -delete | "if no_any_errors execute command1" , if file_not_found execute command2"
I have a script where I want to redirect stdout to the terminal and also to a log file, as well as redirecting stderr to the same log file but not the terminal. I have the following code, which I found on the net, which redirects both stderr and stdout to a file and the logfile:
Code:
if [ -p $PIPE1 ]
then
    rm $PIPE1
[code]...
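For comparison, a sketch of a shorter route that avoids the named pipe entirely (the log file name is assumed): append stderr straight to the log, and pipe stdout through tee so it reaches both the terminal and the log:

Code:
# stdout -> terminal and logfile; stderr -> logfile only
some_command 2>> logfile | tee -a logfile

As with any scheme that writes the two streams down different paths, their relative ordering inside the log is not guaranteed.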
It's kind of pointless, IMO, for the types of errors that Exiv2 reports on to be written to a text file without some reference to the name of the file that prompted the error message in the first place. Is there a way to have bash identify the file that prompts the error and write its name to the same file as the error (in my case, frencherrors.TX)? I've tried a painfully simple syntax that does something identical to a 2>&1 "suffix", namely frenchgentsfinder.sh 2 $file>>frencherrors.TX. It makes sense to me as written, but I'd like to know why I'm getting nothing on screen and everything directed to the file, when what I want to see in the latter are the filenames causing the errors along with the text of the errors.
Is there another level one has to bore down into before one can garner this kind of output? If so, what is it, and how does one invoke it in bash?
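A sketch of one way to tag each error with the file that caused it, from the loop that drives Exiv2 (the exiv2 invocation and the *.jpg glob are assumptions; frencherrors.TX is from the post):

Code:
for f in *.jpg; do
    # capture stderr only; normal output is discarded
    err=$(exiv2 print "$f" 2>&1 >/dev/null)
    if [ -n "$err" ]; then
        printf '%s: %s\n' "$f" "$err" >> frencherrors.TX
    fi
done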
I reinstalled my Ubuntu 10.04 server yesterday. Before this, the server ran the Adempiere 360 application. The setup completes, but when I run the application this error comes out:
20:07:33,865 ERROR [STDERR] log4j:ERROR A "org.jboss.logging.appender.FileAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
20:07:33,866 ERROR [STDERR] log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
20:07:33,866 ERROR [STDERR] log4j:ERROR [WebappClassLoader
delegate: false
repositories:
/WEB-INF/classes/
[Code]...
20:07:33,866 ERROR [STDERR] log4j:ERROR Could not instantiate appender named "FILE".
I have syslog-ng running on a kernel build of 2.6.34.8. I use the syslog API in my program with facility LOG_LOCAL5 and levels debug, err, crit, and info. When I ran on the older syslog facility, everything was logged fine, as I intended. Now I have written these rules into syslog-ng.conf:
options {
flush_lines (0);
time_reopen (10);
log_fifo_size (1000);
[code]....
The last two rules are for my program, gnssapp. The result: I don't see any of the debug, err, or crit levels!
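Since the post's rules are elided, here is only a generic sketch of the filter/destination/log triple that normally routes a local5 program to its own file under syslog-ng (the names, the file path, and the source name s_src are all assumptions):

Code:
filter f_gnssapp { facility(local5); };
destination d_gnssapp { file("/var/log/gnssapp.log"); };
log { source(s_src); filter(f_gnssapp); destination(d_gnssapp); };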
More than 7 GB were logged to the messages file in the last three weeks. I keep getting this message in /var/log/messages and I want to stop this logging because it takes up too much space:
Quote:
Apr 30 20:25:18 TEST-NODE kernel: IPT: IN_NOMATCH IN=eth0 OUT= MAC=ff:ff:ff:ff:ff:ff:00:17:a4:a7:3d:a2:08:00 SRC=172.26.16.27 DST=172.26.16.255 LEN=104 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=10100 DPT=10100 LEN=84
[code]...
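The "IPT: IN_NOMATCH" prefix suggests an iptables LOG rule is doing the flooding. A sketch of two common ways to quiet it (the chain, position, and match details below are assumptions reconstructed from the log prefix and must be adjusted to the real rule, which iptables -L --line-numbers will show):

Code:
# option 1: delete the LOG rule outright (the spec must match the real rule)
iptables -D INPUT -j LOG --log-prefix "IPT: IN_NOMATCH "

# option 2: keep the rule but rate-limit how often it logs
iptables -R INPUT 1 -m limit --limit 1/min -j LOG --log-prefix "IPT: IN_NOMATCH "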
I initially had a problem accessing the CUPS interface (see my other post) and got that resolved by adding the user "cupsys". Now everything "looks" OK, and when I print a test page, it shows as completed (in CUPS). However, the page never prints.
The printer is a Canon MP600 (using the canonmp600en.ppd file to configure it). Here's my conf file:
Code:
# Show troubleshooting information in error_log.
LogLevel debug
[code]...
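With LogLevel debug set as above, the usual next step is to watch the scheduler log while submitting a test page; a sketch using the stock CUPS paths and tools:

Code:
# follow CUPS activity live while printing a test page
tail -f /var/log/cups/error_log

# check scheduler, queue, and job state in one shot
lpstat -t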
Interesting problem: for the first time, with Xubuntu 10.10 64-bit, I am finding that certain applications print corrupted output. A varying number of letters/numbers get substituted, print as a blank space, print as a box, etc. This corruption happens when printing from a PDF (Evince) or a spreadsheet (Gnumeric). Opening the same PDF on another machine (Ubuntu 10.10 32-bit) prints perfectly. Opening the same .xls file on the original computer but using OOo Calc prints perfectly.
I guess I have ruled out any problems with the printer itself or the network JetDirect box. I have re-installed CUPS and Evince and upgraded to the latest version of HPLIP, but the problem appears unchanged.
I cannot print PDF files. I have tried using Okular and xpdf. The documents display in the program, but print preview shows a blank page. The printer then sends out blank pages. I have tried printing on 2 different printers using USB cables. Running the commands from a terminal shows this error:
Quote:
xxx@xxx:~$ xpdf
***** MediaBox = ll:0,0 ur:611.976,791.968
***** CropBox = ll:0,0 ur:611.976,791.968
***** Rotate = 0
Segmentation fault
Converting to .ps also produces a blank file.
My wife has a Canon MP470 printer and is running Ubuntu 10.10. She is able to print black and white, but unable to print photos. I got it to work using another driver, but not the 'correct' one for this printer. I have searched a bit and don't see anything about Ubuntu 10.10, just older versions. Or should I just network her to my printer?
I'm running Kubuntu 10.10 with a LaserJet 5M printer. When I attempt to print a job from Firefox or from Okular, the job never gets onto the print queue. However, I can print test pages on the printer and also print from OpenOffice, so this seems to be app-dependent. I know the jobs aren't being queued because the job number doesn't increase (as shown by the jobs for the test pages). Both Firefox and Okular give every indication that the print job has been processed correctly.
Not sure when CUPS started acting up. I have the latest 13.1-current software installed. The first page to print is always OK, but all succeeding pages are overwritten: the second page shows the first page on top of it, the third shows the preceding pages on top of it, and so on. Has anyone else seen this problem? I guess the printer buffer is not getting flushed correctly. If my configuration was trashed in some way, I don't know where to look for a fix.
How can I combine multiple single-page prints into a single print job? For example, using Firefox on Linux one can print a web page such that each sheet of paper has four pages printed upon it. I would like to combine several separate web pages so that, for example, web-page-a, web-page-b and web-page-c (each less than one print page long) are printed on a single sheet of paper.
I would like to do this without having to use some form of image editor to combine and manage manually created temporary files.
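A sketch of one route that needs no image editor: print each page to PDF from the browser, then merge and n-up the results on the command line (pdfunite ships with poppler-utils, pdfnup with pdfjam; the file names are placeholders):

Code:
# merge the separately printed pages into one document
pdfunite web-page-a.pdf web-page-b.pdf web-page-c.pdf combined.pdf

# lay out four logical pages per sheet, then print
pdfnup --nup 2x2 combined.pdf --outfile sheet.pdf
lp sheet.pdf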
Just installed drivers for a Lexmark Pro200 - S500 series printer. Although it is recognised by the system, has a tick by its name, and reports that it is connected via USB, the file goes to the print queue but will not print. It also tells me it is "Printing - localhost!" Is this correct?
Our one remaining problem seems to be printing. She has an HP OfficeJet 6500 USB printer. We have the computer connected. Strangely, when I boot from the CD the printer shows up as installed even though I did nothing to install it. After having submitted a print job, it shows the printer status as "idle" and the print queue is empty. I tried deleting that printer and re-installing. The installation went as one would expect; however, the results are the same. I'm beginning to think that somehow the problem is related to the fact that we are operating from the live CD. I just need to get this thing to print from the live CD.
I have a Debian system installed on my PC. I have just saved a text file on my desktop. How can I print the file from the command prompt? I need to learn to print a file through the command line.
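A sketch of the standard CUPS command-line tools for this (the file name is a placeholder):

Code:
# list the available printers and the default destination
lpstat -p -d

# print to the default printer
lp ~/Desktop/myfile.txt

# or target a specific printer by name
lp -d printer_name ~/Desktop/myfile.txt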