General :: Able To Write On Stdout But Not In A File
Aug 16, 2011
I want to keep a trace of the URLs I visit, so I use a command line like this:
tcpdump -i en1 -v -X 'tcp port 80' | sed -nl \
's/^.0x[0-9a-f]\{4\}:.\{43\}\(.*\)$/\1/p' | perl break.pl | perl -pe \
's/(GET|POST) (.*?) HTTP\/1.*Host: ([a-zA-Z._0-9-]*).*/"
BEGURL
[Code]....
I also tried redirecting stdout and stderr to /tmp/out, but it's still empty. The file has write access. I have no idea what the cause could be. Is there anything other than stdout and stderr?
I have a process which logs output to log.txt. If I want to see the process's status in real-time, is there a way to echo that output to stdout instead of opening the log in a text editor and constantly reloading?
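Assuming the process appends to log.txt as it runs, following the file with tail should do it:

Code:
tail -f log.txt

tail -f prints the end of the file and then keeps printing new lines as they are appended, so the output appears in the terminal in real time.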
I am having issues getting the output from a script logged to a file. I need the script to send both stderr and stdout to the same text file.
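A minimal sketch, assuming the script is called myscript.sh and the target is output.log (both names hypothetical):

Code:
./myscript.sh > output.log 2>&1

The > output.log part sends stdout to the file, and 2>&1 then points stderr at the same place; the order of the two redirections matters.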
I have a script where I want to redirect stdout to the terminal and also to a log file, as well as redirect stderr to the same log file but not the terminal. I have the following code, which I found on the net, which redirects both stderr and stdout to the log file:
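Since the code isn't shown above, here is one common approach, sketched with a hypothetical command mycommand and log file run.log: append stderr directly to the log, and pipe stdout through tee so it reaches both the terminal and the log:

Code:
mycommand 2>> run.log | tee -a run.log

Because the two streams reach the file by different routes, their lines may interleave in a slightly different order than they would on screen.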
I cannot redirect output from commands such as iptables, iptables-save, and ifconfig. For example, none of the following works (as root):
Code:
iptables > tmp
iptables-save > tmp
ifconfig > tmp

The file tmp is ALWAYS blank, that is, 0 bytes in size. Wackier things DO work, such as:

Code:
echo "`iptables-save`" > tmp
iptables-save | tee tmp

Other commands like:

Code:
ls > tmp

DO work as expected.
Note that this problem happens regardless of whether I log in remotely via ssh or locally on the computer in question. I am clueless as to what is causing this. Any ideas? The box is running 2.6.25-14.fc9.i686 and boots to runlevel 3. The modifications I've made to the box since installing the OS are things like compiling/installing the latest OpenSSH, OpenSSL, httpd, BerkeleyDB, subversion, zlib, etc. -- nothing really out of the ordinary, I'd say.
In this example, why does blacklist end up in the file blacklist and $a end up in stdout?
[code]...
The desired result is a file containing, for each line of lsmod output whose first word begins with snd_, that first word preceded by the word blacklist.
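A minimal sketch of one way to produce such a file (the output filename blacklist is taken from the question):

Code:
lsmod | awk '/^snd_/ {print "blacklist " $1}' > blacklist

awk selects the lines whose first word begins with snd_ and writes that word, preceded by blacklist, into the file.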
I have a somewhat complex Makefile system. A parent Makefile calls dozens of Makefiles in subdirectories, and each subdirectory Makefile calls a shell script to do the real building. I want to capture all the output this Makefile system generates, so I use "make 2>&1 > make.log", but not all output messages end up in make.log. The messages generated by the shell scripts called from the sub-Makefiles are not recorded in make.log. Another curious thing: if I launch "make 2>&1 > make.log" from a Perl script, all the output does get sent to make.log.
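The ordering of the redirections may be the culprit: in "make 2>&1 > make.log", stderr is duplicated to wherever stdout points at that moment (the terminal), and only afterwards is stdout sent to the file. A sketch with the two redirections swapped:

Code:
make > make.log 2>&1

Here stdout is pointed at make.log first, and 2>&1 then sends stderr to the same file.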
I have a command line server that logs to stdout, which I start along the lines of ./server > log.txt
What I want to do is limit the size of log.txt, without modifying the server.
I am assuming there must be some kind of tool already that lets me do this, something like where I can pass in my server, the output file and a size limit? If so, can anyone enlighten me?
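If Apache's rotatelogs utility happens to be installed, one sketch (the log path and the 10M size limit are assumptions to adapt):

Code:
./server 2>&1 | rotatelogs /var/log/server.log 10M

rotatelogs starts writing a new file whenever the current one reaches the given size, with no change to the server itself.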
I'm writing a script to execute bash commands in the PHP CLI. I would like to suppress errors from bash and write my own error message if an error occurs. So far I have this (assuming log.txt doesn't exist!):
Code:
tac log.txt 2>/dev/null
This works as expected: tac kicks up an error, but the error is suppressed. However, when I use this:
Code:
tac < log.txt 2>/dev/null
I get:
Code:
bash: log.txt: No such file or directory
The tac error is suppressed, but bash still gives me its own error message.
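Redirections are processed left to right, so in "tac < log.txt 2>/dev/null" the shell tries (and fails) to open log.txt before its stderr has been redirected. Two sketches that should silence the shell's own message:

Code:
tac 2>/dev/null < log.txt
{ tac < log.txt; } 2>/dev/null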
When I run ls -l /etc/passwd, I get: -rw-r--r-- 1 root root /etc/passwd. When I log in as myself and run rm /etc/passwd, it asks: rm: remove write-protected file '/etc/passwd'? If I say yes, will it actually delete the passwd file?
I have a Linux program which can write information to stdout and stderr.
I have a shell script which redirects that output to a file in /var/log. (Via >> and 2>&1.)
Is there a way to make that log file rotate? (max size, then switch to a different file, keep only a limited number of files)
I've seen a few answers that mention the logrotate program, which sounds good, but they seem to be focused on programs that generate log files internally and handle HUP signals. Is there a way to make this work with a basic output-redirection script?
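logrotate's copytruncate option may cover exactly this case: it copies the live file aside and truncates the original in place, so the redirection keeps writing to the same open file and no HUP handling is needed. A hypothetical /etc/logrotate.d entry, assuming the log lives at /var/log/myprog.log:

Code:
/var/log/myprog.log {
    size 10M
    rotate 5
    copytruncate
}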
I'm writing a script/plugin for Nagios to test a WebLogic server. I redirect some output to a file, and then I read that file to get some data, but I can't seem to write to that file from my script. This is the most important code:
[Code]...
* EDIT * When I execute this script from a local terminal (PuTTY), it works, but when I execute it from Nagios, it doesn't.
I want to write a shell script for the situation below.

Situation: I have many users, say user1, user2, user3, user4 and so on, within my /home dir.

Within a user dir, say /home/user1, there are many unwanted files. These unwanted files start with the name core, e.g. core2324, core9789, core9079, etc. I need to delete them.

I want to write an automated script which can do this. How do I write a script that deletes these unwanted core files in all the user dirs?
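A minimal sketch, assuming every unwanted file matches the pattern core* and lives somewhere under /home:

Code:
#!/bin/bash
# delete files whose names begin with "core" in all home directories
find /home -type f -name 'core*' -exec rm -f {} +

It may be worth running the find without the -exec part first, to review the list of files that would be deleted; core* will also match any legitimate file whose name happens to start with core.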
What could cause Windows to see a file shared from Ubuntu as read-only, even though it has full permission to read, write, and execute the file? Accessing the file from Ubuntu to Ubuntu is no problem; only Windows has the problem.
It works if I use a regular file instead, so why doesn't it print to stdout when I use /dev/fd/1? I do this with other applications that don't have an option to write to stdout and it works, so what does GNU Screen do that makes it not work?
I have a startServer.sh command in a shell script along with a bunch of other commands. The startServer.sh command keeps printing to stdout while the server is up. However, even though I want to start the server, I also want the script to continue executing the commands after ./startServer.sh in the same flow.
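One way to let the script move on, sketched with an assumed log path of server.log: run the server in the background and send its output to a file:

Code:
./startServer.sh > server.log 2>&1 &
echo "server launched; the remaining commands run immediately"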
I have several commands in a bash script, and in the middle of the script there are several commands whose output and error streams I want to redirect to a file. I think I could simply add '>> myfile.txt' to the end of every command, but is there a way to set this before that block of commands and then reset the streams to their original state at the end of the block?
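One sketch in bash: save the original streams on spare file descriptors, redirect, run the block, then restore:

Code:
exec 3>&1 4>&2            # save stdout and stderr
exec >> myfile.txt 2>&1   # redirect both for the block
command1                  # hypothetical commands in the block
command2
exec 1>&3 2>&4 3>&- 4>&-  # restore the streams and close the spares

Alternatively, the block can be grouped and redirected once: { command1; command2; } >> myfile.txt 2>&1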
I am trying to write C++ code to read and write an XML file. I have researched a lot and found that Xerces is used for this, but I am not able to write the code. Please point me to some links on how to write code that reads/writes an XML file in C++.
I need to copy a file onto a flash memory device which is connected to my computer via USB. The file must start at a specific sector. Can anyone guide me how to do this? (It can be through a C program, a command line, or any other way.)
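One sketch using dd, where /dev/sdX, the 512-byte sector size, and the starting sector 2048 are all assumptions to adapt:

Code:
sudo dd if=file.bin of=/dev/sdX bs=512 seek=2048

seek=2048 makes dd skip 2048 output blocks before writing, so the file begins at that sector. Note that this writes to the raw device, bypassing any filesystem on it.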
Debugging some of my scripts after upgrading from Debian Lenny to Ubuntu 10.04. In so doing, I tripped over this "problem," the solution to which may give me a clue to others.
On a bash shell command line I created a file thusly:
sudo touch zero_file
and it lists as expected with default permissions 0644:
I can place the command (minus the "sudo") in a script and run it under the auspices of sudo, and it works. Am I missing something regarding stdin redirection when using sudo?
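If the failing command involves a shell redirection, note that the redirection is performed by the calling, unprivileged shell, not by the command that sudo elevates. Two common workarounds, sketched with the zero_file name from above:

Code:
echo "data" | sudo tee zero_file > /dev/null
sudo sh -c 'echo "data" > zero_file'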
How can I write to a file and display the output in the terminal at the same time? For example, when I run "php -f file.php > testfile", the output is saved to the file, but I want to see it in the terminal as well.
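tee does exactly this: it copies its stdin both to the named file and to stdout:

Code:
php -f file.php | tee testfile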