I have a process which logs output to log.txt. If I want to see the process's status in real-time, is there a way to echo that output to stdout instead of opening the log in a text editor and constantly reloading?
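For reference, tail's follow mode is the standard tool for this:
Code:
tail -f log.txt    # print new lines as they are appended; Ctrl-C to stop
tail -F log.txt    # like -f, but survives the file being rotated or recreated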
I want to keep a trace of the URLs I visit, so I use a command line like this:
tcpdump -i en1 -v -X 'tcp port 80' | sed -nl 's/^.0x[0-9a-f]\{4\}:.\{43\}\(.\)$/\1/p' | perl break.pl | perl -pe 's/(GET|POST).(.*?).HTTP\/1....Host:.([a-zA-Z._0-9-]*)../" BEGURL
[Code]....
I also tried redirecting stdout and stderr to /tmp/out, but it's still empty. The file has write access. I have no idea what the cause could be. Is there any output stream other than stdout and stderr?
I'm having issues getting the output from a script logged to a file. I need the script to send both stderr and stdout to the same text file.
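A common way to do that from the caller, assuming bash (script name illustrative):
Code:
./myscript.sh > output.log 2>&1   # stdout to the file, then stderr duplicated onto stdout
./myscript.sh &> output.log       # bash shorthand for the same redirection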
I faced an issue updating a file's contents with the echo command; the second write fails with the error below:
Code:
~> echo "foo" > bar      # create a file named "bar"
~> echo "foobar" > bar   # try to overwrite its contents
bar: File exists.
~> cat bar
foo
~>
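That "File exists" message is the shell's noclobber option refusing to overwrite an existing file with >. A sketch of the usual ways around it:
Code:
set +o noclobber     # turn the option off for the current shell
echo "foobar" > bar
echo "foobar" >| bar # or leave noclobber on and force this one overwrite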
I have a script where I want to redirect stdout to the terminal and also to a log file, as well as redirecting stderr to the same log file but not the terminal. I have the following code, which I found on the net, which redirects both stderr and stdout to a file and the logfile,
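For reference, one way to get that split from the calling shell (names illustrative):
Code:
# stderr goes only to the log; stdout goes to tee, which writes it to both
# the terminal and the log
./myscript 2>> run.log | tee -a run.log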
I cannot redirect output from commands such as iptables, iptables-save, and ifconfig. For example, none of the following works (as root):
Code:
iptables > tmp
iptables-save > tmp
ifconfig > tmp
The file tmp is ALWAYS blank, that is, 0 bytes in size. Wackier things DO work, such as:
Code:
echo "`iptables-save`" > tmp
iptables-save | tee tmp
Other commands like:
Code:
ls > tmp
DO work as expected.
Note that this problem happens regardless of whether I log in remotely via ssh or locally on the computer in question. I am clueless as to what is causing this. Any ideas? The box is running 2.6.25-14.fc9.i686 and boots to runlevel 3. The modifications I've made to the box since installing the OS are things like compiling/installing the latest OpenSSH, OpenSSL, httpd, BerkeleyDB, subversion, zlib, etc. -- nothing really out of the ordinary, I'd say.
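Two hypothetical debugging steps that narrow this kind of thing down: check what the shell actually runs, and watch which file descriptor the output is written to:
Code:
type iptables                                # alias? shell function? /sbin/iptables?
strace -f -e trace=write iptables -L > tmp   # writes to fd 1 should land in tmp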
In this example, why does blacklist end up in the file blacklist while $a ends up on stdout?
[code]...
The desired result is a file built from the output of lsmod: for every line whose first word begins with snd_, that word should be copied into the other file, preceded by the word blacklist.
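For reference, a minimal sketch of one way to produce such a file (output name illustrative):
Code:
# print "blacklist <module>" for every module whose name starts with snd_
lsmod | awk '$1 ~ /^snd_/ { print "blacklist", $1 }' > blacklist-snd.conf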
I have a somewhat complex Makefile system. A parent Makefile calls dozens of Makefiles in subdirectories, and the subdirectory Makefiles call shell scripts to do the real building. I want to capture all output this Makefile system generates, so I use "make 2>&1 > make.log", but not all output messages end up in make.log. The messages generated by the shell scripts that the sub-Makefiles call are not recorded in make.log. Another curious thing: if I launch "make 2>&1 > make.log" from a perl script, all output does end up in make.log.
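This is almost certainly the order of the redirections: 2>&1 duplicates stderr onto whatever stdout is at that moment, which is still the terminal, and only then is stdout pointed at the file:
Code:
make 2>&1 > make.log   # stderr still goes to the terminal
make > make.log 2>&1   # stdout to the file first, then stderr duplicated onto it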
I have a command line server that logs to stdout, which I start along the lines of ./server > log.txt
What I want to do is limit the size of log.txt, without modifying the server.
I am assuming there must already be some kind of tool that lets me do this -- something where I can pass in my server, the output file, and a size limit? If so, can anyone enlighten me?
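For reference, two options, assuming GNU coreutils and the rotatelogs utility that ships with Apache httpd (names illustrative):
Code:
# hard cap: stop writing after 10 MiB; note the server gets SIGPIPE on its
# next write once head exits
./server | head -c 10M > log.txt
# rotation: start a new timestamped file every 10 megabytes
./server | rotatelogs log.txt.%Y%m%d-%H%M%S 10M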
I'm writing a script to execute bash commands in the PHP CLI. I would like to suppress errors from bash and write my own error message if an error occurs. So far I have this (assuming log.txt doesn't exist!):
Code:
tac log.txt 2>/dev/null
Which works as expected, tac kicks up an error but the error is suppressed, but when I use this:
Code:
tac < log.txt 2>/dev/null
I get:
Code:
bash: log.txt: No such file or directory
The tac error is suppressed but bash still gives me a dirty error.
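The difference is that with < log.txt the shell itself opens the file before tac ever runs, so a redirection attached to tac can't catch the failure. Putting the command in a group places the shell's own error message under the redirection:
Code:
{ tac < log.txt; } 2>/dev/null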
I have a Linux program which can write information to stdout and stderr.
I have a shell script which redirects that output to a file in /var/log. (Via >> and 2>&1.)
Is there a way to make that log file rotate? (max size, then switch to a different file, keep only a limited number of files)
I've seen a few answers which talk about the logrotate program, which sounds good, but they also seem to be focused on programs which are generating log files internally and handle HUP signals. Is there a way to make this work with a basic output redirection script?
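It can work: logrotate's copytruncate option copies the log aside and truncates the original in place, so the file descriptor held by the redirection stays valid and no HUP handling is needed. A sketch of a drop-in config (path illustrative):
Code:
# /etc/logrotate.d/myprog -- note copytruncate can lose a few lines
# written during the copy
/var/log/myprog.log {
    size 10M
    rotate 5
    copytruncate
    missingok
}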
It works if I use a regular file instead, so why doesn't it print to stdout when I use /dev/fd/1? I do this with other applications that don't have an option to write to stdout and it works, so what does GNU Screen do that makes it not work?
I have a startServer.sh command in a shell script along with a bunch of other commands. The startServer.sh command prints to stdout and keeps printing while the server is up. However, even though I want to start the server, I want the script to continue executing the commands after ./startServer.sh in the same flow.
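One common approach is to background the server and send its chatter to a log (names illustrative):
Code:
./startServer.sh > server.log 2>&1 &   # & backgrounds it so the script moves on
SERVER_PID=$!                          # keep the PID in case it's needed later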
I have several commands in a bash script, and in the middle of the script there are several commands whose output and error streams I want to redirect to a file. I think I could simply add '>> myfile.txt' to the end of every command, but is there a way to set it before that block of commands, then reset the streams to their original state at the end of that block?
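Both are possible in bash. Grouping the block is the lighter touch; exec with saved descriptors restores the original streams exactly:
Code:
# option 1: group the block and redirect once
{
    command1
    command2
} >> myfile.txt 2>&1

# option 2: save, redirect, restore
exec 3>&1 4>&2            # save the current stdout/stderr on fds 3 and 4
exec >> myfile.txt 2>&1   # redirect everything that follows
command1
command2
exec 1>&3 2>&4 3>&- 4>&-  # restore the originals and close the saves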
This works as expected, but I see neither the input nor the app's output. The application is an interactive prompt written in C. When I interact with it manually, I see the prompt itself and the responses to my input, but when I execute the aforementioned script I see nothing. I would like it to print the input and the output as if a real user were typing. Do you know how to achieve that?
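A likely culprit is stdio buffering: when stdout is not a terminal, the C library block-buffers it, so nothing appears until the buffer fills. Giving the program a pseudo-terminal restores interactive behavior; a sketch using unbuffer from the expect package (./app and input.txt are placeholders):
Code:
# tee echoes each input line to the terminal; unbuffer -p runs ./app on a pty
tee /dev/tty < input.txt | unbuffer -p ./app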
I'd like a function in my .bashrc file that would allow me to pass text to it and echo the text to a specified file. I know it's as simple as "echo 'text' >> file," but ideally I would want to alias the function so I execute something like:
Code:
user~ $ write 'this is a test'
with "write" being the function and 'this is a test' being echoed to the file. I hope I explained that well enough.
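A minimal sketch for .bashrc (the target file is illustrative):
Code:
# append all arguments as one line to a fixed file
write() {
    echo "$*" >> ~/notes.txt
}
One caveat: write is also the name of a standard utility for messaging other logged-in users, and the function will shadow it, so a name like note may be safer.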
I would like to append text to a file, so in bash I wrote: echo text >> file.conf. However, it doesn't leave a new line, so I can only do this once. How do I add a new line?
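echo normally appends a trailing newline on its own, so if one isn't appearing, something else (echo -n, or a shell whose echo behaves differently) is likely involved. printf gives explicit control either way:
Code:
printf 'text\n' >> file.conf    # one line, with its newline
printf '\ntext\n' >> file.conf  # add a leading blank line as well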
I'm trying to create a shell script to take an argument and use it to name a terminal tab. So if the script's name is tabnm, tabnm "test" should rename the current tab "test".
This is my code:
#!/bin/sh
echo -ne "\e]1;$1\a"
but when I run it I get this output:
robin@icarus $ sh tabnm.sh test
-ne e]1;test
If I just run echo -ne "\e]1;Test\a" straight in the shell, the tab is renamed.
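The output gives the cause away: sh here is not bash, and its built-in echo doesn't treat -ne as options, so it prints them literally. printf behaves the same in every POSIX shell:
Code:
#!/bin/sh
# \033 is ESC and \007 is BEL -- the xterm "set tab/icon title" sequence
printf '\033]1;%s\007' "$1"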
I want to prevent "^C" from echoing when Ctrl-C is pressed. I did "stty -echoctl", which some googling suggested. Now it echoes the raw Ctrl-C character instead of the string "^C". That's no better, since it displays as an odd hexadecimal block in the terminal window.
I would like to know how to turn echo off in a shell script. I wrote a shell script that tests a condition; after the condition is tested, on the next line I use the echo command to print a line, and on the line after that I use the read command to read typed input. The crux is that the line being typed in is what I would like to keep from echoing. The distro is Red Hat Linux.
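Assuming the goal is to hide what the user types (a password prompt, say), either stty or bash's read -s does it:
Code:
echo "Enter password:"
stty -echo      # stop the terminal from echoing keystrokes
read password
stty echo       # turn echo back on
echo            # the user's Enter wasn't echoed, so supply the newline

read -s -p "Enter password: " password; echo   # bash shorthand for the same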
We're going to be doing a rather large server deployment, and using the provisioning system we have in place there is no current way to just "copy" a file over to the servers. All files/scripts have to be run from the provisioning server. Due to network constraints, the provisioning system can't run a script we need to run (it requires certain network assets to complete, but as soon as we modify the network settings, the provisioning system loses access to the server and can't run the script). So, we want our network configuration script to create the other script on the server in /root when it runs. My original method was to do something along the lines of:
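For reference, a common way to have one script write out another is a quoted heredoc, which keeps the embedded script's variables from expanding at write time; a minimal sketch with hypothetical names:
Code:
cat > /root/stage2.sh <<'EOF'
#!/bin/bash
# 'EOF' is quoted above, so $vars and $(...) below are written out literally
echo "running stage two on $(hostname)"
EOF
chmod +x /root/stage2.sh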