Programming :: Redirecting Stdin / Stdout And Stderr And Finding File Names And Other Stats?
Aug 8, 2010
I'm working on an application used for backup/archiving: it can archive the contents of block devices and tapes as well as regular files. The application stores data in hard-packed, low-redundancy heaps, with multiple indexes pointing out uniquely stored (shared) fractions of the heap.
The application supports taking, and reverting to, snapshots of total storage across several computers running different operating systems, as well as simply archiving single files. It uses Hamming-code diversity to defeat disk rot instead of RAID arrays, which have proven pretty much useless once an array grows past a few terabytes. It is intended as a distributed CMS (content management system) for a diversity of platforms, with a focus on secure storage/archiving. I have a Unix shell tool that acts like gzip, cat, dd, etc. in being able to pipe data between applications.
Example:
dd if=/dev/sda bs=1b | gzip -cq > my.sda.raw.gz
The tool can handle different files in a struct array, like:
Is there a better way of getting the file name of the redirected file (respecting the fact that there may not always exist such a thing as a file name for a redirection pipe)?
Should I work with inodes instead, and then take a completely different approach when porting to non-Unix platforms? Why isn't there a system call like get_filename(stdin)?
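One Linux-specific (and decidedly non-portable) approach is to read the /proc symlink for descriptor 0; a minimal sketch:

Code:
#!/bin/sh
# Linux-only: /proc/self/fd/0 is a symlink to whatever stdin is
# attached to. A real file resolves to its path; an anonymous
# pipe resolves to something like "pipe:[12345]" (the inode).
target=$(readlink /proc/self/fd/0)
case "$target" in
    pipe:*) echo "stdin is an anonymous pipe ($target)" ;;
    *)      echo "stdin is $target" ;;
esac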
If you have any input on this, or some questions, then please don't hesitate to post in this thread. To add some off-topic content to the thread, here is a performance tip: when doing data shuffling on streams, avoid using some arbitrary record length (like 512 bytes). Use stat() to get the recommended block size from stat.st_blksize and use copy buffers of that size to get optimal throughput in your programs.
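At the shell level the same tip looks like this, assuming GNU stat (whose %o format prints the optimal I/O transfer size hint, i.e. st_blksize):

Code:
# feed the preferred block size to dd instead of guessing
bs=$(stat -c %o /dev/sda)
dd if=/dev/sda bs="$bs" | gzip -cq > my.sda.raw.gz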
I'm trying to write a program that will fork a series of FTP sessions. For each session there should be separate input and output files associated with stdin and stdout/stderr. I keep reading that I should be able to do this with dup2() in the child process before the execl(), but it's not working for me. Could someone please explain what I've done wrong? The program also has a 30-second "sniper" alarm for testing and killing FTP sessions that go dormant for too long.
The code (ftpmon.c):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
[code]....
The output:
$ ftpmon
Connected to gila-crstest.gilacorp.com (172.16.20.8).
220 (vsFTPd 2.0.1)
ftp> waitpid(): Interrupted system call
Why am I getting the ftp> prompt? If the dup2() works, shouldn't ftp be taking input from my script and not my terminal? Instead, it does nothing and winds up getting killed after 30 seconds. The log file is created, but it's empty after the run.
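For comparison, a shell-level sketch of the plumbing the child process should end up with (session.in and session.log are hypothetical names); if this works where the C program does not, the descriptors are probably being duplicated in the wrong place or after the exec:

Code:
# -n disables auto-login so ftp reads commands from stdin
ftp -n < session.in > session.log 2>&1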
I have a script where I want to redirect stdout to the terminal and also to a log file, as well as redirecting stderr to the same log file but not the terminal. I have the following code, which I found on the net, which redirects both stderr and stdout to a file and the logfile,
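A minimal sketch of one way to do this in bash, using process substitution (logfile is a placeholder name):

Code:
# stdout -> terminal and logfile; stderr -> logfile only
command > >(tee -a logfile) 2>> logfile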
I'm trying to redirect the output of a command to the input of the next command. I'm not sure if I'm going about this the right way. An easy method would be just to store the output of the previous command in a file and redirect input to read that file, but I'm curious whether this can be done without writing to any files.
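The shell supports this directly, no temporary file required; two sketches (names.txt, a.txt, and b.txt are hypothetical files):

Code:
# a pipe connects stdout of one command to stdin of the next
sort names.txt | uniq -c
# process substitution does the same where a filename is expected
diff <(sort a.txt) <(sort b.txt)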
I have a somewhat complex Makefile system. A parent Makefile calls dozens of Makefiles in subdirectories, and each subdirectory Makefile calls a shell script to do the real building. I want to capture all output this Makefile system generates, so I employ "make 2>&1 > make.log", but not all output messages end up in make.log. The messages generated by the shell scripts called from the sub-Makefiles are not recorded in make.log. Another curious thing: if I launch "make 2>&1 > make.log" from a Perl script, all output does get sent to make.log.
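The order of the redirections is significant here; a sketch of the difference:

Code:
# sends stderr to the terminal: 2>&1 copies stdout's *current*
# target (the terminal) before stdout is pointed at the file
make 2>&1 > make.log
# sends both streams to the file: stdout is moved first, then
# stderr is pointed at the same place
make > make.log 2>&1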
I want to send stdout and stderr to a logfile; moreover, I want to log stderr to a separate logfile as well, and print stderr to the screen. I searched around and tried:
Code:
$ command 2>&1 > log | tee -a log log.err
But then log contains all of stdout first, followed by stderr.
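A sketch that keeps stderr on its own path via process substitution (interleaving inside log is still not strictly guaranteed, since the two streams are buffered independently):

Code:
# stdout -> log; stderr -> log.err, log, and the screen
command >> log 2> >(tee -a log.err | tee -a log >&2)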
I'm using libxml2 to handle/manipulate some XML files. In order to check the consistency of an XML file, I have a DTD, and I'm using the xmlValidateDtd method to perform the check.
However, when an error occurs during the check (for example, an attribute is missing from an XML tag), libxml2 writes the error to stdout/stderr. For example:
Code:
/home/XML/FreeFour.xml:18: element CA: validity error : Element CA does not carry attribute maxlength
The method returns the right result (true or false depending on the check), but the errors that occur are written to stdout/stderr, and I actually don't want that.
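If a shell-level stopgap is acceptable while the in-process handling is sorted out, the messages can be discarded where the validator is invoked (validate_xml stands in for whatever binary runs the check):

Code:
# discard libxml2's complaints, which the example above shows
# going to stderr
./validate_xml FreeFour.xml 2>/dev/null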
I'm writing a script to execute bash commands in the PHP CLI. I would like to suppress errors from bash and write my own error message if an error occurs. So far I have this (assuming log.txt doesn't exist!):
Code:
tac log.txt 2>/dev/null
This works as expected: tac kicks up an error, but the error is suppressed. However, when I use this:
Code:
tac < log.txt 2>/dev/null
I get:
Code:
bash: log.txt: No such file or directory
The tac error is suppressed but bash still gives me a dirty error.
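The difference is that the failing open of log.txt is performed by the shell itself, and redirections are processed left to right; putting the stderr redirection first makes the shell's own complaint land in /dev/null as well:

Code:
# stderr is moved before the input redirection fails
tac 2>/dev/null < log.txt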
I have seen a post where someone explained the virtuality of stdout and stderr and said that they can be redirected with e.g. 2>file.txt, but this apparently is not working for me. I have a CUPS filter that writes with fprintf(stderr, ...).
I am having issues getting the output from a script logged to a file. I need the script to send both stderr and stdout to the same text file.
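The standard forms, for reference (myscript.sh and output.txt are placeholder names):

Code:
# send both streams to one file
./myscript.sh > output.txt 2>&1
# bash shorthand for the same thing
./myscript.sh &> output.txt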
I have several commands in a bash script, and in the middle of the script there is a block of commands whose output and error streams I want to redirect to a file. I could simply add '>> myfile.txt' to the end of every command, but is there a way to set the redirection before that block of commands, and then reset the streams to their original state at the end of the block?
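exec can redirect the shell's own streams for the duration of a block; a sketch that saves and restores them via spare descriptors (command1 and command2 stand in for the block):

Code:
exec 3>&1 4>&2            # save stdout and stderr
exec >> myfile.txt 2>&1   # redirect both for the block below
command1
command2
exec 1>&3 2>&4 3>&- 4>&-  # restore and close the saved copies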
I have a command line server that logs to stdout, which I start along the lines of ./server > log.txt
What I want to do is limit the size of log.txt, without modifying the server.
I am assuming there must already be some kind of tool that lets me do this, something where I can pass in my server, the output file, and a size limit? If so, can anyone enlighten me?
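One sketch with coreutils split, which reads the stream and cuts it into fixed-size pieces; Apache's rotatelogs is a purpose-built alternative that rotates on size or time:

Code:
# 10 MB pieces named log.txt.aa, log.txt.ab, ... as the server runs
./server | split -b 10M - log.txt.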
When iwconfig is redirected thus:

Code:
iwconfig >> wireless.txt

lines like "eth0 - no wireless extensions" are still output to the terminal.
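Those messages go to stderr, which is why the stdout redirection does not catch them:

Code:
iwconfig >> wireless.txt 2>&1         # capture both streams
iwconfig 2>/dev/null >> wireless.txt  # or drop the noise entirely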
I want the output of a program to go to 2 different files but not to standard out. Is there a way to do this in bash? I know that in Z shell it's really easy; something like: Code: echo "test" >> file1 >> file2 would work. But in Bash it doesn't seem that easy. I know that tee will send the output to 2 files, but it also sends it to STDOUT. Something like: Code: echo "test" | tee -a file1 file2 would put the word "test" in file1, file2, and STDOUT. Is there a way to send the output to just file1 and file2?
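Since tee always writes a copy to its stdout, the usual trick is simply to discard that copy:

Code:
echo "test" | tee -a file1 file2 > /dev/null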
I need to achieve a particular effect using bash's redirection facilities. I know that I can redirect a file to some program's standard input: [user@host]$ application < file.txt. The thing is, I'd like to know whether I can regain control of this program's input after the file's contents have been passed to it. In other words, I'd like to run a command similar to the above and then, instead of the application terminating, have it wait for further commands from standard input (the keyboard).
As I write this question, it occurred to me that I could probably write another application (or a script) that would first write some data to standard output and then act as echo, like: [user@host]$ stdin_proxy.sh | application. Would it work, and is there any better way to do it? There are a bunch of Googleable tutorials covering this issue, but they all amount to one piece of advice: "reopen the stdin after the file contents have been read".
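cat can play that proxy role by itself: "-" tells it to read its own stdin (the keyboard, when run interactively) after the file, so the application sees the file's contents first and then whatever you type:

Code:
cat file.txt - | application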
I'm planning to write a bash script that will generate some web stats reports, and I'm stuck at the beginning because I don't know how to read a directory's contents, put everything in a variable, compare the variable's value with the current date, and go further. More specifically...
I have /var/apache/log/. Here I have access logs by date (like access.log.24.06.2010.gz and so on).
How can I automatically zgrep (in a bash script) the previous day's .gz?
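A sketch assuming GNU date and the access.log.DD.MM.YYYY.gz naming shown above (the 'GET /' pattern is just a placeholder):

Code:
yesterday=$(date -d yesterday +%d.%m.%Y)
zgrep 'GET /' "/var/apache/log/access.log.$yesterday.gz"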
I once had a script that, when run, would find the first 800GB of files in a directory (including subdirectories) and write their names to a file (i.e.: ./800gb.sh > manifest.txt). I used this to create manifests of 800GB worth of data from large directories in order to dump them to tape (LTO4). I'm sure it's got to be a pretty simple script, but I am not very skilled at writing bash scripts.
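A minimal sketch of such a script, assuming GNU stat and a hypothetical /data root; it prints file names until the running total would exceed the limit:

Code:
#!/bin/bash
limit=$((800 * 1024 * 1024 * 1024))   # 800 GB in bytes
total=0
find /data -type f -print0 |
while IFS= read -r -d '' f; do
    size=$(stat -c %s "$f")           # file size in bytes
    (( total + size > limit )) && break
    (( total += size ))
    printf '%s\n' "$f"                # lands in manifest.txt via >
done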
At this point I want to redirect what I have in hand to a file but also... fork? or split? whatever the term is, so I can continue onward and pipe the results further into wc -l or sort or programX, without having to re-loop through that huge log file.
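tee is exactly that fork: one copy of the stream lands in the file, the other continues down the pipe (huge.log and errors.txt are placeholder names):

Code:
grep ERROR huge.log | tee errors.txt | wc -l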
Often in bash we read lines from stdin in a loop and implicitly discard the remaining stdin by terminating the loop. Is it possible to discard it without terminating the loop? It could lead to smaller code.
Here's an example which uses two loops, and below it the same algorithm assuming unwanted stdin can be discarded:
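A sketch of the one-loop idea: a throwaway cat drains whatever is left on stdin ("process" and the STOP sentinel are hypothetical):

Code:
while read -r line; do
    process "$line"
    if [[ $line == "STOP" ]]; then
        cat > /dev/null    # swallow the rest of stdin in one go
    fi
done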
I have a small program that reads stdin from a pipe using fgets. fgets blocks for the first line, but after that it will not block.
The code, my_echo.c:

#include <stdio.h>

int main(int argc, char **argv)
{
    char buf[2000];
    char *pc;

    printf("hello ");
    while (1) {
        buf[0] = '\0';
        pc = fgets(buf, sizeof(buf), stdin);
        if (pc != NULL)
            printf("%s ", buf);
    }
    return 0;
}
How it's called:
* In terminal window 1: ./my_echo < my_fifo
* In terminal window 2: echo "1234" > my_fifo
* In terminal window 1: "hello" prints, then "1234".
* Checking with ksysguard or top shows that my_echo is consuming 40% of CPU time.
Adding a few printfs shows that fgets is not blocking; it returns a null pointer. In terminal window 2: echo "qwerty" > my_fifo; in terminal window 1, qwerty prints. I want a read function that does in fact block so my program does not tie up CPU time; read does not block either.
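What is likely happening: once the last writer closes a FIFO, every subsequent read returns EOF immediately, so fgets spins. A shell-level workaround sketch is to park a dummy writer on the FIFO so the reader never sees EOF:

Code:
mkfifo my_fifo
sleep 1000000 > my_fifo &   # holds the FIFO open, never writes
./my_echo < my_fifo &
echo "1234" > my_fifo       # each echo now just delivers a line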
I need to write a script which will get a number from STDIN and then, with that number, echo a set number of questions (it's for a firewall config). Here's what I want the user to receive:
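A minimal sketch, with placeholder question text:

Code:
read -r count
for ((i = 1; i <= count; i++)); do
    echo "Question $i: allow traffic on port $i? (y/n)"
done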
I've given the output from `df` on AWK's stdin. But what I wonder is whether there's a way to get AWK to run `df` itself, produce the same output, and exit? It doesn't seem to be that simple. Here are some examples: this works for some reason, and I think only in Bash (nothing is required in the $()):
[code]...
Does AWK absolutely need *something* on stdin before it begins to process the data? Can it be made to open a file or stream internally, act upon that as though it were the stdin, and exit?
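awk can run a command itself and read its output with getline; and since a program consisting only of a BEGIN block never reads stdin, nothing is needed there at all:

Code:
awk 'BEGIN {
    while (("df -k" | getline line) > 0)
        print line          # handle each df line here
    close("df -k")
}'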
I'm trying to write a shell script that does FTP and downloads a file periodically; this script is called by a daemon running in the background.
The shell script, script.sh, is as follows:

Code:
yafc ftp://test:test@192.168.1.225:21 < commands

and the "commands" file is:

Code:
d Root/md5* /
quit
If I run script.sh it works just fine. But when the daemon software calls script.sh, the script sends an FTP login request to the server but never answers the username prompt or anything else.
I believe it is something about child process redirection, but I don't know how to deal with it.
This problem is not limited to yafc; it is the same with any FTP client, or any application like telnet.
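A hedged sketch of the usual fix: a process run from a daemon has no terminal, so give every stream an explicit target and use absolute paths (the paths here are placeholders):

Code:
#!/bin/sh
cd /path/to/workdir || exit 1
yafc ftp://test:test@192.168.1.225:21 \
    < /path/to/commands > /tmp/yafc.log 2>&1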
I want to put a keystroke into another virtual terminal's stdin. A simple 'echo p > /dev/tty7' causes a p to appear on the console of tty7, but does not make the app running on tty7 respond as though the p key had been pressed. Per the instructions of a fellow in the software forum I tried using an ioctl, but that returned a permissions error, even when I made the target VT's permissions 777 (and it had the same owner).
In this example, why does "blacklist" end up in the file blacklist while $a ends up on stdout?
[code]...
The desired result is a file containing one line per lsmod entry whose first word begins with snd_, each copied into the file preceded by the word "blacklist".
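A sketch producing that result in one pipeline; quoting keeps both words on the same redirected line (blacklist.conf is a placeholder name):

Code:
lsmod | awk '$1 ~ /^snd_/ { print "blacklist", $1 }' > blacklist.conf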