I have a server listening for incoming client connections. Once a client establishes an SSL connection with the server, the server waits on read() from the client. Only the client can close the connection. I want a timer in the server program so that it waits x seconds on read() and then disconnects the client.
I wanted to know how I can set a period of time for a TCP connection to wait on a blocking read for a request or response. Which system call or function can I use? Also, does anybody know a simple, quick reference on the web for socket programming that has lots of examples in it?
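One common approach (a sketch, not the only way): set SO_RCVTIMEO on the socket so a blocking read() gives up after the timeout, then close() the descriptor; select() with a timeout is the classic portable alternative. The helper name read_or_disconnect below is made up for illustration; with SSL, the timeout set on the underlying descriptor still applies to what SSL_read sits on top of.

Code:
#include <cerrno>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

// Wait at most 'secs' seconds for client data on 'fd'.
// Returns bytes read, 0 on orderly shutdown, or -1 after
// closing the connection on timeout.
ssize_t read_or_disconnect(int fd, char *buf, size_t len, int secs)
{
    struct timeval tv = { secs, 0 };
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    ssize_t n = read(fd, buf, len);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        close(fd);   // client sent nothing within 'secs': drop it
        return -1;
    }
    return n;
}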
I am writing a script that involves reading the content of a file present in a directory and/or its subdirectories. I know readdir returns all the file and directory names in a directory, but how do I check whether readdir is returning a file or a directory?
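In Perl the usual test is -d (directory) or -f (plain file) on the full path; note that readdir returns bare names, so prepend the directory first. Purely for illustration, here is the same stat-based check sketched in C++17 with std::filesystem:

Code:
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

int main()
{
    // Walk one directory and report which entries are subdirectories.
    for (const auto &entry : fs::directory_iterator(".")) {
        std::cout << entry.path().string()
                  << (entry.is_directory() ? "  [dir]" : "  [file]")
                  << "\n";
    }
    return 0;
}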
I am trying to write a Perl script which reads data from a file and parses it. The data in the file has the following syntax:
Code:
Device Physical Name     : Not Visible
Device Symmetrix Name    : 1234
Device Serial ID         : N/A
Attached BCV Device      : N/A
Device Capacity
[Code]...
Each unique record starts with "Device Physical Name", so I have a set of fields between one "Device Physical Name" and the next. I want to read each record starting from "Device Physical Name" up to the next "Device Physical Name". Of course the FS is ":", and I just want to print the info, or later put it in a CSV file.
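In Perl this is often done by changing the input record separator ($/) so each read returns one whole record. Purely to illustrate the parsing shape, here is a sketch in C++ (the input file name symdev.txt is made up): start a new CSV row whenever the "Device Physical Name" key appears, and collect the text after each line's first ":".

Code:
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream in("symdev.txt");     // hypothetical input file name
    std::string line, row;
    while (std::getline(in, line)) {
        std::string::size_type colon = line.find(':');
        if (colon == std::string::npos)
            continue;                   // e.g. a line with no ":" in it
        std::string key = line.substr(0, colon);
        std::string val = line.substr(colon + 1);
        // trim blanks around the value
        val.erase(0, val.find_first_not_of(" \t"));
        val.erase(val.find_last_not_of(" \t") + 1);
        if (key.find("Device Physical Name") != std::string::npos) {
            if (!row.empty())
                std::cout << row << "\n";   // flush the previous record
            row.clear();
        }
        row += (row.empty() ? "" : ",") + val;
    }
    if (!row.empty())
        std::cout << row << "\n";
    return 0;
}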
It's been a while since I logged on here! I've been trying my hand at a little Perl and have hit a brick wall. I'm using the ImageMagick module to manipulate some images. I can get the following to work without issue:
I am just learning Perl scripting. May I know how to simplify the code below, especially the part in red? I saw some examples on the internet that use the "next" statement.
Often in bash we read lines from stdin in a loop and implicitly discard the remaining stdin by terminating the loop. Is it possible to discard it without terminating the loop? It could lead to smaller code.
Here's an example which uses two loops, and below it is the same algorithm assuming unwanted stdin can be discarded.
I have a small program that reads stdin from a pipe using fgets. Now fgets blocks for the first line but after that it will not block.
The code, my_echo.c:

Code:
#include <stdio.h>

int main(int argc, char **argv)
{
    char buf[2000];
    char *pc;

    printf("hello\n");
    while (1) {
        buf[0] = '\0';
        pc = fgets(buf, sizeof(buf), stdin);
        if (pc != NULL)
            printf("%s", buf);
    }
    return 0;
}
How it's called:
* In terminal window 1: ./my_echo < my_fifo
* In terminal window 2: echo "1234" > my_fifo
* In terminal window 1: prints hello then 1234.
* Checking with ksysguard or top shows that my_echo is consuming 40% of CPU time.
Adding a few printf's shows that the fgets is not blocking and returns a null pointer.
* In terminal window 2: echo "qwerty" > my_fifo
* In terminal window 1: qwerty prints.
I want a read function that does in fact block so my program does not tie up CPU time; read does not block here either.
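A likely explanation: when echo exits and closes the FIFO, the reader is at end-of-file, so every later fgets() returns NULL immediately and the loop spins. A minimal sketch of one common fix is to detect EOF and reopen the FIFO, since opening a FIFO for reading blocks until the next writer appears (another option is opening it O_RDWR so EOF never occurs):

Code:
#include <stdio.h>

int main(void)
{
    char buf[2000];
    FILE *in = fopen("my_fifo", "r");   /* blocks until a writer opens it */

    if (in == NULL)
        return 1;
    for (;;) {
        if (fgets(buf, sizeof(buf), in) != NULL) {
            printf("%s", buf);
        } else {                        /* EOF: the writer closed its end */
            fclose(in);
            in = fopen("my_fifo", "r"); /* blocks until the next writer */
            if (in == NULL)
                return 1;
        }
    }
}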
I need to write a script which will get a number from STDIN and then, using that number, echo a set number of questions (it's for a firewall config). Here's what I want the user to receive.
I've given the output from `df` on AWK's stdin. But what I wonder is if there's a way to get AWK to run `df` itself, produce the same output, and exit? It doesn't seem to be that simple. Here are some examples. This one works for some reason, and I think only in Bash (nothing is required in the $()
[code]...
Does AWK absolutely need *something* on stdin before it begins to process the data? Can it be made to open a file or stream internally, act upon that as though it were the stdin, and exit?
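For what it's worth, AWK needs nothing on stdin if all the work happens in a BEGIN block; the usual idiom there is "df" | getline. Under the hood that is essentially popen(). A small C++-flavored sketch of the same mechanism, assuming df is on the PATH:

Code:
#include <cstdio>

int main()
{
    // Run df ourselves and read its output as a stream; stdin is unused.
    FILE *p = popen("df", "r");   // POSIX popen(), declared in <stdio.h>
    if (p == NULL)
        return 1;
    char line[512];
    while (fgets(line, sizeof(line), p) != NULL)
        fputs(line, stdout);
    return pclose(p) == -1 ? 1 : 0;
}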
I'm trying to write a shell script that does FTP and downloads a file periodically; the script will be called by a daemon running in the background.
The shell script "script.sh" is as follows:
Code:
yafc ftp://test:test@192.168.1.225:21 < commands

and the "commands" file is
Code:
d Root/md5* /
quit
If I run script.sh it works just fine. But when the daemon software calls script.sh, the script sends the FTP login request to the server, but then never answers the username prompt or anything else.
I believe it is something about child process redirection, but I don't know how to deal with it.
This problem is not only with yafc; it is the same with any FTP client, or any application like telnet and so on.
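One thing worth checking, just a guess from the symptoms: a daemon's working directory is often / and its stdin is closed or /dev/null, so the relative path `commands` may not resolve where you expect; using absolute paths in the script is the first fix to try. For illustration, this C++ sketch shows what the `< commands` redirection amounts to when spawning yafc (the /home/test/commands path is made up):

Code:
#include <fcntl.h>
#include <unistd.h>

int main()
{
    // Absolute path: relative names resolve against the daemon's cwd.
    int fd = open("/home/test/commands", O_RDONLY);
    if (fd < 0)
        return 1;               // this failing would explain the hang
    dup2(fd, STDIN_FILENO);     // the commands file becomes stdin
    close(fd);
    execlp("yafc", "yafc", "ftp://test:test@192.168.1.225:21", (char *)0);
    return 1;                   // reached only if exec fails
}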
I want to put a keystroke into another virtual terminal's stdin. A simple 'echo p > /dev/tty7' causes a p to appear on the console of tty7, but the app running on tty7 does not respond as though the p key had been pressed. Per the instructions of a fellow in the software forum I tried using an ioctl, but that returned a permissions error, even when I made the target VT's permissions 777 (and it had the same owner).
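The ioctl in question is presumably TIOCSTI ("simulate terminal input"), which pushes bytes into a terminal's input queue as if they were typed. On Linux it requires the terminal to be the caller's controlling terminal or CAP_SYS_ADMIN (i.e. run it as root), which would explain the EPERM: file permissions on the device are not enough. A minimal sketch:

Code:
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/tty7", O_RDWR);
    if (fd < 0)
        return 1;
    char c = 'p';
    // Inject one byte into tty7's input queue, as if the key were pressed.
    // Needs root/CAP_SYS_ADMIN; chmod 777 on the device does not help.
    if (ioctl(fd, TIOCSTI, &c) < 0)
        return 1;
    close(fd);
    return 0;
}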
I'm trying to write a program that will fork a series of FTP sessions. For each session there should be separate input and output files associated with stdin and stdout/stderr. I keep reading that I should be able to do this with dup2() in the child process before the execl(), but it's not working for me. Could someone please explain what I've done wrong? The program also has a 30-second sniper alarm, for testing and for killing FTPs that go dormant too long.
The code (ftpmon.c):

Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
[code]....
The output:
Code:
$ ftpmon
Connected to gila-crstest.gilacorp.com (172.16.20.8).
220 (vsFTPd 2.0.1)
ftp> waitpid(): Interrupted system call
Why am I getting the ftp> prompt? If the dup2() works, shouldn't it be taking input from my script and not my terminal? Instead it does nothing, and winds up getting killed after 30 seconds. The log file is created, but it's empty after the run.
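Hard to be sure without the elided code, but two hints. Seeing the ftp> prompt on the terminal means stdout was still the terminal when the exec ran, so the dup2() calls probably happened in the parent, or too late, rather than in the child before execl(). And "waitpid(): Interrupted system call" is just your 30-second SIGALRM interrupting waitpid() with EINTR. A sketch of the usual child-side ordering (the ftp.in/ftp.log names are made up):

Code:
#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    pid_t pid = fork();
    if (pid == 0) {                   /* child */
        int in  = open("ftp.in",  O_RDONLY);
        int out = open("ftp.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0)
            _exit(1);
        dup2(in,  STDIN_FILENO);      /* all three before the exec */
        dup2(out, STDOUT_FILENO);
        dup2(out, STDERR_FILENO);
        close(in);
        close(out);
        execlp("ftp", "ftp", "-n", "gila-crstest.gilacorp.com", (char *)0);
        _exit(1);                     /* exec failed */
    }
    int status;
    waitpid(pid, &status, 0);         /* returns -1/EINTR if SIGALRM fires */
    return 0;
}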
I'm working on an application used for backup/archiving. It can archive contents on block devices and tapes, as well as regular files. The application stores data in hard-packed, low-redundancy heaps, with multiple indexes pointing out uniquely stored (shared) fractions of the heap.
The application also supports taking, and reverting to, snapshots of total storage on several computers running different OSes, as well as simple archiving of single files. It uses Hamming-code diversity to defeat disk rot, instead of RAID arrays, which have proven to become pretty much useless once they climb over some terabytes in size. It is intended to be a distributed CMS (content management system) for a diversity of platforms, with a focus on secure storage/archiving. I have a Unix shell tool that acts like gzip, cat, dd etc. in being able to pipe data between applications.
Example:
dd if=/dev/sda bs=1b | gzip -cq > my.sda.raw.gz
The tool can handle different files in a struct array, like:
Is there a better way of getting the file name of the redirected file (respecting the fact that a redirection pipe may not have such a thing as a file name at all)? Should I work with inodes instead, and then take a completely different approach when porting to non-Unix platforms? Why isn't there a system call like get_filename(stdin)?
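One hedged answer for Linux: fstat() the descriptor to learn what it is (regular file, FIFO, block device) and its inode, then readlink("/proc/self/fd/0") to recover a name when one exists; an anonymous pipe comes back as "pipe:[inode]", which is exactly the no-name case. Sketch:

Code:
#include <cstdio>
#include <sys/stat.h>
#include <unistd.h>

int main()
{
    // What is stdin? fstat() gives the file type and inode.
    struct stat st;
    if (fstat(STDIN_FILENO, &st) < 0)
        return 1;
    printf("inode %lu: %s\n", (unsigned long)st.st_ino,
           S_ISFIFO(st.st_mode) ? "pipe (no real name)" :
           S_ISREG(st.st_mode)  ? "regular file" : "other");

    // Linux-specific: /proc/self/fd/0 is a symlink to the source,
    // e.g. "/home/user/my.sda.raw.gz" or "pipe:[123456]".
    char name[4096];
    ssize_t n = readlink("/proc/self/fd/0", name, sizeof(name) - 1);
    if (n > 0) {
        name[n] = '\0';
        printf("name: %s\n", name);
    }
    return 0;
}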
If you have any input on this, or some questions, then please don't hesitate to post in this thread. To add some offtopic to the thread, here is a performance tip: when shuffling data on streams, avoid just using some arbitrary record length (like 512 bytes). Use stat() to get the recommended block size from stat.st_blksize, and use copy buffers of that size to get optimal throughput in your programs.
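For instance, a plain copy loop sized from st_blksize might look like this (falling back to 4096 if the field is unhelpful):

Code:
#include <cstdlib>
#include <sys/stat.h>
#include <unistd.h>

int main()
{
    // Size the copy buffer from st_blksize instead of guessing 512.
    struct stat st;
    if (fstat(STDIN_FILENO, &st) < 0)
        return 1;
    size_t bufsize = st.st_blksize > 0 ? (size_t)st.st_blksize : 4096;
    char *buf = (char *)malloc(bufsize);
    if (buf == NULL)
        return 1;
    ssize_t n;
    while ((n = read(STDIN_FILENO, buf, bufsize)) > 0)
        if (write(STDOUT_FILENO, buf, (size_t)n) != n)
            return 1;
    free(buf);
    return n < 0 ? 1 : 0;
}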
I'm writing a script to execute bash commands in the PHP CLI. I would like to suppress errors from bash and write my own error message if an error occurs. So far I have this (assuming log.txt doesn't exist!):
Code:
tac log.txt 2>/dev/null
This works as expected: tac kicks up an error but the error is suppressed. But when I use this:
Code:
tac < log.txt 2>/dev/null
I get:
Code:
bash: log.txt: No such file or directory
The tac error is suppressed but bash still gives me a dirty error.
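The difference, as far as I can tell: redirections are performed by the shell itself, left to right, before the command runs. In `tac < log.txt 2>/dev/null` the `< log.txt` open fails first, bash prints the message on its own stderr, and `2>/dev/null` is never applied; tac never even starts. Grouping makes the outer redirection apply first, e.g. { tac < log.txt; } 2>/dev/null. A C++ sketch of what the shell does for the failing command:

Code:
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    // Bash processes "tac < log.txt 2>/dev/null" left to right:
    int in = open("log.txt", O_RDONLY);
    if (in < 0) {
        // open() failed, so the shell reports it on ITS OWN stderr and
        // aborts; the 2>/dev/null step and the exec of tac never happen.
        perror("bash: log.txt");
        return 1;
    }
    dup2(in, STDIN_FILENO);
    close(in);
    int devnull = open("/dev/null", O_WRONLY);   // 2>/dev/null, applied second
    if (devnull >= 0) {
        dup2(devnull, STDERR_FILENO);
        close(devnull);
    }
    execlp("tac", "tac", (char *)0);
    return 127;
}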
I am trying to learn C++. I implemented a simple archive program, and I am in a situation in which the user is prompted by a menu to make a choice. So I have some cout instructions to illustrate the possible choices, and then:
int choice;
cin >> choice;
and everything works fine. I introduced this code in a "while" loop that checks whether the choice made by the user is valid or not:
bool check = true;
int choice;
while (check) {
    cin >> choice;
    if (the choice is valid) {
        ...;
        check = false;
    } else {
        cout << "please make another choice";
    }
}
What is happening is that if by mistake the user enters a character in place of a number, the loop repeats indefinitely, because the program, when it gets to the "cin" instruction, does not pause to wait for new input.
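That is cin's failure mode: a failed extraction puts the stream in a fail state and leaves the offending characters in the buffer, so every later cin >> choice fails instantly without reading anything. The usual fix is to clear the state and discard the bad input; a minimal sketch of the loop (the validity test is left as a placeholder):

Code:
#include <iostream>
#include <limits>
using namespace std;

int main()
{
    bool check = true;
    int choice = 0;
    while (check) {
        cout << "enter choice: ";
        if (cin >> choice) {
            check = false;   // got a number; apply your validity test here
        } else {
            cin.clear();     // reset the fail state...
            cin.ignore(numeric_limits<streamsize>::max(), '\n');  // ...and drop the bad line
            cout << "please make another choice\n";
        }
    }
    cout << "you chose " << choice << "\n";
    return 0;
}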
I'm writing a script that needs to spawn 2 or more processes and wait for their return statuses. I have a method to do this by waiting on each process individually, like this:
I'm building a simple(?) socket server using threads to serve up a few requests. The spec is such that I have to listen to three ports at once, so I decided to use pthread to create three separate threads that would wait for connections, then spawn new threads to handle them.
The problem is that when I do this, for some reason the program never enters the wait loop and instead terminates (all three threads did get created, since the messages get printed properly). It gets to the line which prints "???", but not the line after the accept() call. I don't see an open port when I check for one either, so I'm 99% sure they're terminating. Basically I have a main() method which has three calls to pthread_create, which should result in three threads being run that all wait for connections (listenOnPort). After each thread creation I print some info to make sure it's actually being created.
By the way, when I just run listenOnPort without threading, the server appears to enter the loop correctly and seems to be waiting for requests. It's only when I run the functions as threads that the problem seems to happen. The source is attached below. Any help will be appreciated. Much of the code is borrowed from a website (I can't post it because I am new here). You need not worry about the handler_ methods, because those are just methods that are run by the threads themselves.
Also, the original source was in C and I changed it to C++. Should I just use C? server.h:

Code:
/*
 * server.h
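Those symptoms usually mean main() returns right after the three pthread_create() calls: returning from main() exits the whole process, taking the listener threads with it before any of them reaches accept(). The usual fix is to join the threads. A self-contained sketch, compiled with g++ -pthread (listenOnPort here is a stand-in stub that just idles; the ports are made up):

Code:
#include <cstdio>
#include <pthread.h>
#include <unistd.h>

// Stand-in for the listenOnPort() from the post; real code would
// socket(), bind(), listen() and loop on accept() here.
void *listenOnPort(void *arg)
{
    printf("listening on %d\n", *(int *)arg);
    for (;;)
        pause();
    return NULL;
}

int main()
{
    static int ports[3] = { 5000, 5001, 5002 };
    pthread_t tid[3];

    for (int i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, listenOnPort, &ports[i]);

    // Without these joins main() returns immediately, and returning
    // from main() kills the process, all three listeners included.
    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);
    return 0;
}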
I have a doubt about signals in C programming. I have written this little program to illustrate it. It creates a child process with fork and, when the child ends, receives the SIGCHLD signal and waits for its termination. OK, quite easy, BUT when I execute this code the SIGCHLD signal seems to be received twice: the first time as an error (the call returns -1), and the second time to finish the child process. I don't understand the meaning of the first received signal. Why is it generated? Is the code wrong? (If you add SIGINT and press Ctrl+C during the execution, it also receives two signals instead of one.)
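Without the code this is a guess, but the classic explanation: the first -1 is not a second signal at all. When SIGCHLD arrives while the parent sits in a blocking call (wait(), sleep(), pause(), read()...), the handler runs and the interrupted call returns -1 with errno == EINTR; the actual reap then succeeds on the retry. For calls like wait() and read(), installing the handler with sigaction() and SA_RESTART makes the kernel restart them instead. A small demonstration:

Code:
#include <cerrno>
#include <cstdio>
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static void on_chld(int sig)
{
    (void)sig;   // just observe the delivery
}

int main()
{
    struct sigaction sa = {};
    sa.sa_handler = on_chld;
    sigaction(SIGCHLD, &sa, NULL);

    if (fork() == 0) {   // child: exit after a second
        sleep(1);
        _exit(0);
    }
    // The blocking call returns -1 with errno == EINTR when SIGCHLD
    // arrives; that -1 is the interrupted call, not an extra signal.
    if (pause() < 0 && errno == EINTR)
        printf("pause() interrupted by SIGCHLD (EINTR)\n");
    if (wait(NULL) > 0)
        printf("child reaped exactly once\n");
    return 0;
}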
I just downloaded Tk-804.028 and tried to install it (according to the README.linux), but I get:
Code:
> perl Makefile.PL
/opt/ActivePerl-5.10/bin/perl-static is installed in /opt/ActivePerl-5.10/lib okay
PPM for perl5.010001
Test Compiling config/perlrx.c
Test Compiling config/pmop.c
Test Compiling config/pregcomp2.c