Programming :: Communicating Data Between Different Processes
Nov 22, 2010 - I would like to send struct data from process A to process B, but I don't know what the best way is. I have read about IPC, but there are a lot of ways to do it.
I am searching for an interprocess communication mechanism that is platform and language independent, supports two-way communication among different processes, and, most importantly, can store data until the receiver receives it.
I wrote a serial port communication program to access a piece of equipment.
int main(void)
{
int fd = 0;
int nread = 0, i = 0, nwrite = 0, tmpread = 0, m = 0, n = 0;
[code]....
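For reference, a minimal sketch of opening and configuring a serial port with termios; the device path and baud rate are assumptions, not taken from the post above:
Code:
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <stdio.h>

/* Open a serial device in raw 8N1 mode; "/dev/ttyS0" and B9600 are assumptions. */
int open_serial(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return -1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                  /* raw mode: no echo, no line editing */
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tio.c_cflag |= (CLOCAL | CREAD);  /* ignore modem control, enable receiver */
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}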
I am trying to automate some directory naming when we're manually running scripts and using tee to direct the output to a file (log). Right now this is what we do:
Code:
./some_script.sh 2>&1 | tee /home/user/some_dir/logs/manual/some_script_20110216_1628.log
As a matter of laziness, and to keep the log files consistently named, I'd like to create a function to pipe to so that it does all the naming. Here is how I envision the command running:
Code:
./some_script.sh 2>&1 | myfunc
And what the logfile name should look like (in the right directory):
Code:
some_script_20110216-1628.log
I was thinking of adding a function to our profile to handle this. Just in testing, I was trying to streamline it right on the command line, but I'm having some difficulty getting the name of the script that is pushing data over the pipe. Here is what I've tried:
Code:
./some_script.sh 2>&1 | tee $(cd ../logs/manual; pwd)/$0_$(date +%Y%m%d)-$(date +%H%M).log
but that created a file named
"bash_20110216-1628.log"
I have 3 processes to be executed in a particular sequence.
ProcessA
ProcessB
ProcessC
The requirement is that all the processes should run as background processes.
ProcessA talks to ProcessB and ProcessC using sockets.
ProcessB talks to ProcessA only, using sockets.
ProcessC talks to ProcessA only, using sockets.
[Code].....
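A sketch of a launcher for that sequence, assuming ProcessA must be listening on its sockets before B and C start; the binary paths and the delay are assumptions:
Code:
#!/bin/bash
./ProcessA &
sleep 1        # crude; polling ProcessA's listening socket would be more robust
./ProcessB &
./ProcessC &
wait           # keep this launcher alive until all three exit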
I am using the NCTUns simulator and having a problem while receiving a message from an OBU. Sending a message works properly, but receiving fails. I am using exactly the same function the developers use in the demos, but still no luck. I have copied the code below and attached screenshots.
sendto() and recvfrom() are used for message transfer; both return a value greater than -1 when they execute successfully. Please have a look at the screenshots. agentClientReportStatus is the built-in packet format I am using here; the fields I filled in manually are in the code below.
agentClientReportStatus *message,*mssg;
int remainTime, n, n2, n3, i, sendingaddress, value;
sockaddr_in cli_addr;
timeval now;
[Code].....
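For comparison, a sketch of a correct recvfrom() call. One common cause of failure is an uninitialized address length: it must be set to the size of the address buffer before the call. Buffer handling here is an assumption:
Code:
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>

ssize_t receive_one(int sockfd, char *buf, size_t len)
{
    struct sockaddr_in peer;
    socklen_t peerlen = sizeof(peer);       /* must be initialized */
    ssize_t n = recvfrom(sockfd, buf, len, 0,
                         (struct sockaddr *)&peer, &peerlen);
    if (n < 0)
        perror("recvfrom");                 /* errno explains the failure */
    return n;
}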
I have been trying out serial and parallel data communication on Fedora 13 Beta. I can easily see the list of serial ports through:
Code:
setserial -g /dev/ttyS[0-3]
And can also write some data through:
Code:
ll > /dev/ttyS0
But I am unable to see the parallel ports the same way.
I can see something like:
Code:
/dev/parport0
But when I try out:
ll > /dev/parport0
It throws an error.
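A shell redirect to /dev/parport0 fails because ppdev requires the port to be claimed via ioctl() before any data transfer. A C sketch, assuming the ppdev module is loaded and you have permission on the device:
Code:
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ppdev.h>
#include <stdio.h>

int main(void)
{
    int fd = open("/dev/parport0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, PPCLAIM) < 0) { perror("PPCLAIM"); return 1; }

    unsigned char byte = 0x55;              /* arbitrary test pattern */
    ioctl(fd, PPWDATA, &byte);              /* write to the data lines */

    ioctl(fd, PPRELEASE);
    close(fd);
    return 0;
}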
I have this JSON-encoded data, which I want to parse efficiently (i.e. fast, with minimal system resources). I don't want to use a/the shell any more than necessary.
Here's a snippet of what the raw serial data looks like:
Code:
[{"num":1,"name":"1","visible":false,"focused":false,"rect":{"x":0,"y":0,"width":1680,"height":1050},...blah blah..
So that's fine. I currently have at least two ways of parsing it:
This Perl method (which I don't much care for, because I'm not much into Perl, and because the output is not much more useful than the raw form -- imho it's even more cryptic):
Code:
sasha@reactor: <produce data> | perl -MData::Dumper -MJSON::XS -E 'say Dumper(decode_json <>)'
$VAR1 = [
{
code....
# Which means: at x=1680, draw a 20x20px grey block, then back up -10px and draw a "1" on the block.
I figure that the two awks I've used can be combined into one, but I began having problems with removing the single quotes when I did it within the awk, which is why I stuck a `tr` in the middle.
I know this sucks. So, if someone has some ideas or thoughts on something else to do with this JSON data that's less convoluted than where I'm currently going, I'll be happy to hear about it.
PS - I know this whole post is possibly hard to understand -- if you need more info (you mean you're interested in this mess??), just ask.
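One less convoluted option, if installing an extra tool is acceptable, is jq, a dedicated command-line JSON processor. A sketch against the snippet above (field names are taken from it, but the filter itself is an assumption about what you want out):
Code:
# Print num, x and width for every visible entry; .[] iterates the top-level array.
<produce data> | jq -r '.[] | select(.visible) | "\(.num) \(.rect.x) \(.rect.width)"'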
I have been able to use bash to initiate a Google search via Firefox. I would like to either copy the source page to a file via wget, or send Firefox shortcuts to the other terminal's Firefox search page to put the HTML file in a directory. I seem to remember, perhaps incorrectly, that there exist hexadecimal codes for each keyboard shortcut in Firefox. Maybe these could be echoed from bash to the Firefox search page.
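Fetching the results page with wget might look like this sketch; the query and user-agent string are assumptions, and Google may still block non-browser requests:
Code:
wget -U "Mozilla/5.0" -O /tmp/search.html "https://www.google.com/search?q=linux+ipc"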
I have two classes, for argument's sake A and B. A implements the core functionality; B is an encapsulated data structure. If you imagine this situation:
[code]...
From within B's member functions, I would like to access the public function() in class A. This is not an inheritance issue; they are two discrete classes with radically different functionality. Class A makes an object of B.
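A common way to wire this up (a sketch; every name besides function() is an assumption) is to give B a back-pointer to the A that owns it:
Code:
#include <iostream>

class A;                                    // forward declaration

class B {
public:
    explicit B(A *owner) : owner_(owner) {}
    void doWork();                          // defined once A is complete
private:
    A *owner_;                              // back-pointer to the owning A
};

class A {
public:
    A() : b_(this) {}                       // hand B a pointer to ourselves
    void function() { std::cout << "A::function()\n"; }
    B &data() { return b_; }
private:
    B b_;
};

void B::doWork() { owner_->function(); }    // reach back into A

int main() {
    A a;
    a.data().doWork();                      // prints "A::function()"
}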
I would like to know if I could get help writing a character device driver for communication between two PCs in Linux using the RS-232 serial port.
How am I supposed to use this guide? The doc states different message lengths and formats; as a programmer, how should I utilize this information? Just FYI, my question is a general one and does not necessarily target just netxms -- it could be any other open source project as well.
I have an academic simulator and I want to visualize its output while the simulation is happening. However, I want to separate the visualization and simulation modules. The simulation data will be held in an array of a size around 0.5M and will be read-only to the visualization software (but updated regularly by the simulator).
- In the past I have used shared memory to share small variables between two applications (a sketch of the POSIX approach for a larger array follows after this list).
- TCP/IP adds the option of having the simulator and visualization applications on separate machines but the implementation will be more difficult.
- I have also thought about an abstraction layer which would allow me to replace the communication/interconnection layer with other methods later (file/network/shared memory/pipe).
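A sketch of the shared-memory route scaled up to the array described above; the segment name "/simdata" and the element count are assumptions, and the program may need to be linked with -lrt on older glibc:
Code:
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

#define N_ELEMS (512 * 1024)

int main(void)
{
    /* Simulator side: create, size, and map the segment.
     * The visualizer would shm_open the same name with O_RDONLY. */
    int fd = shm_open("/simdata", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, N_ELEMS * sizeof(double)) < 0) { perror("ftruncate"); return 1; }

    double *data = mmap(NULL, N_ELEMS * sizeof(double),
                        PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    data[0] = 42.0;           /* simulator updates the array in place */
    munmap(data, N_ELEMS * sizeof(double));
    close(fd);
    return 0;
}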
I have a shell script to identify whether the process is running or not. If the process is not running, then I execute another script file to run my application. Below is my script, saved as monitorprocess.sh:
Code:
#!/bin/bash
result=$(ps -ef | grep -v grep | grep "applicationname.sh" | awk '{print $2}')
echo $result
if [ "$result" == "" ];
[code]...
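A sketch of the same monitor using pgrep, which avoids having to filter out the grep process itself; the restart path is an assumption:
Code:
#!/bin/bash
if ! pgrep -f "applicationname.sh" > /dev/null; then
    /path/to/start_application.sh &   # hypothetical restart script
fi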
Is there any difference in CPU usage between a process started from init.rc (run automatically at boot) and a manually started process? Will both have the same priority by default?
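One way to check is to compare the nice value and priority of the two instances directly; "myprocess" below is a placeholder:
Code:
ps -o pid,ni,pri,comm -p "$(pidof myprocess)"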
I tried googling but didn't get any answer for this. I have a process called "abc", running with PID "123". I have a putty session open with PID "999". I issue kill -TERM 123 from the putty session. Before dying, my process "abc" should catch the PID of the terminal that sent it the TERM signal. Is there any way to find this out?
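The sender's PID can be caught with a SA_SIGINFO handler: the kernel fills siginfo_t's si_pid with the PID of the sending process (here that would be the kill command; its parent would be the shell in the putty session). A sketch:
Code:
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_term(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    /* printf is not async-signal-safe; acceptable for a demo only */
    printf("SIGTERM from pid %d\n", (int)info->si_pid);
    _exit(0);
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_sigaction = on_term;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGTERM, &sa, NULL);
    for (;;)
        pause();               /* wait for the signal */
}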
I am using read() in C++ to get data from a serial port. However, if no data is available on the serial port, the function blocks until data arrives. Example code:
//------------------------------------------------------------
char m_readBuffer[255] = {0};
char* p_curChar = m_readBuffer;
[code]...
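A sketch of avoiding the block with select(): wait up to a timeout for data before calling read(). The one-second timeout is an assumption:
Code:
#include <sys/select.h>
#include <unistd.h>

// fd is the already-open serial port descriptor.
ssize_t readWithTimeout(int fd, char *buf, size_t len)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);

    timeval tv;
    tv.tv_sec = 1;              // timeout is an assumption
    tv.tv_usec = 0;

    int ready = select(fd + 1, &readfds, nullptr, nullptr, &tv);
    if (ready <= 0)
        return ready;           // 0 = timeout, -1 = error
    return read(fd, buf, len);  // data is waiting, so this won't block
}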
I have some data files that should be distributed with my program. Using dist_pkgdata_DATA in Makefile.am, I get these files installed to /usr/local/share/package-name. The problem is that the data is read-only, and my program needs to modify it. Playing with the dist_sharedstate_DATA, dist_localstate_DATA, and dist_data_DATA variables, I got different installation directories, like /usr/local/com and /usr/local/var, but the data is always read-only.
How can I distribute modifiable data files with my package? I need some common directory for all users, or maybe local data in a user directory.
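One approach is to keep the read-only master copy under pkgdatadir and create a writable copy under $(localstatedir) with an install hook. A Makefile.am sketch; file and directory names are assumptions (recipe lines must be indented with tabs):
Code:
dist_pkgdata_DATA = defaults.dat
statedir = $(localstatedir)/lib/$(PACKAGE)

install-data-hook:
	$(MKDIR_P) $(DESTDIR)$(statedir)
	cp $(DESTDIR)$(pkgdatadir)/defaults.dat $(DESTDIR)$(statedir)/
	chmod u+w,g+w $(DESTDIR)$(statedir)/defaults.dat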
I'm new to these forums, so I'm not sure this is the right place for this topic. My question is: I have a PHP script that runs on an Apache2 web server (www-data). From this script, I want to launch a process that stays alive all the time, while the parent script keeps on going. So I think I will need to run a command like 'at' to put the process on a queue, so the script can continue and finish without waiting for the process to stop. But it seems like I will need to run the 'at' command as a different user, because www-data is blocked from using the 'atd' process. I'm not sure about that. Does anybody know how this could be done?
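An alternative to 'at' is to detach the worker from Apache entirely, so the PHP script returns immediately. A sketch; the worker path is an assumption:
Code:
// nohup + output redirection + & lets exec() return at once
// instead of waiting for the worker to finish.
exec('nohup /usr/local/bin/worker.sh > /dev/null 2>&1 &');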
Sometimes I notice high upload speeds for 10 minutes or so. At the time of the screenshot I was sitting in a public wireless place; only Chromium was open, and I don't see any reason why there should be sustained upload speeds. Is there a GUI or CLI tool I can use to find out which process is using the internet?
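On the CLI side, nethogs groups bandwidth usage by process. A sketch, assuming the package is installed and wlan0 is the wireless interface:
Code:
sudo nethogs wlan0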
There is a text-based game in the Ubuntu repos called gomoku (just five-in-a-row); it comes with the package bsdgames. The manual page [URL] lists an option (-b) to run it in the background. I want to try that, and if I understand how it works, create a simple graphical front-end. When I start the program with:
Code:
gomoku -b
it starts and remains active; the terminal does not return to the prompt, which is OK as the command has not finished. The manual says the program reads from stdin, and this might sound stupid, but how do I get anything there?
I've tried piping an echo command to gomoku, which works but ends the program after it receives input.
Code:
echo "black" | gomoku -b
just finishes. After that, when you type another command like:
Code:
echo "justsometext" | gomoku -b
gomoku says it expects either black or white as input. So it forgot the previous "black", because it is a new instance.
How do I pass text to an already running gomoku?
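A named pipe (FIFO) keeps one gomoku instance alive and lets you feed it input over time. A sketch; the pipe path and the move "h8" are assumptions:
Code:
mkfifo /tmp/gomoku_in
gomoku -b < /tmp/gomoku_in &
exec 3> /tmp/gomoku_in    # hold a writer open so gomoku never sees EOF
echo "black" >&3          # first input to the running instance
echo "h8" >&3             # later input goes to the same instance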
How can I suspend and resume any running process (like yes, vim, a data copy, etc.) from any terminal? In some instances fg and bg work; in others I use kill -STOP pid and kill -CONT pid.
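For the general case, a sketch of suspending and resuming an arbitrary process from another terminal (PID 1234 is a placeholder):
Code:
kill -STOP 1234    # freeze the process; SIGSTOP cannot be caught or ignored
kill -CONT 1234    # resume it exactly where it left off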
I want to kill the parent process after the fork() call. But if I kill the parent process with exit(0), the main() thread is terminated as well, so the child process doesn't work anymore. Is there any way to kill only the parent process without affecting the child process?
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
[code]....
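Note that exit(0) in the parent normally does not kill the child: the child is reparented to init and keeps running. A sketch demonstrating this, with setsid() in the child guarding against a SIGHUP from a closing terminal:
Code:
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                 /* child */
        setsid();                   /* detach from the controlling terminal */
        sleep(5);                   /* outlive the parent */
        printf("child %d still alive\n", (int)getpid());
        return 0;
    }
    printf("parent %d exiting\n", (int)getpid());
    exit(0);                        /* child keeps running */
}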
Consider the following code:
Code:
int main()
{
int i=0;
pid_t pid;
for(i=0;i<2;i++)
[code]....
I get the following output:
Parent: chid_pid=4356 i=0 parent's pid=4355
This is child 4356 i=0
This is child 4357 i=1
[code]....
Instead of the two child processes I expect, I can observe three. This is because child process 4356 creates its own child. Why are all the messages of the type "This is child X i=Y" printed one right after another? How exactly does fork work? Is this affected by the fact that I have a dual-core processor?
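A sketch of why three children appear: the child forked at i=0 resumes the same loop and forks again at i=1. Returning from the child immediately gives exactly two children; this is independent of how many cores you have:
Code:
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    for (int i = 0; i < 2; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            printf("This is child %d i=%d\n", (int)getpid(), i);
            return 0;             /* child stops here; no grandchildren */
        }
        printf("Parent: chid_pid=%d i=%d parent's pid=%d\n",
               (int)pid, i, (int)getpid());
    }
    while (wait(NULL) > 0)        /* reap both children */
        ;
    return 0;
}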
I want a process that can operate as both a TCP echo server and a UDP echo server. The process should provide service to many clients at the same time, yet be a single process that does not start up any other threads.
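A sketch of the single-process approach with select(): one TCP listener and one UDP socket in the same readiness set. Port 7 is an assumption, error handling is trimmed, and a full version would also keep accepted TCP connections in the set instead of serving one at a time:
Code:
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int tcp = socket(AF_INET, SOCK_STREAM, 0);
    int udp = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7);                 /* echo port: an assumption */
    bind(tcp, (struct sockaddr *)&addr, sizeof(addr));
    bind(udp, (struct sockaddr *)&addr, sizeof(addr));
    listen(tcp, 8);

    char buf[1024];
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(tcp, &rfds);
        FD_SET(udp, &rfds);
        select((tcp > udp ? tcp : udp) + 1, &rfds, NULL, NULL, NULL);

        if (FD_ISSET(udp, &rfds)) {           /* echo one datagram back */
            struct sockaddr_in peer;
            socklen_t plen = sizeof(peer);
            ssize_t n = recvfrom(udp, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&peer, &plen);
            if (n > 0)
                sendto(udp, buf, n, 0, (struct sockaddr *)&peer, plen);
        }
        if (FD_ISSET(tcp, &rfds)) {           /* echo for one connection */
            int c = accept(tcp, NULL, NULL);
            ssize_t n;
            while ((n = read(c, buf, sizeof(buf))) > 0)
                write(c, buf, n);
            close(c);
        }
    }
}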
I want to set up BASH scripts on the server to automatically download and process data, and then upload it to my website. Is it even possible? Do servers allow website owners to place BASH scripts that can run automatically, or keep running indefinitely?
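On most shared hosts this is done with cron rather than an indefinitely running script. A sketch of a crontab entry that runs hourly; the paths are assumptions:
Code:
0 * * * * /home/user/bin/fetch_and_process.sh >> /home/user/logs/cron.log 2>&1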
I am trying to find a generic way to convert a string to other primitive data types. To achieve this, I used templates. But I am getting an error I cannot resolve, and the reported error message is also clueless.
Code
====
#include <vector>
#include <iostream>
#include <string>
[Code].....
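A sketch of one common way to write such a conversion with std::istringstream; the names are assumptions:
Code:
#include <sstream>
#include <string>
#include <iostream>

// Generic string-to-primitive conversion; returns false if parsing fails.
template <typename T>
bool fromString(const std::string &s, T &out)
{
    std::istringstream iss(s);
    return !(iss >> out).fail();
}

int main()
{
    int i = 0; double d = 0.0;
    fromString(std::string("42"), i);     // i == 42
    fromString(std::string("3.14"), d);   // d == 3.14
    std::cout << i << " " << d << "\n";
}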
I tried to get a process ID using pidof. It gave no error, just blank output at the console:
Code:
$ pidof -s instance1
$
But when I use ps -ef, I get the process ID:
Code:
$ ps -ef | grep instance1
root 4174 21661 0 06:52 pts/1 00:00:00 grep instance1
provgw 30220 30219 28 06:46 pts/1 00:01:44 /usr/java/jdk1.6.0_18/bin/java -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
[Code].....
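pidof matches only the executable name, and this process runs as "java", so pidof instance1 finds nothing. pgrep -f matches against the whole command line instead:
Code:
pgrep -f instance1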
I'm programming a software system that consists of multiple processes. It is programmed in C++ under Linux, and the processes communicate using Linux shared memory.
Usually in software development, performance optimization is done in the final stage. Here I came to a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one physical CPU), it was only able to use 3 cores, thus wasting 25% of the CPU power on the former and more than 60% on the latter. After much research, and having ruled out mutex and lock contention, I found out that the time was being wasted on shmdt/shmat calls (detaching from and attaching to shared memory segments). After some more research, I found out that these CPUs, usually AMD Opteron and Intel Xeon, use a memory architecture called NUMA, which basically means that each processor has its own fast "local memory", and accessing memory belonging to other CPUs is expensive.
After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread in them. This seems to kill performance, as processes are constantly accessing memory belonging to other processes.
Now, the question is: is there any way to force groups of processes to execute on the same CPU? I don't mean forcing them to always run on one particular processor (I don't care which one they are executed on), although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule this "brother" process (the process with which it communicates through shared memory) on that same processor, so that performance is not penalized.
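Linux exposes exactly this through CPU affinity: sched_setaffinity() from code, or the taskset/numactl tools from a launcher script. A sketch in which both "brother" processes pin themselves to the same core (the CPU number 2 is an assumption):
Code:
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);               /* both brothers pick the same CPU */
    if (sched_setaffinity(0, sizeof(set), &set) < 0) {  /* pid 0 = self */
        perror("sched_setaffinity");
        return 1;
    }
    /* ... attach the shared memory and do the work on this CPU ... */
    return 0;
}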