General :: Running Program In Background In Child Emulator Process
Aug 26, 2009
I wonder how one can, if at all, run an X program in the background *in a terminal emulator sub-shell process*. What I mean is launching a program in an emulator, e.g., with xterm -e gedit
but with gedit (in this example) running in the background from inside the xterm sub-process, so that the xterm will accept other commands. In the above, gedit runs in the foreground, and of course, if you do
xterm -e gedit &
then xterm runs in the background, not gedit. In short, I would like to achieve the same thing as typing "gedit &" manually inside xterm, but from another shell. What I aim to do is write an X init script to achieve this result (to have the emulator open and a program or two running from it, in the background, at X startup).
I was wondering if 7735, 7736, 7737, 7743 were really processes. Then I checked /proc: I could cd to /proc/7735, /proc/7736, etc., but they did not show up in an ls listing of /proc. I looked at the man page of "pstree", which says:
Code:
Child threads of a process are found under the parent process and are shown with the process name in curly braces, e.g.
icecast2---13*[{icecast2}]
So, what does all this mean? Does it mean that 7735, 7736, 7737, 7743 are just threads and not processes? If so, why could I cd to /proc/<id> but not see them in "ps -elf"?
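For what it's worth, the behaviour described above matches how the kernel exposes threads: each thread of a process gets its own ID under /proc/<pid>/task/, and /proc/<tid> exists (you can cd into it) even though it is hidden from a plain listing of /proc, while ps -elf shows only the main thread unless you ask for threads (ps -eLf). A minimal C sketch to observe this yourself (the thread count and sleep time are arbitrary):
Code:
/* Minimal sketch: a process with a few extra threads.
 * While it sleeps, compare:
 *   ps -elf | grep <pid>       (one line: the process)
 *   ps -eLf | grep <pid>       (one line per thread)
 *   ls /proc/<pid>/task        (one directory per thread ID)
 * The thread IDs also work as /proc/<tid>, even though they are
 * not listed by "ls /proc".
 * Build with: gcc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

static void *worker(void *arg)
{
    (void)arg;
    /* gettid() may lack a libc wrapper on older systems, so use syscall(). */
    printf("thread id (tid) = %ld\n", (long)syscall(SYS_gettid));
    sleep(60);                      /* keep the thread alive for inspection */
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    printf("process id (pid) = %ld\n", (long)getpid());
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}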
How do you move a running process to the background? For example, type the command sleep 60 on the command line and try moving that process to the background.
Parent: chid_pid=4356 i=0 parent's pid=4355
This is child 4356 i=0
This is child 4357 i=1
[code]....
I observe that instead of two child processes (as I expect) there are three. This is because child process 4356 creates its own child. Why are all the messages of the type "This is child X i=Y" printed one under another? How exactly does fork work? Is it affected by the fact that I have a dual-core processor?
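The code itself isn't shown above, so this is only a guess, but an extra child usually comes from forking in a loop without making the child leave the loop: after fork() both parent and child continue from the same point, so on the next iteration the first child forks a child of its own. A minimal sketch of the usual pattern (the child does its work and exits, only the parent keeps looping); the output order is not deterministic, because the scheduler interleaves the processes whether there is one core or several:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 2; i++) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {
            /* Child: do the work and exit; otherwise it would continue
             * the loop and fork its own child. */
            printf("This is child %d i=%d\n", (int)getpid(), i);
            _exit(0);
        }
        /* Parent: just record the child and keep looping. */
        printf("Parent: child_pid=%d i=%d parent's pid=%d\n",
               (int)pid, i, (int)getpid());
    }
    while (wait(NULL) > 0)       /* reap both children */
        ;
    return 0;
}
Lines appearing "one under another" can also be a stdio buffering effect: if stdout is a pipe or a file, anything still in the buffer at fork() time is duplicated into the child and flushed later, grouping the output.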
Description of what the code does or what I intended to do:
1. Created a child process from parent process using 'fork()'
2. Sent a signal 'SIGALRM' from child process to parent process using 'sigqueue' function.
(The third parameter of the 'sigqueue' function contains the message ('msg') which the child process wants to send to the parent process. 'msg' is a structure instance containing a) the pid of the child and b) a string.)
5. Print the 'msg' sent by the child process inside the signal handler function 'sig_action_function' of the parent process.
I am getting some junk value when this line is executed:
Code:
printf("%d ",msg->cpid);
I expected to get the pid of the child process, which the child sent to the parent through the signal.
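One likely cause of the junk value, assuming the structure is being sent by address: sigqueue()'s third argument is a union sigval, i.e. a single int (sival_int) or a single pointer (sival_ptr). After fork() the child and parent have separate address spaces, so a pointer to a struct inside the child means nothing when the parent dereferences it. The pid, at least, can be sent as sival_int (and the kernel also fills in the sender's pid in si_pid). A sketch under those assumptions; the handler and signal names follow the description above, the rest is illustrative, and passing the string would need a pipe or shared memory instead:
Code:
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Parent's handler, installed with SA_SIGINFO so it receives siginfo_t. */
static void sig_action_function(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    /* Both of these are valid in the parent's address space:
     *   info->si_pid             - pid of the sender (filled in by the kernel)
     *   info->si_value.sival_int - the int the child passed to sigqueue()
     * (printf is not async-signal-safe; acceptable for a demo only.) */
    printf("child pid from si_pid    = %d\n", (int)info->si_pid);
    printf("child pid from sival_int = %d\n", info->si_value.sival_int);
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_sigaction = sig_action_function;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGALRM, &sa, NULL);

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: send our pid as a plain int, not as a pointer into our
         * own (separate) address space. */
        union sigval value;
        value.sival_int = (int)getpid();
        sigqueue(getppid(), SIGALRM, value);
        _exit(0);
    }
    sleep(2);   /* demo-level wait for the signal; a real program would
                 * block the signal and use sigsuspend() instead */
    return 0;
}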
I have a doubt about signals in C programming. I have written this little program to illustrate it. It creates a child process with fork and, when the child ends, receives the SIGCHLD signal and waits for its termination. OK, quite easy, BUT when I execute this code the SIGCHLD signal is received twice: the first time as an error (returns -1) and the second time to finish the child process. I don't understand the meaning of the first received signal. Why is it generated? Is the code wrong? (If you add SIGINT and press Ctrl+C during execution, it also receives two signals instead of one.)
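Without seeing the code this is only a guess, but the "-1" is very often not a second signal at all: a blocking call (wait(), read(), sleep(), ...) that is interrupted by a caught signal returns -1 with errno set to EINTR, which looks like an extra, failed delivery. A sketch of the usual robust pattern, under that assumption: reap the child with waitpid(WNOHANG) inside the handler, and either set SA_RESTART or retry calls that fail with EINTR (names and the pipe are illustrative):
Code:
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void on_sigchld(int sig)
{
    (void)sig;
    int saved = errno;
    /* Reap every exited child; WNOHANG so the handler never blocks.
     * write() is used because printf() is not async-signal-safe. */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        write(STDOUT_FILENO, "SIGCHLD: child reaped\n", 22);
    errno = saved;
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = on_sigchld;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;              /* try SA_RESTART here and compare */
    sigaction(SIGCHLD, &sa, NULL);

    int fd[2];
    pipe(fd);

    if (fork() == 0) {            /* child: hold the pipe open, then exit */
        close(fd[0]);
        sleep(1);
        _exit(0);
    }
    close(fd[1]);

    /* Parent blocks in read(). When the child dies, SIGCHLD can interrupt
     * the read: without SA_RESTART it returns -1 with errno == EINTR,
     * which is easy to mistake for a second, "error" delivery. */
    char c;
    ssize_t n;
    while ((n = read(fd[0], &c, 1)) < 0 && errno == EINTR)
        ;                         /* simply retry the interrupted call */
    printf("read returned %zd (EOF), parent exiting\n", n);
    return 0;
}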
I have already tried trickle and wondershaper. I need a program that can limit the download/upload speed of an already running program, similar to how NetLimiter on Windows limits already running processes. Using Linux.
I have a script that calls another program/script, xxx, to run in the background. Supposedly this program should finish within five (5) minutes at most, so after five minutes I run some other steps to bring the script to completion. My problem is that sometimes the program takes longer than five minutes, and this causes problems when running the rest of the steps in the script. Can anyone suggest how to re-program my script? At the moment, the KSH script, i.e. test.ksh, does the following:
test.ksh:
.....
xxx/xxx.ksh    <--- program/script called by the script
sleep 300
..... run the rest of the script .....
The problem is that sometimes xxx/xxx.ksh takes longer than 300 seconds. Is there any way I can check that xxx/xxx.ksh has finished before I run the rest of the script?
I want to kill the parent process after fork(). But if I kill the parent process with exit(0), the main() thread is terminated as well, so the child process doesn't work anymore. Is there any way to kill only the parent process without affecting the child process?
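For context: exit(0) in the parent does not, by itself, kill the child. fork() creates a separate process, and when the parent exits the child is re-parented to init and keeps running. If the child dies anyway, the usual suspects are the controlling terminal or session (a SIGHUP when the session goes away) or the child depending on a resource the parent closes. A minimal sketch, assuming the goal is simply a child that outlives its parent:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid > 0) {
        /* Parent: print and exit. This terminates only the parent process;
         * the child is not affected and is adopted by init (pid 1). */
        printf("parent %d exiting, child is %d\n", (int)getpid(), (int)pid);
        exit(0);
    }

    /* Child: detach into its own session so a later SIGHUP aimed at the
     * old session/terminal cannot take the child down with it. */
    setsid();

    for (int i = 0; i < 5; i++) {
        printf("child %d still running (parent is now %d)\n",
               (int)getpid(), (int)getppid());
        sleep(1);
    }
    return 0;
}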
Brand new to Linux. Sort of got thrown in front of the bus if you know what I mean. The company I work for has a Linux server running CentOS 5.4 Company uses Linux for their Email, FTP and Web Server. Have been here a few years dabbling in and out of Linux and now that the old Admin has left the company.....I need to learn it ASAP. The server has run pretty solid until today.
The email server runs Sendmail and SpamAssassin. Received lots of complaints today regarding extra SPAM. Noticed that SpamAssassin was not running. Tried to restart it through the Webmin tools and got the following error: Starting spamd: child process [3956] exited or timed out without signaling production of a PID file: exit 255 at /usr/bin/spamd line 2588.
In my program, I fork() to get a child process. Because of some problem, the child process terminates with a segmentation fault. The parent process is still running. I have compiled my code with the -g option. I have done: ulimit -c unlimited. I am not getting a core dump of the child process. How can I get the core dump of the child process?
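A few things commonly prevent the child's core file even with -g and ulimit -c unlimited: the limit was raised in a different shell than the one that launched the program, the process changed directory or cannot write where the kernel wants to place the core, the kernel's core pattern points somewhere unexpected (see /proc/sys/kernel/core_pattern), or the process is no longer "dumpable" (for example after changing uids). As a sketch of one approach, the limit and the dumpable flag can also be forced from inside the child right after fork():
Code:
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: make sure a core dump is allowed for this process,
         * regardless of what the launching shell's ulimit was. */
        struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            perror("setrlimit(RLIMIT_CORE)");

        /* Re-enable dumping in case it was cleared (setuid, etc.). */
        if (prctl(PR_SET_DUMPABLE, 1, 0, 0, 0) != 0)
            perror("prctl(PR_SET_DUMPABLE)");

        /* Deliberate crash: dereference NULL to trigger SIGSEGV. */
        volatile int *p = NULL;
        *p = 42;
    }
    /* Parent keeps running, as in the situation described above. */
    sleep(2);
    return 0;
}
By default the core lands in the child's current working directory, named according to core_pattern (often core or core.<pid>).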
I have created three child processes from one parent, and each child has a different function. Child 2 has the job of running the program "wc" to count file1, and it is required to get the file names from command-line arguments. I can get the files through the command line, but I can't get them when the child 2 process starts.
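One common way to do this, assuming the intent is to run the wc program on a file named on the parent's command line: read argv in the parent, and let the child pass that file name to exec. fork() gives the child a copy of the parent's variables, so argv values read before fork() are visible in the child with no extra "sending". A sketch for the child-2 case only (the other two children are left out):
Code:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file1 [file2 ...]\n", argv[0]);
        return 1;
    }

    /* file1 comes from the parent's command line; the child inherits
     * a copy of argv across fork(), so it can simply use it. */
    const char *file1 = argv[1];

    pid_t pid = fork();          /* this would be "child 2" in the question */
    if (pid == 0) {
        execlp("wc", "wc", file1, (char *)NULL);
        perror("execlp wc");     /* only reached if exec fails */
        _exit(127);
    }

    waitpid(pid, NULL, 0);       /* parent waits for wc to finish */
    return 0;
}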
I have a script that calls other scripts/commands which may or may not spawn other processes. From my understanding, when I do a ps -ef, the highest numbered process ID is supposed to be the parent ID of all the other related child processes, is this correct? In most or all circumstances, I do a ps -ef | grep <processid> of my script, and anything spawned off that process ID I assume to be child processes of my script. If I want to terminate my script and all other child processes, I kill the parent ID, which is the highest numbered PID, and this will subsequently kill all other child process IDs, is this correct?
Now, my question is whether there is any quick way of showing the child processes of a parent ID, instead of what I am currently doing, which is visually checking which one is the parent ID and "assuming" that the highest numbered PID is the parent ID of all the other processes. Below is a sample output of running ps -ef | grep exp | grep -v grep. I assume from the output below that the parent process/ID is PID 11322, is that correct?
I'm studying about signal in Linux Kernel and I got a problem about signal handler and output buffer.
I just want to know about the stdout buffer as it relates to the parent process and the child process.
The problem is: when the parent process receives SIGINT, the signal_handler that I implemented is called. After signal_handler is called, it prints the string "pid : xxx state : RUNNING" ... but after the signal_handler function ends, the child process should print a string, but it isn't printed at all.
I'm not asking for the correct code; I want to know why this happens, and to understand the concepts around signal handlers and buffering between the parent process and the child process.
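Without the code this is only a guess, but the symptoms fit stdio buffering: when stdout is a terminal it is line-buffered, but when it is a pipe or a file it is fully buffered, so a child's printf() can sit in the buffer and never appear if the child is killed or leaves via _exit() without flushing. printf() is also not async-signal-safe, so output from inside a handler is best done with write(). A small sketch of the usual precautions, under those assumptions (the handler name and message mirror the description above):
Code:
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void signal_handler(int sig)
{
    (void)sig;
    /* Use write() in handlers: it is async-signal-safe and unbuffered,
     * so the message appears even if stdio buffers are never flushed. */
    const char msg[] = "pid : parent state : RUNNING\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
}

int main(void)
{
    signal(SIGINT, signal_handler);

    /* Flush before fork(): otherwise anything still sitting in the stdout
     * buffer is copied into the child and may show up twice later. */
    fflush(stdout);

    pid_t pid = fork();
    if (pid == 0) {
        printf("child %d printing\n", (int)getpid());
        /* Make the child's output leave the buffer before it exits;
         * exit() would flush too, but _exit() or a fatal signal would not. */
        fflush(stdout);
        _exit(0);
    }

    pause();                     /* parent waits for SIGINT (Ctrl-C) */
    printf("parent %d done\n", (int)getpid());
    return 0;
}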
I am facing an issue where the process starts hanging. When I look closely at the logs, I see that some of the child processes forked by the parent process have not finished.
1) Is it possible that the child processes that have not finished occupy the socket memory of the parent process, until a point is reached where no socket memory is available to fork new child processes?
2) What is the standard limit of socket memory in linux?
3) What is the fate of such child processes (as I have mentioned above)?
4) How to debug such cases so that the exact problematic area is identified?
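Two notes that may help frame these questions. Children that have exited but have never been wait()ed for become zombies: they hold a process-table entry and their exit status rather than socket buffers, while sockets stay allocated only as long as some process (a hung child, or the parent's own copies of the descriptors) still holds them open. The usual way to keep track is for the parent to reap every child, either with a blocking waitpid() per child or non-blockingly, which also makes it easy to log which child is stuck. A sketch of non-blocking reaping (the simulated children are illustrative):
Code:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: periodically reap finished children without blocking, so the
 * parent can tell which ones have exited and which are genuinely stuck. */
static void reap_children(void)
{
    int status;
    pid_t pid;

    while ((pid = waitpid(-1, &status, WNOHANG)) > 0) {
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n",
                   (int)pid, WEXITSTATUS(status));
        else if (WIFSIGNALED(status))
            printf("child %d killed by signal %d\n",
                   (int)pid, WTERMSIG(status));
    }
    /* pid == 0 means children exist but none have exited yet;
     * pid == -1 with ECHILD means there are no children left. */
}

int main(void)
{
    for (int i = 0; i < 3; i++) {
        if (fork() == 0) {       /* child: simulate work of varying length */
            sleep(i + 1);
            _exit(i);
        }
    }
    for (int i = 0; i < 5; i++) {
        reap_children();
        sleep(1);                /* parent keeps doing its own work */
    }
    return 0;
}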
I can't use shutdown -h now because I don't have permission (or root). The university script I have tells me to use Ctrl-Alt-Delete, but that doesn't shut it down like it says it should; instead it restarts it... so what's the safe way of doing this?
I know about terminating a command line with &, and about moving a job into the background by pressing Ctrl-Z and then bg [pid], and I also know of nohup. But say you started a process that turned out to take much longer than expected: is there a way of pulling, so to speak, this process from another terminal screen into the background, so that even if I log off from the server the process would continue?
I am trying to solve one problem: when I run my process in the background it hogs around 96% of the CPU, but when run in the foreground, CPU utilization is almost zero. Is there any difference between a background and a foreground process with respect to CPU utilization?
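There is no inherent CPU difference between foreground and background; the scheduler treats both the same. A hedged guess at a common cause of exactly this symptom: the program polls its input, and when it runs in the background with stdin redirected from /dev/null (as often happens to daemonised processes) the read returns immediately with EOF, turning the poll into a busy loop. A tiny illustration of that failure mode, under that assumption only:
Code:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[256];
    ssize_t n;

    for (;;) {
        /* In the foreground this blocks waiting for terminal input, so CPU
         * use is ~0%. Run it with stdin redirected from /dev/null and
         * read() returns 0 (EOF) immediately every time: the loop spins at
         * full CPU unless EOF is treated as a reason to stop or sleep. */
        n = read(STDIN_FILENO, buf, sizeof buf);
        if (n > 0)
            continue;            /* got input: handle it */
        if (n == 0) {
            fprintf(stderr, "EOF on stdin, stopping instead of spinning\n");
            break;               /* the missing check that prevents the spin */
        }
        perror("read");
        break;
    }
    return 0;
}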
I have a root process (on Linux) that forks a child, and the child process then drops privileges by doing a setuid() to a normal user. After the child setuid()'s, it is of course impossible for it to regain root by itself. But since the main process is still running as root, I was wondering if there was a simple/smart way of getting the root master process to elevate the child back to root (or maybe just to another non-privileged uid). Is there some way to do a setuid() on another pid? Or maybe something can be done through /proc/<pid>/? Killing the child is not an option (because that's what it does today and I'm trying to find a smarter way). (The program is apache2's mpm-itk worker, and the "child" is the actual apache2 process serving a page.)
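For context: there is no system call that lets one process change another process's uid, and after a full setuid() to a non-zero uid all three user IDs (real, effective, saved) are gone, so neither the child nor the root parent can bring it back. The two usual alternatives are to have the root master fork a fresh worker at whatever uid is needed, or to have the child drop only its effective uid with seteuid(), keeping the saved set-user-ID 0 so it can re-elevate itself later. A sketch of the second option, under the assumption that a recoverable (and therefore weaker) privilege drop is acceptable; the uid value is hypothetical:
Code:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Must start as root for any of this to work. */
    uid_t unpriv = 1000;           /* hypothetical normal user's uid */

    /* Drop only the EFFECTIVE uid. The saved set-user-ID stays 0,
     * which is what makes re-elevation possible later. A real setuid()
     * call here would change all three IDs and be irreversible. */
    if (seteuid(unpriv) != 0) {
        perror("seteuid(drop)");
        return 1;
    }
    printf("running as euid %d (ruid %d)\n", (int)geteuid(), (int)getuid());

    /* ... do less-trusted work here ... */

    /* Re-elevate: allowed because the saved set-user-ID is still 0. */
    if (seteuid(0) != 0) {
        perror("seteuid(restore)");
        return 1;
    }
    printf("back to euid %d\n", (int)geteuid());
    return 0;
}
The trade-off is that a compromised worker could re-elevate itself the same way, which is presumably why the full setuid() plus kill-and-refork approach is used today; a recoverable drop only makes sense where that risk is acceptable.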