Programming :: Find Number Of Threads In The Process In C Language?
Dec 23, 2009
I need to find out how many threads are alive in the current process for further processing. Is there any way to trace this number? [URL] I referred to the above link, but sys/pstat.h is not on my system, and I don't know which library provides this header.
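If this is on Linux (an assumption; sys/pstat.h is an HP-UX header), one way to get the count is to read /proc/self/task, which contains one entry per thread of the calling process. A minimal sketch:

Code:
#include <dirent.h>
#include <stdio.h>

/* Count live threads of the calling process by listing /proc/self/task.
 * Linux-specific; each thread appears as one subdirectory. */
static int count_threads(void)
{
    DIR *dir = opendir("/proc/self/task");
    if (dir == NULL)
        return -1;

    int count = 0;
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] != '.')   /* skip "." and ".." */
            count++;
    }
    closedir(dir);
    return count;
}

int main(void)
{
    printf("threads: %d\n", count_threads());
    return 0;
}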
I have a doubt regarding CPU sharing between a process and its threads. In my program I am creating 4 threads, so there is 1 process + 4 threads. How is the CPU allotted to all of these tasks? Does the process get CPU time just like a thread, or does it get more CPU time than the threads?
I want a process that can operate as both a TCP echo server and a UDP echo server. The process should provide service to many clients at the same time, but it must be a single process that does not start any other threads.
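One common single-process approach (a hedged sketch, not taken from the post; port 7777 and the buffer sizes are illustrative) is to watch the listening TCP socket, the UDP socket, and every accepted TCP connection with select(), and echo on whichever descriptor becomes readable:

Code:
#include <netinet/in.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

#define PORT 7777   /* placeholder port */

int main(void)
{
    /* One listening TCP socket and one UDP socket, both on the same port. */
    int tcp_fd = socket(AF_INET, SOCK_STREAM, 0);
    int udp_fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);

    bind(tcp_fd, (struct sockaddr *)&addr, sizeof(addr));
    bind(udp_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(tcp_fd, 16);

    fd_set all_fds;
    FD_ZERO(&all_fds);
    FD_SET(tcp_fd, &all_fds);
    FD_SET(udp_fd, &all_fds);
    int max_fd = (tcp_fd > udp_fd) ? tcp_fd : udp_fd;

    for (;;) {
        fd_set read_fds = all_fds;
        if (select(max_fd + 1, &read_fds, NULL, NULL, NULL) < 0)
            continue;

        for (int fd = 0; fd <= max_fd; fd++) {
            if (!FD_ISSET(fd, &read_fds))
                continue;

            if (fd == tcp_fd) {
                /* New TCP client: add it to the watched set. */
                int client = accept(tcp_fd, NULL, NULL);
                if (client >= 0) {
                    FD_SET(client, &all_fds);
                    if (client > max_fd)
                        max_fd = client;
                }
            } else if (fd == udp_fd) {
                /* UDP echo: reply to whoever sent the datagram. */
                char buf[1024];
                struct sockaddr_in peer;
                socklen_t len = sizeof(peer);
                ssize_t n = recvfrom(udp_fd, buf, sizeof(buf), 0,
                                     (struct sockaddr *)&peer, &len);
                if (n > 0)
                    sendto(udp_fd, buf, n, 0, (struct sockaddr *)&peer, len);
            } else {
                /* Established TCP client: echo, or close on EOF. */
                char buf[1024];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) {
                    close(fd);
                    FD_CLR(fd, &all_fds);
                } else {
                    write(fd, buf, n);
                }
            }
        }
    }
}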
I used 9.04 for months and it worked fine until I restarted my PC. After the restart, memory consumption goes up to 4.2 GB right after login. However, I cannot find any process that consumes such a large amount of memory.
I am trying to run audio conversion on my server, and I want it limited to a certain number of processes based on process name. I am using the following script, but it isn't limiting the number of jobs like I want it to.
Code:
#!/bin/bash
num_jobs=13   # assignment: no leading '$' and no spaces around '='
# wait while the number of running pacpl processes is at or above the limit
while [ "$(ps -A | grep -v grep | grep -c pacpl)" -ge "$num_jobs" ]
do
    sleep 1
done
I'm in the process of writing a program that is a server: it accepts connections and spawns a child process for each. However, I've run into a small problem. I do NOT want to bother with keeping track of the processes unless I need to. So I set SA_NOCLDWAIT (#ifdef'd) together with SIG_IGN as the SIGCHLD handler through the sigaction interface. The standard says that the kernel will then take care of reaping zombie processes for me (a HUGE plus). However, upon receiving a SIGINT signal, I want to stop the server from accepting new connections (done), and then wait until there are no remaining connections. I was thinking of just putting a loop like so:
However, I'm not *sure* that this will work, especially with SIGCHLD still ignored. So how can I tell if there are still child processes? I can't find any call like int getnumchld(pid_t proc); (I wish). Plus it would be inefficient to spin on such a function anyway. On the other hand, I would rather *not* have to do the same thing in a loop with a system("ps |...>file"); read(file); etc. either. Is there a way I can portably implement this feature? (I was hoping I could run it on Linux and the major BSDs, at least.)
TO SUM IT UP:
How can I tell if a process has no child processes if I've SIG_IGN'd / SA_NOCLDWAIT'd the SIGCHLD? Is there a _reasonably_ portable way to do so? I *don't* want to manually wait for EVERY process. Maybe only for those still active at the time of SIGTERM, but that requires keeping track of the number of connections and whether they have terminated...
EDIT: Does anyone know if the above code *would* work, even with SIGCHLD ignored and the kernel cleaning up zombies *for* me? I checked the manpage and it doesn't say much.
EDIT1: Note that all of the processes are in the same process group and session, so I can find them that way as well. Perhaps even by setting the uid/gid and finding all processes run by that group?
EDIT2: I have an idea if the above isn't feasible. If there is no "elegant" way to do it, I could reduce the complexity by sending a SIGUSR1 to the whole process group. Each process would then set a flag telling it to send a SIGUSR1 in reply and a SIGUSR2 when it is done executing. Then I could keep a count of signals. Maybe that would be *easier*. Or perhaps just a count of all child processes and a termination signal that decrements the counter.
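A hedged sketch of the counter idea from EDIT2, using a SIGCHLD handler instead of ignoring the signal: the handler reaps children with waitpid(WNOHANG) and decrements a live-child counter, and the parent blocks until the counter reaches zero. The names live_children and reap_children are illustrative, not from the original code:

Code:
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t live_children = 0;

/* SIGCHLD handler: reap every exited child and drop the counter. */
static void reap_children(int sig)
{
    (void)sig;
    while (waitpid(-1, NULL, WNOHANG) > 0)
        live_children--;
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = reap_children;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;
    sigaction(SIGCHLD, &sa, NULL);

    sigset_t block, old;
    sigemptyset(&block);
    sigaddset(&block, SIGCHLD);

    /* ... inside the accept() loop, for each new connection ... */
    sigprocmask(SIG_BLOCK, &block, &old);   /* keep the counter consistent */
    pid_t pid = fork();
    if (pid == 0) {
        /* child: handle the connection, then exit */
        _exit(0);
    }
    if (pid > 0)
        live_children++;
    sigprocmask(SIG_SETMASK, &old, NULL);

    /* After SIGINT stops accept(): wait until every child has been reaped. */
    sigprocmask(SIG_BLOCK, &block, &old);
    while (live_children > 0)
        sigsuspend(&old);                   /* sleeps until a SIGCHLD arrives */
    sigprocmask(SIG_SETMASK, &old, NULL);

    return 0;
}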
So, this is a simple download scheduler program. The code creates multiple threads of the downloading process, wget (I could also have used 'curl' instead of 'wget'). Can you debug this code?
I had XP, added Win7, and now added Kubuntu, but I can't choose which OS to load in GRUB. I can't find a fix through the posted threads; can you guide me to solving the problem?
In all the examples I have found, the server accepts the client's connection, processes the data received, and closes the socket. In a very schematic way it would be something like:
Code:
client_thread {
    select to see if there is data to read from socket fd
    if there is something to read {
[code]....
Should I use mutexes or semaphores to lock the socket fd before reading and writing, or is that not necessary?
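If more than one thread really does write to the same connected socket, one hedged option (a sketch under that assumption; send_locked is an illustrative name) is to serialize whole-message writes with a mutex so that partial writes from different threads cannot interleave:

Code:
#include <pthread.h>
#include <sys/socket.h>
#include <sys/types.h>

static pthread_mutex_t sock_lock = PTHREAD_MUTEX_INITIALIZER;

/* Send an entire buffer on a shared fd while holding the lock, so the
 * bytes of two messages from different threads cannot be interleaved. */
static ssize_t send_locked(int fd, const void *buf, size_t len)
{
    pthread_mutex_lock(&sock_lock);
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, (const char *)buf + sent, len - sent, 0);
        if (n <= 0) {                      /* error: give up and report it */
            pthread_mutex_unlock(&sock_lock);
            return -1;
        }
        sent += (size_t)n;
    }
    pthread_mutex_unlock(&sock_lock);
    return (ssize_t)sent;
}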
I wrote a C program using Pthreads to compute the product of 2 matrices. Each element in the product matrix is computed in a separate thread. E.g., thread (i,j) computes the element C[i][j] of the matrix C, where C = A*B. A is m*n, B is n*p, C is m*p; m, n, p are given as command-line arguments. A and B are initialized to random values from 1 to 10, while all elements of C are initialized to -1. But some threads do not get their arguments (i,j) correctly, so some elements C[i][j] still remain -1 even after the program is over. My OS is Ubuntu 10.10 (Maverick Meerkat) 32-bit. I ran the program on another computer and it worked correctly. Is it due to a problem in the Pthreads library in my OS? Please help me. I have attached the source code.
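Without seeing the attached source, a frequent cause of threads receiving the wrong (i,j) is passing the addresses of the loop variables to pthread_create(), so their values change before the thread reads them; this is only an assumption about the bug, not a diagnosis of the attached code. A hedged sketch that gives each thread its own heap-allocated argument struct (with small fixed matrix sizes just for illustration):

Code:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define M 3
#define N 4
#define P 2   /* small fixed sizes just for the sketch */

static int A[M][N], B[N][P], C[M][P];

struct cell_args { int i, j; };   /* which element of C this thread computes */

static void *compute_cell(void *arg)
{
    struct cell_args *a = arg;
    int sum = 0;
    for (int k = 0; k < N; k++)
        sum += A[a->i][k] * B[k][a->j];
    C[a->i][a->j] = sum;
    free(a);                      /* each thread owns its argument block */
    return NULL;
}

int main(void)
{
    for (int i = 0; i < M; i++)
        for (int k = 0; k < N; k++)
            A[i][k] = rand() % 10 + 1;
    for (int k = 0; k < N; k++)
        for (int j = 0; j < P; j++)
            B[k][j] = rand() % 10 + 1;

    pthread_t tids[M * P];
    for (int i = 0; i < M; i++) {
        for (int j = 0; j < P; j++) {
            /* allocate a fresh args struct instead of passing &i / &j,
             * whose values keep changing as the loops advance */
            struct cell_args *a = malloc(sizeof(*a));
            a->i = i;
            a->j = j;
            pthread_create(&tids[i * P + j], NULL, compute_cell, a);
        }
    }
    for (int t = 0; t < M * P; t++)
        pthread_join(tids[t], NULL);

    for (int i = 0; i < M; i++) {
        for (int j = 0; j < P; j++)
            printf("%d ", C[i][j]);
        printf("\n");
    }
    return 0;
}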
I have 2 threads, and both of them delete memory at the end that is needed by both.
My problem is that one thread may start and finish before the other one starts, and so it deletes the memory needed by the other thread. How can I synchronize them so that this can't happen?
As a design my threads look like this:
Code:
The other thread looks the same, but this isn't enough to stop thread1 from finishing before thread2 starts.
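One hedged way to guarantee the ordering (a sketch, assuming a third owner thread such as main() can do the freeing): neither worker frees the shared memory; the creator joins both workers and releases the memory exactly once afterwards:

Code:
#include <pthread.h>
#include <stdlib.h>

/* shared buffer used by both worker threads (illustrative) */
static int *shared;

static void *thread1(void *arg) { (void)arg; /* ... use shared ... */ return NULL; }
static void *thread2(void *arg) { (void)arg; /* ... use shared ... */ return NULL; }

int main(void)
{
    shared = malloc(100 * sizeof(*shared));

    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);

    /* Neither worker frees the buffer; the owner waits for both
     * to finish and then releases it exactly once. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    free(shared);
    return 0;
}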
When I click it, a window opens but I lose the menu bar. I click on a post in this window, and the same happens. So, if I want to save the page, I have no way to do it.
Fedora 15, 32-bit. I wrote a test program that creates new threads continually; each thread does nothing but sleep. I find that virtual memory increases by almost 10 MB each time a new thread is created, and when there are more than 200 threads the virtual memory used by the program is 3 GB and no new thread can be created. On Windows this costs little memory. What can I do to configure the operating system so threads take less memory?
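The roughly 10 MB per thread is most likely the default per-thread stack reservation, which is what exhausts a 32-bit address space. A hedged sketch that shrinks it with pthread_attr_setstacksize(); the 64 KB figure is illustrative and must stay at or above PTHREAD_STACK_MIN and be large enough for the thread's real work:

Code:
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *sleeper(void *arg)
{
    (void)arg;
    for (;;)
        sleep(60);   /* the test thread does nothing but sleep */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    /* Reserve far less address space per thread than the default. */
    pthread_attr_setstacksize(&attr, 64 * 1024);

    for (int i = 0; i < 1000; i++) {
        pthread_t tid;
        if (pthread_create(&tid, &attr, sleeper, NULL) != 0) {
            printf("failed after %d threads\n", i);
            break;
        }
        pthread_detach(tid);
    }
    pthread_attr_destroy(&attr);
    pause();
    return 0;
}

Lowering the stack limit with ulimit -s before starting the program also changes the default stack size that new threads get, without touching the code.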
I am running a program on a university server that has 4 Dual-Core AMD Opteron(tm) 2210 HE processors, and the OS is Linux version 2.6.27.25-78.2.56.fc9.x86_64. My program implements Conway's Game of Life and runs using pthreads and OpenMP. I timed the parallel part of the program with gettimeofday() using 1-8 threads, but the timings don't seem right. I get the biggest time using 1 thread (as expected), then the time gets smaller, but the smallest time I get is with 4 threads.
Here is an example when I use a 1000x1000 array.
Using 1 thread ~9.62 sec, 2 threads ~4.73 sec, 3 ~3.64 sec, 4 ~2.99 sec, 5 ~4.19 sec, 6 ~3.84, 7 ~3.34, 8 ~3.12. The above timings are with pthreads; when I use OpenMP the timings are smaller but follow the same pattern. I expected the time to keep decreasing from 1 to 8 threads because of the 4 dual-core CPUs. I thought that with 4 CPUs of 2 cores each, 8 threads could run at the same time. Does it have to do with the operating system that the server runs?
I also tested the same programs on another server that has 7 Dual-Core AMD Opteron(tm) 8214 processors and runs Linux version 2.6.18-194.3.1.el5. There the timings are what I expected: they get smaller from 1 thread (the biggest) down to 8 (the smallest execution time). The program implements the Game of Life correctly, both with pthreads and with OpenMP; I just can't figure out why the timings look like the example I posted. So in conclusion, my questions are:
1) Does the number of threads that can run at the same time on a system depend on the number of cores in each CPU? Does it depend only on the number of CPUs, even though each CPU has more than one core? Or does it depend on all of the above plus the operating system?
2) Does it have to do with the way I divide the 1000x1000 array among the threads? But if it did, the OpenMP code wouldn't give the same pattern of timings, would it?
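As a quick sanity check (a hedged sketch, not from the post), the number of cores the scheduler can actually run threads on in parallel can be queried at run time; with 4 dual-core CPUs it should report 8:

Code:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Number of processors (cores) currently online, i.e. how many
     * threads can genuinely execute in parallel on this machine. */
    long online = sysconf(_SC_NPROCESSORS_ONLN);
    printf("online cores: %ld\n", online);
    return 0;
}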
I've implemented a program URL... which reads digital IF data from a radio receiver through a named pipe, measures power levels, and sends the result to stdout. The program is interactive: there is a thread that reads from stdin to watch for commands, a thread that constantly either reads data from the named pipe or throws the data away, and an array of processing threads. The program uses GTK+extra to plot the signals. The IF data stream bandwidth is at the limit of today's technology (it is very, very fast).
Problem statement: the program works, apart from a few bugs. I've learned since writing it that using global state variables to coordinate threads isn't a good way of doing it. I also only knew about mutexes, and I polled the state variable instead of using other methods. My reimplementation will use the following:
- One "Stdin Command Monitoring" thread - One "Get data from named pipe" thread - One post-processor thread - N Processing threads
All threads are alive during the life of main(). There are N buffers. Data will come in from the named pipe, and the "Get data" thread will write the data to an "available" buffer. When the buffer is full, it will be marked as "full". There will be N processing threads, one for each buffer. When a processing thread's buffer is full, it will process the buffer and save the result to a final buffer. At the end of a number of averages, the post-processor thread will perform a final pass over the final buffer and send the results to stdout.
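Since the goal is to stop polling global state, one hedged sketch of the buffer hand-off (the names buffer_slot, mark_full, and wait_until_full are illustrative) is a per-buffer mutex plus condition variable: the "Get data" thread marks a slot full and signals, and the processing thread sleeps on the condition instead of spinning:

Code:
#include <pthread.h>

enum slot_state { AVAILABLE, FULL };

struct buffer_slot {
    enum slot_state state;
    pthread_mutex_t lock;          /* initialize with pthread_mutex_init */
    pthread_cond_t  became_full;   /* initialize with pthread_cond_init */
    /* ... the actual sample buffer would live here ... */
};

/* Called by the "Get data" thread once it has filled a slot. */
void mark_full(struct buffer_slot *s)
{
    pthread_mutex_lock(&s->lock);
    s->state = FULL;
    pthread_cond_signal(&s->became_full);   /* wake the processing thread */
    pthread_mutex_unlock(&s->lock);
}

/* Called by the processing thread that owns this slot. */
void wait_until_full(struct buffer_slot *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->state != FULL)                /* no polling: sleeps here */
        pthread_cond_wait(&s->became_full, &s->lock);
    pthread_mutex_unlock(&s->lock);
}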
I'm a bit worried about "too many mutexes" in my little curses-based app and would like confirmation/opinions that I'm doing this right. I've got an array: int nums[60]. I've got 61 threads. Threads 1-60 each do math on the value at their own array index (i.e. thread1 increments nums[1], threadN increments nums[N]), then sleep(1). The 61st thread is my curses thread, which loops over the array, prints all the values to the screen, then sleep(1).
Right now, I've got 1 mutex which gets locked/unlocked each time one of the 60 threads needs to update its array index with a new value, and the 61st thread locks the same mutex just before the for-loop begins reading the values and unlocks it after ending the loop.
My questions: A) Does the above seem OK? (I know it works right now, but I would like opinions on it.) B) Do I even need the mutexes, since threads 1-60 only ever update their own index and thread 61 just reads? C) If I do need the mutex protection, is there a better, more efficient way?
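One hedged alternative for question C (a sketch of a different technique, C11 atomics, not a claim about what the app needs): since each worker only touches its own slot and the display thread only reads, atomic operations avoid the shared mutex entirely while still guaranteeing the reader never sees a torn value:

Code:
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NWORKERS 60

static atomic_int nums[NWORKERS];

/* Worker i only ever touches nums[i]; the add is atomic, so the
 * display thread can read it without any lock. */
static void *worker(void *arg)
{
    int i = (int)(long)arg;
    for (;;) {
        atomic_fetch_add(&nums[i], 1);
        sleep(1);
    }
    return NULL;
}

/* Display thread: plain atomic loads, no mutex required. */
static void *display(void *arg)
{
    (void)arg;
    for (;;) {
        for (int i = 0; i < NWORKERS; i++)
            printf("%d ", atomic_load(&nums[i]));
        printf("\n");
        sleep(1);
    }
    return NULL;
}

int main(void)
{
    pthread_t tids[NWORKERS + 1];
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&tids[i], NULL, worker, (void *)i);
    pthread_create(&tids[NWORKERS], NULL, display, NULL);
    pthread_join(tids[NWORKERS], NULL);
    return 0;
}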
I am working on a C++ program consisting of two threads. The first thread receives packets through a UDP unicast connection and stores them in a buffer. The second thread reads the packets from the buffer and sends them through a UDP multicast connection. Both use blocking sockets and share a common buffer and a linked list L1, which are protected by mutexes. The program seemed to work just fine, receiving a packet and sending it almost immediately, but it started giving some trouble recently. The synchronization between the two threads started failing, and I decided to use a non-blocking socket in the sending thread. As a consequence, sendto() doesn't work in some cases, failing with errno 11 (EAGAIN, resource temporarily unavailable).
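On a non-blocking socket, errno 11 (EAGAIN/EWOULDBLOCK) just means the socket's send buffer is momentarily full. A hedged sketch of coping with it (send_retry and timeout_ms are illustrative names): wait for the descriptor to become writable with poll() and retry, instead of treating it as a hard failure:

Code:
#include <errno.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send one datagram on a non-blocking socket, waiting (up to timeout_ms)
 * for buffer space whenever sendto() reports EAGAIN/EWOULDBLOCK. */
ssize_t send_retry(int fd, const void *buf, size_t len,
                   const struct sockaddr *dst, socklen_t dstlen,
                   int timeout_ms)
{
    for (;;) {
        ssize_t n = sendto(fd, buf, len, 0, dst, dstlen);
        if (n >= 0)
            return n;
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;                       /* real error */

        struct pollfd pfd = { .fd = fd, .events = POLLOUT };
        if (poll(&pfd, 1, timeout_ms) <= 0)
            return -1;                       /* timed out or poll failed */
        /* socket is writable again: retry the sendto() */
    }
}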
I want to communicate between two threads, each belonging to a different process. I am using POSIX message queues for this, via the mq_open() call. I created the queues with the same queue name starting with a '/'. But when I open the queue, the queue ID is different in the two processes. What should I do so that both processes have the same queue ID?
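For what it's worth, mq_open() returns a per-process descriptor (mqd_t), much like a file descriptor, so the numeric values in the two processes need not match; as long as both processes pass the same name they operate on the same queue. A hedged sketch (the name /my_queue and the attributes are illustrative):

Code:
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

#define QNAME "/my_queue"   /* illustrative name; must start with '/' */

int main(void)
{
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10,
                            .mq_msgsize = 128, .mq_curmsgs = 0 };

    /* Both processes open the queue by the same name. The returned
     * descriptor is process-local and may differ between them, yet
     * both refer to the same underlying queue. */
    mqd_t q = mq_open(QNAME, O_CREAT | O_RDWR, 0644, &attr);
    if (q == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    const char *msg = "hello";
    mq_send(q, msg, strlen(msg) + 1, 0);

    char buf[128];
    ssize_t n = mq_receive(q, buf, sizeof(buf), NULL);
    if (n >= 0)
        printf("received: %s\n", buf);

    mq_close(q);
    return 0;
}

On Linux this typically needs linking with -lrt.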