Ubuntu Servers :: Run High-priority Multi-threaded Processes In The Background?
Apr 7, 2011
Using Ubuntu server 10.04.2 64-bit all up to date.
I am running multi-threaded processes. These use OpenMP in my own code and the multi-threaded ACML maths library. When run in the foreground, everything is fine i.e. if I have set
export OMP_NUM_THREADS=8
then when I start the program, all 8 cores are in use and things whizz along. However, when running overnight while logged out, using e.g. 'at now + 1 minute' followed by the command, I am only getting about 130% CPU and it slows down accordingly. I have tried renice'ing and calling from within a bash script, in case sh is doing something odd, but nothing seems to solve it. I am sure that in the recent past this wasn't the case.
The libraries being used are shared versions in case that might have any bearing.
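It may be worth verifying that the detached at job actually sees OMP_NUM_THREADS and that the parallel region still gets 8 threads there. Below is a minimal sketch of my own (an assumption, not the poster's code; build with gcc -fopenmp) that prints what the program sees and forces the thread count in code rather than relying on the shell environment:
Code:
/* omp_check.c -- sketch: report and force the OpenMP thread count so a
 * detached at/cron job does not depend on the login environment.
 * Build (assumption): gcc -fopenmp -O2 omp_check.c -o omp_check */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *env = getenv("OMP_NUM_THREADS");
    printf("OMP_NUM_THREADS as seen by the job: %s\n", env ? env : "(unset)");

    omp_set_num_threads(8);              /* explicit, independent of the shell */

    #pragma omp parallel
    {
        #pragma omp single
        printf("parallel region running with %d threads\n", omp_get_num_threads());
    }
    return 0;
}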
View 1 Replies
Apr 20, 2011
I ran into an inconsistency in handling timers (VTALRM) between AMD and Intel platforms with threads. My understanding was that the timers are per process. I discovered that on Intel I must call setitimer in each thread, though; AMD allows me to make the setitimer call once in the main thread, as expected. The code below demonstrates the issue. I must add the code in the INTEL define for it to work properly on Intel CPUs. Am I missing something dumb?
[Code]...
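Since the post's own code is elided above, here is a hedged sketch of mine (not the poster's code) of the per-thread workaround being described: each worker arms ITIMER_VIRTUAL itself while the process counts SIGVTALRM ticks. Build with gcc -pthread.
Code:
/* Sketch only: each thread calls setitimer() for ITIMER_VIRTUAL, mirroring
 * the workaround described above, and a process-wide handler counts ticks. */
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks;

static void on_vtalrm(int sig) { (void)sig; ticks++; }

static void *worker(void *arg)
{
    (void)arg;
    struct itimerval it;
    memset(&it, 0, sizeof it);
    it.it_interval.tv_usec = 10000;    /* 10 ms */
    it.it_value.tv_usec    = 10000;
    /* workaround described above: arm the virtual timer from inside the thread */
    setitimer(ITIMER_VIRTUAL, &it, NULL);

    volatile long i;
    for (i = 0; i < 200000000L; i++)   /* burn CPU time so SIGVTALRM can fire */
        ;
    return NULL;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_vtalrm;
    sigaction(SIGVTALRM, &sa, NULL);

    pthread_t t[4];
    int i;
    for (i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (i = 0; i < 4; i++) pthread_join(t[i], NULL);

    printf("SIGVTALRM ticks observed: %d\n", (int)ticks);
    return 0;
}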
View 4 Replies
View Related
Jun 15, 2010
Writing multi-threaded programs in C/C++.
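As a generic starting point (not tied to any particular project), a minimal POSIX threads example in C; compile with gcc -pthread:
Code:
/* Minimal pthreads sketch: start four threads and wait for them to finish. */
#include <pthread.h>
#include <stdio.h>

static void *hello(void *arg)
{
    long id = (long)arg;
    printf("hello from thread %ld\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    long i;
    for (i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, hello, (void *)i);
    for (i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}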
View 3 Replies
View Related
Aug 12, 2010
For quite some time now, I've been trying to implement a multi-threaded or forked TCP server to perform the following actions:
1) Listen for new connections on all IP addresses on a specific port.
2) Wait for a new connection to arrive.
3) When a new connection comes in, fork or thread to free up the listening process so it can go back to #2.
When handling a client after it's been forked/threaded:
4) Get the connected client's IP address.
5) Perform some in-house processing.
6) Read the data sent from the client (checking for HTTP connections).
7) Perform some more processing.
8) Write some data back to the client based on the above processing.
9) Close the socket/thread/fork.
I've tried implementing a solution similar to the multi-threaded example shown here: http://perl.active-venture.com/pod/p...-sockets.html, as well as looking at the CPAN solutions Net::Daemon and POE::Component::Server::TCP, but these don't seem to give me the information needed. The active-venture solution provides me with what I need, but randomly exits without reason/error, and when I attempt to change the 'for' loop to a 'while' loop, it keeps throwing up different errors.
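The listen/accept/fork flow itself is language-neutral, so here is a compact C sketch of the same steps, purely as an illustration of the pattern and not a fix for the Perl script; the port number 8080 is an arbitrary example and error handling is omitted for brevity.
Code:
/* Sketch of the forking TCP server pattern described above (C, not Perl). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    signal(SIGCHLD, SIG_IGN);                 /* let the kernel reap children */

    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;        /* 1) listen on all addresses */
    addr.sin_port = htons(8080);              /* assumed example port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 16);

    for (;;) {
        struct sockaddr_in cli;
        socklen_t len = sizeof cli;
        int fd = accept(srv, (struct sockaddr *)&cli, &len);   /* 2) wait */
        if (fd < 0)
            continue;

        if (fork() == 0) {                    /* 3) child handles this client */
            close(srv);
            char ip[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, &cli.sin_addr, ip, sizeof ip);  /* 4) client IP */
            char buf[4096];
            ssize_t n = read(fd, buf, sizeof buf);             /* 6) read request */
            if (n > 0)
                write(fd, "OK\n", 3);                          /* 8) reply */
            close(fd);                                         /* 9) done */
            _exit(0);
        }
        close(fd);                            /* parent loops back to accept() */
    }
}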
View 2 Replies
View Related
Jul 24, 2010
I'm trying to add local sockets to my multi-threaded application in order to exchange data between threads. The only problem is that most of the information available on the net relates to internet-oriented socket programming, while I want to perform local connections. I've got a thread that does the sniffing via libpcap, and I would like that thread to send each captured packet to a second thread that will analyse the packet. Each of the thread implementations is written in a separate .h file. Or maybe there is a more effective method of exchanging data between threads?
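If the connection never leaves the process, a socketpair() in the AF_UNIX domain (or simply a mutex-protected queue) works without any addresses or ports. Below is a small sketch of the idea, not using libpcap; one thread plays the sniffer and the other the analyser, and the payload strings are placeholders. Build with gcc -pthread.
Code:
/* Sketch: two threads exchanging data over a local socketpair. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int sv[2];   /* sv[0]: sniffer end, sv[1]: analyser end */

static void *sniffer(void *arg)     /* stands in for the libpcap thread */
{
    (void)arg;
    int i;
    for (i = 0; i < 5; i++) {
        char pkt[64];
        snprintf(pkt, sizeof pkt, "captured packet %d", i);  /* placeholder payload */
        write(sv[0], pkt, strlen(pkt) + 1);
    }
    return NULL;
}

static void *analyser(void *arg)    /* stands in for the analysis thread */
{
    (void)arg;
    char buf[64];
    int i;
    for (i = 0; i < 5; i++) {
        if (read(sv[1], buf, sizeof buf) > 0)
            printf("analysing: %s\n", buf);
    }
    return NULL;
}

int main(void)
{
    /* SOCK_DGRAM keeps each write as one discrete message */
    socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);

    pthread_t p, c;
    pthread_create(&p, NULL, sniffer, NULL);
    pthread_create(&c, NULL, analyser, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);

    close(sv[0]);
    close(sv[1]);
    return 0;
}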
View 14 Replies
View Related
Apr 29, 2011
I am implementing a proxy server in C++. It is multithreaded (POSIX). CPU used: Xeon (8 cores). Thread count: 8, i.e. one main thread plus 7 threads created by the main thread. The main thread always listens on the ports. When the main thread gets client data, it pushes the request into a queue (there is one queue per worker thread, 7 in total) chosen by IP, and then signals the appropriate thread. That thread then takes the request from its queue, processes the data, and forwards it to the appropriate destination.
There is another important thing: I assign each thread except the main thread to an individual core using affinity. The main thread listens on 5 ports. Test environment: we run the server and the client sends audio data at a particular rate.
1. The main thread CPU usage gets overloaded (above 80%) after a certain load from client.
2. Other cores remain about 0-10%.
The thing is that we want to distribute the load among all the cores equally by multithreading. But how can we do this? Can the port-listening task also be distributed? I need an efficient algorithm for load balancing among threads. The server's combined send and receive rate is about 8.5 MB/s. How can we improve this? We are using a gigabit LAN card. When the server only receives data from the client it can receive above 80 MB/s, but when it both receives and sends data simultaneously it only manages up to 8.5 MB/s.
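For reference, here is a small hedged sketch of the affinity mechanism mentioned above (pthread_setaffinity_np is a glibc extension, so _GNU_SOURCE is required). It only shows how workers get pinned to cores and leaves out the queues and networking; the throughput drop when sending and receiving at the same time may have more to do with blocking I/O in the listening thread than with affinity, but that is only a guess.
Code:
/* Sketch: pin each worker thread to its own core.
 * Build: gcc -pthread affinity.c -o affinity */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NWORKERS 7    /* as in the setup above: 7 workers plus the main thread */

static void *worker(void *arg)
{
    long core = (long)arg;

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);

    printf("worker pinned to core %ld\n", core);
    /* ... pop requests from this worker's queue and process them here ... */
    return NULL;
}

int main(void)
{
    pthread_t t[NWORKERS];
    long i;
    for (i = 0; i < NWORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)(i + 1));  /* cores 1..7 */
    for (i = 0; i < NWORKERS; i++)
        pthread_join(t[i], NULL);
    return 0;
}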
View 1 Replies
View Related
Dec 18, 2009
I want to set my program to the highest priority for real-time processes. I understand the priority for real-time processes ranges from 1-99, but I get completely confused after reading Understanding the Linux Kernel and the manual page for sched_get_priority_max(): which value has higher priority, 1 or 99? Understanding the Linux Kernel says 1 has the highest priority, but sched_get_priority_max() returns 99 for the SCHED_FIFO and SCHED_RR policies.
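For what it's worth: in the user-space API (sched_setscheduler() and friends) a numerically larger value means a higher priority, so 99 is the top for SCHED_FIFO/SCHED_RR; the book is describing the kernel's internal representation, which is inverted. A minimal sketch (it must run as root or with CAP_SYS_NICE, and a runaway SCHED_FIFO task can starve the rest of the system):
Code:
/* Sketch: request the highest SCHED_FIFO priority for the current process. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp;
    sp.sched_priority = sched_get_priority_max(SCHED_FIFO);   /* 99 on Linux */

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");    /* needs root / CAP_SYS_NICE */
        return 1;
    }
    printf("now SCHED_FIFO at priority %d\n", sp.sched_priority);
    /* ... real-time work goes here ... */
    return 0;
}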
View 3 Replies
View Related
Nov 20, 2010
It is now almost 3 o'clock in the night here, and I've spent the last 5 hours trying to fix this problem. CentOS always runs perfectly here. I am not a genius with the terminal, so I wanted to install GNOME on my VPS, then VNC, but it didn't work and I did a restart.
Now got this:
[root@srv1 conf]# apachectl start
Syntax error on line 41 of /etc/httpd/conf/httpd.conf: Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration
View 1 Replies
View Related
Nov 30, 2008
I can be granted permission, and I'm in the proper groups, but pulseaudio tells me I can't get permission. I wanted to see if anybody else had this problem / knew of a solution / could isolate whether it is a pulseaudio, PolicyKit, or whatever problem, before I post a bug upstream.
Code:
[Sun 07:25 AM] ~ $ rpm -qi pulseaudio | awk '/Version|Release/ {print $1$2$3}'
Version:0.9.13
Release:3.4
[Sun 07:25 AM] ~ $ groups
[Code].....
View 8 Replies
View Related
Nov 21, 2010
Prevent Flash from running threads at high priority.
View 6 Replies
View Related
Apr 9, 2010
I am getting a strange problem with my new machine (P4 3 GHz, 1 GB DDR333 RAM). The machine is an industrial PC. Initially I had installed Fedora Core 2 on it, and it ran superbly without any problem. I then tried to load Red Hat Enterprise Linux WS4 (Update 2 as well as Update 5) on it, but the PC was giving high CPU utilization for each and every task. Without any application running, both cores show utilization around 10%. But when I run my application, the CPU utilization on one of the cores goes to 100% for the majority of the time. This causes my application to run slowly compared to the same application running on the same machine under Fedora Core 2 (peak CPU utilization around 17% on either core under Fedora Core 2). Recently I installed CentOS 5 on it, but the behaviour of the PC remains the same as with Red Hat Enterprise Linux WS4. Somewhere on the forums I had read about the RAM size, so I tried downgrading the RAM from 1 GB to 256 MB, but the problem remains the same. I think it has to do with some kernel tweaking.
View 8 Replies
View Related
Sep 2, 2010
The following code is for monitoring the memory used by Apache processes. The problem is that the total I get from this script is much larger than the physical memory. It was suggested that some libraries are shared simultaneously by many processes, so my total double-counts those shared pages, because Apache runs many httpd processes.
Does anyone have an idea of how to measure the memory actually used by a group of processes? (A PSS-based C sketch follows the script below.)
Code:
#!/bin/sh
#G.sh 20100813
USAGE="Usage: $0 processName"
[Code].....
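One way to avoid double-counting shared library pages is to sum the Pss (proportional set size) values from /proc/<pid>/smaps, which divide each shared page evenly among the processes mapping it. A hedged C sketch of the idea follows; the pgrep invocation in the comment is just an assumed way of supplying PIDs.
Code:
/* Sketch: sum the Pss of a set of PIDs so pages shared between httpd
 * processes are only counted once in the total.
 * Assumed usage: ./pss $(pgrep httpd) */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    long total_kb = 0;
    int i;
    for (i = 1; i < argc; i++) {
        char path[64], line[256];
        snprintf(path, sizeof path, "/proc/%s/smaps", argv[i]);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;                        /* process may have exited */
        while (fgets(line, sizeof line, f)) {
            long kb;
            if (sscanf(line, "Pss: %ld kB", &kb) == 1)
                total_kb += kb;
        }
        fclose(f);
    }
    printf("total PSS: %ld kB\n", total_kb);
    return 0;
}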
View 14 Replies
View Related
Jun 27, 2011
I try to open an application, and it's painfully slow, and I think "this is weird, I had it open just 10 seconds ago and it should be in the cache". The disk is constantly working, so maybe I ran out of RAM and it's swapping. So I check: according to 'free -m', about 200 MB of the 2 GB of RAM is used (without buffers/cache), but still about 200 MB of swap is used, even though swappiness is set to 1 (low tendency to swap). So this is not normal. Also, kswapd0 is eating pretty much CPU (but not 100%). Switching to tty1, logging in, and starting iotop takes about a minute. According to iotop, about 8 random processes (could be a browser, could be some daemon) have 99% IO activity (I'm not sure what that means; how can 8 processes take 99% each?). After the first 3 times, I disabled swap by adding swapoff -a to rc.local, but it's still the same, and kswapd is still among the CPU-eaters.
View 4 Replies
View Related
Jun 13, 2010
I am a bit confused about MPMs. I read an article here: [URL]. I still have a few very basic doubts:
1. What exactly is an MPM? A module has a specific function to perform, so what is the specific function of an MPM?
2. What are the "multi processes" it handles? Is it connections?
Quoting from the articles:
"The main difference between MPMs and normal modules is that only one of the former can be used and multiple ones can be loaded in the latter".
3. There are multiple MPMs, but don't they operate differently and possibly conflict when more than one is loaded and operating?
I am really looking for the concept behind MPMs.
View 2 Replies
View Related
Apr 28, 2010
I just switched to Fedora from Windows recently, and I love the Fedora terminal a lot. The problem is that when I run a command in the terminal, I need to wait for that command to finish before executing another one. This is very inconvenient: say I open Eclipse from the terminal, the Eclipse program will hog the terminal until I close it, so if I want to use the terminal again I have to open another one. Hence the question: is there any way to run multiple processes (commands) from only one terminal?
View 4 Replies
View Related
Feb 2, 2009
A few days ago, the server did not respond to an SSH request from a user at night. The user tried to check what went wrong with the computer and tried to log in from the terminal the next morning. As the computer was unresponsive, he somehow decided to reboot it by turning the power off. To make the story short, the server rebooted; however, he couldn't log in to his account. Actually, the server could not start some processes, but it was able to prompt the user for his username. Even though he enters the correct username and password, the server does not accept the login. I also could not log in as root.
I just checked the server logs by booting it in single user mode. Here are some interesting lines:
Before the reboot:
irqbalance : can't balance irqs on a uniprocessor system: failed
After the reboot:
irqbalance : can't balance irqs on a uniprocessor system: failed
fsck:
fsck /: (this is repeated 900+ times)
[code]....
View 1 Replies
View Related
Feb 2, 2009
A few days ago, the server did not respond to an SSH request from a user at night. The user tried to check what went wrong with the computer and tried to log in from the terminal the next morning. As the computer was unresponsive, he somehow decided to reboot it by turning the power off. To make the story short, the server rebooted; however, he couldn't log in to his account. Actually, the server could not start some processes, but it was able to prompt the user for his username. Even though he enters the correct username and password, the server does not accept the login. I also could not log in as root.
I just checked the server logs by booting it in single user mode. Here are some interesting lines:
Before the reboot:
irqbalance : can't balance irqs on a uniprocessor system: failed
After the reboot:
irqbalance : can't balance irqs on a uniprocessor system: failed
fsck:
This might be something related to the shadow file.
Here is part of /etc/shadow
View 3 Replies
View Related
Oct 19, 2010
I am using Ubuntu 10.04, and I have attached my desktop screenshot. My problem is that I use no software that uses the network, but the OS still uploads data from my PC. Is this something to worry about for my network security? And is there any way to check which program is using the network and how much it is using? [URL]...
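For the "how much" part, the kernel's per-interface counters in /proc/net/dev can be sampled; a small sketch follows (the interface name eth0 is an assumption). Attributing traffic to a particular program needs a per-process tool, which this sketch does not attempt.
Code:
/* Sketch: sample /proc/net/dev twice and print the transmit rate of one interface. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* read the TX byte counter (9th numeric field) for one interface, e.g. "eth0" */
static long long tx_bytes(const char *ifname)
{
    FILE *f = fopen("/proc/net/dev", "r");
    char line[512];
    long long result = -1;
    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        char name[32];
        long long rx, d, tx;
        if (sscanf(line, " %31[^:]: %lld %lld %lld %lld %lld %lld %lld %lld %lld",
                   name, &rx, &d, &d, &d, &d, &d, &d, &d, &tx) == 10
            && strcmp(name, ifname) == 0)
            result = tx;
    }
    fclose(f);
    return result;
}

int main(void)
{
    const char *ifname = "eth0";          /* assumed interface name */
    long long a = tx_bytes(ifname);
    sleep(1);
    long long b = tx_bytes(ifname);
    if (a >= 0 && b >= 0)
        printf("%s: %lld bytes/s transmitted\n", ifname, b - a);
    return 0;
}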
View 1 Replies
View Related
Apr 11, 2011
I have written a simple script which has to find required patterns in a bunch of files (where each file is around 2 GB and contains the output of seq 1 10000000000000) on an 8-core machine. I am currently forking 6 child processes which run simultaneously on 6 cores of the processor; each has to search for the required pattern in a different file and inform the parent process through a pipe when a pattern is found.
The problem is that when a child process is done reading its text file looking for the pattern, it becomes a zombie process. It exits cleanly when I put $SIG{CHLD} = "IGNORE"; in the script. Can anyone tell me what's going on, and how do I improve the communication between the child and parent processes?
Code:
#!/bin/perl
use strict;
[code]...
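Because the Perl code is elided above, here is a hedged C sketch (not the poster's code) of the underlying issue: children stay as zombies until the parent either waits for them or ignores SIGCHLD, which is what $SIG{CHLD} = "IGNORE" does in Perl. The standard alternative is a CHLD handler that reaps with waitpid in a non-blocking loop, shown below in C.
Code:
/* Sketch (C, not Perl): reap finished children in a SIGCHLD handler while the
 * parent reads results from a pipe, so no zombies accumulate. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static void reap(int sig)
{
    (void)sig;
    /* collect every finished child; without this (or SIG_IGN) they linger as zombies */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = reap;
    sa.sa_flags = SA_RESTART;        /* let the blocking read on the pipe restart */
    sigaction(SIGCHLD, &sa, NULL);

    int pipefd[2];
    pipe(pipefd);

    int i;
    for (i = 0; i < 6; i++) {
        if (fork() == 0) {                         /* child: pretend to search a file */
            char msg[64];
            int len = snprintf(msg, sizeof msg, "child %d: pattern found\n", i);
            write(pipefd[1], msg, len);
            _exit(0);
        }
    }
    close(pipefd[1]);                              /* parent keeps only the read end */

    char buf[256];
    ssize_t n;
    while ((n = read(pipefd[0], buf, sizeof buf - 1)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
    }
    return 0;
}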
View 3 Replies
View Related
Jan 4, 2011
I have something like:
cd project && python manage.py runserver &
cd utilities && ./coffee_auto_compiler.py
And I want both of them to close on Ctrl-C (or some other command). How can I accomplish that?
EDIT: I tried using jobs -x kill and kill `jobs -p`, but it doesn't seem to kill what I need. Here is what I mean:
moon 8119 0.0 0.0 7556 3008 pts/0 S 13:17 0:00 /bin/bash
moon 8120 6.8 0.4 24568 18928 pts/0 S 13:17 0:00 python manage.py runserver
jobs -p gives me just process 8119, but I also need to close 8120, since it's the thing that the first command opened. If it helps, the commands are actually in a Makefile, and I want it to run two daemons at the same time (and somehow close them at the same time).
View 3 Replies
View Related
Sep 10, 2010
While executing the df command on an AIX console, I ended the line with an ampersand by mistake:
[Code]...
View 5 Replies
View Related
Sep 22, 2010
I wrote a program that multiplies 2 matrices using multiple threads, and another one using multiple processes and shared memory, both in C. I need to find the total memory usage of these programs. I know of the top command, but when my matrices are relatively small the programs don't even show up in top because they complete so fast. How can I find the memory usage in these cases? Also, how can I find the total turnaround time of my programs?
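One hedged approach (a sketch, not the only way) is to have each program report its own peak resident set size and wall-clock turnaround just before exiting, via getrusage() and clock_gettime(); on Linux ru_maxrss is in kilobytes, and older glibc needs -lrt for clock_gettime. Externally, /usr/bin/time -v reports the same maximum resident set size even for very short runs.
Code:
/* Sketch: report the program's own peak RSS and wall-clock turnaround time. */
#include <stdio.h>
#include <sys/resource.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* ... the matrix multiplication (threads or processes) would run here ... */

    clock_gettime(CLOCK_MONOTONIC, &t1);

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);   /* also query RUSAGE_CHILDREN for forked workers */

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("turnaround: %.3f s, peak RSS: %ld kB\n", secs, ru.ru_maxrss);
    return 0;
}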
View 1 Replies
View Related
Dec 9, 2010
I have BT4 as an ISO image and start it up by booting from CD. When I try the command startx at the root prompt, it comes up with this fatal error. What can I do to get this to work? And of course I'm new at this.
View 5 Replies
View Related
May 12, 2011
I'm trying to dump a mysql database on a small web server without killing performance. I tried using the nice command to give the mysqldump and gzip a low priority, but gzip is still taking up 100% CPU. Pages on the web server are loading incredibly slow. Here's my command:
Code:
nice -n 19 mysqldump -u USER -pPASSWORD DATABASE | nice -n 19 gzip -9 > OUTFILE.sql.gz
How do I get gzip to run without taking up 100% CPU? I've attached a screenshot of top about 8 seconds into the dump.
View 2 Replies
View Related
Sep 22, 2010
I have an issue on one of my servers whereby the [normally very helpful] du and tar programs are somehow using up too much of my system resources (du 40% mem, tar 20% mem) and causing problems. I am after a command which is able to kill a process without knowing its PID, selecting it instead by process name, e.g. "du", and memory usage, e.g. >= 10%.
Something along the lines of:
kill $(pgrep du) grep %MEM > 10
Although I know that is invalid syntax I cannot fathom the correct/best way to achieve this end!
View 9 Replies
View Related
Jun 22, 2011
After an update recently I noticed that my process count jumped up quite a bit. Somehow it doesn't seem related (it was an apt update, I believe), but I'll just throw it out there. All of the extra processes seem to be related to XFS and JFS file system kernel processes, but none of my file systems use XFS or JFS, just EXT3 and EXT4. Is there any safe/easy way to kill off these processes and prevent them from re-spawning? I don't find having irrelevant idle processes to be beneficial or efficient. I'm using Ubuntu 10.04 64-bit; the only active file systems are EXT4 and EXT3.
[Code]...
View 2 Replies
View Related
Feb 22, 2010
I'm looking for a command that will give me a list of users (unique, i.e. don't list my user account 60 times) that are running processes on a system.
View 5 Replies
View Related
Apr 1, 2010
When I open top and look at the running processes, there are a bunch that have a nice value of -5, while everything else is at 0.
[Code]....
View 4 Replies
View Related
Apr 19, 2010
I have several web servers running Ubuntu 8.04 64-bit server and occasionally Apache sends my load to 13 and higher.
Is there a log that actually tracks the system load levels and possibly the processes running at the time and their percentage of the load?
At the basic level, what I am looking for would be something like a log of top's output, but not exactly that.
View 1 Replies
View Related
May 19, 2010
The Machine
Core 2 Duo E4600
2GB DDR2 RAM (1 stick)
Intel ICH10R based motherboard (tried an ICH9R aswell)
4-port SATA controller (PCI Sil 3114)
O/S: Ubuntu Desktop x64 10.04 LTS (using 'desktop' because I like having a remote desktop)
The Storage Setup. Disks: an assorted selection of 9 disks: 750 GB, 1000 GB and 1500 GB Seagate and Western Digital drives. The disks are joined through a standard LVM2 configuration. I don't know the LVM term, but normally you'd call it a JBOD setup. On that LVM device, I've put a cryptsetup device, made with the LUKS tools (aes-xts-plain, 256-bit). On the cryptsetup device, I've created and mounted an EXT4 partition.
All in all, a completely standard LVM2 and LUKS setup, running EXT4. After a reboot, I proceed to unlock my cryptsetup encryption device, and then mount the EXT4 partition. All is well, the mount is accessible and everything looks fine. I then try to send a file to the mount via Samba. After a few hundred MB have been written, the I/O wait goes berserk. It stays at 50% (dual-core setup, remember). The system becomes unresponsive to network commands (can't browse Samba) for about 5-10 minutes. When it finally responds, the I/O wait is gone and everything is now fine. I can write and read hundreds of GBs of data without any issues at all. I can benchmark and stress all disks perfectly fine, and no logs show disk errors.
I tried monitoring my disks with 'iostat -d 2' while the I/O wait was happening, and there is some slight Blk_read/s activity on 1 disk at a time. First, for example, /dev/sda shows a little Blk_read/s activity, then it jumps to the next disk, and when every disk has shown that slight Blk_read/s activity (500-800 or so) the problem is gone and the I/O wait is no more. I've tried changing motherboards, switching disks around on the controllers, checking individual disks, replacing disks, and I've tried different versions of Ubuntu. The problem however persists. I could see it being a network issue, possibly a driver issue, but since the NIC is a standard on-board RTL8111 that seems unlikely; the problem would presumably be more widespread, given that this NIC is literally used everywhere. I did change my motherboard, so a faulty NIC seems unlikely twice in a row.
View 9 Replies
View Related