On our server, top shows a high "si" value: softirq usage is very high, around 90% on one core. How can I determine the reason for this? Is there any way to find out what is causing these softirqs, i.e. which application or piece of hardware is generating them?
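A hedged starting point: the kernel exposes per-CPU softirq counters, so you can see which softirq class (NET_RX, NET_TX, TIMER, ...) is growing on the busy core and map it back to a device via the hardware interrupt table:
Code:
watch -n1 'cat /proc/softirqs'        # per-CPU counters for each softirq type
watch -n1 'cat /proc/interrupts'      # hardware IRQs, to relate activity to a NIC or disk controller
mpstat -P ALL 1                       # per-CPU %soft column (sysstat package)
Very high NET_RX on one core usually points at a single-queue NIC handling all the traffic on that CPU.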
Several days ago I replaced my PC's motherboard and now the sound is too low - I can hear practically nothing. I used alsamixer to set the output volume to a normal level, but this does not survive a reboot. I tried saving the levels with 'alsactl' and editing rc.local to restore them, but it didn't help. Can anyone tell me how to fix it?
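For reference, a minimal sketch of the usual store/restore cycle (state file paths vary by distribution; the card number is an assumption if you have more than one):
Code:
alsamixer                 # set the levels you want
alsactl store             # save them (typically to /etc/asound.state or /var/lib/alsa/asound.state)
# in /etc/rc.local, before any final "exit 0":
alsactl restore
If the restore still does not stick, check whether the new board exposes a different card/control name, since a saved state for the old chip will not apply to the new one.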
I am trying to split a PDF into component pages that are of equal quality to the original. If I use display mypdf.pdf, the PDF is split into pages of acceptable quality; the problem is that I have to save each page individually. If, however, I use convert mypdf.pdf mypdf.bmp, I get the individual pages of the PDF in .BMP format (which is fine, but not exactly what I want), but the quality is substantially lower than the original. I've tried dozens of command combinations to try to raise the quality, but to no avail.
Even if I do convert mypdf.pdf mypdfagain.pdf, there is a big loss of quality. Is anyone familiar with splitting a PDF into individual pages without a loss in quality? Ideally, I would just save all the "scenes/frames" from display, but that feature unfortunately does not exist (though I may endeavour to add it myself if no formal solution exists).
NOTE: I think part of my problem might be this: identify mypdf.pdf shows the specified resolution, and after conversion the resolution is much lower. That could be a source of quality loss, but I'm not familiar enough with image conversion to say for sure. Whatever this command does, it removes the extra layer (or whatever it is) that prevents OCR from succeeding, and I'd really like to understand that technology. What is it about a PDF that allows someone to embed metadata into every page so that the only thing seen, say by OCR or a text-search function, is the embedded text?
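A couple of hedged suggestions: ImageMagick rasterizes PDFs at 72 dpi by default, which matches the quality drop you describe, so setting -density before the input usually helps. For a genuinely lossless split, tools that copy PDF pages instead of re-rendering them avoid the problem entirely:
Code:
# rasterize at a higher resolution (the value is illustrative):
convert -density 300 mypdf.pdf page-%03d.png
# lossless page-by-page split, no re-rendering (poppler-utils):
pdfseparate mypdf.pdf page-%d.pdf
# or with pdftk:
pdftk mypdf.pdf burst output page_%03d.pdf
As for the OCR/text-search behaviour: a PDF can carry an invisible text layer on top of the page image (this is how scanned-and-OCRed PDFs work), and rasterizing with convert throws that layer away, leaving only the picture.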
I am somewhat stuck providing a solution for the above problem. I have achieved the failover using keepalived, but I am not sure how to replicate the data from one server to the other seamlessly and keep them in sync. My main requirement for this project is that the end user should not notice the failover, and a replicated copy of the data must be available on the secondary as well.
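A hedged sketch of the simplest approach - periodically pushing the data to the secondary with rsync (paths and hostname are assumptions); for truly seamless, near-real-time replication, block-level mirroring with DRBD or event-driven syncing with lsyncd are the usual next steps:
Code:
# crontab entry on the primary: push the web root to the secondary every minute
* * * * * rsync -az --delete /var/www/ backup-server:/var/www/
The trade-off is replication lag: a cron-driven rsync can lose up to one interval of changes on failover, which DRBD avoids at the cost of more setup.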
I have a problem whereby some 20-odd servers in my network regularly show high NTP offset values. The configured threshold is 55, but whenever it is exceeded I get alerts. My questions are:
1) How do I find out what is causing the offset value to be high?
2) I have a workaround in mind, which is to create a cron job that restarts ntp daily or hourly (a sketch is below) - is this workaround an acceptable practice?
My main concern is to find out what is causing the frequent offset spikes/increases. A real example from my servers:
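For question 1, a hedged starting point is to ask ntpd itself what it thinks of its time sources; unreachable upstream servers or high jitter there usually explain offset spikes. For question 2, a scheduled restart is generally treated as masking clock drift rather than fixing it, but the cron line would look roughly like this (the init script path is an assumption):
Code:
ntpq -p              # peer list: check the reach, delay, offset and jitter columns
ntpq -c rv           # system variables, including the frequency drift ntpd has learned
ntptime              # kernel clock discipline status, if available
# the hourly restart workaround from question 2:
0 * * * * /etc/init.d/ntpd restart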
On an old server of mine, as soon as Apache is started, the load average shown by 'top', which is normally under 1, steadily climbs up to easily 150, effectively preventing the web server from serving any page. I've checked netstat, and I'll try to upload the output. The IPs that show up there I've blocked with iptables, but that doesn't seem to help. I see nothing strange in the error logs. As soon as I stop Apache the load goes back to normal; as soon as I (re)start it, up it goes again. What can cause this, and how do I get rid of it?
P.S. It's an old server, Fedora 3 or so, and I've got a new one to which I'll transfer the domains, but until that's completely done I'd like this one to keep running as it has for years...
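A couple of hedged first checks, since the load climbs only while Apache is serving: count connections per client IP and see what the busy children are actually doing (D state usually means they are stuck on I/O or a backend, R means they are burning CPU):
Code:
netstat -ant | grep ':80 ' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head
top -c               # watch the state column for the httpd processes
# with mod_status enabled, /server-status?refresh=5 shows the request each child is handling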
The load average is almost 1.06, but the CPU is not 100% utilized... I am just wondering why the load average is still 1.06 in that case (I monitored the CPU for a long time and it never exceeded 40%). Can anyone explain the reason behind it? Also, is the system over-utilized in this case?
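One likely explanation: on Linux the load average counts not only runnable (CPU-hungry) processes but also processes in uninterruptible sleep, typically waiting on disk or NFS I/O, so load can sit above 1 while the CPU stays largely idle. A load average near 1 on a multi-core machine is not over-utilisation by itself. A hedged way to check:
Code:
ps -eo state,pid,cmd | grep '^D'     # processes in uninterruptible sleep contribute to load
vmstat 1 5                           # the wa column shows time spent waiting for I/O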
I am running Red Hat Enterprise Linux 5 Server and facing a problem with a very high server load (the load average goes up to 60-70), due to which the server hangs.
I can see a high amount of memory and swap usage on the server; upon investigation I found that a high number of page faults are happening, more than 300. Is there any option to fix the page faults, and how can I release the locked memory without using drop_caches?
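A hedged note: minor page faults are cheap and normal; it is the major faults (pages read back from disk or swap) that hurt. The sketch below separates the two and points at the worst offenders, which is usually more productive than trying to suppress faults globally:
Code:
sar -B 1 5                       # fault/s vs majflt/s (sysstat package)
vmstat 1 5                       # si/so columns show swap-in / swap-out pressure
ps -eo pid,min_flt,maj_flt,rss,cmd --sort=-maj_flt | head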
I have an Apache cluster with more than 10 nodes, based on ldirectord and heartbeat. The problem is that I cannot predict whether my nodes will handle the traffic on the next day (we host a website based on daily campaigns). So I decided to limit the number of active connections on the nodes (from Apache), but this is only a temporary solution. I want to create a page that is shown to users who arrive once the limit is exceeded. Has anyone done this before? Can you tell me how it is possible (I don't want a how-to, just a starting point to study)? I think Squid can do it, but I don't know what to search for. For an example of what I want, you can see the same thing on deviantART.
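I have not built exactly this, but one hedged starting point at the Apache level is to cap the worker count and give the 503 response a friendly static page; the 503 itself normally comes from whatever fronts the node (ldirectord health checks or a proxy) when the backend is saturated, so treat this as a sketch of the idea rather than a complete solution:
Code:
# httpd.conf (prefork values are illustrative)
<IfModule prefork.c>
    MaxClients 150
</IfModule>
ErrorDocument 503 /busy.html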
I am working on Ubuntu 8.04.3 and I am having a problem: every day my server goes down at the same time, 4:00 PM. It seems the server is being brought down by the "kswapd0" process, but I am not sure. When I run the top command, I get the output below
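Since it happens at a fixed time every day, a hedged first step is to look for anything scheduled around 16:00 and to watch memory pressure as that time approaches; kswapd0 going wild is usually a symptom of something else eating memory, not the cause:
Code:
awk '$2 == "16"' /etc/crontab /etc/cron.d/* 2>/dev/null    # jobs whose hour field is 16
crontab -l                                                 # repeat for other users
vmstat 1                                                   # run this shortly before 4 PM
free -m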
I have 2 web servers running Apache, hosted at 2 data centres on 2 different IP ranges. The 2 servers are exact clones of each other, hosting www.example.com. What I am trying to achieve is automatic failover. Say my first data centre gets wiped out: how would customers reach my website on the second server in the second data centre by still typing www.example.com? The aim is for the customer not to notice any difference.
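There is no single built-in answer here; the common approaches are DNS-based failover with short TTLs (a monitoring service rewrites the record when the primary dies), BGP/anycast if you control the address space, or a commercial global load balancer. A minimal sketch of the DNS approach, with illustrative documentation IPs:
Code:
; zone file sketch - the health-check service that rewrites this record is the real failover logic
www     60      IN      A       192.0.2.10      ; primary data centre, 60-second TTL
; on failure the record is switched to the secondary, e.g. 198.51.100.10
Keep in mind that some resolvers ignore very low TTLs, so a small fraction of users may still hit the old address for a while after a switch.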
I am working on a Linux server with the following specifications: Linux EDT 2008 i686 i686 i386 GNU/Linux. I check the status of the server using the command 'opmnctl status', and when the server is down the output is not getting redirected to the file. I am using the command: opmnctl status > abc.txt.
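A hedged guess: when the server is down, opmnctl probably prints its message to stderr, and a plain '>' only captures stdout. Redirecting both streams should catch it:
Code:
opmnctl status > abc.txt 2>&1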
I am running Apache 2.2.3 on CentOS release 5.3 (Final) with 100 sites. I've noticed that Apache is making my server swap around 200 MB. "http://www.xxx.yyy.zzz/server-status" doesn't show me much either, so I am looking at the behaviour of a specific httpd process. Process ID "18753" is the one serving "http://www.xxx.yyy.zzz/server-status" in my browser.
This command shows me (in KB) how much private dirty memory that specific process is using:
Code:
# /etc/init.d/httpd start
# grep Private_Dirty /proc/18753/smaps | awk '{ print $2 }' | xargs ruby -e 'puts ARGV.inject { |i, j| i.to_i + j.to_i }'
3012
Running this command many times gives me the same output, but suddenly...
Code:
# grep Private_Dirty /proc/18753/smaps | awk '{ print $2 }' | xargs ruby -e 'puts ARGV.inject { |i, j| i.to_i + j.to_i }'
21708
Something makes that process (and all the other httpd processes too) use a lot more memory!
Part of my httpd.conf:
Code:
# Timeout 120
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3
My server is also running MySQL 5.1.34, vsftpd 2.0.5 and BIND 9.3.4-P1 (as a slave). I couldn't find anything running at the specific time when the httpd processes start to use that much memory.
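A hedged suggestion for narrowing it down: the same Private_Dirty sum can be computed with awk alone and logged on a timestamped loop, so the moment of the jump can be lined up against the access log for that process's virtual hosts:
Code:
while true; do
    echo "$(date '+%H:%M:%S') $(awk '/Private_Dirty/ {sum += $2} END {print sum}' /proc/18753/smaps) kB"
    sleep 10
done
A sudden jump that never comes back down often points at a PHP request with a large memory_limit being handled by that child; the child keeps the pages until it is recycled, which is what MaxRequestsPerChild is for.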
Now that I have set up a proxy server, as a next step I want to run it in failover / high-availability mode, so that if one proxy goes down for any reason, the second proxy automatically takes over and starts serving requests.
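A minimal keepalived sketch, assuming two proxy boxes sharing a virtual IP that clients point at (the interface, VIP and router id are assumptions; the second box gets state BACKUP and a lower priority):
Code:
# /etc/keepalived/keepalived.conf on the primary
vrrp_instance PROXY_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.0.200
    }
}
Clients (or the browser/WPAD configuration) use 192.168.0.200 as the proxy address, so whichever box holds the VIP serves the requests.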
We are using WordPress with MySQL. The app and DB servers are separate machines, each with 6 CPUs, 6 GB RAM and a 32-bit processor. We noticed recently that the mysqld process is using far too much of the system - up to 100% CPU utilisation under a load of 1200-1600 concurrent users.
Pasting my my.cnf file:
Code:
# The following options will be passed to all MySQL clients
[client]
port = 3306
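Before tuning buffers, a hedged first step is to find out which queries are burning the CPU; WordPress at that concurrency usually suffers from un-indexed plugin queries. A sketch for the [mysqld] section (5.1-style variable names; older 5.0 servers use log_slow_queries instead):
Code:
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
# afterwards, summarise the worst queries:
# mysqldumpslow -s t /var/log/mysql/slow.log | head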
I have a Nagios server with a lot of hosts and services: around 400 services in all and 150 hosts. Most of these service checks are plugins written in bash. The problem is that the server has a high load average, between 5 and 11. The server has the following specifications:
- Intel Xeon 2.66 GHz dual core
- 4 MB cache
- 1 GB RAM
- 50 GB hard disk
Is this load average normal? Should I rewrite the plugins in C?
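Two hedged checks before rewriting anything in C: measure how long one bash plugin actually takes (every check forks a shell plus whatever external commands it calls, and 400 checks on a frequent interval adds up on 1 GB RAM), and look at Nagios' own latency figures:
Code:
time /usr/lib/nagios/plugins/check_ping -H 127.0.0.1 -w 100,20% -c 200,60%   # path and plugin are examples
nagiostats        # average check latency and execution time, if the binary is installed
Spreading out or lengthening check intervals for non-critical services is often a cheaper win than porting plugins to C.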
I am using Red Hat Linux 9.0 with a Squid proxy server. My problem is that my Squid server is responding very slowly: whenever I try to open sites, the page starts to load only after 3 or 4 seconds, and often Squid does not load the complete site - it stops in the middle. My Squid configuration is below. Do I need to tune system parameters, for example in sysctl.conf, for better diskd performance, or is it another problem? At the moment I am using the default system parameters. Please explain in detail what the reason for Squid's slow performance might be, and if any system tuning is needed, please describe it in detail. I have also tried turning offline_mode on, but the problem remains. code...
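A couple of hedged checks for that era of Squid: a 3-4 second delay before the page even starts is very often DNS resolution rather than the cache itself, and truncated pages can indicate the upstream connection being dropped. The cache manager gives service-time and hit-ratio figures to start from:
Code:
squidclient mgr:info      # median service times, DNS lookup time, hit ratios
tail -n 1000 /var/log/squid/access.log | awk '{sum += $2} END {print sum/NR " ms average"}'   # log path may differ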
It is now almost 3 o'clock in the night here, and I have spent the last 5 hours trying to fix this problem. CentOS always runs perfectly here. I am not a genius with the terminal, so I wanted to install GNOME on my VPS, then VNC, but it didn't work and I did a restart.
Now I get this:
Quote:
[root@srv1 conf]# apachectl start
Syntax error on line 41 of /etc/httpd/conf/httpd.conf: Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration
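In Apache 2.2 the 'Order' directive is provided by mod_authz_host, so this error usually means the LoadModule line for it was removed or commented out (something the GNOME/VNC package shuffle may have triggered). A hedged fix, assuming the stock CentOS layout:
Code:
# make sure this line exists and is not commented out in /etc/httpd/conf/httpd.conf:
LoadModule authz_host_module modules/mod_authz_host.so
# then verify and start:
apachectl configtest
apachectl start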
I have an enormous quad-core machine with 16 GB RAM and dual gigabit NICs. It used to run MySQL, but we have upgraded the whole database infrastructure, so this server is now left floating. I had the great idea of turning it into a reverse proxy (using Apache mod_proxy), and it really does handle a ton of requests, but I have a feeling we are not getting the most out of what it can offer.
Our traffic consists of a few thousand very small (less than 10 byte) Ajax calls per second, and frequently I find we are running out of kernel-allocated network stack to handle all the requests. Often we get the kern.log warning "possible SYN flooding on port 80. Sending cookies." and other messages like this. Obviously we are not actually being SYN flooded; we just have very high demand.
So far I have found a few kernel tuning guides for telling the kernel to allocate more of the base system memory to networking, but every guide I have found is aimed at increasing performance across WAN links (direct backbones between offices, etc.), usually with very large file sizes as the priority. One such (and great) write-up is here:
cyberciti.biz/faq/linux-tcp-tuning/
I was hoping some people could provide further input, for example along the lines of disabling nf_conntrack (to speed up socket set-up/tear-down time) or anything else that will speed up a high-throughput proxy like mine. Links to studies or benchmarks comparing different configurations or hardware earn extra points!
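A hedged sysctl sketch oriented at many short-lived connections rather than bulk WAN throughput; the values are illustrative, not benchmarked, so apply with sysctl -p and measure:
Code:
# /etc/sysctl.conf
net.ipv4.tcp_max_syn_backlog = 8192     # deeper SYN queue, reduces the "SYN flooding" warnings
net.core.somaxconn = 8192               # larger accept() backlog (the listener must request it too)
net.core.netdev_max_backlog = 8192      # packets queued per NIC before the stack starts dropping
net.ipv4.ip_local_port_range = 10240 65535
net.ipv4.tcp_tw_reuse = 1               # reuse TIME_WAIT sockets for new outbound connections
Unloading nf_conntrack (or at least raising net.netfilter.nf_conntrack_max) is also reasonable if the box does no stateful filtering, since every tracked connection costs memory and lookups.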
I have CentOS 5.5 x86_64 with Apache, PHP and MySQL. I have just installed OTRS (a helpdesk / trouble-ticket system) on that server, with no users yet. The system works with Perl, Apache and MySQL. I notice that it is slow to respond, and at times even the Apache welcome page is unresponsive. code...
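A hedged triage before touching OTRS itself: check whether the box is short on CPU, memory or disk, whether MySQL is the bottleneck, and whether Apache is logging errors; slowness even on the static welcome page points away from OTRS and towards the server or DNS:
Code:
top                                  # CPU vs %wa (iowait) vs swapping
mysqladmin status                    # threads, slow queries, queries per second
tail -f /var/log/httpd/error_log     # default CentOS path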
I am writing some server programs. I notice that I can only connect to the server with a total of 28,232 connections (clients) before I get errno 99, "Cannot assign requested address", from the client program. I am using the same machine for both sides. It seems I am reaching some networking limitation outside of both programs. The server process has a maximum of 6 threads running (well below the thread limit); the client has two threads. When I add another client machine to the test, I can surpass 28,232 connections. What limitation am I reaching on the server machine, and how can I overcome it?
System setup: SUSE 11.1, 64 bits, ulimit nofile=150000
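That number is a strong clue: the default Linux ephemeral port range is 32768-61000, roughly 28,000 ports, and every client connection to the same destination IP and port needs its own local port, so the client side runs out long before nofile does. A hedged way to confirm and widen it:
Code:
cat /proc/sys/net/ipv4/ip_local_port_range           # likely prints "32768 61000"
sysctl -w net.ipv4.ip_local_port_range="1024 65535"  # add to /etc/sysctl.conf to persist
Even widened, one source IP is capped below 65535 simultaneous connections to a single destination, which is why adding a second client machine lets you go past the limit.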
I run a website with a very steady flow of traffic, and I'm seeing recent issues that I just don't like. The server runs Ubuntu 10.04.2 on a Supermicro i7-950 with 6 GB RAM and two 500 GB Samsung F3 drives in software RAID1 (1x5400 rpm, 1x7200 rpm), and for several weeks it has been running very well. Recently I'm seeing the server hang for 5-20 seconds: iowait goes through the roof and nothing can write to the disk. Apache logs stop, redis fails to rebuild its caches, MySQL throws errors, and then it recovers and goes back to normal operation.
/ is ext4. The kernel was 2.6.32-server-x64, but since updating to 2.6.38-server-x64 the issue has dropped from maybe once per 10 minutes to once per 15 minutes. Three iostat copy/pastes show this when it hangs:
Code:
Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
[code]...
No SMART errors or SMART diagnostics show any issues with any of the disks, and kernel.log shows almost nothing apart from a 120-second process hang about 5 days ago.
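A few hedged things to check during a stall, given the mixed-speed RAID1 (the array can only commit writes as fast as the slower 5400 rpm member) and the fact that everything blocks on writes at once:
Code:
cat /proc/mdstat                                    # is a periodic md check/resync running at the time?
sysctl vm.dirty_ratio vm.dirty_background_ratio     # how much dirty data piles up before a forced flush
iostat -x 1                                         # await and %util per device while it hangs (sysstat)
Large dirty_ratio defaults plus a slow member can produce exactly this pattern: writes buffer for a while, then the kernel forces a big synchronous flush and everything stalls until it completes.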
I am trying to set up a high-availability HTTP load balancer with HAProxy & Heartbeat using the links below.
All the servers are RHEL 5.4, hosted on VMware.
[url] [url]
This is the scenario, as given in the links, and also my setup:
Load Balancer 1
Load Balancer 2
Web Server 1
Web Server 2
I have followed all the steps mentioned in the links religiously except step 2.2, in which it asks to configure the vhosts. I could not really understand what should be placed in the /etc/httpd/conf.d/vhosts.conf file, and on which web server (a sketch is below).
Because of this step, I think, I am failing the failover test given in point 4.1. I am able to open the webpage via [url], which shows the content of Web Server 1 (http1.example.com). But when I shut down the httpd service (to check failover), it does not show the contents of Web Server 2 (http2.example.com).
However, I do succeed in failover test 4.2, in which the shared IP 192.168.0.120 switches over when I start/stop either of the load balancers.
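I don't know the exact tutorial being followed, but for this kind of setup the vhosts.conf is usually just a name-based virtual host placed on each web server, answering for the hostname the load balancer forwards; the DocumentRoot and names below are assumptions to adapt:
Code:
# /etc/httpd/conf.d/vhosts.conf on Web Server 1 (Web Server 2 gets the equivalent file for its own content)
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName http1.example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/html
</VirtualHost>
If test 4.1 still fails after this, check that HAProxy's health check really marks Web Server 1 as down when httpd is stopped; otherwise it keeps sending traffic to the dead node.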
Yesterday I installed a new server with a large partition for my Xen images. This partition is about 930 GB. The installation took ages, and after it finished I looked into why. The software RAID1 I configured is rebuilding the large partition.
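A hedged note: this is normal for a fresh software RAID1 (the initial sync has to copy the whole 930 GB), and the array is usable while it rebuilds. To watch it and optionally let it run faster while the box is idle:
Code:
cat /proc/mdstat                                   # rebuild progress and estimated finish time
echo 50000  > /proc/sys/dev/raid/speed_limit_min   # KB/s, values are illustrative
echo 200000 > /proc/sys/dev/raid/speed_limit_max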
I have the following cron job scheduled on my Postfix mail server:
Code:
00 18 * * * /usr/bin/clamscan -r --remove /home/
This just runs a scan on my entire /home/ directory and removes any infected files it finds. My question is: since this runs at 6 PM via cron, how can I get the results of the job emailed to me? Can anyone recommend a command I can add to the end which will dump the results into a file or an email and send it to a specific address? This server is the company Postfix MTA for everyone.
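Two hedged options; the address is an assumption, and -i/--infected keeps the report short by listing only infected files:
Code:
# Option 1: let cron mail the job's output itself
MAILTO=admin@example.com
00 18 * * * /usr/bin/clamscan -r -i --remove /home/
# Option 2: keep a log file and mail it explicitly (mail/mailx must be installed)
00 18 * * * /usr/bin/clamscan -r -i --remove /home/ > /var/log/clamscan-home.log 2>&1; mail -s "clamscan /home report" admin@example.com < /var/log/clamscan-home.log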