Server :: Increase Number Of Url_rewriter Processes In Squid?
Feb 28, 2009
Code...
I don't know how to increase the number of url_rewriter processes.
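For reference, the helper pool size is a squid.conf directive. A minimal sketch, assuming Squid 2.6 or later (older releases named the directive redirect_children) and a hypothetical rewriter path:
Code:
# squid.conf -- spawn more URL-rewriter helper processes
url_rewrite_program /usr/local/bin/rewrite.pl
url_rewrite_children 20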
I know of /etc/security/limits.conf and that it can be used to limit all sorts of good things, but I haven't found anything that talks about using it when the users come from LDAP. Would I be able to do something like
@"Domain Users" soft nproc 25
@"Domain Users" hard nproc 40
where "Domain Users" is the group all users belong to in our system?
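One way to verify that pam_limits picked the group up would be to log in as one of the LDAP users after the change and read the limits back (a sketch; nothing here is LDAP-specific):
Code:
ulimit -Su    # soft limit on user processes (nproc)
ulimit -Hu    # hard limit on user processes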
In previous Fedora releases, we could add the following line to /etc/modprobe.conf
options loop max_loop=64
to increase the loop devices to 64.
However, this method no longer works in Fedora 13.
May I know how to increase loop devices in Fedora 13?
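A likely replacement, since /etc/modprobe.conf gave way to drop-in files under /etc/modprobe.d/ (the file name below is an assumption; if the loop driver is built into the kernel rather than loaded as a module, the option must go on the kernel command line in grub.conf instead):
Code:
# /etc/modprobe.d/loop.conf
options loop max_loop=64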
I would like to do the following: create a banner for any user logging in through ssh which warns him/her about the number of processors already in use by other users (or, conversely, the number of free processors). For example, a user logging in would see a message like: "Warning! 7 out of 8 processors are in use." I have already figured out how to do a banner, and with ps -e -o pcpu I can get every process's %CPU usage. I would like to count the number of processes with more than 90% CPU usage and output that number ("7" in the example) in the banner.
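A minimal sketch of the counting part, hard-coding the 90% threshold and the 8 processors from the example:
Code:
#!/bin/sh
# count processes burning more than 90% CPU and report them as busy processors
busy=$(ps -e -o pcpu= | awk '$1 > 90 { n++ } END { print n + 0 }')
echo "Warning! $busy out of 8 processors are in use."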
Optimum number of processes for a processor: in a Linux-based OS, is there an optimum number of processes per processor that gives "maximum performance" for the system (or does the range depend on CPU speed, cache, etc.)? By "maximum performance" I mean better overall performance.
I have a slave disk with some data, formatted ext4. Now 95% of its inodes are used (but only 50% of its space). How can I increase the inode count?
How can the number of inodes be increased on an existing ext3 or ext4 partition without re-creating the partition?
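For context on both inode questions above: the inode count of an ext3/ext4 filesystem is fixed when it is created, so getting more means backing up and re-creating the filesystem. A sketch with an assumed device name:
Code:
# back up the data first -- mkfs destroys the existing filesystem!
mkfs.ext4 -N 4000000 /dev/sdb1   # request an explicit inode count
mkfs.ext4 -i 8192 /dev/sdb1      # or one inode per 8 KiB of space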
I'm in the process of writing a program that is a server: it will accept connections and spawn a child process for each. However, I've run into a small problem. I do NOT want to bother with keeping track of the processes unless I need to. So I set SA_NOCLDWAIT (#ifdef) together with SIG_IGN on the SIGCHLD handler through the sigaction interface. The standard says that the kernel will then reap zombie processes for me (a HUGE plus). However, upon receiving a SIGINT signal, I want to stop the server from accepting new connections (done), and then wait until no connections remain. I was thinking of just putting a loop like so:
Code:
while ((wait(NULL) != (pid_t)-1) || errno == EINTR)
    ;   /* exits once wait() fails with ECHILD: no children remain */
However, I'm not *sure* that this will work, especially with SIGCHLD still ignored. So how can I tell if there are still child processes? I can't find any call like int getnumchld(pid_t proc); (I wish). Plus it would be inefficient to spin on that function anyway. On the other hand, I would rather *NOT* have to do the same thing in a loop with a system("ps |...>file"); read(file); etc. either. Is there a way I can portably implement this feature? (I was hoping I could run it on Linux and the major BSDs, at least.)
TO SUM IT UP:
How can I tell if a process has no child processes if I've SIG_IGN'd and SA_NOCLDWAIT'd SIGCHLD? Is there a _reasonably_ portable way to do so? I *don't* want to manually wait for EVERY process; maybe only those still active at the time of SIGTERM, but that requires keeping track of the number of connections and whether they have terminated...
EDIT: Does anyone know if the above code *would* work, even with SIGCHLD ignored and the kernel cleaning up zombies *for* me? I checked the manpage and it doesn't say much.
EDIT1: Note that all of the processes are in the same process group and session, so I can find them through that as well. Perhaps even set the uid/gid and find all processes run by that group?
EDIT2: I have an idea if the above isn't feasible. If there is no "elegant" way to do it, I could reduce the complexity by sending a SIGUSR1 to the whole process group. Each process would then set a flag telling it to send a SIGUSR1 in reply and a SIGUSR2 when it is done executing. Then I could keep a count of signals. Maybe that would be *easier*. Or perhaps keep a count of all child processes and just decrement the counter on each termination signal.
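A minimal sketch of the drain-on-shutdown idea. It leans on the POSIX rule that when SIGCHLD is ignored (or SA_NOCLDWAIT is set), wait() blocks until every child has terminated and then fails with ECHILD; Linux honors this, but it is worth verifying on each target BSD:
Code:
#include <errno.h>
#include <signal.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Ask the kernel to reap children for us (no zombies, no bookkeeping). */
static void ignore_children(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = SIG_IGN;
    sa.sa_flags = SA_NOCLDWAIT;
    sigaction(SIGCHLD, &sa, NULL);
}

/* After SIGINT: stop accepting, then block until every child is gone.
 * wait() only fails with ECHILD once no children remain. */
static void drain_children(void)
{
    while (wait(NULL) != (pid_t)-1 || errno == EINTR)
        ;
}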
I'm trying to get the end result to have the same format as this as well:
1 bin
2 daemon
67 erozner
[code]....
Where the numbers are the number of processes being run by the user (the name right next to it). Also: if I input the command egrep x myFile into the terminal, it should look for every line with the letter x in myFile, right?
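For the per-user counts, a one-line sketch along these lines produces exactly that layout:
Code:
# count processes per owning user, smallest counts first
ps -e -o user= | sort | uniq -c | sort -n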
My squid server works fine on a Fedora 11 system. Is there any web-like interface for admins to create, change, and modify squid users and to view their logs?
I would like to ask for help and a tutorial on setting up and configuring a squid proxy server on my home PC server. I am a newbie to Linux and CentOS. I have already installed CentOS 5.5. Now I want to configure it as my internet server: all four of my systems running Windows, including the laptop, should connect through my CentOS PC with username authentication. I assign all IP addresses statically; see the attachment for my setup. [url] I just want to know what I need to change and add in my squid config file, and how to properly configure my CentOS box with 2 LAN cards as an internet server.
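As a starting point, a minimal squid.conf sketch for username authentication with the NCSA helper (the paths are the usual CentOS 5 defaults, but worth double-checking):
Code:
# create users with: htpasswd -c /etc/squid/passwd someuser
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 5
acl lan_users proxy_auth REQUIRED
http_access allow lan_users
http_access deny all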
How can I increase the size of a server partition such as /dev/loop0?
Disk information:
Device        Mount point   Usage
/dev/loop0    /var/tmp      2% (11,070 of 495,844)
/dev/sda1     /             46% (100,819,056 of 233,872,292)
/usr/tmpDSK   /tmp          3% (11,070 of 495,844)
I use WHM 11.30.1 (build 4) on CentOS 5.6 i686 standard (ds-59085).
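The identical usage figures for /var/tmp and /tmp suggest both are the same loop-mounted image, /usr/tmpDSK. A generic sketch of swapping in a larger image (this is not the cPanel-blessed route -- WHM ships /scripts/securetmp for rebuilding tmpDSK -- and every path and size here is an assumption):
Code:
dd if=/dev/zero of=/usr/tmpDSK.new bs=1M count=2048   # build a 2 GB image
mkfs.ext3 -F /usr/tmpDSK.new
umount /var/tmp /tmp              # nothing may be writing there
mv /usr/tmpDSK.new /usr/tmpDSK
mount -o loop,noexec,nosuid /usr/tmpDSK /tmp
mount -o bind,noexec,nosuid /tmp /var/tmp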
I run a server with nginx at the front and Apache behind it; nginx serves as a reverse proxy here. As there were lots of DoS attacks, I have implemented DDoS-Deflate, APF, and nginx anti-DDoS features, and the server normally runs without a problem. Once every month, though, the load on the server increases up to 300 and I receive an email like this:
Subject : Cron <root@server> [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm
Content :
find: `/var/lib/php5/sess_0ca40520ac8ecb66090746f90da17516': No such file or directory
find: `/var/lib/php5/sess_3839b0cbf042934183d56ff682c948e0': No such file or directory
find: `/var/lib/php5/sess_dec1ed8ea1f62caf7a42a29b9f82c506': No such file or directory
[code]....
There are more than 100,000 lines (I suppose) in this email. I have no idea what the problem is. It seems to be something with the sessions, but I really don't know what.
I am implementing a proxy server in C++. It is multithreaded (POSIX threads). CPU: Xeon (8 cores); thread count: 8. There is one main thread, and the other 7 threads are created by the main thread. The main thread always listens on the ports. When the main thread gets client data, it pushes the request into a queue (one queue per thread, 7 in total) based on IP, and then signals the appropriate thread. That thread takes the request from its queue, processes the data, and forwards it to the appropriate destination.
There is another important thing: I pin each thread except the main thread to an individual core using affinity. The main thread listens on 5 ports. Test environment: we run the server, and the client sends audio data at a particular rate.
1. The main thread's CPU usage becomes overloaded (above 80%) after a certain load from the client.
2. The other cores remain at about 0-10% usage.
We want to distribute the load among all the cores equally by multithreading, but how can we do this? Can the listening work on the ports also be distributed? I need an efficient algorithm for load balancing among threads. The server's simultaneous send-and-receive rate is about 8.5 MB/s; how can we improve this? We are using a gigabit LAN card. When the server only receives data from the client it can receive above 80 MB/s, but when it both receives and sends simultaneously it only manages up to 8.5 MB/s.
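One common pattern worth trying (a sketch, not the poster's code): instead of one listener thread feeding per-thread queues, let every worker block in accept() on the same listening socket, so the kernel spreads incoming connections across the cores:
Code:
#include <netinet/in.h>
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

/* Each worker competes in accept(); no single thread is a funnel. */
static void *worker(void *arg)
{
    int listen_fd = *(int *)arg;
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);
        if (conn < 0)
            continue;
        /* ... read the request, process, forward to the destination ... */
        close(conn);
    }
    return NULL;
}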
Can anyone walk me through the process of increasing the max connections on my Linux server? Over the last few weeks I have been getting errors saying I have too many connections. I think the default is 100 and I would like to increase it to 150 or 200. I know I cannot go too high, because I would then be using too much memory or CPU.
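If the error is MySQL's "Too many connections" (its default limit is exactly 100, which matches the symptom), the knob is max_connections; a sketch, assuming that guess is right:
Code:
# /etc/my.cnf
[mysqld]
max_connections = 200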
My /var partition, /dev/sda7, is 965M. How can I increase this size using the free available disk space?
[root@SANJAY-CMS 500]# df -h
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/sda3    4.8G  3.1G  1.5G    69%  /
[code]....
Is it possible to increase the size of the / partition?
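Growing a plain primary partition like /dev/sda7 in place generally means repartitioning from a rescue disk, but if the filesystems were on LVM it would look like this sketch (volume names assumed):
Code:
lvextend -L +2G /dev/VolGroup00/LogVolVar   # grow the logical volume
resize2fs /dev/VolGroup00/LogVolVar         # then grow the filesystem into it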
I have a license server running on my server. I would like to write a small status script to check whether the software is running. My software includes 3 daemons:
1) daemonA
2) daemonB
3) daemonC
My script should check whether each of these daemons is running. If all daemons are running, the script should print the short output "License server is running"; if any one of them is not running, the output should be "License server is not running". Is it possible to write a small loop to check this? Say the loop takes the next daemon name from a pool of daemons and checks whether it is running; see the sketch below. Sometimes I need to check more than three daemons for one program, and I don't know how to write a good script for this. Maybe somebody could help me with a loop that I could also reuse in the future for daemonD, daemonE, daemonF, etc.: if all daemons from the pool are running, then "Software is running".
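A small loop sketch along those lines; the daemon names are the post's placeholders, and pgrep -x matches the exact process name:
Code:
#!/bin/sh
status="running"
for d in daemonA daemonB daemonC; do        # extend the pool here
    if ! pgrep -x "$d" > /dev/null 2>&1; then
        status="not running"
        break
    fi
done
echo "License server is $status"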
I have some domains on a VPS server. Typical memory usage across all domains runs at 50% of what's available, but I have a problem. One domain is causing me trouble: intermittently, traffic will spike on that domain, causing so many requests within 1 minute that I exceed the memory allocation for my entire VPS package. Apache is then killed, and the virtualization software and Apache must then be restarted.
A sample snippet from top right before the server went down would look like this:
All of that memory usage adds up. I would like to "throttle" the number of processes that user/domain can run. I think this would be a quick and easy way to keep the domain from taking down my entire VPS. My understanding is that I could do this with the /etc/security/limits.conf file.
Is that correct?
I have never done this before. Do I want to set a hard or soft limit? I think if I wanted to limit the number of processes for "coldclim" to 15 I would add a line to limits.conf like this:
Code:
coldclim hard nproc 15
Assuming that is correct, can anyone tell me how the website would respond once it reached its limit? Would visitor queries become sluggish, or would the website not come up for them at all?
I'm trying to write a script which kills the processes of users who are not logged in. My approach is to find out which users are logged in and then kill the processes of all non-system users who fail the logged-in test. I use `w` to find all logged-in users, but apparently there are users on `w`'s list who own absolutely no process in the output of `ps aux`. How do I log off those users, since killing their processes won't work (they own no processes)?
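A rough sketch of that approach. The UID cutoff for "non-system" users is distro-dependent (500 on older Red Hat-style systems), and pkill -u kills everything the user owns:
Code:
#!/bin/sh
for u in $(ps -e -o user= | sort -u); do
    uid=$(id -u "$u" 2>/dev/null) || continue
    [ "$uid" -ge 500 ] || continue                 # skip system accounts
    if ! who | awk '{ print $1 }' | grep -qx "$u"; then
        pkill -u "$u"                              # not logged in: kill their processes
    fi
done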
I have a problem with the permissions of the directories under /proc: they are readable and accessible only by their owner (they have permission 500 instead of the usual 555). As a consequence, processes are visible only to their owners or to root. For example, if I want to check whether mysql is running,
I see it only as the mysql user or as root, because the directory has permission 500.
This problem obstructs the functioning of some applications that need to check for the existence of processes managed by other users. In the beginning everything worked well, but after a while the problem appeared, and I don't know the reason for it. How can I restore the standard management of permissions under /proc?
I have an Ubuntu server, Maverick 10.10.
My University gives us access to a Linux server, named stud1.
Code:
me@stud1:~$ uname -a
Linux stud1.some.univ.ac 2.6.9-89.31.1.ELsmp #1 SMP Mon Oct 4 21:41:59 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
Apparently I logged in at some point and never logged out:
Code:
me@stud1:~$ who
<snip>
me pts/37 Jan 30 13:27 (6.6.66.66)
<snip>
me pts/58 Dec 30 19:13
but when I try to find the process behind that session, I can't find it:
Code:
me@stud1:~$ ps faux |grep me
root 30030 0.0 0.0 51128 4360 ? Ss 13:27 0:00 \_ sshd: me [priv]
me 30033 0.0 0.0 51132 2336 ? S 13:27 0:00 \_ sshd: me@pts/37
[code]....
How can I log out this unused session?
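Killing whatever is attached to the stale pseudo-terminal usually does it; a sketch using the pty name that who reported:
Code:
pkill -t pts/58
# if nothing matches, the entry is probably a dead utmp record left by a
# crashed session; it can be inspected with:  utmpdump /var/run/utmp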
I googled as much as I could but couldn't find an answer, so I'm asking here. On one of my Apache 2.0.52 machines we are using the worker MPM model. Even if I use 1 start server, the number of httpd processes it starts is 5. No matter what value I pass in StartServers, I don't see more than 5-6 httpd processes:
MaxClients 1
ServerLimit 1
ThreadsPerChild 1
StartServers 1
I'm running several SHOUTcast server instances and a WowzaMediaServer instance on a CentOS machine. I'm experiencing a memory leak problem, but I can't figure out which processes are eating memory.
The top command reports as follows:
[Code]...
Something mysterious to me (I'm still a Linux newbie) is that top reports a total of 7.5 GB of used RAM but only a very small percentage (0-1%) for each single process. Memory consumption starts at 1 GB of 8 GB after reboot and, over three days of running, gradually increases up to 8 GB. I'm practising with Linux, but I still have a lot to learn about what's happening on my system. For instance, are there Linux kernel logs saved somewhere that I can look at?
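On the logging question, these are the usual places to look on CentOS, plus a quick check of whether the "used" figure is really reclaimable cache rather than a leak:
Code:
dmesg | tail               # in-memory kernel ring buffer
less /var/log/messages     # where syslog writes kernel messages
free -m                    # the "-/+ buffers/cache" row separates
                           # reclaimable cache from real usage
cat /proc/meminfo          # the detailed breakdown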
How do I identify which processes (or PIDs) are consuming swap? On my RHEL box, swap is nearly 100% utilized.
Code:
$ free -m
             total       used       free     shared    buffers     cached
Mem:        144967     143212       1754          0        166     135259
-/+ buffers/cache:       7787     137180
Swap:        22367      21733        634
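One sketch for a per-process breakdown, assuming a kernel recent enough to publish VmSwap in /proc/<pid>/status (older RHEL kernels may not have it):
Code:
# largest swap consumers first: kB, process name, pid
for pid in /proc/[0-9]*; do
    awk -v p="${pid##*/}" '/^Name:/   { n = $2 }
                           /^VmSwap:/ { print $2, n, p }' "$pid/status" 2>/dev/null
done | sort -rn | head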
I have a dedicated server on which I run only one website, which gets about 11k-15k unique visitors per day.
The httpd processes get stuck almost every day and I can't understand why. I fix it via "killall -9 httpd" and then "service httpd start". code...
Here is my query:
The Squid documentation says that Squid accepts only HTTP requests but speaks FTP on the server side when FTP objects are requested.
We call Squid an HTTP and FTP caching proxy server. Does it also cache FTP content? Is it possible to configure FTP clients to use the Squid cache? When we make an FTP request to an FTP site via Squid, will it be bypassed?
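To make the distinction concrete: the client always speaks HTTP to Squid, even for an ftp:// URL, and Squid fetches (and can cache) the object over FTP upstream; a native FTP client cannot use Squid, because Squid is not an FTP proxy. A sketch with an assumed proxy host:
Code:
# the request to the proxy is plain HTTP; only the URL scheme is ftp://
curl -x http://proxyhost:3128 ftp://ftp.example.com/pub/file.txt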
I want to set up a transparent squid proxy server on CentOS. The squid version is 2.6 STABLE. I have a normal squid server working, but I want to make it transparent so that users do not need to enter proxy settings in their web browsers. I searched for this on Google but haven't found a proper answer. I have two LAN cards in the CentOS system: eth1 is used for the LAN and eth2 for the WAN. In squid.conf I have written "http_port 172.16.31.1:3128 transparent", and I also added an iptables rule, "iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128", but I still have to enter proxy settings in each client's web browser to use the internet.
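A guess at the missing pieces, since the REDIRECT rule alone is not enough: the box must also forward and NAT the clients' traffic, and the clients' default gateway has to be the Squid box itself (eth2 assumed to be the WAN side, as in the post):
Code:
echo 1 > /proc/sys/net/ipv4/ip_forward                 # enable routing
iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE   # NAT outbound traffic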
I would like to install and configure a transparent squid proxy on a gateway server, but I don't have a local or intranet DNS server, and I am facing issues because of that. My IP range is 192.168.1.1/24.
I know this seems obvious but I'm stuck. I'm trying to install squid via the command "yum install squid", and here is the output:
Setting up Install Process
Setting up repositories
update 100% |=========================| 951 B 00:00
[code]....