I have a server in a data centre which is supposed to have a 100 Mbit line. Peak rates on my server are currently in the region of 20 Mbit/s, which should be easily handled. Is there any way I can trace how much more bandwidth is available at any one time, or whether things are becoming sluggish on the server?
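For reference, this is the kind of thing I've been picturing; a minimal sketch assuming the public interface is eth0 and that iftop/nload are installable from the repos:
Code:
# live, per-connection view of what is using the line right now
iftop -i eth0
# simple in/out rate graph in the terminal
nload eth0
Would those be enough to see how close I'm getting to the 100 Mbit ceiling, or is there a better tool for spotting saturation?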
Last week my server crashed. I'm trying to diagnose the cause.
This is the relevant error message in /var/log/messages:
Code:
Am I right to conclude, then, that apache/httpd was the cause of the memory leak?
Next, I've been tracking my memory usage. Using top, this is an average memory load level for my server:
Code:
I'd like to confirm if my understanding of this data is correct, because Plesk indicates that my memory usage is only 50% or less. (Though I have read a number of reports indicating that Plesk's measurements are frequently wrong.)
Top says: Of the 2,073,156K total memory, 1,982,572K (95.63%) is being used, 90,584K (4.37%) is free. Of that sum, 421,948K (20.35%) are being used as buffers. Additionally, of the 2,096,472K of Swap, 60K is used, and 887,700K (42.34%) is cached.
My questions: Is 95% of my memory actually in use? Or does the buffered quantity (20.35%) not count as physical/virtual memory use (i.e. is it disk usage)? Does the amount of cached swap influence the percentage of physical/virtual memory in use?
In other words, who is correct? Plesk says I'm using 40-50% of my memory, whereas top says 85-95%.
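To show my working (please correct me if this is wrong): if the cached figure that top prints on the Swap line is really page cache held in RAM, then applications are only holding about 1,982,572K - 421,948K - 887,700K ≈ 672,900K, i.e. roughly 32% of the 2,073,156K total, which is much closer to what Plesk reports. I've also been looking at the "-/+ buffers/cache" line from:
Code:
free -m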
My server crashed last week and I'm trying to diagnose why. /var/log/messages contains the following error messages, which indicate that the server's memory peaked. I would like to discover which process caused the memory peak. Given that "httpd invoked oom-killer", can I conclude that httpd was the cause of the memory peak?
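For what it's worth, this is all I've done so far to dig through the logs (the -A 20 context size is arbitrary):
Code:
# each OOM event plus the per-process memory table the kernel dumps with it
grep -i -A 20 "invoked oom-killer" /var/log/messages
# which process the kernel actually killed each time
grep -i "killed process" /var/log/messages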
I have two Ubuntu boxes: one is a 9.04 desktop edition and the other is a 9.10 server edition. I am working on some code that needs to be highly tolerant of bad network connections. It sends transactions to a central database, but when the network is not available, it caches them locally to retry later.
I have the code working beautifully on my desktop box, but when I test it on the other box (the one running the server edition) there is a HUGE DELAY every time it tries and fails to send a transaction to the database while the network is down.
I tested a little further and found that if I unplug my network cable and run "ping somehost" on the desktop, it fails instantly with "ping: unknown host somehost". But if I unplug the cable on the server box and run the same ping command, it lingers for about 40 seconds before the ping fails.
Does anybody have any idea why this might be happening? Is this a 9.04 vs 9.10 difference? A desktop vs server difference? Is there some package I can install, or some config setting I can change, that will make the server box fail instantly just like the desktop does?
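If the delay turns out to be DNS resolution timing out, I'm guessing something like this in /etc/resolv.conf on the server box might shorten the wait; the exact values are just a guess on my part:
Code:
# cut the resolver's per-query timeout and number of retries
options timeout:1 attempts:1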
I created the class like this to shape packets at a specified bandwidth rate:
Code:
tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1: htb default 15
tc class add dev eth0 parent 1:0 classid 1:1 htb rate 750kbit ceil 750kbit
tc class add dev eth0 parent 1:1 classid 1:3 htb rate 600kbit ceil 750kbit prio 0
Our requirement:
I don't want to specify the bandwidth rate rigidly like "rate 750kbit ceil 750kbit". Instead, the bandwidth rate for a particular class should be allocated based on whatever speed is actually available at the time. Is there an application for measuring the incoming bandwidth, or is there another method for specifying the bandwidth rate of a class?
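What I'm imagining is something along these lines: a script that measures the currently available speed (the part I don't have) and then adjusts the class on the fly rather than recreating it. A rough sketch with the measured rate hard-coded where the real value would go:
Code:
#!/bin/sh
# RATE should come from whatever tool measures the available bandwidth (the missing piece)
RATE=600kbit
CEIL=750kbit
# change the existing HTB class in place
tc class change dev eth0 parent 1:1 classid 1:3 htb rate $RATE ceil $CEIL prio 0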
I'm wondering if I could get the sound peak level at a certain time, in decibels or as a percentage, from the command line in a terminal. Is there a feature in ALSA that would allow me to do that, or is there a piece of software that runs in a terminal that could do it?
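The closest I've found on my own is abusing arecord's VU meter, which (as far as I can tell) prints the current level as a percentage while it records, but it doesn't feel like a proper answer:
Code:
# record 5 seconds to nowhere just to watch the level meter
arecord -f cd -d 5 -vvv /dev/null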
I am looking at making a home server that acts as a backup for most of my data but also as a media server. I want to host all my music and videos on the server so that, regardless of which computer I'm using, I can listen or watch. (Another question: could I have iTunes find my music on the server and play it?)
But anyway, for those of you with home media servers: what kind of bandwidth usage do you go through in a month? Comcast (my ISP) limits me to 250GB, and I'm thinking this is enough for moderate usage; I just want to make sure before I start the project.
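My own back-of-the-envelope math, in case it's useful: a 5 Mbit/s video stream works out to roughly 2.25 GB per hour, so even 50 hours a month of streaming away from home would only be around 110 GB, and playback over the LAN shouldn't count against the cap at all (assuming Comcast only meters WAN traffic).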
How much bandwidth does the nxserver client use? Some places have reasonably priced mobile broadband, but they limit the GB usage. As I have a server with unlimited bandwidth, I thought I could connect to it through nxserver and just use the server via that connection...
I'm getting DDoS attacks on my server, and I need to block all the attacking IPs. But for that I need to know which IPs are attacking me. I was thinking that I should log the bandwidth usage per IP so I can tell which IPs are using excessive bandwidth. How can I achieve this? I'm using Ubuntu 10.10.
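The only idea I've had myself so far is counting live connections per source address, which probably only catches the attack while it's actually running:
Code:
# current connections per remote IP, noisiest at the bottom
netstat -ntu | awk 'NR>2 {split($5,a,":"); print a[1]}' | sort | uniq -c | sort -n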
So: on a VPS / dedicated Linux server with 3 users created, how can I limit the bandwidth of each one separately? For example, the first user at 1 MB/s, the second at 5 MB/s and the third at 10 MB/s. I'm hoping for some clear answers. Regards, Silviu!
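The only approach I've come across so far is marking each user's outbound traffic with iptables' owner match and pointing a tc HTB class at the mark; a sketch for one user, with the username, interface and rate (8mbit being roughly 1 MB/s) as placeholders:
Code:
# mark locally generated packets belonging to user1
iptables -t mangle -A OUTPUT -m owner --uid-owner user1 -j MARK --set-mark 10
# shape everything carrying that mark to ~1 MB/s
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:10 htb rate 8mbit ceil 8mbit
tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10
Is that the right direction, or is there something cleaner?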
I would like to know when the server is getting overloaded with users. At present I run the server mainly as a proxy server with about 100 users. The connection at the data centre is 100Mbps, with total bandwidth used last month = 17431.16 MB.
I would like to add a VPN in future, but I feel this might overload the bandwidth, since instead of just web traffic it would carry the clients' entire TCP connections. I would like to monitor this before it gets to the stage where users are complaining, but I'm not sure how to gauge whether the proxy is being overloaded. It is used mainly for video traffic.
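At the moment my best idea is just to collect interface history and watch how close the peaks get to the 100Mbps line, assuming vnstat and sysstat are acceptable on the box:
Code:
# monthly and daily totals for the public interface
vnstat -i eth0 -m
vnstat -i eth0 -d
# per-interface throughput samples collected by sysstat
sar -n DEV
Is watching raw throughput like this even the right measure of "overloaded" for a proxy, or should I be looking at something else entirely?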
I run Debian 64-bit and host game servers on my machine. Yesterday, some corrupt files or a configuration error in one of the game servers caused my whole system to destabilize. On checking, I saw one of the game server consoles spamming Net_sendpacket errors. I disabled that server and things were fine after that. It used up more than 100GB of my bandwidth in just 12 hours.
I deleted the server and copied all the files over again to fix that error. Now I want a way to prevent this in case it happens again: I want to limit a sub-user's bandwidth in Linux, e.g. allow a user only 10GB of bandwidth per month and no more than 5MB/second. Is there any way to do it?
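For the monthly-cap half, the only thing I've found is iptables' quota match; as far as I can tell the counter only lives as long as the rule does, so the monthly reset would have to come from cron or similar. A sketch of just that part (the username is invented, 10737418240 being roughly 10GB in bytes):
Code:
# let the user's outbound traffic through until ~10GB has passed, then fall through to DROP
iptables -A OUTPUT -m owner --uid-owner gameuser -m quota --quota 10737418240 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner gameuser -j DROP
The 5MB/second part would presumably be the usual tc shaping on top of that.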
I want to rent a (root) Linux server to run a VPN service on it. I want to allow people to use this VPN.
My questions are as follows:
- What kind of server/service should I rent: dedicated or VPS?
- Is one IP address enough to connect, say, 100 users? (I plan to run IPsec or OpenVPN, maybe PPTP.)
- What bandwidth and/or traffic limits do I need to consider to make the service reasonably fast for the users?
- Which Linux distro should I use: Ubuntu Server, CentOS, FreeBSD, Debian, etc.?
- How much RAM and HDD space is recommended for such an endeavour?
- Any advice on the processor type the server should have?
- Is a 100M network OK, or is 1000M better?
- What does "100Mbps shared bandwidth" mean in contrast to "10Mbps dedicated guaranteed per server"?
Here is a mail in /var/mail/root which I received in my server logs: [URL] I see the same packages downloaded many times, again and again. There are 5 servers upgrading in total (4 virtual machines and one host), so is there a way I can save bandwidth with this sort of setup?
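What I was thinking of trying, unless someone has a better idea, is running apt-cacher-ng on the host and pointing the VMs' apt at it, so each package only crosses the WAN once; "apthost" below is just a placeholder for the host's name, and 3142 is apt-cacher-ng's default port as I understand it:
Code:
# on the host
apt-get install apt-cacher-ng
# on each VM: send apt's HTTP traffic through the cache
echo 'Acquire::http::Proxy "http://apthost:3142";' > /etc/apt/apt.conf.d/02proxy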
I want to implement a server for a small network, but am a bit inexperienced. The server should be able to do load balancing (two connections) and also act as a firewall/proxy, and it should be able to do some bandwidth management. The network it is going to serve has two parts: one part should be served, say, during the day and the other during the night. The part that is served at night should not have access to the internet during the day, but should still have access to, say, a local mirror server. I am a bit confused about what software/hardware to use. I am planning to use EndianFirewall, but since I don't have experience with it, I don't know whether it can do all that I need.
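For the day/night restriction specifically, the only mechanism I know of at the netfilter level is the time match; whether EndianFirewall exposes anything like this in its GUI is exactly what I can't tell. A bare sketch with the subnet, mirror address and hours invented:
Code:
# the local mirror (10.0.0.5 here) is always reachable
iptables -A FORWARD -s 192.168.2.0/24 -d 10.0.0.5 -j ACCEPT
# block the night-time subnet's internet access during working hours
iptables -A FORWARD -s 192.168.2.0/24 -m time --timestart 06:00 --timestop 18:00 -j DROP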
I'm looking for a program that I can use to keep track of how much bandwidth goes to the various computers in my small network. All of the bandwidth goes through my squid server, so the easiest would be a program that can accurately analyze the squid logs and tell me how much bandwidth is going to the different computers. I've tried both "bandwidthd" and "calamaris", but I can't figure out how to get either one to actually do anything on Ubuntu.
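In the meantime I've been tempted to just sum access.log myself, something like this, assuming squid's default native log format where the client IP is field 3 and the reply size is field 5:
Code:
# total bytes served per client IP, biggest consumers last
awk '{bytes[$3] += $5} END {for (ip in bytes) print ip, bytes[ip]}' /var/log/squid/access.log | sort -k2 -n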
My server has been the repeated victim of bandwidth attacks: any large file on the server is downloaded repeatedly, with the goal of pushing the server over the provider's bandwidth limit. How can I lessen the effect of these kinds of attacks with iptables or APF? For example, can I set the server to: Is this possible? Is there a more effective way, and can a firewall even do this? My web server is lighttpd; perhaps I can place such a rule directly in its config?
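One half-formed idea I had, in case it's on the right track: use iptables' connlimit match to cap simultaneous downloads per source, plus lighttpd's own per-connection throttling on top. No idea whether this is the recommended approach:
Code:
# allow at most 5 simultaneous connections to port 80 from any single IP
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 5 -j REJECT --reject-with tcp-reset
On the lighttpd side I've read that connection.kbytes-per-second and server.kbytes-per-second can throttle transfer speed, but I haven't verified that myself.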
I have a Squid server that I'm using for caching. I have 3 Ethernet interfaces on the Squid server. Ether1: direct internet from ISP1, 2Mbps. Ether2: direct internet from ISP2, 512KBps. Ether3: connected to the LAN. I want all download file formats (MP3, RAR, ZIP, AVI, etc.) to be fetched via Ether1 (ISP1), and web pages (HTML, ASP, CGI, etc.) to be fetched via Ether2 (ISP2). I don't know how to configure that with my two ISP connections.
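What I've been experimenting with, without success so far, is squid's tcp_outgoing_address with a urlpath_regex ACL, binding downloads to Ether1's address and everything else to Ether2's; the two IPs below are placeholders for my own:
Code:
# squid.conf sketch
acl bigfiles urlpath_regex -i \.(mp3|rar|zip|avi)$
tcp_outgoing_address 192.0.2.1 bigfiles
tcp_outgoing_address 198.51.100.1
I assume I would still need policy routing (ip rule / separate routing tables) so traffic from each source address actually leaves through the right interface, but maybe someone can confirm.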
I have an Ubuntu Server 10.04 LTS box with 3 network interfaces (eth0, 1, 2). eth0 is connected to my LAN and the others are connected to two different ISPs. I would like to know whether there is any way to share the bandwidth of these two ISPs for my LAN; I mean, for example, if eth1 has X MB of bandwidth and eth2 has Y MB, clients who use a download manager to fetch a file from the internet would get X+Y MB of download and upload bandwidth. I do not want to just limit each user or service to one of the interfaces; I want to share them for everyone to increase my internet bandwidth.
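The only thing I've tried so far is a multipath default route, which as far as I can tell balances per connection rather than giving a single download X+Y; the gateway addresses are made up:
Code:
# spread new connections across both ISPs (per-flow, not per-packet)
ip route replace default scope global \
    nexthop via 203.0.113.1 dev eth1 weight 1 \
    nexthop via 198.51.100.1 dev eth2 weight 1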
I'm trying to check my server's bandwidth usage in real time. I installed the following programs, but none has worked so far.
Iptraf - no results, even when using iptraf -u
Tcptrack - error: pcap_loop: cooked-mode frame doesn't have room for sll header
Iftop - no results, everything 0b
Are there any programs that display bandwidth usage in real time and actually work on VPSes? Or is getting real-time bandwidth usage on a VPS simply impossible?
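One thought I had, though I can't confirm it: on this VPS the interface is probably venet0 rather than eth0 (an OpenVZ guess on my part), so maybe the tools just need to be pointed at it explicitly:
Code:
# check what interfaces actually exist inside the container
cat /proc/net/dev
# then name the virtual interface explicitly instead of letting the tool guess
iftop -i venet0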
Does squid automatically split bandwidth between connected clients? I'm wondering, if someone is downloading a lot of data and someone else connects, whether it will split the access 50:50 between them. I have one user who is using a lot of bandwidth, but the server doesn't seem to split it up between all connected clients, so others are getting slow access. I don't have this client's IP address, but I do have ncsa_auth connected. Will delay_pools work with an NCSA username?
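From what I can tell from the docs, delay_access can key off any ACL, including a proxy_auth one, so this is what I was planning to try for that single username; "bob" and the 64KB/s figure are just placeholders:
Code:
# squid.conf sketch: put one authenticated user into a 64KB/s aggregate pool
acl heavyuser proxy_auth bob
delay_pools 1
delay_class 1 1
delay_parameters 1 65536/65536
delay_access 1 allow heavyuser
delay_access 1 deny all
Does that look sane, or does delay_pools behave differently when authentication is involved?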
We have a simple office network set up that we also use to connect to the internet; however, of late the number of users has increased, which is slowing internet access. A bandwidth upgrade is not an option, so I have to do bandwidth shaping on our Linux router. The question is: how do I set the squid configs to allow a certain IP range a certain percentage of the bandwidth, e.g. 60%, and further divide the rest? Alternatively, how can I allow certain IPs higher-bandwidth access?
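If it helps to see what I mean, this is the sort of squid.conf layout I had in mind: two aggregate pools carving up an assumed 1 MB/s line roughly 60/40 (the subnets and byte rates are placeholders):
Code:
acl priority_net src 192.168.1.0/25
acl other_net src 192.168.1.128/25
delay_pools 2
delay_class 1 1
delay_parameters 1 614400/614400
delay_access 1 allow priority_net
delay_access 1 deny all
delay_class 2 1
delay_parameters 2 409600/409600
delay_access 2 allow other_net
delay_access 2 deny all
Whether squid can express a true percentage of whatever the link happens to be, rather than fixed byte rates like this, is really what I'm asking.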