General :: Check Server Bandwidth Usage In Real Time?
Mar 21, 2010
I'm trying to check my server's bandwidth usage in real time. I installed the following programs, but none of them have worked so far.
Iptraf - No results even when using iptraf -u
Tcptrack - Error : pcap_loop: cooked-mode frame doesn't have room for sll header
Iftop - No results, everything 0b
Are there any programs that display bandwidth usage in real time and actually work on a VPS? Or is getting real-time bandwidth usage on a VPS simply impossible?
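The pcap-based tools tend to fail on a VPS because of the virtual interface (venet0 and its cooked-mode capture). One approach that needs no packet capture at all is to read the byte counters the kernel keeps in /proc/net/dev and compute a rate from two samples. A minimal sketch, assuming the interface is eth0 (use venet0 on OpenVZ):
Code:
#!/bin/bash
# Live bandwidth monitor that reads kernel byte counters from /proc/net/dev
# instead of capturing packets, so it works where pcap-based tools fail.
# The interface name is an assumption -- pass it as the first argument.
IF=${1:-eth0}

read_counters() {
    # strip everything up to the colon; field 1 = RX bytes, field 9 = TX bytes
    grep "$IF:" /proc/net/dev | sed 's/.*://' | awk '{print $1, $9}'
}

while true; do
    read rx1 tx1 <<< "$(read_counters)"
    sleep 1
    read rx2 tx2 <<< "$(read_counters)"
    printf "RX: %6d KB/s   TX: %6d KB/s\n" $(( (rx2 - rx1) / 1024 )) $(( (tx2 - tx1) / 1024 ))
done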
I have a dedicated server with GoDaddy. I just received an email about bandwidth overage. I don't know how to check this, and I have to provide a report on how it happened.
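The kernel keeps per-interface byte totals since boot, and vnstat turns them into daily and monthly history once it is installed; neither can reconstruct traffic from before they started counting, but going forward something like this works:
Code:
# raw byte totals since boot (interface name is an example)
ip -s link show eth0
# after installing vnstat and letting it collect for a while:
vnstat -d    # daily totals
vnstat -m    # monthly totals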
I come from the Windows world, where there's a magical tool called NetLimiter that allows me to shape bandwidth, watch upload and download traffic, and easily check stats. I wonder if there's such a beauty for Linux?
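There is no single tool that does both halves, but tc handles the shaping side and tools like nethogs or iftop handle the watching side. A rough shaping sketch; the interface and rate are just example values:
Code:
# cap outbound traffic on eth0 to roughly 1 mbit/s (example values)
tc qdisc add dev eth0 root tbf rate 1mbit latency 50ms burst 10k
# remove the limit again
tc qdisc del dev eth0 root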
I'm getting DDoS attacks on my server, and I need to block all the attacking IPs. But for that I need to know which IPs are attacking me. I was thinking that I should log the bandwidth usage per IP so I can tell which IPs are using excessive bandwidth. How can I achieve this? I'm using Ubuntu 10.10.
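One quick way to spot the heavy hitters without extra software is to count established connections per source IP; for actual per-IP byte rates, iftop -n (which sorts hosts by throughput) or per-IP iptables accounting rules also work. A rough sketch:
Code:
# current connections per remote IP, busiest first
netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head -20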
I am hosting two virtual servers, both running CentOS 5.3, on a host machine also running the same OS. The VM software in use is Xen, as supplied with the OS. The host machine's time and date are fine; however, both virtual servers are consistently running ahead of real time. Running /etc/init.d/ntpd restart will resolve the issue, but one of them is running MailScanner, and when the time suddenly goes backwards, sometimes by as much as an hour, it stops working properly.
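For Xen PV guests on CentOS 5, the usual fix is either to let the domU track the dom0 clock, or to give each guest an independent wallclock and run ntpd inside it, so drift is corrected gradually instead of in one sudden jump. A sketch of the second approach, run inside each domU (the sysctl only exists on Xen PV kernels):
Code:
# let this guest keep its own clock instead of inheriting dom0's
echo "xen.independent_wallclock = 1" >> /etc/sysctl.conf
sysctl -p
# keep the guest clock disciplined with ntpd so corrections are gradual
chkconfig ntpd on
service ntpd start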
Recently our house's internet usage has skyrocketed, and I'm trying to keep an eye on it using bandwidthd (installed on the server). It installed fine and counts traffic fine; the problem is that we have a server hosting our music/movies/etc., and when people pull music off the server it skews their bandwidth usage. I'm looking for a way to exclude all traffic on the local network and only count internet traffic. This is the filtering part of the config file
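bandwidthd's filter directive takes a standard libpcap expression, so LAN-to-LAN traffic can be excluded there. Something along these lines, where 192.168.1.0/24 stands in for the actual LAN subnet:
Code:
# bandwidthd.conf -- only count traffic that enters or leaves the LAN
filter "ip and not (src net 192.168.1.0/24 and dst net 192.168.1.0/24)"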
I am looking to make a home server act as a backup for most of my data, but also as a media server. I want to host all of my music and videos, for the most part, on the server, so that regardless of what computer I'm using I can listen or watch. (Another question would be: could I have iTunes find my music on the server and play it?)
But anyway, for people with home media servers: what kind of bandwidth usage do you go through in a month? Comcast (my ISP) limits me to 250GB, and I'm thinking this is enough for moderate usage; I just want to make sure before I start the project.
What's the recommended way to set up real-time (or near real-time) folder synchronization among two or more servers? I looked at rsync, but that doesn't sound real-time; it looks like something you might put in a cron job to run once an hour.
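For near real-time syncing, a common trick is to let inotify trigger rsync whenever something changes (lsyncd packages exactly this idea). A rough sketch using inotifywait from inotify-tools; the paths and destination host are placeholders:
Code:
#!/bin/bash
# push /data to server2 whenever something inside it changes
SRC=/data/
DST=server2:/data/
while inotifywait -r -e modify,create,delete,move "$SRC"; do
    rsync -az --delete "$SRC" "$DST"
done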
I'm trying to create a script that will find the bandwidth usage of certain protocols only, for example SMTP. I would like it to just return a number. Is there a known command or set of parameters that can output something like this?
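One way to get a single number per protocol is to add accounting-only iptables rules for the port and read their byte counters; rules with no target don't block anything, they just count. A sketch for SMTP (port 25):
Code:
# one-time setup: accounting rules that match but never block
iptables -I INPUT  -p tcp --dport 25
iptables -I OUTPUT -p tcp --sport 25
# later: print the byte counts for those rules
iptables -L INPUT  -v -n -x | awk '/dpt:25/ {print "smtp bytes in:",  $2}'
iptables -L OUTPUT -v -n -x | awk '/spt:25/ {print "smtp bytes out:", $2}'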
What I want to do is write a script that gathers some information (like CPU temperature and bandwidth usage) and logs it to a file. I can't figure out how to get a single sample of the currently used bandwidth: I've found that there are plenty of tools to get this information from the command line, but the majority of them are curses-based, so I can't capture their output and put it into a file. Among these I've found bmon, which has a nice ASCII output. The problem is that this output is updated constantly, while what I want is a single "sample" per program call.
Is there a way to get this done with bmon, or does someone know another program that can accomplish this task?
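If the goal is one value per program call, two readings of /proc/net/dev one second apart are enough and no curses tool is needed. A minimal sketch; eth0 is an assumption:
Code:
#!/bin/bash
# print a single "RX KB/s TX KB/s" sample, suitable for appending to a log file
IF=eth0
r1=$(grep "$IF:" /proc/net/dev | sed 's/.*://' | awk '{print $1}')
t1=$(grep "$IF:" /proc/net/dev | sed 's/.*://' | awk '{print $9}')
sleep 1
r2=$(grep "$IF:" /proc/net/dev | sed 's/.*://' | awk '{print $1}')
t2=$(grep "$IF:" /proc/net/dev | sed 's/.*://' | awk '{print $9}')
echo "$(date '+%F %T') rx_KBps=$(( (r2 - r1) / 1024 )) tx_KBps=$(( (t2 - t1) / 1024 ))"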
I am currently using curlftpfs to mount a directory on an FTP server locally as /backup. I then use rsync to do an incremental backup to this directory every night and a full backup at the weekend. A requirement has arisen for a similar setup, but one that syncs in real time, so that if a user puts a file in a directory it is immediately copied to my FTP server; in this case, immediately copied to /backup.
I have two servers, each with a RAID array, and I want them to mirror their data so that if one of them goes down the other takes over without disruption. I've heard of multipath, but I want to understand it in detail or learn of more options.
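The usual tool for mirroring data between two hosts is DRBD, which replicates a block device over the network and is often paired with Heartbeat/Pacemaker for failover; multipath, by contrast, is about multiple paths to the same storage, not mirroring between servers. A minimal resource sketch, with hostnames, disks, and addresses as placeholders:
Code:
# /etc/drbd.conf (DRBD 8.x) -- all names and addresses below are placeholders
resource r0 {
    protocol C;                       # synchronous replication
    on server1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on server2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}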
Is there a way to check which IP is using the most bandwidth at any one time? I have a proxy server running, and occasionally some users download videos instead of streaming them, which hogs the bandwidth on their connection and denies other users access.
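iftop sorts hosts by current throughput, so running it on the proxy's client-facing interface shows which client is pulling the most at that moment (nethogs does something similar per process). For example:
Code:
# -n: skip DNS lookups; -i: the interface facing the proxy clients (placeholder)
iftop -n -i eth0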
I would like to connect to a Linux server remotely over the LAN in graphical mode, but I need access for several users in real time. Everyone must have their own desktop.
How do I check which process is consuming a lot of HDD I/O? Do you know any good command that can show me which process is writing something big to the storage system? iostat, or maybe ps? It would be great if somebody could paste a nice command here.
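Neither iostat nor ps breaks I/O down per process on its own; iotop (which needs a 2.6.20+ kernel with task I/O accounting) or pidstat -d from the sysstat package do. For example:
Code:
# live, sorted per-process I/O -- only show processes actually doing I/O
iotop -o
# or: per-process read/write rates, one report per second, five times
pidstat -d 1 5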
We have a production web site running Apache 2.2.3 across several web servers. We also have a major problem with spam comments right now. Our method of identifying valid IPs (whether from external clients/customers or internal personnel) vs. spammers is not ideal; it's prone to erroneously labeling legitimate IPs as targets to be blacklisted.
What we need is a way to see how much distinct request traffic is coming from any given IP address to the site in real time (or very near real time). Essentially, we want to see, in some graphic/chart form, requests per second to Apache, per IP, sorted by requests per second. Would ntop do this? I've only used it in a limited form at a branch office, not on a production web server.
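ntop works at the packet level; for per-IP request rates it's often simpler to watch the Apache access log itself. A rough sketch that refreshes a "top talkers" table every few seconds; the log path is a placeholder for whatever CustomLog points at:
Code:
# top requesting IPs over the last 2000 log lines, refreshed every 5 seconds
watch -n 5 "tail -n 2000 /var/log/httpd/access_log | awk '{print \$1}' | sort | uniq -c | sort -rn | head -20"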
This script prints a natural number five times a second.
3. Then in the second bash window I type (as root):
Code:
The script test2 looks as follows:
Code:
while true; do true; done
During the following 15 seconds, test2 is the process with the highest real-time priority. As far as I know, the script doesn't perform any system calls, so it shouldn't be suspended even for a minimal timeslice. My question is: why does the process test1 manage to print a few numbers on the screen before test2 stops? I thought that test2 would exclusively own the processor for 15 seconds.
I have to write a shell script in which I have to find a word in the currently generated log. The log name has a specific format, like 'NAME_DDMMYY_HHMMSS.log'. Each time, I have to go and check for the word in the newly generated log. How can I pass the newly generated log's name to my script?
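Since the timestamp is part of the file name, the newest log can be picked up by modification time and passed straight to grep. A sketch, assuming the logs live in /var/log/myapp and the word to search for is given as the first argument:
Code:
#!/bin/bash
# grep for $1 in the most recently created NAME_DDMMYY_HHMMSS.log
LOGDIR=/var/log/myapp      # assumption: adjust to the real log directory
newest=$(ls -t "$LOGDIR"/NAME_*.log 2>/dev/null | head -n 1)
[ -n "$newest" ] || { echo "no log found" >&2; exit 1; }
grep "$1" "$newest"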
I would like to know the command or procedure to get the overall CPU utilisation in Linux. I have used top, iostat, and mpstat, but the outputs are not in the form I need. Is it possible to get output like...
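If the standard tools don't print it the way you want, the overall utilisation can be computed directly from two samples of the first line of /proc/stat. A minimal sketch (it ignores iowait, irq, and steal time, so treat the result as an approximation):
Code:
#!/bin/bash
# overall CPU usage in percent, averaged over one second
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _ < /proc/stat
busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
total=$(( busy + (i2 - i1) ))
echo "CPU: $(( 100 * busy / total ))%"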
I am running Slackware 13.0. I am aware of free -m, vmstat, top, etc. However, none of these programs display how much RAM each program is using. Is there a program that does? I run a headless box, so I'd need something that runs in the CLI.
I know that top shows %MEM (only two programs were using 0.1% MEM), but after running free -m I only have a total of 400 MB of RAM left out of my 1.5 GB. Where is all that lost RAM?
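ps can list resident memory per process without any curses interface, and the "missing" RAM is almost always the kernel's disk cache, which free -m reports on the "-/+ buffers/cache" line and gives back to programs on demand. For example:
Code:
# resident memory (RSS, in KB) per process, biggest first
ps -eo pid,rss,comm --sort=-rss | head -15
# the "-/+ buffers/cache" row shows what is really free once cache is excluded
free -m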
I have a log file that I would like to examine while the process that writes to it is making changes. Is there some way to open this file and read the changes written to it in real time?
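tail -f (or less with +F) does exactly this: it keeps the file open and prints new lines as the process writes them. The path below is just an example:
Code:
tail -f /var/log/myapp.log     # Ctrl-C to stop
# or, to follow the file and still be able to scroll back:
less +F /var/log/myapp.log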
Linux OS: Ubuntu 10.10. In Windows, we can use Device Manager to check the power usage of USB devices (for example, my WiFi adapter uses 500mA). How do I check this under Linux? The purpose of asking this question is just to learn.
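lsusb can show the declared maximum power draw per device; the MaxPower field in its verbose output corresponds to what Device Manager reports. For example:
Code:
# list each USB device together with its declared maximum power draw
lsusb -v 2>/dev/null | grep -E "^Bus|MaxPower"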
I am looking for a web-based, real-time, iftop-like tool for Linux. I mean one that shows the current active connections on a NIC for any client connected to it. I don't want offline data; I want real-time data for the current connections, viewable on the web.
I know this command exists; I just can't seem to find it. I want to see the last few lines of a file as more are added, in real time. Can someone point me in the right direction?
I was wondering if there is a command to show files being created in real time. I executed a command that will create thousands of files and takes a long time, and I want to check whether it is still creating additional files or whether it has frozen.
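A quick way to see whether files are still appearing is to let the file count in the output directory repaint every couple of seconds; if the number stops growing, the job is probably stuck. The directory is a placeholder:
Code:
# file count in the output directory, refreshed every 2 seconds
watch -n 2 'ls /path/to/output | wc -l'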
I have a few servers that are exposed to the internet. When someone tries to brute-force their way into SSH, OSSEC adds their IP to hosts.deny. Then the hacker (read: script kiddie) moves on to the next IP up the line and hits my next server, and so on.
I end up getting 20 emails for all the servers that they hit.
My question: is there any way to sync the hosts.deny file across multiple servers, so that if they are locked out of one, they are locked out of all of them?
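A simple approach, assuming SSH keys between the machines, is to have each server periodically push its hosts.deny to its peers; the sketch below appends and de-duplicates rather than overwriting, so bans collected on different servers are merged. Hostnames are placeholders, and tools like csync2 or a centrally managed ban list do this more robustly:
Code:
#!/bin/bash
# merge the local hosts.deny into each peer's copy (run from cron every few minutes)
PEERS="server2.example.com server3.example.com"   # assumption: adjust to the real hosts
for h in $PEERS; do
    ssh "$h" 'cat >> /etc/hosts.deny && sort -u /etc/hosts.deny -o /etc/hosts.deny' < /etc/hosts.deny
done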