Software :: Dynamic Bandwidth Throttling For SFTP-uploads?
Aug 20, 2010
I am uploading incremental backups with duply/duplicity using the SFTP module. As the initial upload is pretty big and runs for several days (more than 50GB over a 1Mbps line), I am confronted with the problem that other users on the network experience slowdowns while I upload.
I would like to run a script every n minutes which pings a host on the internet (the second hop of the traceroute, for example). If the response time rises above a threshold (150ms, say), the script should throttle the upload for one specific host and protocol; traffic to the local net (mainly Samba) should be unaffected. I cannot use the QoS of the firewall/router. I would also like the penalty to be removed once the ping is quick again (less than 70ms, for example). I looked at trickle and some other out-of-the-box shaping tools, but they do not give me the possibility to change the rate while the upload is running.
I would now write a script in Perl which uses [URL]some wrapper for iptables combined with some ping module[URL]. I also wanted to get a proof of concept together before I start coding (I haven't verified yet that this works):
sudo tc qdisc add dev eth0 root handle 11: cbq bandwidth 100Mbit avpkt 1000 mpu 64
sudo tc class add dev eth0 parent 11:0 classid 11:1 cbq rate 100kbit allot 1514 prio 1 avpkt 1000 bounded
sudo tc filter add dev eth0 parent 11:0 protocol ip prio 16 u32 match ip dst MyserverIP flowid 11:1
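The monitor loop I have in mind would look roughly like this (a shell sketch only, not the Perl version yet; the ping host, the 150ms/70ms thresholds and the two rates are placeholders, and I haven't tested it):

#!/bin/bash
# Sketch: re-rate class 11:1 depending on ping latency to a reference host.
PINGHOST=192.0.2.1     # e.g. the second hop from traceroute (placeholder)
IFACE=eth0

while true; do
    # average round-trip time in whole milliseconds over 5 pings
    RTT=$(ping -c 5 -q "$PINGHOST" | awk -F/ '/^rtt|^round-trip/ {print int($5)}')
    if [ -n "$RTT" ] && [ "$RTT" -gt 150 ]; then
        # line looks congested: throttle the backup traffic hard
        tc class change dev $IFACE parent 11:0 classid 11:1 cbq rate 50kbit allot 1514 prio 1 avpkt 1000 bounded
    elif [ -n "$RTT" ] && [ "$RTT" -lt 70 ]; then
        # line is idle again: relax the limit
        tc class change dev $IFACE parent 11:0 classid 11:1 cbq rate 100kbit allot 1514 prio 1 avpkt 1000 bounded
    fi
    sleep 60
done

The idea is just hysteresis: only the filtered traffic to MyserverIP is affected, and the rate is changed in place with "tc class change" so the running upload does not have to be restarted.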
I am trying to limit the bandwidth of certain IP addresses on my server. I have been doing hours of reading and not getting very far...
So far I believe the iptables command is:
And now I just need the tc command to read those marks and limit bandwidth. I have a gigabit connection and would like to limit each of these IP addresses to 10mbit in and out.
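What I have pieced together so far for the tc side looks like this (untested; the address 192.0.2.10, the mark value 10 and eth0 are placeholders):

# mark traffic for one customer address with fwmark 10
iptables -t mangle -A POSTROUTING -d 192.0.2.10 -j MARK --set-mark 10

# HTB root with a 10mbit class, plus a filter that matches the fwmark
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit ceil 10mbit
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 10 fw flowid 1:10

As far as I understand, tc only shapes egress on the interface it is attached to, so this would cover the outbound direction; for the inbound 10mbit I would apparently need the same thing on the other interface, or an IFB device.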
There seem to be many different ways of controlling the bandwidth used by downloads of content from Apache2. Does anyone know which module is the standard one deployed/deployable on openSUSE?
I'm going to set up a new Linux router for a company and have to set up bandwidth throttling. They have an unlimited ADSL internet connection which will be shared between two businesses, one being them. I need to set it up so that their connection is never slowed down by the other business. Both will be connected to the same NIC, but on different subnets. How would I go about doing this?
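What I have in mind, roughly (untested; eth1 as the LAN-facing NIC, 8mbit of ADSL downstream, and the two subnets are all made up for the example):

# download direction, shaped on the LAN-facing interface
tc qdisc add dev eth1 root handle 1: htb default 20
tc class add dev eth1 parent 1: classid 1:1 htb rate 8mbit ceil 8mbit
tc class add dev eth1 parent 1:1 classid 1:10 htb rate 6mbit ceil 8mbit prio 0   # the company
tc class add dev eth1 parent 1:1 classid 1:20 htb rate 2mbit ceil 8mbit prio 1   # the other business
tc filter add dev eth1 parent 1: protocol ip prio 1 u32 match ip dst 192.168.1.0/24 flowid 1:10
tc filter add dev eth1 parent 1: protocol ip prio 1 u32 match ip dst 192.168.2.0/24 flowid 1:20

With HTB each subnet is guaranteed its rate but can borrow up to the ceil when the other side is idle; the same structure, matching on source subnet, would be repeated on the WAN interface for the upload direction.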
I have this strange problem which I have been unable to find anything about by searching the web, and I'm not sure what to do next. My Linux knowledge is somewhere between basic and intermediate, but I know how to troubleshoot general hardware problems.
My problem is that Ubuntu 9.04 Jaunty 64-bit hangs while SFTP is active and my dynamic IP changes. For example, I SFTP into my home server and transfer files, then suddenly my ISP decides to renew my IP and gives me a new one while my SFTP client is still uploading files to my home server. This causes my SFTP client to stop working. Upon checking, my router is still running, with a new IP lease from my ISP. My Linux box still powers on, but typing anything on the keyboard does not make it "wake up" and put anything on the monitor. Nothing seems to make it respond, and the only way around it is to power off and on. During that time you cannot SSH into the server, as there is no response, and SFTP into the server is not possible either because the connection fails.
The server has all new hardware, latest BIOS, etc. Memtest86 shows no errors after running for more than 5 hours. I am unable to find anything out of the norm in /var/log/kern.log or in dmesg. All hardware seems to be working.
When I think about it, I tend to suspect that OpenSSH (probably the default package in Jaunty) is causing this system hang whenever a connection from the outside world is interrupted. However, I find that hard to believe, because I am sure the daemon and Linux can tolerate this situation without the system hanging. FYI, I have installed vsftpd as well, but that should not be a problem.
I want to restrict access to my web server to only the visitors I choose to allow, but those people have dynamic IPs. I want to use DynDNS, have each person keep their hostname updated, and grant access based on the hostname pointing to that person's current dynamic address.
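What I was thinking of is a small cron job that re-resolves the DynDNS hostname and rewrites an allow rule; a sketch (the hostname, port and chain name are placeholders, and the INPUT chain would still need a default deny for that port):

#!/bin/bash
# Sketch: refresh the firewall allow rule for one DynDNS user (run from cron).
HOSTNAME=user.dyndns.example
PORT=80

IP=$(host "$HOSTNAME" | awk '/has address/ {print $4; exit}')
[ -z "$IP" ] && exit 1

# rebuild a dedicated chain so stale addresses fall out automatically
iptables -N DYNUSERS 2>/dev/null
iptables -F DYNUSERS
iptables -A DYNUSERS -s "$IP" -p tcp --dport "$PORT" -j ACCEPT

INPUT would jump to DYNUSERS for port 80 once (iptables -I INPUT -p tcp --dport 80 -j DYNUSERS) and then drop whatever falls through, so only the currently resolved addresses get in.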
I created a class like this for shaping packets at a specified bandwidth rate:
tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1: htb default 15
tc class add dev eth0 parent 1:0 classid 1:1 htb rate 750kbit ceil 750kbit
tc class add dev eth0 parent 1:1 classid 1:3 htb rate 600kbit ceil 750kbit prio 0
For our requirement: I don't want to specify the bandwidth rate strictly, like rate 750kbit ceil 750kbit. Instead, the class should be allocated a rate based on whatever bandwidth is actually available at the time. I need some way of measuring the incoming bandwidth, and I'd like to know whether there is any other method for specifying the rate of a class dynamically.
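The direction I was considering is to measure the actual throughput periodically and resize the class on the fly, something like this rough sketch (the 10-second window, the 80% headroom factor and eth0 are all placeholders):

#!/bin/bash
# Sketch: measure current throughput on the interface and re-rate class 1:3 to match.
IFACE=eth0

B1=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
sleep 10
B2=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)

KBIT=$(( (B2 - B1) * 8 / 10 / 1000 ))   # observed kbit/s over the window
LIMIT=$(( KBIT * 80 / 100 ))            # leave some headroom
[ "$LIMIT" -lt 100 ] && LIMIT=100       # keep a sane floor

tc class change dev $IFACE parent 1:1 classid 1:3 htb rate ${LIMIT}kbit ceil ${LIMIT}kbit prio 0

Is there a cleaner way, or an existing tool, that does this kind of measurement and adjustment?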
As a Windows user, I generated a pair of DSA keys from CoreFTP Lite and sent it to a third party that runs an SFTP server. They told me that a valid DSA key needs to have ssh-dsa at the start and the username@systemname at the end. CoreFTP generated neither the ssh-dsa header nor the username@systemname footer. I tried with WinSCP and it didn't generate them either. Is there a difference in how SFTP works between Windows and Linux? If I put a useraccount@systemname at the end of the text, will it work? How would the Linux system validate that my system is called "systemname"? And if it can't validate that, what is the purpose of adding it?
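For comparison, this is what I understand the key would look like if it were generated with OpenSSH on Linux (the filename is just an example, and the key data is truncated):

ssh-keygen -t dsa -f mykey
cat mykey.pub
# ssh-dss AAAAB3NzaC1kc3M... user@hostname

So the OpenSSH type string is actually "ssh-dss" rather than "ssh-dsa", and as far as I know the trailing user@hostname is only a comment: the server does not validate it, it just helps identify whose key it is.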
After being forced to rebuild my computer (failed drive), I finally got current and installed F11 (I was using F9). I was watching GKrellM last night and noticed something odd: the CPU frequency on my processor (Phenom 9600) was scaled down to the slower speed and was rarely being pushed to full speed.
I verified that cpuspeed was in ondemand mode and ran some processor-intensive tasks (huge image loads, large file cats, large yum installs, etc.). Only when I was able to sustain a near-100% load would the system throttle up to the full clock speed, and it would then drop back to the slower speed very quickly even though the process was still running. As a rough benchmark, I ran these tasks again in performance mode to see what impact this was having; most of them were taking as much as 40-50% longer to complete in ondemand mode.
Digging further, I found that the default up_threshold in F11 is set to 95%! This is verified by cat'ing /sys/devices/system/cpu/cpu0/cpufreq/ondemand/up_threshold. This means that the system will not throttle up to full speed unless a 95% load is sustained for multiple samples (36 milliseconds each by default).
I overrode these settings in /etc/sysconfig/cpuspeed, changing up_threshold to 60 and down_threshold to 30. I am at work right now, so I can't benchmark the change until tonight. I guess I could have just set it to performance mode and left it, but I'd rather save the power when the machine is idle.
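For reference, the same change can be applied on the fly through sysfs (it takes effect immediately but is lost on reboot, which is why I also put it in /etc/sysconfig/cpuspeed):

# check the current value
cat /sys/devices/system/cpu/cpu0/cpufreq/ondemand/up_threshold
# lower it so the governor ramps the clock up sooner
echo 60 > /sys/devices/system/cpu/cpu0/cpufreq/ondemand/up_threshold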
Does anyone have any thoughts on why 95% is the default, and whether there is any problem/benefit in changing it to a much lower value? It seems that anyone with a CPU using SpeedStep or Cool'n'Quiet would suffer the same severe performance impact I saw with the default values.
I'm trying to install ATLAS, which requires disabling CPU throttling. Normally one can do this in the BIOS, but there is no such option on my Dell Inspiron 6400. There is actually a SpeedStep option in the BIOS; however, disabling it locks the CPU at the lowest performance level rather than the highest (Dell!!!).
After googling a lot, I found that some distros have /usr/bin/cpufreq-selector, through which one can disable throttling, but it doesn't exist in Slackware.
I know that by appending the kernel option "acpi=off" one can disable ACPI entirely, and thus the throttling control, in Slackware, but that seems rather dirty.
Does anyone know a better way to do it?
PS: With an Intel T2050 CPU, I can't find the directory /sys/devices/system/cpu/cpu1, only /sys/devices/system/cpu/cpu0. It's a similar case in /etc/acpi. It seems that Slackware treats my CPU as a single-core one, while it is not.
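For what it's worth, I assume the governor can also be switched directly through sysfs without cpufreq-selector (assuming the cpufreq driver is loaded):

# see which governors the kernel offers
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
# pin the CPU at its highest frequency, e.g. for the ATLAS build
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

(repeated for each cpuN directory that actually shows up), but I'm not sure whether that is enough to satisfy ATLAS's throttling check.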
We have a simple office network set up that we also use to connect to the internet; however, of late the number of users has increased, slowing internet access. A bandwidth upgrade is not an option, so I have to do bandwidth shaping on our Linux router. The question is how to set the squid configs to allow a certain IP range a certain percentage of the bandwidth, e.g. 60%, and further divide the rest. Alternatively, how can I allow certain IPs to have higher-bandwidth access?
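What I have sketched so far with squid's delay pools looks like this (untested; the subnet and the byte rates are placeholders, and delay_parameters is in bytes per second):

# priority range gets the larger share, everyone else shares the rest
acl priority_net src 192.168.1.0/25

delay_pools 2
delay_class 1 1
delay_class 2 1
delay_access 1 allow priority_net
delay_access 1 deny all
delay_access 2 allow all
# class-1 pools take a single aggregate bucket: restore_rate/max_size in bytes
delay_parameters 1 76800/76800
delay_parameters 2 51200/51200

One thing I'm aware of is that squid only shapes the HTTP traffic that actually goes through the proxy, so anything bypassing it would need tc on the router instead.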
Is there a way of throttling a process's resources, something akin to limits but for processes rather than users? I.e. I want processX to be restricted in the amount of memory it can consume. For process CPU I guess I can simply nice the process, but total memory consumption is my primary concern.
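The simplest thing I could come up with is a wrapper that caps the address space before starting the process (the limit and the program path are placeholders):

#!/bin/sh
# Sketch: cap the process's address space at ~512MB before exec'ing it
ulimit -v 524288        # value is in kilobytes
exec /usr/bin/processX "$@"

Is that the right approach, or is something like the cgroups memory controller the better way to limit a single process?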
I'm not getting file transfers that utilize the full bandwidth of my ADSL broadband. I have a 512 kbps line (I know that's not broadband in most places, but it's the option available to me), and I usually get download speeds of 55-60 kbps. Over the past couple of days this has dropped to an abysmal 2-3 kbps. So I used [url] to test my download speed, and it was as expected (i.e. the usual 55-60 kbps). I also used my ISP's speed-test page, and that too gave the usual results. There has been no throttling by the ISP, as I confirmed with their help desk.
Also, web pages open fine. It's only when I'm downloading a file (yes, web pages are also files, but what I mean is compressed/archived files: .rar, .tar.?g*, et al.) that I can't get the desired speed. I haven't changed the resolv.conf file, nor have I made any other changes that might cause this. I use ppp to dial up to my ISP, with the pon and poff scripts that ppp provides. I have a peers file configured for my ISP, which, again, I haven't edited.
I have a server and a few computers connected to it via an Airport Extreme, using network cable. When I'm uploading (FTP), i.e. using a lot of the network "space", the other computers on the network get kicked out. What is going on? My Airport Extreme is doing fine, but my other clients just get kicked off. If I pause the upload, everything is okay again. The whole network is gigabit: clients, everything.
I have a site that users upload files to. It's on a dedicated server with two HDDs, and the first HDD is 97% full. Is it possible to use the other HDD for the files users upload? If so, how?
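My rough plan was to mount the second disk and bind-mount (or symlink) the upload directory onto it, along these lines (untested; the device and paths are placeholders):

# assuming the second disk already has a filesystem
mount /dev/sdb1 /mnt/disk2
mkdir -p /mnt/disk2/uploads
# copy the existing uploads across, then put the new location in place
cp -a /var/www/site/uploads/. /mnt/disk2/uploads/
mount --bind /mnt/disk2/uploads /var/www/site/uploads

with matching entries in /etc/fstab so it survives a reboot. Does that sound sane, or is there a better way?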
I'm thinking about ways to limit access to my web server. It runs Nginx and PHP over FastCGI. The server contains a large amount of information; the data is freely available and no authentication is required, but other companies might like to mirror it and use it on their own servers.
Requests could be limited at different levels: IP, TCP, HTTP (by nginx) or by the PHP application. I found some solutions (like Nginx's limit_req_zone directive), but they do not solve the second part of the problem: there is no way to define a whitelist of clients who are allowed to use the data.
I thought about an intelligent firewall that would limit requests on a per-IP basis, but I have yet to find such a device. Another option was to hack up some scripts that would parse the log file every minute and modify iptables to ban suspicious IPs, but that would take days to write and I doubt such a system would survive, say, 1000 requests per second.
Perhaps some HTTP proxy, like Squid, could do this?
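One idea I'm toying with is to handle the whitelist inside nginx itself, by giving whitelisted clients an empty limit key so they are never counted (untested; the addresses are placeholders):

# in the http{} block
geo $limited {
    default      1;
    192.0.2.0/24 0;     # partners allowed to mirror, exempt from the limit
}
map $limited $limit_key {
    0 "";               # empty key = request is not counted by limit_req
    1 $binary_remote_addr;
}
limit_req_zone $limit_key zone=perip:10m rate=10r/s;

# in the relevant location{}
limit_req zone=perip burst=20 nodelay;

Would that hold up under load, or is a proxy in front still the better option?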
Sometimes I notice high upload speeds for 10 minutes or so. At the time of the screenshot I was sitting in a public wireless place, only Chromium was open, and I don't see any reason why there should be sustained upload traffic. Is there a GUI or CLI tool so I can find out which process is using the internet?
All my music is already synced, but every time a song finishes playing in Banshee, a notify-OSD message appears letting me know that the song is being uploaded to my Ubuntu One account. I'm running Ubuntu 11.04 32-bit.
I have Debian and want to be able to connect to an FTP server, download some (or all) files, disconnect from this server, connect to another FTP server and upload everything to it (and then delete the temporary files on my PC). This should be done from the command line. I am no expert in Linux (although I am accustomed to it). How would I do this, or at least part of it? In the end I would like to write a script that mirrors my site from one place to another.
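What I was going to try is lftp, pulling everything into a temporary directory and pushing it back out (hostnames, credentials and paths are placeholders):

#!/bin/sh
# Sketch: mirror a site from one ftp server to another via a local temp dir
TMP=$(mktemp -d)

lftp -u user1,pass1 ftp://source.example -e "mirror /public_html $TMP; quit"
lftp -u user2,pass2 ftp://dest.example -e "mirror -R $TMP /public_html; quit"

rm -rf "$TMP"

Is that a reasonable way to do it, or is there a more standard tool for this?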
What is the least painful way to temporarily prevent uploads to an FTP server by certain accounts? They all upload directly to their home directories as set up in /etc/passwd.
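The bluntest idea I had was simply to take the write bit off the affected home directories while the uploads need to stay blocked (the username is a placeholder):

# stop new uploads for one account without touching the others
chmod u-w /home/uploaduser
# and later, to restore it
chmod u+w /home/uploaduser

which should work as long as the FTP daemon writes as the user and isn't configured to override permissions. Is there a cleaner per-account switch?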
I upload picture files to an FTP server. I can't do much about my upload speed, but I think that a multi-threaded upload may yield the same kind of improvement that a multi-threaded download yields.
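Something like this is what I had in mind for testing it, pushing several files in parallel with curl (the server, credentials and concurrency level are placeholders):

#!/bin/sh
# Sketch: upload all jpg files with four transfers in flight at once
ls *.jpg | xargs -P 4 -I{} curl -sS -T "{}" "ftp://user:pass@ftp.example/pictures/"

Though I realise that if the uplink itself is already saturated by a single transfer, running several in parallel won't gain anything.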
I am working on a PHP enabled webpage that will allow a user to select multiple files and directories to upload from a local machine to an ftp server. I am comfortable with uploading the files from the machine to the server. The problem is making it easy to select all the desired files. What I would like to do is create an expandable file tree that lists all the directories and files on the local filesystem. From there, the user should be able to select directories and files using checkboxes. Upon clicking submit, all of the selected files should be fed into an array of files that can be sequentially uploaded to the ftp server.