Ubuntu Servers :: Network Slowdown On High Torrent Uploads?
Jun 24, 2011
I have a home server based on Ubuntu Linux 10.04.2.
Hardware:
Motherboard - Asus AT4NM10-I (Intel NM10, PCI)
CPU - Integrated Intel Atom D410
RAM - 2 GB
LAN - D-Link DGE-528T Gigabit Adapter
My provider gives an 8/2 Mbit ADSL connection.
I have tried both Deluge and Transmission, with both the integrated and the external network card, and had no luck.
When a torrent is being seeded at top speed, the network starts freezing: the server is almost unreachable, video freezes when watched over the LAN from the server, etc.
When I pause the upload, everything starts working fine again!
The network is based on a gigabit switch and copper UTP cables...
So I found one issue: when the torrent upload reaches peak speed (160-200 KB/s), a huge read slowdown happens. The server becomes almost unreachable... It still accepts connections via PuTTY, but they take a long time.
I checked top during those lags (Deluge, Transmission): 10-15% CPU usage.
So I think the problem is in LVM and not in the CPU.
How can I find the weak spot in the system to avoid those lags? Because while a torrent is seeding, it's impossible to watch movies over the network from that server.
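One way to look for the weak spot (a rough sketch; vmstat is usually preinstalled, iostat comes with the sysstat package) is to watch whether the stall shows up as CPU time or as I/O wait while a torrent is seeding:
Code:
# 'wa' in vmstat is the share of time the CPU sits waiting on I/O
vmstat 2
# per-device view; a disk pinned near 100 in the %util column is the bottleneck
iostat -x 2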
I have a quite fast Internet connection, 100 Mbit, and I'm able to take advantage of the entire bandwidth that I'm paying for. However, when I use Transmission as my torrent client and download a torrent faster than about 7-8 MB/s, my hard drive is spinning all the time, my desktop becomes sporadically unresponsive, and the load average gets high. I'm pretty sure the Transmission application is somehow the cause. It must cache things in some strange way... I don't know.
Either way, I'm not experiencing anything like that with any other torrent client on Linux (or Windows, for that matter). It's not that I'm tied to Transmission; in fact, I prefer rtorrent and use it whenever I can. It's just that some torrent sites give me Transmission as the only client option when I'm using Linux, so I have to stick with it those times. I have a fairly fast system: Core Duo 3 GHz, 4 GB RAM, 500 GB 7200 rpm 16 MB cache WD hard drive, etc., so the hardware certainly shouldn't be the problem.
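For what it's worth, newer Transmission versions expose the size of their write-back cache in settings.json, and the kernel's dirty-page thresholds control how much buffered data piles up before a flush. A sketch of both knobs (the values are guesses, not tuned recommendations):
Code:
# edit only while Transmission is stopped; it rewrites settings.json on exit
#   ~/.config/transmission/settings.json:  "cache-size-mb": 16,
# make the kernel start background write-back earlier, so flushes are smaller
sudo sysctl -w vm.dirty_ratio=5 vm.dirty_background_ratio=2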
Is anyone seeing a dramatic slowdown and "PAGE NOT FOUND" errors after the latest updates?
It even happens on LAN activity. Peak speed is excellent, but the ability to resolve both local and WAN addresses is very spotty.
The machine is a newer clone with an AMD Phenom II dual core and 2 GB RAM, running 10.04 32-bit with GNOME. The network is a Gigabyte motherboard's 1000-T Ethernet, hardwired to a D-Link router into a cable modem.
XP and Win7 aren't affected.
It was ripping along this morning, but right now it is pretty much crippled.
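If resolution itself is the suspect, dig can time lookups against the current resolver and a known-good external one (example.com and 8.8.8.8 are just placeholders):
Code:
dig example.com | grep "Query time"
dig @8.8.8.8 example.com | grep "Query time"   # compare against an external resolver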
I have a server and a few computers connected to it via an AirPort Extreme, using network cables. When I'm uploading (via FTP), i.e. using a lot of the network capacity, the other computers on the network get kicked out. So what is going on? My AirPort Extreme is doing fine, but my other clients just get kicked out. If I pause the upload, everything is okay again. The whole network is 1 gigabit: clients, everything.
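A classic band-aid for this is capping the server's outbound rate just below the point where it starves everyone else. A sketch using tc's token bucket filter (eth0 and the rate are placeholders to adjust):
Code:
# limit outbound traffic on eth0 to 900 Mbit/s
sudo tc qdisc add dev eth0 root tbf rate 900mbit burst 64kb latency 400ms
# remove the limit again
sudo tc qdisc del dev eth0 root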
In my /var/www directory, I have everything set up with:
user: www-data
group: developers
directories: chmod 570
files: chmod 460
Everything seems fine. Users from the developers group can edit files and all, but now we have begun using a Git repository, and whenever a user edits a file (e.g. Joe, who is a developer), the file permissions get screwed up again. They become:
user: Joe
group: Joe
directories: chmod 755
files: chmod 644
How can I fix this so the permissions remain the same?
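The usual fix is a setgid bit on the directories plus Git's shared-repository mode, so new files inherit the developers group instead of Joe's private group. A sketch (paths assumed from the post):
Code:
sudo chgrp -R developers /var/www
# setgid on directories: files created inside keep the directory's group
sudo find /var/www -type d -exec chmod g+s {} +
# inside the repository: tell git to keep what it creates group-writable
git config core.sharedRepository group
# each developer also needs a group-friendly umask, e.g. 002 in ~/.profile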
We are a small company running half a dozen servers in a data center. Recently we were charged heavily for over-utilizing our data transfer allowance, so we are looking for a way to measure uploads and downloads on a per-IP and per-port basis. We have a mixed environment (Win2008/Ubuntu), so the tool should work for both. I am not sure whether MRTG provides per-port (i.e. per-application) analysis.
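On the Ubuntu side at least, iftop gives a live per-host, per-port breakdown (it won't cover the Win2008 boxes, but it's a quick sanity check while evaluating cross-platform tools):
Code:
sudo apt-get install iftop
# -P shows ports, -n/-N skip DNS and service-name lookups, -i picks the interface
sudo iftop -nNP -i eth0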
I have a web server in my kitchen with Apache running on it. Since the upload speed is quite low due to my ISP, I would like to execute a bash script that uploads a file to another server through a website (which is htaccess-protected). The general idea: someone with access to my website browses through a folder, copies a file path into an input form, and presses "upload". Rather than executing a bash script directly, I could have a cron job running in the background that finds the path and then uploads the file to the other server, on which I have user space and which is accessible via SFTP/SSH. The file would then be erased a couple of days later or so. That person would then be able to access the file at higher speed some time later, without logging in via SSH and doing all that manually.
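A minimal sketch of that background job, assuming the web form appends one path per line to a queue file (all names here are made up):
Code:
#!/bin/bash
# run from cron every few minutes
QUEUE=/var/www/upload-queue.txt           # written by the web form
REMOTE=user@remote.example.com:incoming/  # placeholder destination
[ -s "$QUEUE" ] || exit 0                 # nothing queued, nothing to do
while IFS= read -r path; do
    [ -f "$path" ] && scp -q "$path" "$REMOTE"
done < "$QUEUE"
> "$QUEUE"                                # clear the queue once processed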
I use Deluge and it worked for a while, but then I randomly got this error message when trying to download a .torrent file: "/tmp/Manchester_Orchestra___I_m_Like_A_Virgin_Losing_A_ ___-1.torrent could not be opened, because an unknown error occurred. Try saving to disk first and then opening the file."
I've played with Ubuntu for quite a while now, and I picked up an Atom-core mini PC cheaply, so I thought I'd make a hobby of setting up a simple server to store files on, access files from my XBMC-enabled Xbox, and download torrents while I'm at work (though the torrents can wait for future projects). I installed Ubuntu Server 9.10; I'm aware it's command-line only. So far I've managed to set up its IP address and make it static. I'm not sure what to do with hosts at the moment; reading about it isn't making much sense of its purpose or layout, so I've left it as is. I permanently mounted a FAT32 partition to /media/stuff and changed permissions to 0777; I have only one user on it, myself. I installed samba, smbfs, smbclient, and an OpenSSH server, and I can do all the terminal stuff from my normal PC. My current issue lies with Samba. With the GNOME desktop I've never had TOO many problems sharing folders; however, I'm stuck on how to proceed with editing smb.conf, as there are a lot of options, some of which I'm not sure I need:
- I've changed the workgroup to home
- under Authentication I have security = share
- I added the following section
Code:
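# the original snippet wasn't preserved in the post; a share section matching
# the description might look roughly like this (names taken from the post)
[media-stuff]
   path = /media/stuff
   browseable = yes
   read only = no
   guest ok = yes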
Anyway, on my Windows XP Pro machine I can access \\thork, which is the machine, and I see 'media-stuff', which is a start I guess, but I'm automatically refused access.
I'm trying to bind the Deluge torrent web UI to port 80, but as I'm already running a server on that port, I've decided to use the ProxyPass option in a vhost. As I prefer running over HTTPS, I've used my port 443 vhost, which I already use to expose AjaxTerm (SSH with a web interface).
But whereas AjaxTerm works, Deluge doesn't... I only get a black page, though the tab name is correct (i.e., Deluge: Web UI 1.3.1).
Here is my vhost:
Code:
<VirtualHost *:443>
    SSLEngine On
    SSLCertificateFile /etc/ssl/private/localhost.pem
    ProxyRequests Off
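The pasted vhost is cut off above. For reference, a reverse-proxy stanza for the Deluge web UI often looks like the sketch below, assuming the default WebUI port 8112 and mod_proxy, mod_proxy_http, and mod_headers enabled:
Code:
    ProxyPass        /deluge http://localhost:8112/
    ProxyPassReverse /deluge http://localhost:8112/
    # Deluge's web UI needs to know its base path when served under /deluge
    <Location /deluge>
        RequestHeader set X-Deluge-Base "/deluge/"
    </Location>
</VirtualHost>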
When I start downloading a torrent via Transmission, uTorrent, or Deluge, my network gets disconnected every time, which can only be fixed by restarting the DSL modem. I never had this problem in Win7 or WinXP.
When downloading a torrent, after a few minutes my connection stops; the browser too. Same problem with KTorrent and Deluge. The only way to solve the problem is to reconnect to my WLAN. I use a TP-Link WR841N wireless router and a Toshiba Satellite Pro. The same setup is OK with Windows and with Ubuntu 11.04 alpha 3.
Core 2 Duo E4600
2 GB DDR2 RAM (1 stick)
Intel ICH10R based motherboard (tried an ICH9R as well)
4-port SATA controller (PCI Sil 3114)
O/S: Ubuntu Desktop x64 10.04 LTS (using 'desktop' because I like having a remote desktop)
The Storage Setup
Disks: an assorted selection of 9 disks: 750 GB, 1000 GB, and 1500 GB Seagate and Western Digital drives. The disks are joined through a standard LVM2 configuration. I don't know the LVM term, but normally you'd call it a JBOD setup. On that LVM device I've put a cryptsetup device, made with the LUKS tools (aes-xts-plain, 256-bit). On the cryptsetup device I've created and mounted an EXT4 partition.
All in all, a completely standard LVM2 and LUKS setup, running EXT4. After a reboot, I unlock my cryptsetup encryption device and then mount the EXT4 partition. All is well, the mount is accessible and everything looks fine. I then try to send a file to the mount via Samba. After a few hundred MB are written, the I/O wait goes berserk. It stays at 50% (a dual-core setup, remember). The system becomes unresponsive to network commands (I can't browse Samba) for about 5-10 minutes. When it finally responds, the I/O wait is gone and everything is fine. I can then write and read hundreds of GBs of data without any issues at all. I can benchmark and stress all the disks perfectly fine, and no logs show disk errors.
I tried monitoring my disks with 'iostat -d 2' while the I/O wait was happening, and there is some slight Blk_read/s activity on one disk at a time. First, for example, /dev/sda shows a little Blk_read/s activity; then it jumps to the next disk, and when every disk has shown that slight Blk_read/s activity (500-800 or so), the problem is gone and the I/O wait is no more. I've tried changing motherboards, switching disks around on the controllers, checking individual disks, replacing disks, and different versions of Ubuntu. The problem persists, however. I could see it being a network issue, possibly a driver issue, but since the NIC is a standard on-board RTL8111, it seems unlikely the problem wouldn't be more widespread, as this NIC is literally used everywhere. I did change my motherboard, so a faulty NIC seems unlikely twice in a row.
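Two things worth watching while the stall happens (a sketch; both commands are standard): extended iostat, which adds a per-device %util column, and the kernel's dirty-page counters, which show whether a large write-back backlog is being flushed:
Code:
iostat -dx 2                               # %util near 100 marks the saturated device
watch -n 2 'grep -i dirty /proc/meminfo'   # a huge Dirty value draining slowly = write-back storm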
I am learning to use Ubuntu as my server and learning to use a VPS too.
Now I am getting confused about my server's memory usage. I have just 3 sites: 1 blog site and 2 company profiles. But Apache's memory usage is more than 300 MB, and the total memory used on my server is more than 500 MB (the maximum is 512 MB of burst memory).
I am using Drupal for my websites. Is this normal? Because last week, memory consumption on my server was no more than 380 MB.
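With only 512 MB, the usual first knob is the prefork MPM's client limits, so Apache can't spawn more Drupal-sized processes than the VPS can hold. A sketch with guessed values (measure your own per-process size first):
Code:
# /etc/apache2/apache2.conf (Apache 2.2, mpm_prefork) - hypothetical low-memory values
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       4
    MaxClients           10
    MaxRequestsPerChild 500
</IfModule>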
I upgraded my web server to the new Ubuntu Server 10.04 (x86-64). After the upgrade, the load increased from 0.3 to 1.4. The web server runs phpBB, which is now generating slow queries, which it did not do before the upgrade to Lucid. HW configuration: Intel Core i7, 8 GB RAM, WD Raptor 10k rpm. I upgraded to the new version in week 17.
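To see exactly which queries turned slow, MySQL's slow query log can be switched on. A sketch for the 5.1 series that Lucid ships (the threshold and path are placeholders):
Code:
# /etc/mysql/my.cnf
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 1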
I am running the latest apache2 available in the Lucid repos on my desktop. All packages are updated as of this moment. In the root of my web server I have placed several soft links that point to folders on other ext3/NTFS partitions on the same disk. When I try to download any large file (say above 500 MB) from this server using Firefox, my desktop freezes when the 'save' window appears, and I notice very high CPU/RAM/disk usage, even though I have not yet clicked 'OK' to save the file. This issue is not present when the file is small. Note that Firefox and the web server are running on the same computer.
I have also tried nginx and lighttpd, and the issue is present there as well. When I tried downloading the same files using Internet Explorer 6.0 in an XP VM, the issue was not present. However, on Windows the issue recurs with Firefox as well.
I am having a problem with the server that I use to host my personal site. The load average quite often spikes above 1.00 for the 1- and 5-minute intervals, and the 15-minute interval gets above 0.5. This occurs while the server is idle, serving very few or no requests, with the CPU 99% idle and <1% IOWAIT. I have checked top and vmstat, but neither provides any useful info. top continues to say the CPU is 99% idle, and vmstat says there are 0 runnable and 0 blocked tasks. Occasionally vmstat will report 1 runnable task, but this doesn't even coincide with the load average spikes. I have already searched for other solutions to this problem, but everything I have seen says to use top and/or vmstat, and those aren't showing anything out of the ordinary. Can anyone recommend anything else I might try?
My server has a Pentium 4 HT 3 GHz processor and 2 GB RAM, and runs Kubuntu 10.10. (The reason it runs Kubuntu instead of Ubuntu Server is that it needs an X environment so the Nvidia driver can initialize and put its graphics card into a power-saving mode.)
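One thing top and vmstat don't surface directly: Linux counts processes in uninterruptible (D-state) sleep toward the load average, so a task stuck on disk, NFS, or a driver can raise the load while the CPU stays idle. A quick sketch to catch them in the act:
Code:
# list any processes currently in uninterruptible sleep
ps -eo state,pid,ppid,cmd | awk '$1 ~ /^D/'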
top says there's only 12 MB free (out of 1 GB), but I can't figure out what's using all the RAM. rtorrent is using 13 MB, and the rest are in the bytes range. (I ran top as root.)
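Most likely nothing is wrong: the kernel uses otherwise-idle RAM as disk cache and gives it back on demand, and top's "free" number excludes that cache. free makes the split explicit:
Code:
free -m
# the "-/+ buffers/cache" row is what applications actually use;
# "buffers" and "cached" are reclaimable disk cache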
I have an Ubuntu server running in our small office. Among its many duties is report generation. It uses PHP and DOMPDF (a PHP library for converting HTML/CSS to PDFs for printing). PHP's default memory limit of 32MB is not even close to being enough to pull large amounts of data from the database and generate images/tables/PDFs with that data.
I increased the memory limit to 64MB and that is adequate for reports under 3 pages or so (varies based on table complexity, images, etc). If any user tries to generate a report longer than that, PHP just throws an "out of memory" error and doesn't generate the report.
My question is: what are the possible consequences of increasing the memory limit yet again to 128MB or maybe even higher? The server isn't terribly powerful. It has 2GB RAM and 4GB swap space. I know that isn't much but this is a small office and at most I can only see two or three people trying to run reports at the same time. As for security, apache is currently only serving pages in the local network, but sometime within the next year I'll probably have it hosting a public website (currently using a hosting service). Is a high memory limit a potential security risk when exposed to the internet?
EDIT: Sorry, PHP's default memory limit is 16 MB, not 32 as I said. The question still stands, however.
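The limit is per PHP process, not global, so the worst case is roughly the limit times the number of simultaneous report generations: 3 concurrent reports x 128 MB = 384 MB, comfortably inside 2 GB. A sketch of the change (the path is the usual one on Ubuntu; adjust to taste):
Code:
; /etc/php5/apache2/php.ini
memory_limit = 128M
As for exposure to the internet, the bigger risk is arguably not the limit itself but letting anonymous visitors trigger many expensive reports at once; restricting or rate-limiting the report URLs matters more than the exact number.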
Using Ubuntu Server 10.04.2 64-bit, all up to date.
I am running multi-threaded processes. These use OpenMP in my own code and the multi-threaded ACML maths library. When run in the foreground, everything is fine, i.e. if I have set
export OMP_NUM_THREADS=8
then when I start, all 8 cores are in use and things whizz along. However, when running overnight and logged out, using e.g. 'at now + 1 minute' followed by the command, I only get about 130% CPU and it slows down accordingly. I have tried renicing and calling from within a bash script in case sh is doing something odd, but nothing seems to solve it. I am sure that in the recent past this wasn't the case.
The libraries being used are shared versions in case that might have any bearing.
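To rule out the job simply losing its environment or CPU affinity under at, a wrapper that logs both before launching can help; a sketch (my_program is a placeholder for the real binary):
Code:
#!/bin/bash
export OMP_NUM_THREADS=8
taskset -p $$        # print the CPU affinity mask the job actually received
env | grep -i omp    # confirm the variable survived into the at job
./my_program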
I run a website that has a very steady flow of traffic, and I'm seeing recent issues that I just don't like. The server is 10.04.2 on a Supermicro i7-950 with 6 GB RAM and two 500 GB Samsung F3 drives in software RAID1 (1x5400 rpm, 1x7200 rpm), and for several weeks it had been running very well. Recently, I'm seeing the server hang for 5-20 seconds. IOwait goes through the roof and nothing can write to the disk: Apache logs stop, redis fails to rebuild caches, MySQL errors out; then it continues and moves back to normal operation.
/ is ext4. The kernel was 2.6.32-server-x64, but since updating to 2.6.38-server-x64 the issue has dropped from maybe once per 10 minutes to once per 15 minutes. Three iostat copy/pastes show this when it hangs:
Code:
Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
[code]...
No SMART errors or SMART diagnostics show any issues with any of the disks, and kernel.log shows almost nothing, other than a process hung for 120 seconds about 5 days ago.
I am trying to figure out why the remote X performance of our RedHat 5.3 system is so bad. We have tried using X (a GNOME session) from several different X servers (Windows Xceed, Windows XWinPro, Linux Xnest on both Fedora 11 and CentOS 5.3, and Mac OS Xnest) and the system is barely usable. I have monitored the network traffic on the RHEL system and it goes up to 6 MB/s at some points, which seems a bit too high for X network traffic. I have disabled IPv6 and any ip_tables modules, and that has helped a bit, but it's still not very good. I suspected the network hardware and driver, but I cannot see how that would cause network traffic problems. I wonder if there are any X server network settings I might check, or whether trying Xfce would be a better option than GNOME. If so, do I have to get the xfce group from a CentOS repo, or is there something better suited to RHEL?
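As a quick experiment before switching desktops, tunnelling a single application over compressed SSH and comparing responsiveness can show whether raw X11 round-trips are the bottleneck (the hostname is a placeholder):
Code:
ssh -X -C user@rhel53-box gnome-terminal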
So my server is doing fine, but there is one odd thing I would like to fix. When I start and stop services, the CPU is maxed out for about five seconds each time. The services start at the same speed, but it still does this. Small things like lm_sensors don't do this, just things like httpd and sendmail. This server was upgraded to Fedora 11 with a netinstall CD a few months ago.
I have been wanting to increase the FPS rate of my current game servers, and I need a custom kernel config for this because I'm only hitting 500. I have attached a custom kernel config and am not too sure what to do with it at this point. The file name is config-2.6.24-zen4-lld.no-po.2000hz.
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.24-zen4-lld.no-po.2000hz
# Tue Nov 25 22:54:23 2008
#
# Zen Options
#
# Kernel Tunables
#
# CONFIG_ZEN_SERVER is not set
.....
# IO Schedulers
#
# CONFIG_FINGERPRINTING is not set
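For what it's worth, a config file on its own does nothing; it has to be dropped into a matching kernel source tree and the kernel rebuilt. A rough sketch, assuming the corresponding zen-patched 2.6.24 source is unpacked in ~/linux-2.6.24-zen4:
Code:
cp config-2.6.24-zen4-lld.no-po.2000hz ~/linux-2.6.24-zen4/.config
cd ~/linux-2.6.24-zen4
make oldconfig            # answer prompts for any options the config doesn't cover
make menuconfig           # optional: check Processor type -> Timer frequency
make && sudo make modules_install install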
I have problems with my network speed. When I ping my proxy server, I end up getting high packet loss, generally more than 30%. I have tried various network monitoring tools like EtherApe, Wireshark, and tcpdump, but I am not able to get to the bottom of the problem. Basically, I am trying to find out where the lost packets are going.
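mtr combines ping and traceroute, re-probing every hop continuously, which makes it the usual tool for finding where along the path packets disappear (the hostname is a placeholder):
Code:
mtr --report --report-cycles 100 proxy.example.com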
I recently caved and upgraded to Ubuntu 11.04, despite my hatred of the Unity interface. Anyway, I seem to be having completely random, massive lag spikes! It seems to happen when I do certain things, such as spending too much time on videos or streaming HD videos. Playing games such as Minecraft and using Skype give me the same issue. I don't know what to do! I haven't found anything similar to this in my Google searches.