Debian :: Fast Bandwidth And CPU But SLOW X11 Forwarding
Sep 26, 2010
I have a fast server running Debian 5 (I tried to upgrade and everything broke.
So I'll stay with Lenny):
Core 2 Duo
When I use the -X option and try to run applications on my desktop, it is VERY slow. Firefox takes 20+ seconds just for a right-click menu to appear. It is completely unusable at this speed, and this is after a clean install of Debian 5.
I've tried from multiple clients (Windows and OS X) and it is always slow, even though both computers and connections are fast. My home connection is 100mbit too.
So the problem is not bandwidth or resources; the problem must be with the server/software? Any ideas why tunneling X11 applications is so slow? Is there alternative X11 software I could use on the server?
A single FTP thread to this server gives 10MB/s (100mbit), so X11 should be fast? And by the way, I'm tunneling through SSH.
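One hedged thing worth trying: X11 over SSH is sensitive to round-trip latency and per-packet encryption cost rather than raw bandwidth, so enabling compression and a cheaper cipher sometimes helps noticeably. A minimal sketch (the hostname is a placeholder; arcfour was still offered by the OpenSSH shipped with Lenny-era systems):

```shell
# Sketch: -C enables compression, -c arcfour picks a cheap cipher.
# X11's many small round trips are usually the real bottleneck, not bandwidth.
ssh -X -C -c arcfour user@server firefox
```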
My wireless seems to be fast for a good 30 seconds, then bang: it takes a good while to load the next page, almost as if it's disconnecting and then rescanning and reconnecting. Why can't it stay connected? I have WPA-PSK security. Here are my network settings; please let me know if I should change any of them. (Side note: is there a way to stop this problem occurring so frequently? The wiki says it should only occur once in a while: https:[url].....)
Like the title says, that is my problem. I'm using a 3mbps cable modem (a Linksys DOCSIS device) with wired networking. When I download a file under Ubuntu 9.10, my latency (ping time) skyrockets to 2000ms on average, while in XP none of this happens; I tried downloading the same file from the same website. My network setup in Ubuntu 9.10 is, well... nothing: I'm using automatic DHCP detection. Same thing in XP.
I don't want to keep going back and forth to XP for the internet. Funny thing though: none of this happened when I first started with Ubuntu 9.10. I didn't configure anything network-related; it has been this way from the beginning. The only thing I tried was Ubuntu One, but that was months ago, and the slow-connection problem only appeared this March.
I'm using wlan and I'm behind a NAT firewall. Downloading with BitTorrent can reach speeds of 1.5MB/s, but browsing is incredibly slow. Chromium/Firefox seem to hang forever on "connecting..." or "waiting for". Sometimes pressing the reload button will speed it up. I have two Ubuntu boxes and they behave the same way.
I don't get it: if downloading is so fast, how can browsing be so slow? Yes, I have disabled IPv6 in grub and in Firefox; I don't know how to do it in Chromium. I have 64-bit Ubuntu and a Cnet wlan card using the rt61pci module.
I have two Ubuntus installed: a 10.04 and a 9.04. In 9.04, if I watch a video in Firefox it works very smoothly and nicely. Using the "top" command, totem-plugin-vi uses the most CPU time at about 16%, and Xorg uses about 5%. But in 10.04, totem uses 72% and Xorg 26%, and playback looks really choppy; I think watching DVDs is the same way. I checked "Hardware Drivers" in the System->Administration menu (in 10.04) and it said "No proprietary drivers are in use on this system." I'm guessing I'm using different video drivers on each system, but I don't even know how to check. Here is an excerpt from lspci:
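A hedged way to compare the two installs: ask lspci which kernel driver each display device is bound to, and (if the mesa-utils package is installed) whether 3D rendering is direct. A sketch:

```shell
# Show the kernel driver in use for the VGA device (the -k flag lists it).
lspci -k | grep -A3 VGA
# If mesa-utils is installed, check whether rendering is hardware-accelerated.
glxinfo | grep -E "direct rendering|OpenGL renderer"
```

Running both commands on 9.04 and 10.04 and comparing the "Kernel driver in use" lines would confirm or rule out the different-driver theory.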
Local file copies from one disk to another run at a satisfying 60+MB/s, but this causes problems for other applications: windows gray out, Firefox freezes; basically I have to wait for the copy to end to get my computer back.
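One common workaround for this kind of desktop freeze is to run the bulk copy at idle I/O and CPU priority so interactive processes keep getting serviced. A sketch (the paths are placeholders, and ionice's idle class requires the CFQ I/O scheduler):

```shell
# -c3 = idle I/O class: the copy only gets disk time nobody else wants.
# nice -n19 additionally drops its CPU priority.
ionice -c3 nice -n19 cp -a /disk1/bigdir /disk2/
```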
When I'm trying to log in to the FTP server with the appropriate username and password, it takes almost 10-15 seconds to authenticate, making the login process slow. Even when I'm uploading files it hangs for 10-15 seconds before completing the job successfully. It doesn't happen every time, but 7 times out of 10. Any idea how I can make the authentication fast?
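A 10-15 second pause before authentication is classically a reverse-DNS (or ident) lookup timing out on the server. Assuming the daemon is vsftpd (an assumption; other FTP servers use different directives), disabling the lookup is a one-line change:

```shell
# Assumption: the server runs vsftpd. Append the directive and restart.
# reverse_lookup_enable defaults to YES; NO skips the slow PTR lookup.
echo 'reverse_lookup_enable=NO' >> /etc/vsftpd.conf
/etc/init.d/vsftpd restart
```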
I'm running 11.2 on a Dell Latitude D630. Everything has been great other than issues with suspend and hibernate not working correctly. However, after about a month of use, browsing the web has slowed down badly. It doesn't seem to matter whether it's Firefox or Chrome. I am using a wireless connection with 85-95% signal, but I also tried a direct cat-5 connection to the router and it was still slow. I have the fastest broadband connection my cable company offers, and my Windows boxes on the same network are still flying.
I have a fileserver running 10.04 server 64bit and Samba, and I connect to it from my desktop, which is 10.04 desktop 64bit. The server is mounted on my desktop via this fstab entry: //10.0.0.2/share /media/share cifs guest,uid=1000. Up until 30 June 2010 it was all fine. Now when I write to the server it is very slow, e.g. 2Mbps, though when I read I get >100Mbps, so I think my network is still OK. If I use nautilus with smb://10.0.0.2/share I can write at >100Mbps and also read at >100Mbps. So any ideas why the write speed via the fstab Samba mount has started to go really slowly in the last couple of days?
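For reference, the fstab line as pasted has stray spaces around the options, which cifs can misparse. A cleaned-up sketch of the entry, with an explicit write size added as an experiment (wsize=65536 is an assumption worth testing, not a known fix):

```
//10.0.0.2/share  /media/share  cifs  guest,uid=1000,wsize=65536  0  0
```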
I have a strange problem that I cannot diagnose -- slow wifi. About two or three weeks ago, I noticed that my wifi began to slow down dramatically. The problem is the same on my ubuntu laptop and my ipad. But when I connect directly to the router with a network cable, my connection is fine. So it is definitely a problem with the wifi signal itself.
What I have done so far: (1) I checked all of the MAC addresses connected to my wifi. I identified my tivo box, laptop, and ipad. Nothing else.
(2) I used Wifi Analyzer on my Android phone and selected an unused wifi channel (channel 8).
(3) I changed the SSID and password on my wifi network. I also changed the password for my router's admin account.
(4) I restarted my router and computer. None of this helped.
I'm having an issue where a server in CA (1000/full) and in VA (100/full) have very lopsided data transfer.
CA -> VA with iperf shows ~20Mbps
VA -> CA with iperf shows ~93Mbps
If we change the CA server to 100/FULL, transfer speed is 93Mbps both ways.
Some tuning was done to the TCP window scaling parameters, but it doesn't correct the issue, only improves the CA -> VA numbers to what is listed above. I will say that turning TCP window scaling OFF lowers the transfer speed both ways to < 20Mbps.
The only clue I have from the wireshark dumps is that the window advertised in the outgoing direction never goes past 10240 (the scale factor is 8, so 2^8 x 40 bytes). In the opposite direction, the window size goes above 3MB (scaled).
It is not a bandwidth problem as iperf with UDP shows 93Mbps both ways. Local transfers (CA 1000/full to CA 100/full) show full speed both ways, so I feel it is strictly related to TCP window scaling.
RedHat 5 64-bit on both sides. Any ideas why it won't scale above 10240?
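A hedged checklist for a window stuck at 10240: confirm scaling is actually enabled and raise the buffer ceilings on both hosts, since the advertised window can never exceed the receive buffer maximum. (Values below are examples, not tuned recommendations; a middlebox rewriting the window-scale option is another possibility the sysctls won't fix.)

```shell
# Should print net.ipv4.tcp_window_scaling = 1 on both hosts.
sysctl net.ipv4.tcp_window_scaling
# Raise min/default/max receive and send buffers (bytes) to allow a large
# scaled window; the max is what caps the advertised window.
sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"
```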
I created a class like this for shaping packets at a specified bandwidth rate:
tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1: htb default 15
tc class add dev eth0 parent 1:0 classid 1:1 htb rate 750kbit ceil 750kbit
tc class add dev eth0 parent 1:1 classid 1:3 htb rate 600kbit ceil 750kbit prio 0
Our requirement:
I don't want to specify the bandwidth rate statically like rate 750kbit ceil 750kbit; the class should be allocated a rate based on whatever bandwidth is actually available. I need an application for measuring the incoming bandwidth. Is there any other method for specifying the bandwidth rate of a class?
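htb has no built-in "auto" rate, but an existing class can be retuned at runtime with tc class change, so one hedged approach is to measure throughput periodically (by whatever means you choose) and rewrite the rate. A sketch, reusing the interface and class IDs from the commands above; RATE is a placeholder for your own measurement:

```shell
# RATE would come from your own bandwidth measurement, in kbit.
RATE=600
# "change" rewrites an existing class in place without rebuilding the qdisc.
tc class change dev eth0 parent 1:1 classid 1:3 htb rate ${RATE}kbit ceil 750kbit prio 0
```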
I've recently upgraded my broadband from 1Mbps to 10Mbps with the same ISP, so there is no change of hardware. I tried downloading a Linux iso on my laptop and got a very good download speed. However, I noticed that while I am downloading the iso, all the bandwidth is taken up by the download; even my browser is unable to refresh or load a new webpage. If I boot into Win7 (same laptop), the webpage IS able to load. I tried the same on my netbook (openSUSE) and the webpage is ABLE to load. I tried the same on my Debian desktop and the same problem returns: while downloading files (iso or video), all the bandwidth is taken by the download and the browser is unable to refresh. A computer connected to the same wireless network is also deprived of bandwidth, so the bandwidth from the wireless router is totally taken up by the downloading Debian laptop or desktop. Is there something in Debian that I need to configure?
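One hedged userspace workaround: cap the download itself slightly below the line rate so the router's queue never fills and other traffic keeps its headroom. A sketch using trickle, which is in the Debian archive (the limit and URL are placeholder examples):

```shell
# trickle -d limits download rate in KB/s for the wrapped command only;
# ~900 KB/s leaves headroom on a 10Mbps (~1.2 MB/s) line.
trickle -d 900 wget http://example.com/debian.iso
```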
Can I index files so that whenever I do a find, e.g. find / -name libSDL-1.2.so.0, it doesn't take 10 minutes? I know there are packages such as tracker, but that one does indexing all the time. I would be happy with something that can be run by hand, or that runs once every 12 hours or so.
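This is exactly what locate/updatedb do: updatedb builds a filename index (normally from a daily cron job, and it can also be run by hand), and locate queries the index near-instantly. A sketch:

```shell
# Install the indexer, build the index once by hand, then query it.
apt-get install mlocate
updatedb
locate libSDL-1.2.so.0
```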
I compiled a new kernel, version 4.4, on Debian 8.2 amd64. I get the error "tsc: Fast TSC calibration failed" and end up stuck in BusyBox. Fortunately I have a backup and can boot the 4.3 version; it's a choice in the boot menu!
I thought I would compile a new kernel from the 4.3 version instead. I did, and now I get the same issue (tsc: Fast TSC calibration failed)!
I think some new modules were added during compilation of the 4.4 version. I searched Google to see if this issue is known, and the answer is yes! But now I only have the 3.6 version left for booting ...
Fresh install of Debian 8.1; I have not changed a single setting anywhere. I was scrolling in the web browser and noticed that if I scroll up or down fast enough, the active window changes. I'm using KDE as the desktop environment. This has nothing to do with the browser, as it happens with anything I have open. Heck, trying to scroll in a console with a document open just flips between the two of them. The only way this does not happen is if I scroll slowly enough.
I ask the Linux experts if it is possible to easily create a base Debian distro that provides wireless networking, the Synaptic utility, and the basic tools, aimed at booting very fast from hard disk or USB stick. After installation it would be possible to install other stuff, though not much in fact. It should recognize the hardware once and then boot without re-checking it every time.
We have an Apache Subversion (http) server hosting our code, and for the next 3 months we are behind a DSL connection (max upload 100 kB/s).
When a remote co-worker tries to download a fresh copy of our projects directly over http, the transfer goes fine: with a bandwidth monitor (gnome-system-monitor or bwm-ng) we can see that the server sends ~95kB/s and the connection remains usable for other tasks in parallel (just a bit slower, which is normal).
But when the remote co-worker is connected to this server through SSH and uses tunneling to communicate with Apache Subversion, the server sends more than 200kB/s. The connection is not usable for other tasks during the transfer: with ~102kB/s actually getting through the DSL line it is completely congested, and more than fifty percent of the packets are lost.
I think I understand why: TCP auto-detects the maximum number of bytes per second that can be transmitted successfully, and tries not to send more than that maximum.
When the Apache server talks to the local openssh-server instance over localhost, packets are transmitted successfully between them. Only afterwards does openssh-server try to send them on to the client (retrying if unsuccessful), but by that time Apache has already handed over the next chunk, giving this saturation effect (Apache is not aware of the saturation, or at least not aware enough).
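Since the congestion happens at the DSL modem's queue, one hedged fix is to shape the server's egress just below the uplink rate so TCP backs off on the server instead of at the modem. A minimal sketch with a token-bucket filter (eth0 and the numbers are assumptions; a per-port htb class with a u32 filter would be more selective than capping the whole interface):

```shell
# Cap egress at ~95 kB/s (760kbit), slightly under the 100 kB/s uplink,
# so the modem's buffer never fills and other traffic stays responsive.
tc qdisc add dev eth0 root tbf rate 760kbit burst 10kb latency 50ms
```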
Can squid do 'fair bandwidth sharing'? What I mean is: if there is 1 user online on a 4Mb line, that user gets the entire 4Mb of line speed; if there are 2 users online, each user gets 2Mb, and so on. I have a squid cache set up already; I just need to know how bandwidth distribution/sharing can be handled. Can squid also be used to limit/disconnect users after they have used up their allotted bandwidth? (I have a MikroTik router connected to the ADSL for wireless users.)
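Squid's closest feature is delay pools: a class-2 pool gives the proxy an aggregate bucket plus a per-client-IP bucket. It cannot literally rebalance "4Mb divided by N users" on the fly, but the per-host bucket approximates fair sharing under load. A sketch of a squid.conf fragment (all byte values are examples, not recommendations):

```
# One delay pool, class 2 = aggregate bucket + one bucket per client IP.
delay_pools 1
delay_class 1 2
delay_access 1 allow all
# Format: aggregate restore/max, then per-host restore/max, in bytes/sec.
# ~512000 B/s aggregate (~4Mb), ~256000 B/s for any single client.
delay_parameters 1 512000/512000 256000/256000
```

Quota enforcement (disconnecting users who exhaust an allotment) is not something delay pools do; that is usually handled on the router side.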
I just want a network bandwidth usage monitoring application. Scenario: I am using an EV-DO based USB broadband modem with a limited GB plan; for additional data usage they charge per MB. Currently I use either wvdial (mostly) or pon to start the connection. So if there is a network monitoring application that could log the time and data used per session, that would be great. Debian has many different network monitoring applications, but I am not sure which one suits this purpose.
I've been trying to forward some ports using iptables for some time now, but still haven't figured out how to get it to work. What I'm trying to accomplish is to forward all traffic from port 80 to port 8080, and all traffic from port 443 to port 8443, because I would like to run Tomcat as a non-root user and the original ports can only be bound by root. I've currently set up my iptables like this:
# Generated by iptables-save v1.4.2 on Wed Nov 10 16:44:45 2010
*nat
:PREROUTING ACCEPT [39350:6120333]
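For reference, the usual rules for this Tomcat setup look like the sketch below. PREROUTING only matches packets arriving from other hosts; the OUTPUT rules are needed if you also want connections from the server itself to be redirected:

```shell
# Redirect the privileged ports to Tomcat's unprivileged ones (remote clients).
iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-port 8080
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
# Same redirection for locally generated traffic (e.g. testing via localhost).
iptables -t nat -A OUTPUT -o lo -p tcp --dport 80  -j REDIRECT --to-port 8080
iptables -t nat -A OUTPUT -o lo -p tcp --dport 443 -j REDIRECT --to-port 8443
```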
We have a simple office network that we also use to connect to the internet; of late, however, the number of users has increased, slowing internet access. A bandwidth upgrade is not an option, so I have to do bandwidth shaping on our Linux router. The question is how to set the squid configs to allow a certain IP range a certain percentage of the bandwidth, e.g. 60%, and further divide the rest. Alternatively, how can I allow certain IPs higher-bandwidth access?
On the host, running:
$ ssh -XfC -c blowfish user@guest_IP xterm
/usr/bin/X11/xauth: error in locking authority file /home/user/.Xauthority
X11 connection rejected because of wrong authentication.
xterm Xt error: Can't open display: localhost:10.0
(hanging here)
/home/user/.Xauthority is an empty file, just created:
$ sudo ls -l /home/user/.Xauthority
-rw-rw-rw- 1 user user 1 2010-07-19 03:16 /home/user/.Xauthority
No lock file exists, and the password is correct. With
$ ssh user@guest_IP xterm
I can connect to the guest.
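A hedged suggestion for this exact xauth locking error: an interrupted xauth run leaves stale lock files (.Xauthority-c and .Xauthority-l) next to the authority file, and a corrupt or empty .Xauthority is safely regenerated by sshd on the next login. A sketch, run as the affected user on the guest:

```shell
# Remove the (empty) authority file and any stale xauth lock files;
# sshd writes a fresh X11 cookie on the next "ssh -X" login.
rm -f ~/.Xauthority ~/.Xauthority-c ~/.Xauthority-l
```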
This is where it starts: I have 2 networks. The first, 192.168.1.0/24, is composed of the router, which has internet access, with the IP 192.168.1.1, and the server (which is a gateway) with the IP 192.168.1.42. The other network, 192.168.2.0/24, is composed of the gateway with the IP 192.168.2.1 and the clients (on the 192.168.2.0/24 subnet). To sum up, the gateway has 2 IPs: 192.168.1.42 (eth0) and 192.168.2.1 (eth1). On this gateway I have squid installed (listening on port 3128). I also made a redirection so that certain computers wanting to access the web (port 80) are sent to squid (port 3128), with this command:
/sbin/iptables -t nat -A PREROUTING -m mac --mac-source CLIENT_MAC -p tcp -m tcp --dport 80 -j REDIRECT --to-port 3128
At this stage, everything works fine: the clients can access the web through the proxy without "knowing". What I want to do now is also redirect port 443 (HTTPS). Currently, when a client wants to access, for example, [URL], he cannot. So I want to route people out directly (without passing through any proxy), like a NAT. The idea is that the gateway takes all packets destined for port 443 and hands them to the router 192.168.1.1; then, when the router sends a packet back, the gateway passes it on to the client. I tried setting ip_forward to 1, but then all IPs and ALL PORTS are forwarded, and I only want port 443 forwarded.
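The described setup can be sketched as: leave ip_forward on, but set the FORWARD chain's policy to DROP and only accept HTTPS from the LAN side (plus its return traffic), with NAT on the uplink. Interface names follow the description above (eth1 = client LAN, eth0 = uplink); this is a sketch, not a complete firewall:

```shell
# Forwarding must be on, but the FORWARD chain decides what actually passes.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -P FORWARD DROP
# Allow only HTTPS from the client LAN outward, and replies coming back.
iptables -A FORWARD -i eth1 -p tcp --dport 443 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# NAT the clients' traffic behind the gateway's uplink address.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

Port 80 is unaffected because the existing REDIRECT rule diverts it to squid before it ever reaches the FORWARD chain.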
I have changed my home network a bit, and everything now works extremely fast except for my Apache server, which serves webpages abnormally slowly. Even simple directory listings take around 5 seconds to serve, and this is not a joke! It used to be pretty fast. What I changed: I replaced a hub with a switch, and I linked my personal PC and my server together through a hub. I also installed the GNOME desktop on the webserver machine. My network now looks as follows:
Installing 11.2 from the KDE LiveCD on an IBM ThinkCentre with a 3.2GHz CPU and 1GB RAM; Ubuntu 9.04 is on the first two partitions. I go through the configuration and click 'install': the install progress bars remain blank. After 2-3 minutes, a black screen with a scroll of attempted installation pieces and the error message: "Respawning too fast. Disabled for 5 min." Freeze.
Other posts mention a problem with init, but this is happening during the install, so I'm not able to address that. There is no apparent md5 checksum for the LiveCDs, and no mention of this problem in the installation help guide. Does anyone know how to deal with this? If you need more info, I will provide it. Though this seems not to be an unusual problem when booting an installed system, there's no mention of it happening during installation.
I've had Debian on my laptop, which I rarely use, for around 4 months. I'm using Squeeze, since it seems to be the only release that works with my ethernet card. The internet had been working fine for a couple of months but broke when I tried to allow port forwarding for torrents. After that, I could only connect to the internet by using: