I'm having a strange problem with data transfers between systems. I have a file server and my desktop, both running Debian 8.3. I have a Samba share on the file server, and I mount the shares on my desktop at boot via /etc/fstab.
When I copy a file with Nautilus from my home folder (on my HDD) on my desktop to the mounted network location, the transfer starts out at gigabit speeds (80-90 MB/s) for a couple of seconds and then drops to about 8 MB/s.
But when I terminate the transfer and then use scp to transfer the same file, I get consistent gigabit speed throughout the transfer. I am not sure what is going on.
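Since scp runs at full speed over the same wire, the network itself is probably fine and the usual first suspects are on the Samba side. A hedged smb.conf tuning sketch to experiment with (these values are guesses to tune, not a known fix for this exact setup):

```ini
[global]
   # Only disable Nagle; forcing SO_RCVBUF/SO_SNDBUF here can hurt on
   # modern kernels, which autotune socket buffers themselves.
   socket options = TCP_NODELAY
   # Let Samba use sendfile and raw reads/writes for large transfers.
   use sendfile = yes
   read raw = yes
   write raw = yes
   # Asynchronous I/O for requests above 16 KiB (Samba 3.x and later).
   aio read size = 16384
   aio write size = 16384
```

Testing one change at a time (with `smbd` restarted between runs) makes it easier to see which option, if any, is responsible.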
I'm trying to set up an Ubuntu Server file server that will handle large file transfers (up to 50 GB) over the LAN with Windows clients. We've been using a Windows server on our LAN, and the file transfers will occasionally fail... though that server is used for other services as well.
My thought is to create a VLAN (or a separate physical switch) to ensure maximum bandwidth. The Ubuntu server will be 64-bit with 4 TB of storage in a RAID 5 configuration.
I'm planning to copy a production MySQL InnoDB data file from one server to another; the file is around 300 GB. As the file is changing all the time, I have to shut down the MySQL instance and copy the large data file to the other server as quickly as possible. I need to find a way to speed up the copy... I'm wondering whether there's a way to copy the file block by block: if a block on the destination side already has the same content, skip it.
When accessing an NFS mount for a large (200 MB+) file transfer, the transfer starts rapidly, then becomes slower and slower until it hangs. On several occasions it has frozen the client machine. Both client and server default to NFS version 3. The slowdown and hang also occur when connecting to FreeBSD NFS mounts.
Presumably (I hope) there is some configuration on the client that needs to be set. What should be changed in the configuration? This worked out of the box in openSUSE 11.0.
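A commonly suggested starting point is pinning the client mount options explicitly instead of relying on autonegotiation; the values below are examples to experiment with, not a known fix for this case (server name and export path are placeholders):

```
# /etc/fstab — NFSv3 over TCP with explicit transfer sizes and hard mounts
server:/export  /mnt/nfs  nfs  vers=3,proto=tcp,rsize=32768,wsize=32768,hard,intr  0  0
```

Forcing `proto=tcp` rules out UDP fragmentation problems, and fixing `rsize`/`wsize` rules out a bad negotiated transfer size; if the hang persists with these set, the problem is likely elsewhere (driver, MTU, or server-side).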
I have two machines running OpenVPN, connected through an 8-port gigabit switch (TP-Link TL-SG1008D). The problem is that when I try to transfer a file, the transfer starts at 13 MB/s and drops to 300 kB/s. When I replaced the RAM in the two machines and restarted the transfer, the speed was a constant 13 MB/s. How does changing the RAM make such a difference (at the first transfer they had 512 MB; at the second transfer the machines had only 128 MB)? OS: Ubuntu Server 10.04.2 LTS.
PS: Tried with 256 MB of RAM and the problem persists... Machines: two HP d530s.
I'm setting up an HTPC system (Zotac IONITX-F based) on a minimal install of Ubuntu 9.10, with no GUI other than XBMC. It's connected to my router (D-Link DIR-615) over a WiFi connection configured for a static IP (ath9k driver), with the following /etc/network/interfaces:
auto lo
iface lo inet loopback
# The primary network interface
#auto eth0
The network is fine, and the Samba share to the media directory works, until I try to upload a large file to it from my desktop system. It transfers a few percent at a really nice speed, but then it stalls and the box becomes unpingable (Destination Host Unreachable), even after cancelling the transfer, requiring a restart of the network.
Same thing when I scp the file from my desktop system to the HTPC; same thing when I ssh into the HTPC and scp the file from there. Occasionally (rarely) the file does go through, but most of the time the problem repeats itself. Transferring small text files causes no problems, and the same goes for the fanart downloads done by XBMC. I tried the solution proposed in this thread and set the MTU to 800 in the interfaces file, but the problem persists.
What might cause a file transfer to start out fast (11-12 MB/s) and then drop to about 1 MB/s after roughly 2 GB has been downloaded? This is what happens when I download files from my old fit-PC 1.0 (a small mini PC with 256 MB of RAM, an AMD Geode processor, and a 100 Mbps network card) via FTP over my local network. Is it some buffer filling up?
I was directed to an article about "bufferbloat" earlier [URL], but unfortunately I did not get any solution from it. I have tried increasing the server's TCP buffer using these instructions: [URL], but it makes no difference. By the way, I am running Ubuntu Server 10.10 on the fit-PC and Ubuntu 10.10 desktop on the connecting client (a normal Core 2 Duo machine with 4 GB of RAM).
UPDATE: I should add that things were working fine when I was using Ubuntu 7.10, but are not now that I have installed 10.10...
UPDATE 2: I just found out this is ONLY a problem when transferring files to Ubuntu. If I use WinSCP from Windows 7, the transfer speeds are fine...
UPDATE 3: It seems gFTP was the culprit. Transferring the files with FileZilla instead on Ubuntu yields an excellent rate of 11.8 MB/s. Does anyone have an idea what might be wrong with gFTP?
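Since FileZilla is fast, the bottleneck was clearly the client application rather than the kernel. Still, for reference on the TCP buffer angle mentioned above: on Ubuntu these limits live in /etc/sysctl.conf (applied with `sysctl -p`), and the numbers below are commonly quoted examples rather than tuned values:

```
# /etc/sysctl.conf — raise TCP buffer autotuning ceilings (example values)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

If an application sets its own small socket buffer (as some FTP clients do), these system-wide ceilings won't help, which would be consistent with gFTP being slow while FileZilla is fast.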
We have a server with openSUSE 10.3 installed on it, and we set up a Samba server as well. We made a shared folder that we access over the network from other computers running XP.
The problem is that copying from the server is very slow, only 100-300 kB/s. The strange thing is that if I copy one file it's slow, but if I start copying another one at the same time, the speed goes up to 10-15 MB/s. Every time I want to copy something or install from that server, I need to start a second copy. Copying from a computer to the server runs at normal speed; it is only slow copying from the server.
I have a Linux server equipped with WiFi. I want to measure the data rate on this connection. Is there any utility on Linux that can measure the throughput on one specific network interface while transferring large files over the WiFi connection?
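iperf is the usual answer for measuring raw link throughput: run `iperf -s` on one end and `iperf -c <server-ip> -t 30` on the other. When installing extra software isn't an option, `dd` can act as a crude meter, since it prints the achieved rate when it finishes. A runnable sketch (the /dev/null sink stands in for `ssh host 'cat > /dev/null'`, and `lo` stands in for the real wireless interface, e.g. wlan0):

```shell
# dd reports MB/s on completion; pipe into "ssh host 'cat > /dev/null'"
# instead of /dev/null to measure the actual WiFi link.
dd if=/dev/zero of=/dev/null bs=1M count=100

# The kernel also keeps per-interface byte counters; sampling this file
# before and after a transfer gives bytes moved on that interface.
[ -r /sys/class/net/lo/statistics/rx_bytes ] && cat /sys/class/net/lo/statistics/rx_bytes
```

Tools like `iftop` or `nload` wrap those same counters in a live per-interface display, if a continuously updating view is preferred.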
I have a problem that I can't seem to fix. When I try to transfer a large file, let's say 700 MB or so, my wireless shuts down and I have to restart my Ubuntu computer. The other computer runs Vista. The Ubuntu machine is on a WUSB54G v4, and the Vista machine goes through a WRT54G-TM running DD-WRT mega. I have tried everything I know with no luck.
I have a few different USB cards and sockets, some of which I have really hotwired, and I would like to find out where best to keep my USB HDD. A few months ago I used Total Commander, which would show copying at 444 kB/s, but those times are over...
I just bought a 1 TB HDD and reformatted it to FAT32; it is connected via an eSATA-to-SATA cable. At first I transferred music and was initially getting speeds over 100 MB/s (according to Nautilus), but after about a minute it started to dwindle. Now most files range somewhere between 8 and 20 MB/s, although I've only seen a couple go beyond that. I edited hdparm.conf to enable DMA. Here is the output of hdparm -Tt for both the internal (sda) and external (sdb) hard drives.
Code:
/dev/sda:
 Timing cached reads:   14872 MB in 2.00 seconds = 7444.68 MB/sec
 Timing buffered disk reads:  372 MB in 3.00 seconds = 123.91 MB/sec
[...]
What is the issue? This output leads me to believe that I should be getting the same speeds I once got. I'm running an Intel Core 2 Duo at 3.00 GHz on an ASUS P5Q (AHCI enabled), with 4 GB of RAM and 12 GB of Linux swap, on Karmic 9.10.
I'm at the end of my rope here. And the most frustrating part is, I've found many posts all over the web from others who have had similar problems, but they ALL seem to be older distros and those solutions don't work for me. "Find Similar Threads" only yields one thread, and I don't have the file mentioned in the answer (the original poster didn't either).
Computer: AMD 64 Quad core, USB 2.0 on front bus, dual booting Vista and Ubuntu 10.10 Device: New Sansa Clip+ 8GB mp3 player
Problem: When I transfer files to my MP3 player in Ubuntu, I max out at about 150 KiB/s. It's painfully slow: it takes me half an hour to transfer 300 MB worth of albums. I had similar problems with an older player that may have been USB 1.1, but this one is brand new. And when I switched to Vista (the first time in months, thankfully), I was able to transfer the same files at upwards of 3 MB/s. I know that flash devices often can't write anywhere near the maximum speed of USB 2.0, but obviously it can do a whole lot better than 150 KiB/s! So: same hardware, same files, different OS - the problem must be in my Ubuntu configuration.
Hardware: Sun T2000 with Solaris 10 5/09 U7, ZFS root and RAID (which Subversion writes to). Software: Subversion 1.6.12, Apache 2.2.11, db-4.2.52 (and all related dependencies, of course). Everything was fine until today; someone came over and is getting this error when doing an import: svn: Can't write to file /DATA/* : File too large. After some testing it seems to do this on files larger than 2 GB, but after googling until I could not google anymore, I could only find people having this issue with Apache 2.0 or with APR lower than 1.2 (mine is 1.3.3). Is there a file size limit inside Subversion?
I've just managed to access my Windows share from Ubuntu, but am now getting write speeds (to the XP share) of only 9 MB/s over my wired network. Does anyone have any idea why this may be? I've googled but can find no specific answer.
My NIC (from lspci, on a Dell Studio 1737): Ethernet controller: Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe (rev 10)
I don't know if this is a Slackware-related issue, but I have the following problem. I'm running slackware64-current on my system. For my private data I'm using a QNAP NAS (some ARM CPU with Linux kernel 2.6.22), with the file shares provided over NFS. I mount them with:
mount -t nfs 192.168.0.2:/Public /mnt/qnap
Works fine, no problems. But now, if I try to copy some large files (> 1 GiB) to the NAS share, sometimes the system completely freezes during the copy process, and I have to do a hard reset to bring it back to work.
Is there a way I can improve my Internet speed? I have a 100 Mbps connection but the download speed is only 100 kbps. I know that my ISP has limited my connection speed, but I am curious to try to get the maximum speed.
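One thing worth double-checking before blaming the ISP entirely: link speeds are quoted in megabits per second, while download meters usually show kilobytes or megabytes per second, so the two figures aren't directly comparable. A quick sanity check on the ceiling:

```shell
# 100 Mbps = 100,000 kilobits/s; divide by 8 bits per byte to get kB/s.
echo $((100 * 1000 / 8))   # prints 12500 — the theoretical kB/s ceiling
```

So a 100 Mbps link tops out around 12,500 kB/s before protocol overhead; a sustained 100 kB/s is roughly 1% of that, which does point to throttling or another bottleneck rather than a units mix-up.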
I am getting extremely poor write speeds with my RAID. My setup is as follows: HP ProLiant MicroServer, four 2 TB Samsung F4 drives, a 60 GB root drive, Ubuntu 10.10 64-bit, mdadm 3.2.2. I have two of the above servers, connected via gigabit LAN. I am happy with the read speeds, but write speeds run at about 20 MB/s at the very best, while people with a similar setup on a machine like this get about 90 MB/s writes. My array is as follows (mdadm -D /dev/md0):
I have also started the partitions at sector 64 instead of 63 in a bid to align them correctly, although this doesn't seem to have made any difference.
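A tweak often suggested for slow md RAID 5/6 writes is raising the stripe cache, which defaults to a modest 256 pages per device; whether it helps in this particular case is an assumption to test, not a certainty. The snippet below computes the RAM cost of a larger value and shows the sysfs knob (commented out, since it needs root and a real /dev/md0):

```shell
# stripe_cache_size is counted in 4 KiB pages per member disk, so:
#   RAM used = stripe_cache_size * 4096 bytes * number_of_disks
# For 8192 pages on a 4-disk array:
echo $((8192 * 4096 * 4 / 1024 / 1024))   # prints 128 (MiB of RAM)

# Apply it to a live array (requires root; not persistent across reboots):
# echo 8192 > /sys/block/md0/md/stripe_cache_size
```

It is worth benchmarking writes before and after, since past a certain point larger values just consume RAM without further gain.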
Linux beginner; my first project is a NAS based on Debian Lenny. I have SMB and AFP running, hooked to a gigabit network, with an 8-disk RAID 6 built with mdadm. I need the NAS to back up our production company's video files. The first backup attempt was very, very slow over both SMB and AFP. A Mac disk benchmark app shows 100+ MB/s reads, but only 0.4 MB/s (as in 400 kB/s!) writes. I went through the office LAN, and also tried just cabling the two machines (Mac Pro and Debian NAS) directly: same results. I'm not sure what the write speeds are internally, as I don't know of any Linux/Debian disk testing software.
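For a crude local write benchmark that needs no extra software, dd with conv=fdatasync is a common choice: it includes the final flush in the timing so the page cache can't inflate the number, and it prints the achieved rate when it finishes. A sketch (the target path is arbitrary; point it at the RAID mount to test the array):

```shell
# Write 100 MiB of zeros and time it including the final sync;
# dd prints bytes copied, elapsed time, and MB/s on completion.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=100 conv=fdatasync
rm -f /tmp/ddtest
```

If this local test is fast but SMB/AFP writes are still at 400 kB/s, the disks are cleared and the problem moves to the network or protocol layer.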
When I copy files to my External NTFS HDD using Ubuntu the write speeds are about 10-12 MB/sec, but when I copy files using Windows the write speeds are about 25-30 MB/sec.
Exact same files, tried all three ports on my netbook and even timed it to see if the speeds are by any chance miscalculated by either operating system and Ubuntu is definitely writing at half the speed.
So what could be the problem? When I had Windows on this netbook I never had a problem with write speeds, so I don't think it is a hardware issue.
I have a 4-drive RAID 5 array set up using mdadm. The system is stored on a separate physical disk outside the array. Reading from the array is fast, but writing to it is extremely slow: down to 20 MB/s, compared to 125 MB/s reading. It writes a bit, then pauses, then writes a bit more and pauses again, and so on. The test I did was to copy a 5 GB file from the RAID to a spare non-RAID disk on the system: average speed 126 MB/s. Copying it back onto the RAID (into another folder), the speed was 20 MB/s. The other thing is the very slow (several kB/s) write speed when copying from an eSATA drive to the RAID.
I'm trying to back up my hard drive to a 2 TB WD external so that I can do a clean install of 10.10, but I'm getting tremendously slow write speeds. It hovers around 1.5 MB/s and steadily slows from there. It tells me it will take 150+ hours to transfer 400 GB of data.
I have an AMD quad-core processor and 8 GB of DDR3 RAM... USB 2.0... I feel this should go much faster.
I am trying to transfer a file from my live Linux machine to a remote Linux machine. It is a mail server, and a single .tar.gz file contains all the data. During the transfer it stops working. How can I troubleshoot the matter? Is there a better way than this to transfer a huge 14 GB file over the network/VPN/WAN? The speed is 1 Mbps; the rest of the file it does copy.
[root@sa1 logs_os_backup]# less remote.log
Wed Mar 10 09:12:01 AST 2010
building file list ... done
bkup_1.tar.gz
deflate on token returned 0 (87164 bytes left)
rsync error: error in rsync protocol data stream (code 12) at token.c(274)
building file list ... done
[...]