Has anyone else noticed that file transfers on Maverick are significantly slower than on Lucid? A review of Maverick on Tom's Hardware finds that it is considerably slower. My own test says: On 10.10, file copy between two NTFS drives maxes at 30 MBytes/s; on 10.04.1, the same operations clock in at 40 MBytes/s. Both drives are capable of rw speeds of 70-90 MBytes/s, which I got on WinXP, and hdparm's cached read results back that up.
Any ideas why file transfers are so slow, and especially why Maverick is even slower than Lucid?
I have a server set up as an NFS share, and the share mounted on my laptop. I'm using a Linksys Wireless-G router and a 15 Mb internet connection. The laptop is on the wireless connection and the server is wired. While transferring files from the laptop to the server I only get about 55 kb/s. Is this normal for wireless G?
I am running Samba on a Debian Lenny box on a wireless home network. I find that file transfers to the Samba share are very slow. It takes over a minute to copy a 40 MB file to the Linux box, but only 20 seconds to copy the same file to a Windows XP box on the same network.
Anyway, I could use a little direction on how to proceed with this; I'm really not sure where to start.
Whenever I transfer large files using cp, mv, rsync or Dolphin, the system slows down to the point that it's unusable. It will sometimes hang completely and not accept any input until the file is transferred. I have no clue what could be causing this problem, but I know it shouldn't be happening. I am using Fedora 15 (184.108.40.206-0.fc15.x86_64) with KDE 4.6. I have a Phenom II 955 processor, 6 GB of system RAM, and the OS and swap are on an 80 GB SSD. Copying files within the SSD doesn't cause any problem, but moving files between my other two large HDDs causes the extreme slowdown. Using htop I can see that my system load jumps to between 3 and 4, but my RAM and CPU usage stay low during the transfer. Here are two commands that take about 10 minutes to run and make the system unusable while running; they usually transfer around 2-20 GB of data:
cp -a /media/data.1.5/backup/Minecraft/backups/* /media/data.0.5/backup/Minecraft/backups/
rsync -a /media/data.1.5/backup/ /media/data.0.5/backup/

/media/data.1.5/ is the mount point for a 1.5 TB internal SATA drive, and /media/data.0.5/ is the mount point for a 500 GB internal SATA drive.
1. When I'm not logged into the server, only the shares are visible on my Windows computer. Clicking on the share folder displays an error message. As soon as I log in at the server, the files within the shares become accessible on the Windows box.
2. File transfers between the machines are extremely slow. Watching the system monitor, there's a brief burst of network activity followed by 10-30 seconds of nothing. On a gigabit network, the effective transfer rate is ~120 kb/s. There's no other network activity going on that would account for this behavior.
I just installed Ubuntu as my primary OS. I still have the disk with XP on it, but I don't want to go back; I just need faster network connectivity. I have a T60p with an Intel gigabit NIC jacked into my gigabit router, which also has my desktop (running XP) and my NAS. If I FTP files from my NAS (or SCP), I get transfer speeds around 250-500 KB/s (which is not very fast). On this same switch, from my XP desktop I get transfer speeds around 12 MB/s. I get the same speeds using my 802.11n card (Atheros) as with the Ethernet NIC (250-500 KB/s). The drivers for the Ethernet card and the Atheros card are e1000e and ath9k respectively. I have disabled IPv6. Since the problem occurs on either interface, I am going to concentrate on fixing it for the Ethernet interface (since I believe it to be a system-wide problem).
skinnersbane@albert:~$ sudo ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
Clearly my card is running at gigabit, but why the bad transfer speeds? I am using FileZilla for FTP (technically FTPES). I closed every other program. My CPU utilization does seem high, and I wonder if this is part of the problem. I had no problems with throughput using either interface in Windows XP just one week ago.
With Ubuntu 10.10 Server installed on:

- AMD Athlon II X2 235E CPU (AM3)
- 2 GB DDR3
- a software RAID 1 of 16 GB USB sticks that can sustain 24 MB/s read and 13 MB/s write rates

running:

- ssh
- zfs-fuse (installed from apt)
- samba

The ZFS pool uses the SATA II controller, with a 2 TB, a 1.5 TB and a 500 GB drive in a raidz1 (I know, it's uneven; I'll be changing them all to 2 TB once I get the data off those drives).
However I'm experiencing REALLY, exceptionally slow ZFS performance of only a 600 KB/s transfer rate, even when using the following command to test:

CPU usage is only at 25% (with dedup=on); however, I notice that the USB sticks are regularly transferring data during this write to the ZFS pool. Also, a zpool scrub completes 9.6 GB of scrubbing in well under 2 minutes. Admittedly it shouldn't actually be doing much, as no data was changed by me while the 3rd drive was unavailable.
I'm thinking that some data is being temporarily saved to the main system drive (the USB sticks) and then transferred to the pool, and this is causing a bottleneck. Note that both 16 GB USB sticks are plugged into the same USB 2.0 host controller, but the combined write bandwidth of those sticks is not enough to saturate a USB 2.0 bus.
I am having a problem with slow data transfers with both Samba and scp. I have gigabit NICs on all three machines that I am transferring to and from, connected to a gigabit switch. My data transfers under both smb and scp average around 21 Mbit/s (I am using nload to monitor transfer speeds). The machines are configured as follows:

1) Desktop
- AMD Athlon 64 X2 6000+
- 6 GB Corsair memory
- Realtek RTL8168C(P) gigabit NIC (onboard)
I recently built a home media server and decided on Ubuntu 10.04. Everything is running well except when I try to transfer my media collection from other PCs where it's backed up to the new machine. Here's my build and various situations:
- Intel D945GSEJT w/ Atom N270 CPU
- 2 GB DDR2 SO-DIMM (this board uses a laptop chipset)
- External 60W AC adapter in lieu of an internal PSU
- 133x CompactFlash -> IDE adapter for the OS installation
- 2x Samsung EcoGreen 5400rpm 1.5 TB HDDs formatted as ext4
Situation 1: Transferring 200+GB of files from an old P4-based system over gigabit LAN. Files transferred at 20MBps (megabytes, so there's no confusion). Took all night but the files got there with no problem. I thought the speed was a little slow, but didn't know what to expect from this new, low-power machine.
Situation 2: Transferring ~500GB of videos from a modern gaming rig (i7, 6GB of RAM, running Windows7, etc etc). These files transfer at 70MBps. I was quite impressed with the speed, but after about 30-45 minutes I came back to find that Ubuntu had hung completely.
I try again. Same thing. Ubuntu hangs after a few minutes of transferring at this speed. It seems completely random. I've taken to transferring a few folders at a time (10 GB or so), and so far it has hung once and been fine the other three times. Now, I have my network MTU set from automatic to 9000. Could this cause Ubuntu to hang like this? When I say hang, I mean it freezes completely, requiring a reboot. The cursor stops blinking in a text field, the mouse is no longer responsive, etc.
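Since jumbo frames are the one non-default setting here, it may be worth checking the MTU and dropping it back to 1500 to see whether the hangs stop. A small sketch (loopback is used as a safe, always-present example; the real NIC would be something like eth0, which is an assumption about the interface name):

```shell
# The current MTU of any interface is visible in sysfs:
cat /sys/class/net/lo/mtu
# To put the real NIC back on the standard MTU (needs root):
# sudo ip link set dev eth0 mtu 1500
```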
I tried using ssh between my netbook and desktop, but it was going to take around 30 hours to transfer 39 GB over the home network. Also, SSH is very sketchy and often drops connections. I've been messing with it all day and I'm quite frustrated. What I'm looking to do is use my netbook as more of a primary computer and the desktop as a storage computer. Not quite a server, because I'd still like to keep a GUI on it. I'd like to be able to keep my music and movies on the desktop and stream them to the netbook (SSH sucks for this; it always drops connections). I've already set up the web client for the Transmission BitTorrent client so I can torrent on a machine that's almost always on and connected.
Is there a better setup for all of this? I like the netbook because of the portability; I like the desktop because it's always connected (for torrents) and it has a larger storage capacity. It would be mainly used around the house. I would like to back up a file or two while abroad, but I'm not looking to stream music while I'm across town or anything.
I need to transfer 330G of data from a hard drive in my workstation to my NAS device. The entire network is gigabit and being run with new HP ProCurve switches. All machines have static IP addresses. The NAS is a Buffalo TeraStation Pro which has the latest firmware, is set to jumbo frames, and has just been upgraded with 4 brand new 500G drives giving us a 1.4 TB RAID 5 setup. My workstation is a dual quad-core Xeon box running on an Intel S5000XVN board with 8G of RAM. My OS is Ubuntu 10.04 x64 running on a pair of Intel X25 SSDs in a RAID mirror. The data drive is a 500G SATA drive connected to my onboard controller. The file system on the SATA drive is XFS.

This problem was ongoing before I got my new workstation, before we had the GB switches, and before the NAS got new drives. When I transfer a small file or folder (less than 500M) it reaches speeds of 10-11 MB/sec. When I transfer a file or folder larger than that, the speed slows to a crawl (less than 2 MB/sec). It has always been this way with this NAS. Changing to jumbo frames speeds up the small transfers but makes little difference in the big ones. I verified with HP that the switches are jumbo frame capable.
After successfully configuring the DWA-552 to work in master mode in Ubuntu 10.04 (ath9k driver), I ran some file transfer tests. The download speed is very good (~50 Mbps), but the upload speed spikes at about 10-20 Mbps for the first few KB and then is nonexistent (0-1 kbps). This only affects file transfers or otherwise bandwidth-consuming processes; normal web browsing or ssh is not affected. After running a speedtest of my internet connection, which is routed through the AP, I could upload to the internet at 1 Mbps, which is my connection's maximum, so apparently that is not affected. I tried the same file transfers with netcat to eliminate any other factors and had the same problem. dmesg and the hostapd debug output did not report anything unusual.
I have good experience in the Microsoft environment and am now trying to use Linux. I tried Ubuntu 9.10 and OpenSUSE on different computers, but there is the same big problem: very slow download speeds compared to Microsoft. The same file at the same time downloaded on Microsoft WinXP took an incomparably shorter time. For example, a 5.5 MB file attached to an e-mail on Yahoo took ~1 minute to download on a WinXP computer; the same file on the same computer but with Ubuntu takes more than 30 minutes!
A little over a year ago I was using SCP to successfully transfer large files over my LAN (exact same hardware). I can't seem to do this any more, and I'm not sure why. I think it's either something with iptables or a network card driver problem. I use the same driver for both computers (b43 wireless). I can't do FTP transfers either; they start going but quickly stall. I've used Firestarter (an iptables GUI) to allow all the correct connections. One last thing: when I tried to connect to ssh using an Alfa wireless card (not sure of the drivers), I couldn't connect to ssh, period. The same settings were used.
When copying files to USB drives, the file progress bar moves in 'bursts', sometimes doing nothing for long periods, then moving forward quickly and stopping again. It's almost as if it is showing the transfer to the cache, not the transfer to the actual drive.
I recently installed Ubuntu and noticed somewhere that it says "Now you can say Ubuntu has iDevice support out of the box". Naturally I tried it, resulting in a music library with all the album artwork mixed up. I've tried to sync a couple of times, even added the repository for libimobiledevice and checked for the newest version (it said it was already installed). When trying to sync with Banshee (or Rhythmbox), it says "syncing" on my screen and on my iPhone's screen. After syncing, my iPhone updates the library, but no new music is discovered. I can also add that Banshee discovers 100 extra songs on my iPhone.
For example, I am copying data to a USB flash drive using some file manager. When the file manager shows that the transfer is complete, the flash drive's indicator continues to light up. As far as I know this is some kind of caching system...
1. Is it OK to close the file manager when the transfer window has closed but the flash drive indicator continues to light up (data is still being copied)?
2. Is it better if I turn off this caching technology?
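The behaviour behind both questions is the kernel's write-back cache: the file manager reports completion once the data is in RAM, and the kernel flushes it to the stick in the background. A minimal sketch of how to flush and observe this with standard Linux interfaces:

```shell
# Block until all cached writes have reached the devices; only after this
# returns (or after a proper "safely remove"/unmount) is it safe to pull
# the drive.
sync
# The kernel reports how much written-but-not-yet-flushed data remains:
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

Rather than disabling caching outright, the usual advice is to always unmount or "safely remove" the drive, which performs this flush for you.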
I've got a server running CentOS 5.5. I used the automated iptables config tool included in the operating system to allow traffic for vsftpd, Apache and UnrealIRCd. When I send large files over FTP, even from the local network, it works fine for a while and then completely times out... on everything. IRC disconnects, FTP can't find it, and when I try to ping it I get "Reply from 10.1.10.134: Destination host unreachable", where .134 is the host address for the Win7 box I'm pinging from. This is especially frustrating as it's a headless server, and as I can't SSH into it to reboot I'm forced to resort to the reset switch on the front, which I really don't like doing.
Edit: the timeouts are global, across all machines both on the local network and users connecting in from outside.
I have a Tenda wireless adapter running on 11.04 Natty. The USB ID is 148F:3070.
Using the already-installed rt2800usb driver, it can connect on all 13 channels but is very flaky, with transfers slowing and stopping regularly. The rt2870sta driver is a lot more stable, but it will only connect to channels 1-11, and I need to use channel 13 due to massive Wi-Fi congestion on the other channels...
I've tried iwpriv but it says there are no ioctls for the device.
Is there any way to get the installed driver rt2870sta to scan and connect to channel 13?
I've also tried installing the latest drivers from the Ralink website: rt2870sta says it can see all 14 channels, but fails to scan. I also tried the rt3070 driver, but I cannot insmod it as there are errors...
I have a DNS-323 Linux device that's running pure-ftpd with SSL/TLS authentication. Pure-ftpd is sitting behind a Linksys router with IP 192.168.1.51. Pure-ftpd is configured for port 8021 and passive port range 55562-55663. The Linksys router is configured to forward port 8021 and the passive port range to 192.168.1.51.
From outside my network I can connect to the FTP server using the WAN address of the router. I'm using FileZilla 2.2.32 as my client and I choose FTP with explicit TLS (no other option will connect). The client authenticates successfully with pure-ftpd, but once it sets up the passive data connection and tries to do a LIST of the root directory, there's a timeout. I'm assuming this is because the passive data connection is not working. In pure-ftpd, I tried changing the passive address that it reports to be the WAN address of the router, but it did not make a difference. I included the log from FileZilla below.
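For reference, pure-ftpd exposes the two relevant knobs as command-line flags; with explicit TLS the control channel is encrypted, so the router cannot rewrite the PASV reply and the advertised address has to be forced in the server itself. A sketch with the values described above (the WAN address is a placeholder, and flag spellings vary slightly between builds):

```
# -S binds the listener, -p restricts the passive data ports, and -P forces
# the address advertised in PASV replies; --tls=1 enables explicit TLS.
pure-ftpd -S 8021 -p 55562:55663 -P <router-wan-address> --tls=1
```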
I'm having a few problems with my Creative Zen and Banshee after swapping computers and upgrading to openSUSE 11.2. At first I was having problems because it wouldn't detect, but that seems to be about resolved now (although I do sometimes have to unmount it in Nautilus then disable and re-enable the MTP extension in Banshee at the moment). Now the problem appears to be that it'll transfer, but that the tracks won't play.
I'm using Banshee 1.5.2, a custom build with this MTP-on-64-bit patch added. I manually manage the device (I've got 10 GB of music and an 8 GB player, so I use a smart playlist that I manually sync) and have been testing with just a few tracks at a time, MP3 and M4A (iTunes Plus). Tracks generally copy over okay (although one MP3 in particular seems to freeze during transfer), but when I try to play them I get "There is a problem playing this audio".
I've tried Gnomad, but the tracks didn't show up at all, and I've tried Rhythmbox, but it doesn't sync album art, even for tracks I know it has the art for. Has anyone had similar experiences, or any ideas? I've had all of these songs on there before, so it is annoying that it isn't working now.
As an example, I have two servers, sm-i222 and fileserv. sm-i222 is a Win2k3 system running Cygwin; fileserv is a Linux box running RHEL 4.7. On sm-i222, /cygdrive/c maps to the C: drive and /cygdrive/d maps to the D: drive (actually a single 4 TB RAID). From /cygdrive/c on sm-i222 I call a small script from the crontab. The internal IP for fileserv is 10.0.0.7. See below.
These three lines perform well in that they make a full transfer of the fileserv:/home/ directory to the appropriate place on sm-i222 using rsync. I use rsync instead of scp because I have to traverse subdirectories and symbolic links in the /home/... filesystem on fileserv. What I'm looking to do is use rsync to do an incremental transfer/backup of only the files that have changed since the last full backup. I'll manage the times I do this manually or in crontab. A colleague says this is doable, but not how. rsync.org says this is doable, but not how. Cygwin says this is doable... see rsync.org. I believe what I'm looking for is a single rsync line like I have above that only transfers the changed files on fileserv to sm-i222.
I am planning to implement hardware load-balanced DNS servers. There will be one master and three slaves in the server farm. I will have two virtual servers associated with the server farm that will be listed as external nameservers for our domain.
BIND uses the list of NS records to determine the servers that need zone transfers. The zone NS records will not be the addresses of any of the real servers. How do I tell the master to do zone transfers to the real slave servers?
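In BIND this is what `also-notify` (paired with `allow-transfer`) is for: the master sends NOTIFYs to, and permits zone transfers from, an explicit list of addresses regardless of the zone's NS records. A sketch of the master's zone block, with placeholder zone name and addresses standing in for the real slaves:

```
zone "example.com" {
    type master;
    file "db.example.com";
    // notify the real slave servers even though they are not NS records
    also-notify { 192.0.2.11; 192.0.2.12; 192.0.2.13; };
    // and allow them to pull zone transfers
    allow-transfer { 192.0.2.11; 192.0.2.12; 192.0.2.13; };
};
```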
I have a bit of a problem, and it seems I am not alone; Google gave me a lot of hits but no solution. I'm running a dual-boot system on my laptop.
- Ubuntu 9.10 is installed on ext3
- Windows XP SP3 on NTFS
- files are on another NTFS partition
If I want to copy a file (or folder) bigger than 250 MB, my laptop literally gives up until the file transfer is complete. Opening up a terminal via keyboard shortcut gives me an idea of how long an ice age can last. This happens with ext3 to ext3, ext3 to FAT32, ext3 to NTFS, and NTFS to NTFS. Today I wanted to back up some files (12 GB, ~4k files); the first 4 GB ran at 20 MB/s (with all open applications grayed out). After 2 hours I came back to see that only 6 GB were done.
When I try to play a backed-up DVD (.iso file) with VLC in 11.04, it hangs quite a bit and seems like it's loading something (an orange bar shows up over the volume control and slowly shrinks). This only happens in VLC on 11.04; it does not happen in GNOME MPlayer when I open the same .iso file.
I have 8.04.3 Server (32-bit). It is on an HP 2.3 GHz Pentium 4 with 512 MB of RAM. I had previously installed Ubuntu and Kubuntu in a dual boot with XP (actually a triple boot with XP, Ubuntu, and a beta of Windows 7) for a while; GRUB got messed up, so I ended up wiping it and just put my XP back on. I have been intimidated by the command line, and when thinking server, I tried the 30-day deal with Windows Home Server. It's OK, but it's $100.00 for the permanent version.
Anyway, I manned up to the challenge, installed Ubuntu Server, and set it up with SSH and Samba. I administer it from PuTTY and Webmin. I tried TightVNC, but the GUI seemed pretty useless, so on a reinstall I went headless. The problem is I have a lot of MP3 files I want to transfer back to the server, files I had previously transferred to and from the Windows server, and that was screamin' fast compared to this. I have found threads talking about using NTFS; I have ext3, or whatever the Ubuntu format is. Could that cause slow transfers, from NTFS through Samba to Ubuntu?
For the past week or more I have been experiencing very slow file transfers, moving files from one HDD to another (both internal, connected with SATA, should anyone care). When I move any file, regardless of its size, it moves at 1.1 MB/s at the very best. That's a major pain in the behind, even for moving small amounts of data! If anyone could point me in the right direction, or give me a solution...