Server :: Large File Transfer Over SCP Or Samba Crashes Wireless Connection?
Feb 3, 2010
I'm setting up an HTPC system (Zotac IONITX-F based) built on a minimal install of Ubuntu 9.10, with no GUI other than XBMC. It's connected to my router (D-Link DIR-615) over a Wi-Fi connection configured for a static IP (ath9k driver), with the following /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback
# The primary network interface
#auto eth0
[code]....
The network is fine and the Samba share to the media directory works, until I try to upload a large file to it from my desktop system. The transfer runs at a really nice speed for the first couple of percent, but then it stalls and the box becomes unpingable (Destination Host Unreachable), even after cancelling the transfer, requiring a restart of the network.
The same thing happens when I scp the file from my desktop system to the HTPC, and when I ssh into the HTPC and scp the file from there. Occasionally (rarely) the file does get through, but most of the time the problem repeats itself. Transferring small text files causes no problems, and the same goes for the fanart downloads done by XBMC. I tried the solution proposed in this thread and set the MTU to 800 in the interfaces file, but the problem persists.
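For reference, lowering the MTU on a statically configured interface is usually done with an mtu line in the iface stanza; a minimal sketch, assuming the wireless interface is wlan0 and using placeholder addressing (any WPA/SSID lines from the real stanza would stay as they are):
Code:
auto wlan0
iface wlan0 inet static
    address 192.168.0.10     # placeholder address
    netmask 255.255.255.0
    gateway 192.168.0.1      # placeholder gateway
    mtu 800                  # the reduced MTU tried above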
I have a problem that I can't seem to fix. When I try to transfer a large file, let's say 700 MB or so, my wireless shuts down and I have to restart my Ubuntu computer; the other computer is Vista. Ubuntu is on a WUSB54G ver. 4 and Vista is going through a WRT54G-TM running DD-WRT Mega. I have tried everything I know with no luck.
I'm experiencing a connection problem when transferring a large file from Windows 7 (Home Premium) to my Ubuntu 11.04. The transfer starts, but after a couple of seconds the connection drops and all the shares are unavailable. I'm also unable to connect to the server over ssh, and the only thing I can do to restore the connection is to reboot the server. The strange part is that this was never a problem a couple of weeks ago, and I've not done anything to the setup on either machine besides installing security updates.
I'm trying to create an Ubuntu Server file server that will handle large file transfers (up to 50 GB) from the LAN with Windows clients. We've been using a Windows server on our LAN, and the file transfers will occasionally fail, though that server is used for other services as well.
The files will be up to 50 GB. My thought is to create a VLAN (or a separate physical switch) to ensure maximum bandwidth. The Ubuntu server will be 64-bit with 4 TB of storage in a RAID 5 configuration.
I just bought an HP 3085dx laptop with an Intel 5100 AGN wireless card. The problem: copying a big file over wireless to a computer hardwired to the router over gigabit Ethernet only gives an average 3.5 MB/s transfer rate. If I do the same copy from my wireless-N MacBook Pro to the same computer, I get a transfer rate of about 11 MB/s. Why the big difference? I noticed the HP always connects to the 2.4 GHz band instead of the 5 GHz band...
[jerry@bigbox ~]$ iwconfig wlan0
wlan0   IEEE 802.11abgn  ESSID:"<censored>"
        Mode:Managed  Frequency:2.412 GHz  Access Point: 00:24:36:A7:27:A3
        Bit Rate=0 kb/s   Tx-Power=15 dBm
        Retry long limit:7   RTS thr:off   Fragment thr:off
        Power Management:off
        Link Quality=70/70  Signal level=-8 dBm  Noise level=-87 dBm
        Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
        Tx excessive retries:0  Invalid misc:0   Missed beacon:0
I am not getting any errors. I don't know why the bit rate is not known. My AirPort Extreme base station typically reports the 'rate' for the HP as 250-300, about the same as for the MacBook Pro. The HP is about 6 inches away from the base station. Is there any way to get the rascal to go faster? Is there any way to get it to use the 5 GHz band?
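One way to check whether the card and driver actually expose the 5 GHz band, and which band the access point is being seen on, is the iw tool; a hedged sketch (wlan0 is assumed to be the interface name):
Code:
# List the frequencies the card/driver supports; 5 GHz channels appear as 5xxx MHz entries
iw list | grep -A 20 "Frequencies:"
# See which frequencies the access point is actually being found on
iw dev wlan0 scan | grep -E "freq:|SSID:"
If the 5 GHz channels never show up in iw list, the limitation is in the driver or regulatory domain rather than the access point.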
When accessing an NFS mount for a large (200MB+) file transfer, the transfer starts rapidly, then becomes slower and slower until it hangs. On several occasions, it has frozen the client machine. Both client and server are set to default to nfs version 3. Slowdown and hang also occur when connecting to FreeBSD NFS mounts.
Presumably (I hope) there is some sort of client-side configuration that needs to be set. What should be changed in the configuration? This worked out of the box in openSUSE 11.0.
Reading and writing work absolutely fine with small files, but large files are tediously slow when writing to the server. The exported directories use the options (rw,no_subtree_check).
What is your experience with NFS, and how can I speed up large file/folder transfer (write) speeds?
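A couple of commonly tried tuning knobs, offered only as a sketch and not as the poster's configuration: the async export option on the server and larger rsize/wsize values on the client mount (the export path, network and hostname below are placeholders):
Code:
# /etc/exports on the server; async acknowledges writes before they hit disk,
# which speeds up writes at the cost of data safety if the server crashes
/srv/share  192.168.1.0/24(rw,async,no_subtree_check)

# Client-side mount with larger read/write block sizes over TCP
mount -t nfs -o rw,proto=tcp,rsize=65536,wsize=65536 server:/srv/share /mnt/share
After editing /etc/exports, re-export the list with exportfs -ra.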
During NFS transfers my laptop crashes my entire wireless network. The laptop is a Samsung R20 running 32-bit Ubuntu 10.04 with an Atheros AR5001 Wi-Fi card and the ath5k driver. What works: -> NFS is working. I have 3 other machines on wired connections that use NFS happily, and the NFS shares work fine if I plug the laptop into a wired connection. -> Wi-Fi works. I can browse the internet, and a large download (700 MB) with BitTorrent takes 8 minutes and doesn't take down the wireless.
What doesn't work: -> NFS and wireless together. Any NFS transfer over wireless eventually takes down the Wi-Fi. For example, streaming one 3 MB MP3 in Rhythmbox crashes the wireless after playing for about 30 seconds. When the wireless fails it breaks for ALL computers, and the only way to re-establish it is by restarting the router. Wired connections remain OK even when the wireless is down (both from the same router). -> I have a 64-bit desktop with Wi-Fi that suffers the same problem, but that doesn't matter so much since it also has a wired connection.
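One mitigation that is often suggested for NFS over flaky wireless links, offered purely as an assumption to test and not something from the original post, is forcing the mount onto TCP with moderate block sizes so large UDP fragments are kept off the air:
Code:
# Force NFSv3 over TCP with 32 KB block sizes (server name and paths are placeholders)
mount -t nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 server:/export/media /mnt/media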
Every time I attempt to transfer a large file (4 GB) via any protocol, my server restarts. On the rare occasions when it doesn't restart, it spits out a few error messages saying "local_softirq_pending 08" and then promptly freezes. Small files transfer fine.
Relevant information:
Ubuntu Server 10.10
Four hard drives in a RAID 5 configuration
CPU/HD temperatures are within the normal range
I'm having some trouble uploading large log files to our server using Perl. We are required to upload files larger than 2 GB (regardless of how infeasible that sounds). I have tried the same thing on two different servers:
Code:
1. Linux 2.6.32-24-generic #39-Ubuntu 10.04 i686 GNU/Linux, Server version: Apache/2.2.14 (Ubuntu)
2. Linux 2.6.5-7.244-smp #1 SLES_9 x86_64 x86_64 x86_64 GNU/Linux, Server version: Apache/2.0.49
Smaller files upload without issue; however, when a file larger than 1048576000 bytes is sent to be uploaded, the browser immediately fails, yielding this:
Code:
This web page is not available. The web page at blah might be temporarily down or it may have moved permanently to a new web address. Below is the original error message: Error 101 (net::ERR_CONNECTION_RESET): Unknown error.
The Apache log gives some indication of the file size limit:
Code:
Requested content-length of 5954683941 is larger than the configured limit of 1048576000
However, I have looked through the Apache config files and can't seem to find where this setting for content-length is. Is there an absolute maximum setting for file uploads in Apache? Or is it possible that this is actually caused by a Perl error?
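That log line matches the check performed by Apache's LimitRequestBody directive, so one thing worth trying, offered as a sketch rather than a confirmed fix for this setup (the directory path is a placeholder), is raising or zeroing that limit for the upload location:
Code:
# In the virtual host or an .htaccess-enabled directory block
<Directory "/var/www/uploads">
    # 0 means unlimited; otherwise the value is the maximum request body size in bytes
    LimitRequestBody 0
</Directory>
If mod_security is loaded, its SecRequestBodyLimit setting imposes a similar ceiling and would need raising as well.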
I have a problem with transfer speed between Samba and Windows XP clients.
Samba server configuration:
Quad Core 6600 CPU
4 GB RAM
openSUSE 11.2 with kernel "2.6.31.12-0.1-desktop"
Samba: samba-3.5.1-1.1.i586
Test: copying one 4 GB file.
Transfer speed from the Samba server to Windows 7 and XP clients (client copies the file from the server share -> to a local drive):
From Server to Windows 7 client 1: 85-90 Mb/sec
From Server to Windows 7 client 2: 90-100 Mb/sec
From Server to Windows XP client 1: 75-100 Mb/sec
Transfer speed from Windows 7 and XP clients to the Samba server (client copies the file from a local drive -> to the server share):
From Windows 7 client 1 to Server: 12-20 Mb/sec
From Windows 7 client 2 to Server: 30-35 Mb/sec
From Windows XP client 1 to Server: 20-27 Mb/sec
(Copying a file from a Windows local drive to a Windows remote share)
From Windows 7 client 1 to Windows XP client 1: 40-50 Mb/sec
From Windows 7 client 2 to Windows XP client 1: 50-60 Mb/sec
Copying a file from the Windows 7 client 2 share to Windows XP client 1 consistently shows 100-120 Mb/sec. Copying files from Linux hosts to an NFS server is a stable 50-90 Mb/sec in both directions.
This is part of my smb.conf file:
Code:
# version at /usr/share/doc/packages/samba/examples/smb.conf.SUSE if the
# samba-doc package is installed.
# Date: 2009-10-27
[global]
log level = 1
debug level = 0
max log size = 50
.....
I have very slow write speeds when copying files from Windows clients to the Samba share. Why is Samba slower than native Windows-to-Windows connections?
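A few global smb.conf settings are often experimented with for slow writes; the values below are a sketch to test against this workload, not a known fix for this particular server:
Code:
[global]
    # let the kernel hand file data straight to the socket on reads
    use sendfile = yes
    # larger socket buffers; modern kernels usually autotune these already
    socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
    # asynchronous I/O for requests larger than 16 KB (available in Samba 3.5)
    aio read size = 16384
    aio write size = 16384
It is also worth running the same copy over plain FTP or scp once, to separate raw network throughput from the SMB layer.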
I have configured OpenSSH 5.8p2 on CentOS 5.6. SFTP is working fine with a chroot environment, but I am having a problem with SCP. I am dealing with multiple Red Hat servers. When I try to transfer data from another Linux server through scp, it gives "connection refused". For example, OpenSSH 5.8 is configured on the new server and I want to transfer files from the old server, which is running OpenSSH 4.3. I created the same username and password on the new server as on the old server. My SFTP users on the new server have no shell access, only SFTP access. When I try to scp from the old server to the new server, it gives the error "connection refused". Is the configuration below only for SFTP, so that it can't do SCP? According to Google, the configurations I found are for both scp and sftp. Do I need to generate SSH keys by giving the users on the new server shell access, and then remove shell access again once the keys are created? I don't want to give permanent shell access for security reasons, but I do want to use SSH keys for additional security.
Code:
Port 22
PermitRootLogin no
# override default of no subsystems
[code].....
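For context, a chrooted SFTP-only setup typically looks something like the sketch below (the group name and chroot path are placeholders, not taken from the post). With ForceCommand internal-sftp in place, scp is expected to fail for those users, because scp has to run a remote copy program rather than the in-process SFTP server. SSH keys still work for sftp-only accounts and do not require handing shell access back: the authorized_keys file can simply be put in place by root. Note also that "connection refused" is a TCP-level error (wrong port, firewall, or sshd not listening); a ForceCommand restriction would normally fail after authentication with a different message.
Code:
# Sketch of a chrooted, SFTP-only section of /etc/ssh/sshd_config
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /home/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no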
On my Lenovo T410 (openSUSE 11.4 KDE, 64-bit) I have mobile broadband. gobi-loader is installed and working properly. However, when I try to set up the mobile broadband connection, the KDE Control Module crashes. I have tried many times, also after reboots, but the KDE Control Module keeps crashing.
I am trying to transfer a file from my live Linux machine to a remote Linux machine. It is a mail server, and a single .tar.gz file contains all the data, but during the transfer it stops working. How can I troubleshoot this? Is there a better way to transfer a huge 14 GB file over the network/VPN/WAN? The link speed is 1 Mbps, and I would like to copy the rest of the file rather than start the transfer over.
[root@sa1 logs_os_backup]# less remote.log
Wed Mar 10 09:12:01 AST 2010
building file list ... done
bkup_1.tar.gz
deflate on token returned 0 (87164 bytes left)
rsync error: error in rsync protocol data stream (code 12) at token.c(274)
building file list ... done
code....
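Since the log shows rsync is already in use, a common approach for a huge archive over a slow, unreliable link, sketched here with a placeholder host and destination path, is to keep partial files and simply re-run the same command until it completes, so each attempt resumes rather than starting over:
Code:
# -a preserves permissions/times, --partial keeps interrupted files so re-runs can resume,
# --progress shows transfer status, --timeout aborts cleanly if the link stalls
rsync -a --partial --progress --timeout=60 -e ssh bkup_1.tar.gz root@remote-host:/backup/
# Re-running the identical command after a failure continues from the partial file;
# on rsync >= 3.0, adding --append-verify appends to it instead of rewriting it.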
I have a Mandriva linux box connected by cable to my router. When I stream audio using this box it crashes all the wireless computers on my network. There is no effect on wired computers. All but the one computer are windows (xp or vista). The affected wireless computers are not able to even find my wireless network. I have to reboot the router to get them online.
This is a recent problem, and I can't pinpoint any change/upgrade that would cause it. Rsync transfer from client to server:
sent 11756196 bytes  received 1032741 bytes  138258.78 bytes/sec
total size is 144333466390  speedup is 11285.81
Pinging back and forth between the machines is fine. There are no ifconfig errors on the client, but the server shows RX packet errors.
I would like to transfer my music library and movie collection from my desktop computer running Windows Vista to my laptop running Debian Squeeze. I have the laptop connected via wireless, but it's possible to connect the two either directly with a CAT5e cable or through the router. I'm just wondering what the best way to do this would be.
I have Fedora 12 (with all the latest patches, including the 2.6.31.6-162 kernel) installed on a new Supermicro SYS-5015A-H 1U Server [Intel Atom 330 (1.6GHz) CPU, Intel 945GC NB, Intel ICH7R SB, 2x Realtek RTL8111C-GR Gigabit Ethernet, Onboard GMA950 video]. This all works great until I try to transfer a large file over the network, then the computer hard locks, forcing a power-off reset.
Some info about my setup:
[root@Epsilon ~]# uname -a
Linux Epsilon 2.6.31.6-162.fc12.i686.PAE #1 SMP Fri Dec 4 00:43:59 EST 2009 i686 i686 i386 GNU/Linux
[root@Epsilon ~]# dmesg | grep r8169
r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
[code]....
I'm pretty sure this is an issue with the r8169 driver (what I'm seeing is somewhat reminiscent of the bug reported here). The computer will operate fine for days as a (low-volume) web server, and is reasonably stable transferring small files, but as soon as I try to transfer a large file (say during a backup to a NAS or an NFS share), the computer will hard lock (no keyboard, mouse, etc.) at some point during the transfer. It doesn't seem to matter how the file is transferred (sftp, rsync to an NFS share, etc.).
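A workaround that is frequently suggested for r8169 lockups under heavy transmit load, mentioned here only as something worth testing rather than a confirmed fix for this board, is to disable the NIC's hardware offloads with ethtool and see whether the hard locks stop:
Code:
# Turn off segmentation and checksum offloading on eth0 (adjust the interface name)
ethtool -K eth0 tso off gso off
ethtool -K eth0 tx off rx off
# Review the current offload settings
ethtool -k eth0
These settings do not survive a reboot, so if they help they need to be reapplied from an init script or udev rule.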
How can I transfer large files from my laptop to an external hard drive? The problem occurs when I'm copying Blu-ray films (4.4 GB) to the external drive: the copy gets to 4 GB and then comes up with an error. Is there any way of breaking the file up and then merging it when it reaches the hard drive, or is there a way of sending it as one whole file?
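An error at exactly 4 GB usually points to a FAT32-formatted external drive, which cannot hold a file of 4 GB or larger. If reformatting the drive (to NTFS or ext4, for example) is not an option, the file can be split and rejoined later; a sketch with placeholder file names:
Code:
# Split the film into 2 GB pieces that fit within FAT32's per-file limit
split -b 2G film.mkv film.mkv.part-
# Copy film.mkv.part-* onto the external drive, then rejoin them on the destination machine:
cat film.mkv.part-* > film.mkv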
I think I am having a problem due to an NFS server file size limit. Is it possible I am missing a parameter on the RHEL NFS setup to handle large files? I am running an NFS server on a RHEL 5.1 machine and the HP-UX 11.0 machine does an NFS mount to that file system. The HP-UX executes a program that resides on the HP-UX machine to process a large 35 GB data file that resides on the NFS server machine. The program on the HP-UX can only read/process the first portion of the file until an "RPC: Authentication error" is returned multiple times until the program prematurely decides that it has reached the end of file.
I tried recompiling the same program to run on the RHEL 5.1 NFS server to access the 35 GB file locally (on the NFS server instead of on HP-UX), and the program completed successfully, processing the whole file (about 7 hours of processing) with no "RPC: Authentication error." In addition, I have been running the NFS mount with the same machines for quite some time, but not with such large file sizes.
I'm planning to copy a production MySQL InnoDB file from one server to another, and the file size is around 300 GB. As the file keeps changing all the time, I have to shut down the MySQL instance and copy the large data file to the other server as quickly as possible. I need to find a way to speed up the file copy. I'm wondering whether there's a way to copy the file block by block, and skip any block whose content is already the same on the destination side.
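rsync's delta-transfer algorithm does essentially that block-by-block comparison over the network, so one common pattern, sketched here with placeholder paths rather than a tested procedure for this database, is to pre-copy the file while MySQL is still running and then run the same rsync again after shutting MySQL down; the second pass only re-sends the blocks that changed, which keeps the downtime window short:
Code:
# First pass while MySQL is still up: moves the bulk of the 300 GB (this copy is inconsistent, which is fine here)
rsync -a --inplace --progress /var/lib/mysql/ibdata1 root@newserver:/var/lib/mysql/ibdata1
# Stop MySQL so the file stops changing, then repeat the same command:
/etc/init.d/mysql stop
rsync -a --inplace --progress /var/lib/mysql/ibdata1 root@newserver:/var/lib/mysql/ibdata1
# Only blocks whose checksums differ are transferred on the second run.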
I have 2 computers on the same network that I need to link together to transfer files. One is a web server, the other is a Minecraft server. The problem is that the file transfer will be constant, as the Minecraft server constantly updates files on the web server, and I don't want that traffic to go to the router and then come back to the web server. I want to add a second network card to each computer, link them together, and use this second connection to transfer the files. Is that possible?
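It is possible: with a cable run directly between the two extra NICs (gigabit ports auto-negotiate the crossover), they just need static addresses on their own private subnet. A minimal sketch, assuming Debian/Ubuntu-style /etc/network/interfaces and placeholder addresses:
Code:
# On the web server
auto eth1
iface eth1 inet static
    address 10.0.0.1
    netmask 255.255.255.252

# On the Minecraft server
auto eth1
iface eth1 inet static
    address 10.0.0.2
    netmask 255.255.255.252
Transfers addressed to 10.0.0.1 or 10.0.0.2 then stay on the direct link and never touch the router.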
I recently upgraded my file/media server to Fedora 11. After doing so, I can no longer copy large files to the server. The files begin to transfer, but typically after about 1 GB of the file has transferred, the transfer stalls and ultimately fails with the message:
"Error writing to file: Input/output error"
I've run out of ideas as to what could cause this problem. I have tried the following:
1. Different NFS versions: NFSv3 and NFSv4.
2. Copying the files to different physical drives on the server.
3. Copying the files from different physical drives on the client.
4. Different rsize and wsize block sizes when mounting the NFS share.
5. Copying the files via a different protocol, SSH in this case. The file transfers are always successful when I use SSH.
Regardless of what I do, the result is the same. The file transfers always fail after approximately 1 GB.
Some other notes.
1. Both the client and the server are running Fedora 11 kernel 2.6.29.5-191.fc11.x86_64
I am out of ideas. Has anyone else experienced something similar?
I have logins for two FTP servers and I want to transfer data from one FTP server to the other. How can I do that without downloading everything to a local machine and then uploading it to the other server?
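If neither server supports site-to-site (FXP) transfers, one workaround is to stream each file through the machine you are sitting on without ever storing it locally; the data still crosses your own connection, it just is not written to disk. A sketch with placeholder hostnames, credentials and paths:
Code:
# Download from the first server and pipe the stream straight into an upload to the second
curl ftp://user1:pass1@ftp1.example.com/path/file.tar.gz \
  | curl -T - ftp://user2:pass2@ftp2.example.com/path/file.tar.gz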
I am using Samba 3.0 on RHEL 4. It works fine: I can reach the shared folder from Linux and Windows XP with a username and password, but it only asks for the Samba user's username and password the first time; after that it never asks again. I actually want to set an idle timeout and require password authentication every time a user wants to access the Samba shared folder.
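For the idle-timeout half of this, smb.conf has a global deadtime parameter that drops inactive connections; the value below is only an example. Forcing a password prompt on every access is harder, because the Windows client caches SMB credentials for its own session, which is outside Samba's control.
Code:
[global]
    # Disconnect a client connection after 15 minutes with no open files and no activity
    deadtime = 15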
I have two custom scripts I just wrote to facilitate transferring files between my VPS and my home server. They are both written in bash (short & sweet). To send:
[Code]....
The problem is that, for a very quick second, I see something flash along the lines of "Connection refused" (before pv overwrites it), and no file is ever transferred. The port is forwarded through my router, and nmap confirms it:
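The original scripts are not shown above, so the following is a purely hypothetical sketch of the usual pv-over-netcat pattern, included only to illustrate why the error disappears: pv keeps redrawing its progress line on stderr, which covers nc's "Connection refused" message, so running the nc command on its own (or checking its exit status) makes the real error visible. The port number and hostname are placeholders.
Code:
# Hypothetical receive script on the home server (netcat flavors differ: some want "nc -l 5000", others "nc -l -p 5000")
nc -l -p 5000 > incoming.file

# Hypothetical send script on the VPS: pv shows progress while nc does the transfer
pv outgoing.file | nc home.example.org 5000

# To debug, drop pv and watch nc's own output and exit status
nc -v home.example.org 5000 < outgoing.file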
Can anyone suggest how to set up a basic Apache web page that would allow users to view files and folders on a server and move them from one directory to another? I know it doesn't sound very secure, but this is an internal server, so security is not a big issue as it is already pretty well secured within its network. We have users who need to move files from one directory to another, and we'd like to give them an intuitive interface to do this rather than having them log in to the Linux system and run mv/cp commands. They are not very Linux-savvy.