I have built a new test machine and need to bring data over from the live machine. The data consists of flat files plus a proprietary application (the Axigen mail server).
Now I am stuck on which command is best practice here. The two sites are connected by a 1 Mbps wireless link between the live and test machines, and roughly 14 GB of .tar.gz files need to be moved daily.
I have found scp, rcp, rsync, sftp, etc. Which is the fastest way to replicate or copy to the remote machine?
The data is in /var/opt/application on the live machine, and the same directory exists on the remote machine.
When I tried scp, it took approximately 8-10 hours to copy a single 14 GB file.
Also, where can I see logs of these transfers, in case the link goes down and a copy is interrupted mid-way?
I have one Linux server equipped with WiFi. I want to measure the data rate on this connection. Is there a utility on Linux that can measure the transfer speed on one specific network interface while large files are being transferred over the WiFi connection?
I'm looking for the most secure possible way to transfer data using rsync over the Internet between two Linux servers. I have three options: SSH, IPsec, and Kerberos. Which one, in your opinion, is the most secure solution?
I have a 512 kbps broadband connection and want to find the peak download speed I get. For example, during the daytime the speed is relatively low, but at midnight it rises abruptly. On Windows XP I had NetMeter, which saved the peak value whenever it occurred. I don't want the average speed. I have installed Wireshark and iftop; how do I use them for this?
For example, I am copying data to a USB flash drive using a file manager. When the file manager shows that the transfer is complete, the flash drive's indicator continues to light. As far as I know this is some kind of caching system...
1. Is it OK to close the file manager when the transfer window has closed but the flash drive indicator is still lit (i.e., data is still being written)?
2. Would it be better to turn this caching off?
I configured a non-anonymous FTP server on my Ubuntu 10.04. Downloading and uploading work through third-party software such as FileZilla. Now I would like to upload and download the FTP content directly in a browser, without using any other software. I have heard that with Webmin I can provide browser-based upload and download for the FTP share.
I have the URL of some streaming audio. How do I discover the data type or other details so that I can use console/terminal tools to record or otherwise rip the audio? I am trying to time-shift-record the stream from a local radio station, much like one does with a DVR or TiVo for television programs.
I am using an embedded platform with an external hard disk attached (/dev/sda). The SCSI driver is present and I am using the SG_IO interface to issue SMART commands to the hard disk. (Unfortunately not all the HDIO ioctls are available, so I opted for the SG_IO ioctl.) However, data transfer (reading/writing sector data) is not working through the SG_IO ioctls, so I searched for other options. I then found that I can make an XFS filesystem on /dev/sda (mkfs.xfs) and mount it at some mount point under /mnt.
Then I can create directories and perform file operations on the mounted directory, using plain read/write system calls. I was considering this approach, but I am confused about how to map an actual LBA (Logical Block Address) to an offset in the device file. That is, if I want to write to sector 5, there is an LBA for it, so I could lseek on the device and write the data there. How can the mapping between an LBA and a device-file offset be calculated?
I'm trying to write a shell script which extracts bits of data from a text file. At the moment I'm using grep; basically I need a function that looks through the text file and pulls matching data out of it. The file has days, months, years, etc., and I want to be able to type "feb 06" and have it find all of the data for Feb 06.
The problem is that I can type "feb" and all the information for Feb comes back, but I can't make it more precise: e.g., "feb 2009" should find just Feb 2009, yet the latter half seems to be ignored. I've experimented with egrep and with two inputs, but I can't seem to combine them; only the first input is honoured.
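Two search terms can be combined either as one pattern (when they sit next to each other on the line) or by piping one grep into another (when they may appear anywhere on the line). A sketch with made-up sample data:

```shell
# Hypothetical sample file
f=$(mktemp)
cat > "$f" <<'EOF'
feb 2009 reading 100
feb 2010 reading 200
mar 2009 reading 300
EOF

# Adjacent terms: match the phrase as one pattern, case-insensitively.
grep -i 'feb 2009' "$f"

# Terms anywhere on the line: chain two greps, one per term.
grep -i 'feb' "$f" | grep '2009'
```

In a script taking the month and year as arguments, the chained form becomes `grep -i "$1" file | grep "$2"`, which is probably why "two inputs" to a single grep only honoured the first: grep treats the second non-option argument as a filename, not a second pattern.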
So I have a system that is about 6 years old, running Red Hat 7.2, supporting a very old app that cannot be replaced at the moment. The JBOD has 7 RAID 1 arrays in it, 6 for database storage and one for OS storage. We've recently run into some bad slowdowns and drive failures, causing nearly a week of downtime. None of the people involved, including the so-called hardware experts, could really shed any light on the matter. Out of curiosity I ran iostat for a while one day and saw numbers similar to those below:
Some of these numbers worry me, especially the high disk utilization alongside the correspondingly low data transfer. I'm not a disk I/O expert, so I'd appreciate any gurus out there willing to help explain what I'm seeing. As a side note, the system is back up and running; it's just sluggish, and neither the database folks nor the hardware guys can make heads or tails of it. I've sent them the same graphs from iostat, but so far no response.
I copied a backup of my Windows "My Documents" folder, with all of its subfolders, into my Linux (Mint Debian) Documents directory. I found that many of my files now exist in more than one directory, so what I want to do is find all the duplicates and deal with them. Is there a good Linux application for this duplicates problem? (I don't want to touch the Linux system files.)
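fdupes is the usual dedicated tool (in Debian/Mint's repositories as `fdupes`; run e.g. `fdupes -r ~/Documents`), but standard tools can do it too by hashing file contents. A sketch against a hypothetical scratch directory:

```shell
# Hypothetical directory with two identical files and one unique file
d=$(mktemp -d)
echo "same content" > "$d/a.txt"
echo "same content" > "$d/b.txt"
echo "unique"       > "$d/c.txt"

# Hash every file, sort by hash, then print only lines whose first 32
# characters (the md5 hash) repeat -- i.e. the duplicate files, grouped.
find "$d" -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate
```

Restricting `find` to your Documents tree naturally keeps system files out of it; only paths under the directory you name are ever touched.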
I have a 500 GB USB drive connected to my laptop for backups and file storage, but I can't get it to play nicely with Midnight Commander. My transfer speeds max out at 2 MB/s, which is painfully slow when moving large files such as movies. Worker (another file manager) transfers the same files to the same drive much, much faster (I'm not sure by how much, though). This leads me to conclude that the problem lies with MC.
I have two machines running OpenVPN, connected through an 8-port Gigabit switch (TP-Link TL-SG1008D). The problem is that when I try to transfer a file, the transfer starts at 13 MB/s and drops to 300 KB/s. When I replaced the RAM in the two machines and restarted the transfer, the speed was a constant 13 MB/s. How does changing the RAM make such a difference? (During the first transfer they had 512 MB; during the second, the machines had only 128 MB.) OS: Ubuntu Server 10.04.2 LTS.
PS: I tried with 256 MB of RAM and the problem persists... Machines: two HP d530s.
Has the slow USB transfer speed issue been resolved in 11.04 (Natty Narwhal)? I am currently using Lucid and facing the slow USB transfer speeds that have always been there in Ubuntu, but everything else is running just fine for me, so I see no reason to upgrade yet unless the new release has solved this problem.
So what kind of speed are you getting with Samba over your local network, and what speeds should I be seeing? I'm currently transferring a large number of files from one computer to another: I'm taking everything off a desktop drive on computer A and putting it on an IDE disk on computer B. Transfers are running at around 600-700 KB/s. I've seen moments, mostly when a transfer starts, where speeds hit 1000 KB/s, but that lasts a very short while and then "degrades" until it reaches 600+ KB/s, where it seems to level off. Is this acceptable? Is this all I can expect from a 10/100 home network? The current transfer is 2.5 GB and looks like it will take over an hour to complete. I transferred 12 GB last night; it was looking at 4-5 hours to complete, so I left it running while I slept. Personally, I think this is slow, and that it could be dramatically faster.
While running these transfers I've been reading documentation on Samba speed tweaks, adding little tidbits here and there to both smb.conf files. Some of it seems to help: sometimes there is a noticeable difference in speed, and sometimes the changes actually degrade it. If you have a speed tweak you would like to share, the information will be gratefully accepted. Samba gurus are welcome to reply: how do you set up Samba in an office environment, or in an environment where performance is critical?
Maybe I should forget about Samba and try a different transfer protocol? Am I expecting too much from Samba? I should stop before I really start to ramble. Anyhow, networking beats the heck out of sneakernet at any speed! As a side note, or maybe quite importantly, there is a router and a network switch (not a hub) involved here. Maybe something to consider?
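For reference, the tweaks most often suggested for Samba of that era go in the `[global]` section of smb.conf. Treat these as starting points to benchmark on your own network, not guaranteed wins; the buffer sizes in particular are workload-dependent, and bad values are exactly how a "tweak" degrades speed:

```ini
[global]
   ; Disable Nagle's algorithm and set explicit socket buffers
   socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
   ; Allow large raw reads/writes (already the default on recent versions)
   read raw = yes
   write raw = yes
```

Change one option at a time and re-run the same copy so you can tell which edit helped and which hurt.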
What might cause a file transfer to start out fast (11-12 MB/s) and then drop to about 1 MB/s after roughly 2 GB has been downloaded? This is what happens when I download files from my old fit-PC 1.0 (a small mini PC with 256 MB of RAM, an AMD Geode processor, and a 100 Mbps network card) via FTP over my local network. Is some buffer filling up?
I was directed to an article about "bufferbloat" earlier [URL], but unfortunately I did not get a solution from it. I have tried increasing the server's TCP buffers using these instructions: [URL], but it makes no difference. By the way, I am running Ubuntu Server 10.10 on the fit-PC and Ubuntu 10.10 desktop on the connecting client (a normal Core 2 Duo machine with 4 GB of RAM).
UPDATE: I should add that things worked fine when I was using Ubuntu 7.10, but not now that I have installed 10.10...
UPDATE 2: I just found out this is ONLY a problem when transferring files to Ubuntu. If I use WinSCP from Windows 7, the transfer speeds are fine...
UPDATE 3: It seems gFTP was the culprit. Transferring the files with FileZilla on Ubuntu instead yields an excellent rate of 11.8 MB/s. Does anyone have an idea what might be wrong with gFTP?
I just noticed that files transfer between my two computers at no more than 11.3 MB/s, no matter what I do.
I regularly use sshfs to mount a drive from my main computer, but after noticing a dismal speed of 10 MB/s when copying a file, I thought maybe sshfs was slow. So I tried scp, and I get 11.3 MB/s with scp as well. Using the blowfish cipher made no difference: still exactly 11.3 MB/s.
I have a Gigabit network except for the cable, which is a 10-meter, 100 Mb/s cable. I have a custom-modified sysctl.conf file, which I'll post here if needed.
Is there anything I can do about this pathetic speed? Maybe a modification to my ssh config? sysctl? Anything?
I connected a 16 GB SDHC Class 4 card to my PC with a dongle reader. A Class 4 card is supposed to support a minimum of 4 MB/s sustained write speed. I grabbed a couple of large files, 4.4 GB each, and copied them to the SD card with Nautilus. The transfer rate started out at 60 MB/s, rapidly dropped to 50 and then 40, and after about 3 GB had slowly fallen to 3.5 MB/s and was still dropping. The PC copies from drive to drive at 65 MB/s, so I do not think the issue is reading from the hard drive. I am confused about the transfer speeds Nautilus is showing: obviously I am not really writing to this card at 50+ MB/s, and if the data were merely going from hard drive to cache I would expect even more speed, as I have 8 GB of RAM in the PC. In any case, I have stopped the transfer and attempted to unmount the card, and I get the message "Failed to eject media; one or more volumes on the media are busy." This has been going on for several minutes, so some activity is still happening in the background.
We have a server on which we installed openSUSE 10.3. We also created a Samba server, with a shared folder that we access over the network from other computers running XP.
The problem is that copying from the server is very slow, only 100-300 KB/s. The strange thing is that if I copy one file it's slow, but if I start copying a second one, the speed goes up to 10-15 MB/s. Every time I want to copy something or install from that server, I have to start another copy alongside it. Copying from a computer to the server runs at normal speed; only copying from the server is slow.
I recently set up my two PCs for network file sharing using Samba. I notice the maximum speed at which I can transfer a file is 89 KB/s, nowhere near the network's 100 Mb/s. How can I increase the speed toward that 100 Mb/s maximum? Both systems are running Ubuntu 9.10 with Samba.