General :: Test Data Transfer From System To Its Box Pc?
Mar 6, 2010
I built a new test machine and need to bring data over from the live machine. The data is mostly flat files, plus data from a proprietary application, the Axigen mail server.
What I am struggling with is which command is best practice to try first. There is a roughly 1 Mbps wireless link between the live and test machines at the remote site, and approximately 14 GB of .tar.gz files need to be moved every day.
I found scp, rcp, rsync, sftp, etc. Which is the fastest way to replicate or copy data to the remote machine?
The data is in /var/opt/application on the live machine and in the same directory, /var/opt/application, on the remote machine.
I tried scp and it takes approximately 8-10 hours to copy a single 14 GB file.
If possible, where can I see the logs/results of such commands, in case the connection drops or an error interrupts the copy?
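For a recurring bulk copy like this, rsync over SSH is worth trying first, since it can resume interrupted transfers and write its own log. A rough sketch, where testhost and the log path are only examples:
Code:
# Push the daily archive from the live machine to the test machine;
# --partial keeps interrupted files so they can be resumed,
# --log-file records what was transferred and any errors.
# (-z compression helps little for files that are already .tar.gz.)
rsync -av --partial --log-file=/var/log/app-sync.log \
    /var/opt/application/ root@testhost:/var/opt/application/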
I'm looking for the most secure possible solution for transferring data with rsync over the Internet between two Linux servers. I have three options: SSH, IPsec, and Kerberos. Which one, in your opinion, would be the most secure solution?
For example, I am copying data to a USB flash drive using a file manager. When the file manager shows that the transfer is complete, the flash drive's indicator continues to light up. As far as I know this is some kind of caching system...
1. Is it OK to close the file manager when the transfer window has closed but the flash drive indicator is still lit (data is still being written)?
2. Would it be better if I turned off this caching?
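For what it's worth, the data is only safely on the drive once the kernel's write cache has been flushed; from a terminal that can be forced before unplugging. A minimal sketch, where the mount point is only an example:
Code:
# Flush all pending writes to disk, then unmount the drive;
# /media/usbdrive is an example mount point.
sync
umount /media/usbdrive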
I would like to transfer the data from Palm Desktop on my Windows Vista laptop to my Linux netbook. The PDA has stopped working, so I cannot sync directly from the PDA to either computer.
Is this possible, and if so, what software do I need on Linux and how do I do it? I would need step-by-step instructions, as I am new to Linux and really only use it for web browsing and Skype.
I configured a non-anonymous FTP server on Ubuntu 10.04. Downloading and uploading work through third-party software such as FileZilla. Now I would like to upload and download the FTP content in the browser itself, without using any other software. I have heard that with Webmin I can upload and download FTP data through the browser.
I am using an embedded platform to which I have connected an external hard disk (/dev/sda). The SCSI driver is present and I am using the SG_IO interface to issue SMART commands to the hard disk. (Unfortunately, not all the HDIO ioctls are available, so I opted for the SG_IO ioctl.) But data transfer (reading/writing data from/to a sector) is not working with the SG_IO ioctls, so I searched for other options. I then found that we can create an XFS file system on /dev/sda (mkfs.xfs) and mount it at some mount point under /mnt.
We can then create directories and perform file operations on that mounted directory using the ordinary read/write system calls. I was considering this implementation, but I am confused about how to map the actual LBA (Logical Block Address) to an offset in the device file. For example, if I want to write to sector 5, there is an LBA for it, so I could lseek to that position on the device and write the data there. How can the mapping between an LBA and the device file offset be calculated?
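For the raw block device, the byte offset is simply the LBA multiplied by the logical sector size (commonly 512 bytes). Note that this direct mapping only holds when reading or writing the device node itself; once a file system like XFS is in place, offsets within files no longer correspond to physical LBAs. A small sketch from the shell, assuming a 512-byte sector size on /dev/sda:
Code:
# Byte offset into the block device = LBA * sector size.
LBA=5
SECTOR_SIZE=512          # confirm with: blockdev --getss /dev/sda
OFFSET=$((LBA * SECTOR_SIZE))
echo "LBA $LBA starts at byte offset $OFFSET"
# Read that one sector; dd counts skip in bs-sized blocks, so skip=$LBA
# with bs=$SECTOR_SIZE lands exactly on the sector boundary.
dd if=/dev/sda of=sector5.bin bs=$SECTOR_SIZE skip=$LBA count=1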
So I have a system that is about 6 years old, running Red Hat 7.2, supporting a very old app that cannot be replaced at the moment. The JBOD has 7 RAID 1 arrays in it, 6 of which are for database storage and one for OS storage. We've recently run into some bad slowdowns and drive failures, causing nearly a week of downtime. Apparently none of the people involved, including the so-called hardware experts, could really shed any light on the matter. Out of curiosity I ran iostat for a while one day and saw numbers similar to those below:
[Code]...
Some of these numbers weird me out a bit, especially the high disk utilization alongside the correspondingly low data transfer. I'm not a disk I/O expert, so I'm hoping some gurus out there are willing to help explain what I'm seeing here. As a side note, the system is back up and running; it just runs sluggishly, and neither the database folks nor the hardware guys can make heads or tails of it. I've sent them the same graphs from iostat, but so far no response.
I need to test some code that requires high disk I/O and high disk-busy figures in a Linux environment. Is there any way I can generate that kind of load for this urgent need of mine?
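One rough way to drive disk I/O high with nothing but a stock install is dd with direct I/O, watched from iostat; the scratch file path and sizes below are only examples:
Code:
# Sequential write load, bypassing the page cache with oflag=direct:
dd if=/dev/zero of=/tmp/ioload.bin bs=1M count=4096 oflag=direct
# Read the same file back as a read load:
dd if=/tmp/ioload.bin of=/dev/null bs=1M iflag=direct
# In another terminal, watch utilization and throughput while it runs:
iostat -x 2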
I have logged into one Linux VM and I want to test remote access to another Linux server on the same network, but I cannot recall which Linux application to use for this.
I'm working on testing some software, and I have a question. We have several files of binary data that we need to push through our application for testing. The application communicates via plain TCP sockets. Is there a way I can send this data to the socket from the command line? I tried doing something like this, but telnet never picked up the data.
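netcat (nc) is the usual tool for pushing a raw file at a TCP socket from the shell; the address, port, and file names below are placeholders:
Code:
# Send the binary file to the application's listening socket:
nc 192.168.1.50 5000 < testdata.bin
# To test the other direction, listen on a port and dump whatever arrives:
nc -l -p 5000 > received.bin     # on OpenBSD netcat: nc -l 5000 > received.bin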
I've been looking for a good data integrity test tool for Linux, but I'm having trouble finding one. Basically I'm looking for an application that will generate a heavy I/O load against a raw device and then perform some kind of data verification on the device. In my case the raw device will be an md RAID 5 array.
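One existing tool that comes close is badblocks in destructive write mode: it writes test patterns across the entire device and verifies each pattern on read-back. A sketch, where /dev/md0 stands in for the array, and note that -w destroys all existing data:
Code:
# Destructive write-mode test: writes four patterns over the whole device
# and verifies each one on read-back. /dev/md0 is a placeholder.
badblocks -wsv /dev/md0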
Recently I had a situation where I had to transfer data from a PC to a Mac. Usually we simply convert the data as necessary (bookmarks, contacts, etc.), move the info to a Dropbox account, and use Windows networking to access the Dropbox from the Mac. Well, there was a network issue, and it was suggested that I boot into Ubuntu and do this. We do have Ubuntu on a flash drive, but I was lost from there. Can anyone point me in the direction of documentation, or help me understand how this would be accomplished? I have a Mac, a PC, and a Linux box at home, so hopefully I have all the necessary tools to test this out.
I am quite new to Linux networking, and I want to know whether there is a log in Linux where errors during transfers are stored. I mean, when I'm transferring data between two hosts, is there any way to log (and where?) the errors or warnings that occur during the transfer, such as connection failures and TCP or UDP errors? Is there any "log level" for selecting which errors get written to the log, so I don't have to work with a huge log file?
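A few places worth checking, since there is no single per-transfer log by default; exact log file names vary by distribution:
Code:
netstat -s            # cumulative TCP/UDP counters, including errors and retransmits
dmesg | tail          # kernel messages such as link drops or driver errors
tail /var/log/syslog  # general system log (/var/log/messages on some distributions)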
I'm going to need to copy over 2 terabytes from one drive to another in a couple of days, and I need something I can use to transfer the files with CRC checking, something comparable to TeraCopy on Windows.
Open source is an absolute must. I'm running Debian and getting away from Windows precisely to get away from all of the proprietary software. If it's going to be proprietary, I might as well just boot back into Windows as far as I'm concerned.
Some features that would be nice but are not required:
- GUI (preferably GTK)
- Pause/resume (it's going to be a long transfer, so it would be nice)
Those would be nice, but CRC checking is an absolute must-have for this much data.
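A rough command-line approach is to copy with attributes preserved and then verify with an independent checksum pass; /mnt/src and /mnt/dst below are placeholder mount points:
Code:
# Copy everything, preserving permissions, ownership and timestamps:
cp -a /mnt/src/. /mnt/dst/
# Hash every file on the source, then re-check those hashes against the copy:
(cd /mnt/src && find . -type f -exec sha256sum {} + > /tmp/src.sha256)
(cd /mnt/dst && sha256sum -c /tmp/src.sha256)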
I have created mobility for 20 nodes and VBR traffic using the attached script. I executed the file as ns234 vbr.tcl and got vbr.tr and vbr.nam, but I was unable to load the graph using the MATLAB <trgraph>. I think the problem is with the vbr.tcl script.
When I try to transfer data from my laptop to the desktop using SSH, I get the following message:
Code:
:~> ssh nobani@192.168.1.3
The authenticity of host '192.168.1.3 (192.168.1.3)' can't be established.
RSA key fingerprint is f3:94:fa:7c:f1:92:07:45:3c:5b:99:51:dc:b7:a1:ff.
Are you sure you want to continue connecting (yes/no)?
I wonder, is this a normal message, or is something wrong?
How can I transfer data from one user account to the other? I wish to delete one of the accounts, but before that I want to transfer all the data in that account to the other account so that my data is preserved.
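A minimal sketch, run as root, with olduser and newuser as placeholder account names; the data is copied with attributes preserved and then handed over to the surviving account:
Code:
mkdir -p /home/newuser/old-account-data
cp -a /home/olduser/. /home/newuser/old-account-data/
chown -R newuser:newuser /home/newuser/old-account-data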
I am running Ubuntu 10.04 LTS desktop and Ubuntu 9.10 server with the GUI active. On both of these machines the data transfer rates over USB and LAN have slowed down quite considerably. This happens across multiple devices; it is not the same hardware every time I try to transfer something.
For the past few days, I have been trying to fix a common error related to MTP devices such as the Creative Zen. Because of the newly updated libmtp8 library, it is impossible to transfer files to any MTP device. However, there is a fix that removes this issue.
1. Get the older Karmic version of libmtp8 from here: URL...
2. Install this package from a terminal using: sudo dpkg -i libmtp8*.deb
3. Lock libmtp8 in the Synaptic package manager so it doesn't get updated.
4. Uninstall and then downgrade any MTP packages (via packages.ubuntu.com) that become broken after step 2 to the Karmic version.
5. Enjoy.
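If Synaptic is not handy, the same lock as in step 3 can be applied from the terminal with dpkg:
Code:
echo "libmtp8 hold" | sudo dpkg --set-selections   # hold the package at its current version
dpkg --get-selections libmtp8                      # verify the hold took effect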
I have got a laptop for the first time, which I'll be taking to college, and I have a lot of data on my PC that I want to transfer to it. My computer runs 32-bit Lucid while the laptop has the 64-bit version installed. There is too much data for transferring via USB drives to be practical, and I do not know how to set up a network.
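A minimal sketch for a direct cable connection between the two machines; the interface name, addresses, and paths are placeholders, and the PC needs the openssh-server package installed:
Code:
# On the PC:
sudo ip addr add 192.168.10.1/24 dev eth0
# On the laptop:
sudo ip addr add 192.168.10.2/24 dev eth0
# Then, from the laptop, pull the data across:
scp -r yourname@192.168.10.1:/home/yourname/Documents /home/yourname/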
I need to make a copy of my image library on a second PC, but I can't transfer its data. It seems the path to the images is stored absolutely, for example /home/john/images/holiday. If the second user's name is Marry, the path becomes /home/marry/images/holiday, and Shotwell doesn't accept this.