General :: Transfer Large Number Of Files Host To Host
Oct 20, 2010
I have two servers: one has an essentially empty filesystem, and the other has a subdirectory holding a very large number of files (about 4 GB in total). I need a way to transfer the files en masse from the full server to the one that is essentially blank. I don't have space on the full host to simply gzip all the files first. I've googled this and see that there may be some combination of tar and/or gzip, with some sort of redirection, that will let me do this.
I really need an example line showing how this can be accomplished. If my explanation seems rather sparse, I can supply more details.
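A commonly suggested approach (host names and paths below are placeholders) is to pipe tar straight into ssh, so nothing extra is ever written to the full host's disk; the -z compression flag can be dropped if CPU rather than bandwidth is the bottleneck:

Code:
# run on the full host; the target directory must already exist on the empty host
tar -czf - -C /path/to/big_dir . | ssh user@empty_host 'tar -xzf - -C /path/on/empty_host'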
I am working on a cluster for a molecular dynamics class and I have to edit my FORTRAN code (only the newest and best for me!). In order to get through to the cluster I have to ssh in. The network on which the cluster resides is behind a firewall, so I have to ssh through the firewall into the network first.
This is fine; I can log in and move files and folders as needed, including sftp-ing into host 1, then into the cluster, so I can transfer files from the cluster to the host and then from the host to me. This gets rather tiresome, though, so it would be nice to edit the files in place.
The problem is that when I access my code with emacs it launches the emacs client on host 1, with no mouse support. I know the purists will howl about how I should be using keyboard shortcuts, but I am a chemist and not a programmer, so the mouse is very nice for me. Is there any way I can perhaps mount the cluster using sshfs so that when I open my code it launches a local instance of emacs? Sorry if this is the wrong forum, but I thought it was network-related.
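One approach often suggested for this kind of two-hop setup is to mount the cluster locally with sshfs, tunnelling through the firewall host with a ProxyCommand. A rough sketch, with placeholder host names and paths, assuming netcat (nc) is available on the firewall host:

Code:
# host names and paths are placeholders
mkdir -p ~/cluster_code
sshfs -o ProxyCommand='ssh user@firewall_host nc %h %p' user@cluster:/home/user/code ~/cluster_code
emacs ~/cluster_code/mycode.f90 &      # a local emacs (with mouse support) now edits the remote files
fusermount -u ~/cluster_code           # unmount when finished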
I'm trying to ssh from my laptop to my desktop (both Fedora 14) over a local network. I can ping the desktop and get responses, but if I ssh to it, I receive
ssh: connect to host 192.168.100.xxx port 22: No route to host
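On Fedora, a "No route to host" from a machine that answers pings is often just the default firewall rejecting the connection (its REJECT rule replies with icmp-host-prohibited), or sshd not running at all. A sketch of things to check on the desktop, assuming the stock iptables setup:

Code:
# run these on the desktop you are trying to reach
sudo service sshd status                              # is the SSH daemon running at all?
sudo chkconfig sshd on && sudo service sshd start
sudo iptables -I INPUT -p tcp --dport 22 -j ACCEPT    # open port 22 in the running firewall
sudo service iptables save                            # persist the rule across reboots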
I have a directory containing around 2.8 lakh (280,000) files, and I want to move them to another directory. If I use cp or mv I get the error 'Argument list too long'. If I write a script like
for file in $(ls); do cp "$file" /path/to/destination/; done
then, because of the ls command, its performance degrades. How can I do this efficiently?
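A commonly used way around both problems (the paths below are placeholders) is to let find hand the file names to mv in batches, which avoids the shell's argument-length limit and never builds one giant ls listing:

Code:
find /path/to/source -maxdepth 1 -type f -print0 | xargs -0 mv -t /path/to/destination/
# or let find batch the arguments itself (GNU find and coreutils mv):
find /path/to/source -maxdepth 1 -type f -exec mv -t /path/to/destination/ {} +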
I am facing a problem copying a large number of files, about 18 lakh (1,800,000), from my personal hard disk to another hard disk. Each file is very small and the folder is around 3.95 GB in total. Copying the files with the copy function in Windows is frustratingly slow, and I am not even able to compress them; it gives me an error saying they are not readable. The other problem is that I am not able to open this drive in Linux: it shows an error telling me to run disk check in Windows, and Windows disk check is also not able to repair the drive and gets stuck. Is there any way to open a disk with errors, and if not, is there any way I can copy the data faster? ERROR: Disk labeled EDU is corrupt; go to Windows, run chkdsk /f there and reboot into Windows 2 times.
I understand that chroot is usually used to provide security; however, for my issue, security is a big don't care. I am very new to using chroot and don't fully understand how a chroot'd env works.
The problem: I'm trying to use a vendor-supplied cross-compile environment. The environment runs as a chroot'd env and works just fine. I have a large number of additional modules that I wish to compile in the chroot'd environment. FYI, these modules are also (successfully) compiled for other targets that don't use chroot'd envs. Copying the source files into the chroot environment is not an option (I don't have hours to wait for copies to finish, and it would break the make system). Having them live in the environment is also not an option (the chroot build is a tiny part of the build process and we cannot revamp our entire source tree to accommodate it).
I am looking for a way to give the compiler in the chroot'd env access to a path that is outside of the env, typically higher up in the same tree that holds the chroot'd env. I have tried soft links (they don't work as expected, since they resolve relative to the chroot root and end up dangling). Hard links only work for single files, and there are tens of thousands of files that would need to be linked. I am not sure how I would go about exporting the additional files and then mounting the exported files in the chroot'd env (or whether that would even work).
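A bind mount is the usual answer here: the external tree appears inside the chroot without copying anything, and the symlink/hard-link limitations don't apply. A sketch with assumed paths (/opt/cross_chroot for the chroot, /work/src for the external source tree); mounting the tree at the same path it has outside should also keep the make system happy:

Code:
# paths are assumptions -- substitute the real chroot root and source tree
sudo mkdir -p /opt/cross_chroot/work/src
sudo mount --bind /work/src /opt/cross_chroot/work/src
sudo chroot /opt/cross_chroot /bin/bash     # the tree is now visible at /work/src inside
# ... build ...
sudo umount /opt/cross_chroot/work/src      # when the build is finished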
On my bare-metal server I get about 130 MB/s reading from a software RAID 10 array, but when reading the same file from a VM via NFS over the VirtIO interface, I only get about 40 MB/s.
Furthermore, the process for the VM uses >180% CPU on the host and ~40% in the guest, and the 5-minute load average is ~1.5 on both the host and the guest. I have dual E5620s, so I'm disappointed that the transfer is so slow; I was expecting at least 90 MB/s.
I'm new to being a sysadmin, so if anyone has some tips I can use to increase the transfer rate, and possibly reduce the CPU load as well, I'd appreciate it. I'm assuming that 130 MB/s is about the maximum speed of two 7.2k HDDs, but if there's any way I can squeeze any more out, that would be great too.
System specs:
2x Intel Xeon E5620 @ 2.40 GHz
8 GB RAM @ 1066 MHz
4x 1 TB Western Digital Black HDDs in RAID 10
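One low-risk thing to experiment with is the NFS mount options on the client side; the values below are purely illustrative, not tuned for this particular setup, and the server/export names are placeholders:

Code:
grep nfs /proc/mounts        # see which rsize/wsize the current mount negotiated
# remount with larger buffers, TCP and no atime updates (illustrative values)
mount -t nfs -o rsize=1048576,wsize=1048576,tcp,noatime server:/export /mnt/data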
In /proc/scsi/scsi we can see the SCSI host number (SCSI identifier number), e.g. "Host: scsi4 Channel: 00 Id: 01 Lun: 00"; in this example the SCSI identifier number is 4. Whenever we modprobe a particular module associated with SCSI devices, we see a new entry in /proc/scsi/scsi with a greater identifier number. When we rmmod that module, the entry disappears from /proc/scsi/scsi, but the SCSI identifier counter still does not decrease. Is there any way to reset or decrement this counter so that the next time I modprobe a SCSI-related module it will assign numbers starting with 0? I found that when registering a SCSI device, the scsi_register() method gets its value from "next_host" (a static int initialized to 0) and then increments the next_host counter. Also, scsi_unregister() decrements the next_host counter, and rmmod internally calls scsi_unregister(). So if that is true, why doesn't the SCSI host id decrement during rmmod?
Many of the mails sent from my mail server are sitting in the queue. The main reason is that they are deferred by domains like Yahoo, AOL, etc., but there is one more error that I keep getting, and that is Host Unknown. Below is an example from the mail log. The catch is that a test mail sent to the same email address from my personal mail account on the same server (i.e. the same URL) was delivered; however, another mail containing client information sent from customercare@mycompanysdomain ended up in the queue.
There are more examples of the same; around 20 domains have the same problem.
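For the "Host Unknown" cases it may be worth confirming, from the mail server itself, that the recipient domains actually resolve; the domain below is a placeholder:

Code:
dig +short MX recipientdomain.com      # should list the domain's mail exchangers
host -t mx recipientdomain.com         # the same check with a different tool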
I want to set up a website. I have a www folder with the content in it, and I'm on Ubuntu 10.10. I want to set up port forwarding so that I can put my www files online and get a domain name.
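The port forward itself has to be configured on the router (forward external port 80 to this machine's LAN address); on the Ubuntu side, a minimal sketch, assuming Apache 2 and treating the www folder's path as a placeholder, would be something like:

Code:
sudo apt-get install apache2
# point Apache's document root (or a virtual host) at the www folder -- path is a placeholder
sudo netstat -tlnp | grep ':80'      # confirm the web server is listening locally on port 80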
I am using the diff command with the -r option to compare a large number of files and files in subdirectories. My main interest is to find out which files have been changed, not what the actual changes are, and since a lot of files have been changed, it would be a lot easier to view the file names only. Is there an option for diff that might do this, or does a similar tool/command exist that could do the job?
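diff's -q (brief) flag does exactly this when combined with -r; the directory names below are placeholders:

Code:
diff -rq old_dir/ new_dir/
# prints lines like "Files old_dir/foo.c and new_dir/foo.c differ" without showing the changes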
I have Ubuntu Server as a guest in VirtualBox and Xubuntu as my host OS. I have a shared folder on my host that can be accessed over my network. I'm trying to use the server without installing a desktop, so how would I transfer a file from my host machine to my Ubuntu Server guest?
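Two possibilities that are often used, both sketched with placeholder names: mount the VirtualBox shared folder inside the guest (requires the Guest Additions), or simply copy over the network with scp (requires openssh-server in the guest):

Code:
# option 1: VirtualBox shared folder (share name "hostshare" is a placeholder)
sudo mkdir -p /mnt/hostshare
sudo mount -t vboxsf hostshare /mnt/hostshare
# option 2: plain scp from the host to the guest's IP
scp /path/on/host/file.tar.gz user@guest_ip:/home/user/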
I've got a bunch of machines (~10) that I share with my co-workers. I have the appropriate .ssh files set up so I don't get prompted for a password when I ssh. Currently I ssh into these hosts and run top to check the load before I start using a machine, because I don't want to be on a busy host. Can someone show me how to write a script that finds the least busy host, given a list of hosts to check? (Hardcoded is fine.)
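A rough sketch of such a script, with placeholder host names, that ranks the machines by their 1-minute load average:

Code:
#!/bin/bash
# host names are placeholders; assumes passwordless ssh is already set up
hosts="host1 host2 host3 host4"
for h in $hosts; do
    load=$(ssh -o ConnectTimeout=5 "$h" "cut -d' ' -f1 /proc/loadavg" 2>/dev/null)
    [ -n "$load" ] && echo "$load $h"
done | sort -n | head -n1      # prints "<load> <least busy host>"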
Bit of an odd one, this. I've migrated a website from my old server to a new machine. Both servers run Ubuntu + Apache2, and both serve only a single site apart from the default site. I've flipped the domain name to the new IP address. The trouble is that after moving the virtual host config over into sites-available, with the necessary link in sites-enabled, Apache attempts to serve from the default web root (/var/www) rather than the actual site content (in /var/www/technology). So, for example, an attempt to browse...
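This is often a name-based virtual-host matching problem: if the new vhost's ServerName doesn't match the request, or the default site sorts first, Apache falls back to the first vhost it finds. A sketch of things to check; "default" and "technology" are assumed names for the site files under sites-available:

Code:
sudo apache2ctl -S               # shows which vhosts Apache parsed, in what order, from which files
sudo a2dissite default           # stop the default site from catching the requests (assumed name)
sudo a2ensite technology         # the migrated site's config file (assumed name)
sudo /etc/init.d/apache2 reload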
I'm trying to get Synergy up and running between my Windows 7 host (the server) and my Arch Linux host (the client). With rare exceptions, Synergy works perfectly on my Windows host; however, every time I try to run Synergy on my Linux machine I get the following error in messages.log:
[code]...
I'm running Arch with a barebones Xorg install and SLiM with LXDE. I'm not sure what in the world is causing the problem and haven't been able to find anything of substance in a search.
I need to delete all the files inside a remote directory using ssh. P.S. The directory itself must not be deleted, so @Wes's answer is not what I need. If it were a local dir, I would just run "rm -rf dir/*".
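A sketch with placeholder host and path; keeping the command in single quotes makes the remote shell expand the glob, and the find variant also removes hidden files while leaving the directory itself in place:

Code:
ssh user@remotehost 'rm -rf /path/to/dir/*'
# or, to catch dotfiles too while keeping the directory:
ssh user@remotehost 'find /path/to/dir -mindepth 1 -delete'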
The internal network is behind NAT done by the PC router. The TP-Link is receiving a wireless signal from outdoors and has switching and basic routing capabilities; I'm using the PC router for better routing options. The PC router (or R for short) is a triple-booting machine: Linux, FreeBSD and Windows. It has two LAN cards: an external one (ext_if), a 100 Mbps Realtek 8139, and an internal one (int_if), a 1 Gbps integrated Realtek 8169. The problem is that all traffic from R to the network is slow, about 5-20 KB/s, while traffic in the opposite direction is all right, about 10 MB/s, which is fine for 100 Mbps cables, NICs and switches. The problem persists no matter which OS R is running. I've done some debugging of the situation as follows:
- I put another PC in the place of R and everything was fine. That excludes the possibility of damaged cables, RJ-45 connectors, switches, etc.
- I connected both of the NICs to the Internet while the internal network was disconnected, and they both work fine (no delays).
- Traffic shaping is not running.
- There is nothing in the firewalls except NAT for the internal network (and that is working fine). These firewall rules have been in place for months and everything was fine until a week or two ago.
- I changed the internal NIC for another one.
- I connected the internal network directly to the TP-Link and all of the PCs get good network performance. I then connected the R machine to the TP-Link as well, and there was good performance between the internal network PCs and R.
- R has good performance to the TP-Link. In fact everything has good performance directly to the TP-Link (when not connecting through R).
- The problem persists only between R and machines on the internal network.
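Given that only one direction is slow and the problem follows R's internal link, one thing worth ruling out (a guess, not a diagnosis) is a speed/duplex negotiation problem on the internal NIC; the interface name below is assumed:

Code:
ethtool eth1                 # check the negotiated "Speed" and "Duplex" on the internal NIC
ethtool -s eth1 autoneg on   # force renegotiation if it came up half duplex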
My RHEL version is Red Hat Enterprise Linux AS release 4 (Nahant Update 6). I installed Linux in VMware; my host is Windows XP. I am able to ping the guest as well as the host. How can I copy files from the host to the guest?
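One approach that is often used when the network already works (all names and addresses below are placeholders) is to share a folder on the XP host and mount it from the guest with CIFS:

Code:
# on the guest; HOST_IP, "shared" and XPUSER are placeholders for the XP machine's details
mkdir -p /mnt/winhost
mount -t cifs //HOST_IP/shared /mnt/winhost -o username=XPUSER
cp /mnt/winhost/somefile /root/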
I would like to transfer my music library and movie collection from my desktop computer running Windows Vista to my laptop running Debian Squeeze. I have the laptop connected via wireless, but it's possible to connect the two either directly with a CAT5e cable or through the router. I'm just wondering what the best way to do this would be.
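One workable approach (names and addresses are placeholders) is to share the folders on the Vista machine, mount them from Debian over CIFS, and copy with rsync so an interrupted transfer can be resumed; this assumes the CIFS mount helper (the cifs-utils or smbfs package) is installed:

Code:
sudo mkdir -p /mnt/music
sudo mount -t cifs //DESKTOP_IP/Music /mnt/music -o username=VISTAUSER
rsync -avh --progress /mnt/music/ ~/Music/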
I have Fedora 12 (with all the latest patches, including the 2.6.31.6-162 kernel) installed on a new Supermicro SYS-5015A-H 1U Server [Intel Atom 330 (1.6GHz) CPU, Intel 945GC NB, Intel ICH7R SB, 2x Realtek RTL8111C-GR Gigabit Ethernet, Onboard GMA950 video]. This all works great until I try to transfer a large file over the network; then the computer hard locks, forcing a power-off reset.
Some info about my setup:
[root@Epsilon ~]# uname -a
Linux Epsilon 2.6.31.6-162.fc12.i686.PAE #1 SMP Fri Dec 4 00:43:59 EST 2009 i686 i686 i386 GNU/Linux
[root@Epsilon ~]# dmesg | grep r8169
r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
[code]....
I'm pretty sure this is an issue with the r8169 driver (what I'm seeing is somewhat reminiscent of the bug reported here). The computer will operate fine for days as a (low-volume) web server, and is reasonably stable transferring small files, but as soon as I try to transfer a large file (say during a backup to a NAS or an NFS share), the computer will hard lock (no keyboard, mouse, etc.) at some point during the transfer. It doesn't seem to matter how the file is transferred (sftp, rsync to an NFS share, etc.).
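Two workarounds people commonly try for r8169 lockups on RTL8111-series chips are switching to Realtek's out-of-tree r8168 driver, or disabling the hardware offloads; neither is a guaranteed fix, and the interface name below is assumed:

Code:
# turn off segmentation / receive offloads on the suspect interface (assumed to be eth0)
ethtool -K eth0 tso off gso off gro off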
I am a bit of a n00b when it comes to Linux, but I am setting up a test environment where I have an appliance monitoring network traffic. Part of my test requires me to copy a file via RCP from one host to another. I have two Ubuntu boxes. I have allowed the subnet in /etc/hosts.allow for ALL, and I have installed rsh-server.
When I try to copy the file, it looks like it tries to use SCP instead of RCP, because it connects to port 22 instead of 544. Also note that the traffic must be unencrypted, which is why I'm trying to use rcp. Is there any way to make Ubuntu go old school and allow me to use rcp instead?
Code:
testuser1@ubuntu:~$ rcp /home/testuser1/test.txt testuser1@10.46.41.38:/home/testuser1
ssh: connect to host 10.46.41.38 port 22: Connection refused
lost connection
testuser1@ubuntu:~$ rcp
usage: scp [-12346BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file]
[Code]....
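The usage line shows that the rcp name on this box is really the OpenSSH scp binary. A rough sketch of getting the classic, unencrypted client instead (package and binary names can vary between Ubuntu releases):

Code:
sudo apt-get install rsh-client      # the traditional, unencrypted r-commands
dpkg -L rsh-client | grep bin        # see what name the package installed its rcp under
ls -l $(which rcp)                   # check whether "rcp" still points at scp via alternatives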
I have installed a CentOS 5.4 machine named test.example.com (192.168.1.1). The file /etc/hosts contains:
127.0.0.1    test.example.com test localhost.localdomain localhost
I have read that the loopback address should not be assigned to the host name, only to localhost, and that the host name should be assigned to 192.168.1.1, like this:
127.0.0.1    localhost.localdomain localhost
192.168.1.1  test.example.com test
Is there any reason why it should be one way or the other?
I'd like to know how to transfer large files from my laptop to an external hard drive. The problem occurs when I'm sending Blu-ray films (4.4 GB) to the external drive: the copy gets to 4 GB and then errors out. Is there any way of breaking the file up and then merging it when it reaches the hard drive, or a way of sending it as one whole file?
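Failing at exactly 4 GB usually means the external drive is formatted FAT32, which cannot hold a file larger than 4 GiB. Reformatting the drive to NTFS or ext4 removes the limit; otherwise the file can be split and rejoined (the filename below is a placeholder):

Code:
split -b 3900M film.m2ts film.part_      # pieces stay under the FAT32 limit
cat film.part_* > film.m2ts              # run on the destination machine to reassemble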
This problem is not exclusive to Ubuntu; I've experienced it in Windows and OS X as well, but it seems that almost every time I transfer a large number of files (e.g. my music collection) between my desktop computer and laptop via my external hard drive, I end up losing files for no apparent reason. I usually don't notice the files are missing until later on, because I am never informed of any data loss. Now, every time I make a large transfer of files, I just do it two or three times to ensure that I don't lose any files.
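Instead of repeating the whole copy, rsync can be used both to transfer and to verify, so anything that went missing shows up explicitly; the paths below are placeholders:

Code:
rsync -avh --progress ~/Music/ /media/external/Music/
# verify: with --checksum and --dry-run, anything listed here did not copy over intact
rsync -avhc --dry-run ~/Music/ /media/external/Music/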
We recovered a large number of files from a hard drive I messed up. I am attempting to move large numbers of files of a given type (e.g. .txt, .jpg) into a folder per type, to more easily sort through them.
Here are the commands I have mainly been trying with various edits:
Code:
Code:
So far the most common complaint I have gotten is "missing arguments to execdir".
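That find error usually means the \; or + terminator after {} is missing. A sketch of the kind of command that works, with placeholder paths:

Code:
find /recovered -type f -iname '*.jpg' -exec mv -t /sorted/jpg/ {} +
find /recovered -type f -iname '*.txt' -exec mv -t /sorted/txt/ {} +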
I'm trying to extract the sender id from a fairly large number of files and am having trouble assigning variables from a file. Here is what I have so far (which is fairly kludgy, I know, but it's been some years since I've done any scripting or programming, and I find that I have lost the knack to a large degree).
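Without seeing the files it's hard to be specific, but a rough sketch, assuming the sender appears on a mail-style "From:" header line and using placeholder paths, might look like:

Code:
for f in /path/to/files/*; do
    sender=$(grep -m1 -i '^From:' "$f")     # the header name is an assumption
    printf '%s\t%s\n' "$f" "$sender"
done > senders.txt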