Software :: Transfer File From One Server To Other By Splitting Then Joining
Apr 8, 2010
I'm trying to transfer a large .tgz file from a CentOS dedicated server to a Linux webhost (unknown OS). The problem is the webhost will not allow a 1.1 GB file to be uploaded; however, it will allow the upload in 149 MB chunks. I used the split command to cut my tgz into eight segments, each under 150 MB. I then uploaded all the segments via FTP, which worked. Then I joined the segments to recreate the original tgz. The join appears to complete with no issues. However, when I try to extract the tgz there is a problem: most, but not all, files are extracted, and I get this error message:
Code: gzip: stdin: Input/output error
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
It appears the join did not work and the tgz is slightly corrupt. What am I doing wrong? Here are the commands I'm using:
1. Create the original tgz on the dedicated server
Code: tar -czf mysite.tgz ./myfolder
2. Split the tgz into segments
Code: split -b 149m -d mysite.tgz seg
# using the -d switch so the segment files use a numerical suffix
# I now have these files:
seg00
seg01
seg02
seg03
seg04
seg05
seg06
seg07
3. Transfer segments to the other webhost using FTP
Code: # hand typing (not a script)
ftp ftp.mysite.com
myusername
mypassword
binary
cd somefolder
put seg00
put seg01
put seg02
# through to seg07
4. Join up the segments on the new webhost
Code: # this is in a .sh script file
cd /full/path/to/somefolder
cat seg* > mysite.tgz
5. Extract the new tgz
Code: # this is in a .sh script file
cd /full/path/to/somefolder
tar -xzf mysite.tgz
# the above error is now thrown.
That's it. What am I doing wrong that's causing the above error?
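One way to narrow down where the corruption happens (a suggestion, not part of the original post) is to checksum the segments on both sides before joining; any segment whose checksum changes was truncated or damaged during the FTP transfer. The file names below are the ones from the post.
Code: # on the dedicated server, before uploading
md5sum seg* > segsums.txt
md5sum mysite.tgz
# upload segsums.txt along with the segments, then on the webhost:
md5sum -c segsums.txt      # any segment reported FAILED was damaged in transit
cat seg* > mysite.tgz
md5sum mysite.tgz          # should match the value printed on the dedicated server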
I have a file of size 6 GB. I would like to find a utility that will split and join files on both Windows and Linux. I know Linux has native split and cat for this purpose. My idea is that if I give the split files to anyone, they should be able to join them on either Windows or Linux. Is there a utility that does this job?
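As a sketch (assuming GNU split on the Linux side): the pieces split produces are plain byte chunks, so they can be rejoined with cat on Linux or with copy /b in a Windows command prompt, with no extra utility required; tools like 7-Zip can also create multi-part archives on both platforms. The file names below are examples.
Code: # Linux: cut the file into 700 MB pieces
split -b 700m bigfile.iso bigfile.part.
# Linux: rejoin
cat bigfile.part.* > bigfile.iso
# Windows (cmd.exe): rejoin the same pieces
#   copy /b bigfile.part.aa + bigfile.part.ab + bigfile.part.ac bigfile.iso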
I am trying to transfer a file from my live Linux machine to a remote Linux machine. It is a mail server, and a single .tar.gz file contains all the data. During the transfer it stops working. How can I troubleshoot this? Is there a better way to transfer a huge 14 GB file over the network/VPN/WAN? The speed is 1 Mbps, and the rest of the file does copy.
[root@sa1 logs_os_backup]# less remote.log
Wed Mar 10 09:12:01 AST 2010
building file list ... done
bkup_1.tar.gz
deflate on token returned 0 (87164 bytes left)
rsync error: error in rsync protocol data stream (code 12) at token.c(274)
building file list ... done
code....
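For a 14 GB file over a slow link, one common approach (a sketch, not necessarily the cause of the error above) is to let rsync resume interrupted transfers instead of restarting from zero; the host and path names here are placeholders.
Code: # resume-friendly transfer over SSH; re-run the same command after any interruption
rsync -av --partial --progress --timeout=60 bkup_1.tar.gz user@remotehost:/backup/
# --partial keeps the partially transferred file so the next run continues from it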
I am splitting a file based on the values read from an input file. The script is below.
1) How do I add the header that is present in the original file to the new split files? (For example, pharmacyf contains a header of table column names; the new files created (ODS.POS.$pharmacyid.$tablename.$CURRENT_DATE.dat) are written without that header.)
2) The script also creates 0-byte files for pharmacy IDs that are not present in the initial file. Can this be avoided?
for pharmacyf in *
do
  tablename=`echo $pharmacyf | cut -f4 -d'.'`
  while read pharmacyid
  do
    grep -w $pharmacyid $pharmacyf >> $OUT/ODS.POS.$pharmacyid.$tablename.$CURRENT_DATE.dat
  done < inputfile
done
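A possible rework of the loop (a sketch, assuming the header is the first line of each source file) writes the header first and only creates an output file when there are matching rows, which addresses both questions; the variable names are the ones from the script above.
for pharmacyf in *
do
  tablename=`echo $pharmacyf | cut -f4 -d'.'`
  while read pharmacyid
  do
    outfile=$OUT/ODS.POS.$pharmacyid.$tablename.$CURRENT_DATE.dat
    if grep -qw "$pharmacyid" "$pharmacyf"; then      # skip IDs with no rows, so no 0-byte files
      head -1 "$pharmacyf" > "$outfile"               # copy the header line first
      grep -w "$pharmacyid" "$pharmacyf" >> "$outfile"
    fi
  done < inputfile
done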
I have 2 computers on the same network that i need to link together to transfer files 1 is a web server the other is a minecraft server. the problem is that the file transfer will be constant as the minecraft server will constantly updates files on the web server and I dont want it to go to the router then to come back to the web server. I want to add a second network card to each computer and link them together and use this second connection to transfer the files is it possible?
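Yes, a direct cable between two spare NICs works. A minimal sketch (interface names and addresses are assumptions, and this presumes a Debian/Ubuntu-style /etc/network/interfaces) gives each second card a private address on its own subnet so the Minecraft box can push files straight to the web server without touching the router.
Code: # /etc/network/interfaces on the web server (eth1 is the added card)
auto eth1
iface eth1 inet static
    address 10.0.0.1
    netmask 255.255.255.0
# on the Minecraft server use 10.0.0.2 with the same netmask, then transfer
# to 10.0.0.1 (rsync/scp/NFS) to keep the traffic off the router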
I have a text file that is filled with references to duplicate files. I'm trying to create a text file for each duplicate file found that contains the paths to the duplicates. I would also like the text file names to be based on the size and file name.
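One hedged sketch, assuming the list is in the format fdupes writes (each group of duplicate paths on consecutive lines, groups separated by a blank line); the naming scheme "size_filename.txt" is a guess at what "based on the size and file name" means, and duplicates.txt is a placeholder.
Code: # one output file per duplicate group, named after the first file's size and base name
out=""
while IFS= read -r path; do
    [ -z "$path" ] && { out=""; continue; }            # blank line = start of next group
    if [ -z "$out" ]; then
        size=$(stat -c %s "$path")                      # size of the first file in the group
        out="${size}_$(basename "$path").txt"
        : > "$out"
    fi
    echo "$path" >> "$out"
done < duplicates.txt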
I have logins for two FTP servers and I want to transfer data from one FTP server to the other. How can I do that without downloading to my local machine and then uploading to the other?
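Two common options (neither is from the original post): if both servers permit FXP, a client such as lftp can make one server send directly to the other; otherwise you can stream through your own machine without ever writing a temporary file, as in this curl sketch (host names and credentials are placeholders).
Code: # stream a file from one FTP server to the other through the local machine, no temp file
curl -u user1:pass1 ftp://ftp1.example.com/path/file.tgz | \
curl -u user2:pass2 -T - ftp://ftp2.example.com/path/file.tgz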
# lvs -a -o name,copy_percent,devices
  LV              Copy%  Devices
  data            100.00 data_mimage_0(0),data_mimage_1(0)
  [data_mimage_0]        /dev/dm-11(0)
[Code].....
My goal was to remove devices dm-10, dm-11 and dm-14 from my VG, so I decided to extend the VG with dm-16 and mirror my data LV onto that device, then split that mirror from the other devices. The FS is mounted and needs to stay online...
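A hedged sketch of one way to finish that (the exact command form depends on the LVM version, and the VG name "vg0" is a placeholder): lvconvert with -m0 and an explicit PV list drops the mirror legs that live on the devices being retired, leaving the copy on dm-16, and it can be done with the filesystem mounted.
Code: # verify the mirror is fully synced first (Copy% = 100.00)
lvs -a -o name,copy_percent,devices vg0
# drop the mirror images that sit on the old devices, keeping the leg on dm-16
lvconvert -m0 vg0/data /dev/dm-10 /dev/dm-11 /dev/dm-14
# then remove the old PVs from the VG once nothing is allocated on them
vgreduce vg0 /dev/dm-10 /dev/dm-11 /dev/dm-14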
How would I go about setting up a basic Apache web page that would allow users to view files and folders on a server and transfer them from one directory to another? I know it doesn't sound very secure, but this is an internal server, so security is not a big issue; it is already pretty well secured within its network. We have users that need to move files from one directory to another, and we'd like to give them an intuitive interface rather than having them log in to the Linux system and run mv/cp commands. They are not very Linux savvy.
I'm having a bit of an issue with Debian Squeeze and transferring files to the Sony PSP. I hook the PSP up to the USB port and Debian mounts it. I drag a 125 MB mp4 to the video folder; the copy window says it takes about 10 seconds to transfer. I exit USB mode and there is no video there. I go back into USB mode, look at the video folder on the PSP memory stick, and there is no video; it vanished. Another time, after the copy progress window closed, I right-clicked the PSP and unmounted it.
It errored, saying the device was busy and could not be unmounted. Looking at the light on the PSP, I saw the memory stick was still being written to, so I waited for the light to stop flashing, about a minute or so. Then I was able to unmount it, and when I went to PSP video, there was the video, ready to be watched. Debian isn't accurately showing the copy progress: it shows complete when it isn't, and I have to watch the light on the PSP to know when it has truly finished.
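What is described is normal write caching: the copy dialog finishes when the data has been handed to the kernel, not when it has actually reached the slow memory stick. One generic workaround (not PSP-specific) is to flush the buffers before unmounting.
Code: # after the copy dialog closes, flush pending writes, then unmount
sync
umount /media/PSP        # mount point name is an example; use the one Debian chose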
I have a utility that works with files. The utility crashes after about 120 files. The input to the utility is a file containing a file list. I want to cut that list into separate files containing about one hundred names each. My thought was to determine the number of lines divided by 100 and then use head and delete to create temporary files so I can run the utility multiple times and avoid the crash. When I tried to create a variable using the wc -l command, the output gave me the total number of lines, but it also included the name of the input file (873 Filename.txt). I cannot figure out how to remove the Filename.txt from the variable.
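Two small fixes, as a sketch using standard coreutils behaviour: reading the file on stdin makes wc print only the number, and split -l can do the hundred-line chunking directly. Filename.txt is the input file from the post; the other names are examples.
Code: # number of lines only, no file name in the output
count=$(wc -l < Filename.txt)
echo "$count"
# or skip the arithmetic entirely and let split cut the list into 100-line pieces
split -l 100 Filename.txt chunk_
# produces chunk_aa, chunk_ab, ... each with at most 100 file names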
I am new to TCP/IP. I want to write a program in C for file transfer, using an FTP client and an FTP server, and the program should work for IPv4 as well as IPv6, with multiple clients able to connect simultaneously. I don't know how to start. Should I use a shell script or socket programming for the file transfer? Can we use an FTP client and FTP server in socket programming?
I'm wondering whether connecting to my server for file transfer using gFTP is secure. I told gFTP to connect to the server using SSH2 and it works. It says it uses this command: "ssh -e none -l wordpress -p 1883 IPADDRESS -s sftp". Is this more or less secure than using FTPES or FTPS? What I thought was weird was that I could shut down vsftpd and still connect. Does SSH2 SFTP use its own FTP server?
I have two questions. 1) How can I set up an FTP server for the first time on CentOS? 2) I want to give the FTP user full root access in the directory /var/www/html so he can upload or download files and folders without getting "FTP Critical file transfer error". From the command prompt, how can I give the user test root access to /var/www/html, with all its folders, subfolders and files, in one shot?
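A rough sketch of the usual approach on CentOS (the user name "test" is taken from the post; literal root access for an FTP login is not really possible or advisable, but ownership of the web root achieves the same effect): install vsftpd, enable it, and hand the directory tree to the FTP user in one go.
Code: # install and start the FTP server
yum install vsftpd
chkconfig vsftpd on
service vsftpd start
# give the user "test" ownership of the whole web root in one shot
chown -R test:test /var/www/html
# make sure local logins and uploads are allowed in /etc/vsftpd/vsftpd.conf:
#   local_enable=YES
#   write_enable=YES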
I have an Ubuntu Server 10.04 with LAMP installed. I also have Ubuntu 10.10 on a laptop and can copy files to the server fine. To keep my website up to date, I usually use FileZilla without any problems. I have just installed Fedora 14 on an old desktop and set up "my stall" OK. The problem is that I cannot copy any files from Fedora to the server due to:
Response: 550 Permission denied. Error: Critical file transfer error. I have tried to change the directory on the server, /var/www, using chmod -R 775 * and chmod -R 777 *, but it makes no difference; the file transfer still fails.
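A 550 on upload is often ownership rather than mode; a hedged checklist (the path is from the post, the account name "ftpuser" and the vsftpd options are assumptions about the setup):
Code: # who owns the target directory, and can the FTP login write there?
ls -ld /var/www /var/www/html
chown -R ftpuser:ftpuser /var/www/html   # "ftpuser" = whatever account FileZilla logs in as
# and in /etc/vsftpd.conf make sure uploads are allowed:
#   write_enable=YES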
I have a centos server installation running, and have installed and configured vsftpd. FileZilla works great. I am able to connect and transfer files both ways. I used this just for testing purposes.
What I need to do is get Fling File Transfer working. I can connect to vsftpd with Fling, but that is as far as it goes.
Sep 20 11:18:44 ftp vsftpd[28286]: warning: can't get client address: Socket operation on non-socket
Sep 20 11:31:03 ftp avahi-daemon[2240]:
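Since FileZilla works but Fling stalls after connecting, one hedged guess is an active/passive FTP mismatch: the control connection succeeds but the data connection never does. It may be worth pinning a passive port range in vsftpd and opening it in the firewall; the range below is an example.
Code: # /etc/vsftpd/vsftpd.conf
pasv_enable=YES
pasv_min_port=50000
pasv_max_port=50100
# allow those ports through iptables, then restart
service vsftpd restart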
I have a very nice SUSE 11.2 Samba PDC that runs well with Windows XP clients. I am using NetBIOS for name resolution, since I don't want to put in a DNS server; my router already has one, and I'm pretty sure it would make things more complicated. I enabled WINS support in smb.conf and set the name resolve order to lmhosts first. lmhosts lists all the IP addresses with their computer names in capitals; I hope that's right. I set up my Windows 7 machine with the reg file from the Samba wiki on Windows 7 [URL].
That's great: now I get the old XP-style screen in Windows 7 when joining the domain. I gave the machine the NetBIOS name MAINPC, an smb trust account MAINPC, and added the Unix user MAINPC$, so that should all work. I manage to join successfully and it says welcome to the domain, but afterwards an error appears, "changing the dns name of this computer to "" failed", and something about not finding the domain controller, although I did join. Then I restart, and when I try to log on it says "trust relationship failed". How can I make it join and log on properly?
I have 6 RHEL 5 servers: one 5.2 32-bit master login server, which provides login and authentication for the other five RHEL 5.3 64-bit app servers. I want to integrate these with my Windows AD. I use Windows Server 2003 R2 Standard. I currently have the RHEL servers set up with the 32-bit (5.2) server as the NIS master, serving NIS out to the remaining 64-bit (5.3) servers.
I also have a Windows Server 2003 R2 domain controller servicing my Windows AD. I would like to integrate them so I can use password sync and single sign-on. I am not really worried about having Kerberos or LDAP running, because the systems are NOT physically connected to any external source (the network is completely self-contained). I just want to be able to use password sync and DNS between the different networks.
My questions are as follows: 1) Would it be better/easier to make the Windows server the NIS master or the RHEL 5.2 server? 2) If I make the Windows server the NIS master, how would that affect the remaining servers that get their NIS info from the Red Hat master? 3) If I keep the RHEL master as NIS master, how would I integrate that with AD and have both shared passwords and DNS?
I'm setting up an HTPC system (Zotac IONITX-F based) on a minimal install of Ubuntu 9.10, with no GUI other than XBMC. It's connected to my router (D-Link DIR-615) over a wifi connection configured for a static IP (ath9k driver), with the following /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

# The primary network interface
#auto eth0
[code]....
The network is fine and the Samba share to the media directory works, until I try to upload a large file to it from my desktop system. Then it gets a couple of percent in at a really nice speed, but it stalls and the box becomes unpingable (Destination Host Unreachable), even after cancelling the transfer, requiring a restart of the network.
The same thing happens when I scp the file from my desktop system to the HTPC, and when I ssh into the HTPC and scp the file from there. Occasionally (rarely) the file does pass through, but most of the time the problem repeats itself. Transfers of small text files cause no problems, and the same goes for the fanart downloads done by XBMC. I tried the solution proposed in this thread and set the mtu to 800 in the interfaces file, but the problem persists.
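A couple of hedged things to try when an ath9k link dies only under sustained load (generic diagnostics, not a confirmed fix for this board): watch the kernel log during a transfer, and rule out wifi power saving and the driver's hardware crypto, both common culprits with ath9k in that era.
Code: # watch the kernel log for driver resets while the transfer runs
tail -f /var/log/kern.log
# disable wifi power management for this session
iwconfig wlan0 power off
# try the driver with hardware crypto disabled (takes effect after reloading ath9k or rebooting)
echo "options ath9k nohwcrypt=1" > /etc/modprobe.d/ath9k.conf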
I have an openSUSE server configured with DNS, Samba (PDC + WINS), LDAP and Squid. All this is in a hybrid scenario, with other openSUSE machines acting as clients and some Windows 7 clients as well. Everything works perfectly; both systems are able to join and authenticate against the Samba server very smoothly.
My problem is that in my workspace I have several different subnets/VLANs. So I have another openSUSE client here that needs to join the domain and authenticate with the Samba server, but it just can't find it via the Windows Domain Membership setup screen (where I usually configure the others).
The server can be pinged, and it does resolve local domain names. It seems the problem is that I have no place to configure a PDC/WINS server on the Linux client. It only asks me for the domain to join, and then it doesn't find it (I'm guessing this happens because it can't receive the broadcasts from the server's network).
Is there any way to declare the Samba/PDC/WINS server on the client side?
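Yes; on a Linux client the WINS server is declared in smb.conf rather than in the join dialog. A sketch (the address and domain name are placeholders for your Samba PDC):
Code: # /etc/samba/smb.conf on the client, [global] section
workgroup = MYDOMAIN
security = domain
wins server = 192.168.1.10
name resolve order = wins host bcast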
I have configured OpenSSH 5.8p2 on CentOS 5.6. My SFTP is working fine with a chroot environment, but I am having a problem with scp. I am dealing with multiple Red Hat servers. When I try to transfer data from another Linux server through scp, it gives "connection refused". For example, OpenSSH 5.8 is configured on the new server, and I want to transfer files from the old server, which is running OpenSSH 4.3. I created the same username and password on the new server as on the old server. My SFTP users on the new server have no shell access, only SFTP access. When I try to scp from the old server to the new server, it gives the error "connection refused". Is the configuration below only for SFTP and not for scp? According to Google, the configurations I found are for both scp and SFTP. Do I need to generate SSH keys by giving the users on the new server shell access, creating the keys, and then removing shell access again? I don't want to give permanent shell access for security reasons, but I do want to use SSH keys for extra security.
Port 22
PermitRootLogin no
# override default of no subsystems
[code].....
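For reference, a typical chrooted-SFTP block looks like the sketch below (the group name is an assumption). Two separate points worth noting, hedged against the limited detail in the post: plain scp needs a shell or the regular sftp-server on the remote side, so it will fail for accounts forced into internal-sftp, while "connection refused" is a network-level error (wrong port, firewall, sshd not listening) rather than anything caused by this block.
Code: # /etc/ssh/sshd_config
Subsystem sftp internal-sftp
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no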
I run a web server with CentOS 5 and would like to change hardware. I run a 1U Supermicro 6014T server with 4 x 500 GB drives in RAID 6 and would like to downgrade to a smaller but more efficient server. The problem is this: I'd like to transfer all the content and the whole OS to the new system, but how do I do that?
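One common way to move a running CentOS install to new hardware (a rough sketch; the hostname, mount path and exclude list are assumptions, and the bootloader and /etc/fstab still need fixing up afterwards) is to rsync the whole filesystem onto the new disks and reinstall grub there.
Code: # run from the old server, with the new server booted into a rescue system and
# its target filesystem mounted at /mnt/newroot
rsync -avH --numeric-ids --exclude='/proc/*' --exclude='/sys/*' --exclude='/tmp/*' \
  / root@newserver:/mnt/newroot/
# afterwards adjust /etc/fstab and the network config, and reinstall grub on the
# new disks before booting from them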
I have set up SSH between an F15 box and a remote CentOS box. I am using ssh -X, then nautilus/gnome-session, to open a GUI file browser/desktop environment for the remote machine. But anything I try to copy from the remote machine to the local machine via the GUI shows a "path not found" error. The CLI works just fine, but is it possible to transfer files between the remote and local machines via a GUI over SSH?
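Rather than forwarding the remote nautilus over X (which browses the remote disk but has no view of the local one, hence the "path not found"), the local file manager can mount the remote machine over SFTP through GVFS; a sketch, with user and host names as placeholders.
Code: # open the remote machine in the local Nautilus; GVFS handles the copying over SFTP
nautilus sftp://user@remotehost/home/user
# or mount it first and then browse the mounted folder
gvfs-mount sftp://user@remotehost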