Debian :: File Transfer Will Be Constant As The Minecraft Server Constantly Updates Files On The Web Server
Aug 27, 2011
I have two computers on the same network that I need to link together to transfer files: one is a web server, the other is a Minecraft server. The problem is that the file transfer will be constant, as the Minecraft server constantly updates files on the web server, and I don't want that traffic to go to the router and then come back to the web server. I want to add a second network card to each computer, link them together, and use this second connection to transfer the files. Is that possible?
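A minimal sketch of such a direct link, assuming the second cards show up as eth1 on both Debian machines and that 10.0.0.0/24 is otherwise unused on your network (interface names and addresses are assumptions):
Code:
# /etc/network/interfaces on the web server
auto eth1
iface eth1 inet static
    address 10.0.0.1
    netmask 255.255.255.0

# /etc/network/interfaces on the minecraft server
auto eth1
iface eth1 inet static
    address 10.0.0.2
    netmask 255.255.255.0
With a cable between the two cards (gigabit ports auto-cross, so a plain patch cable is fine), the Minecraft server can then push its files to 10.0.0.1 and the traffic never touches the router.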
I recently loaded up my old PowerMac G3 with Debian 6.0 PPC, and it seems to be running quite well. I control it using SSH from my Windows 7 box. I installed default-jre so I could run the Minecraft server on there.
I've got two questions. First, I installed OpenVPN, but I'm a bit confused about how to use it. I want people to be able to connect to my VPN network over the internet. What configuration should I use, and could someone maybe link me a decent step-by-step tutorial? (A minimal example is sketched after this post.)
Secondly, when I tried to launch the server, it tried to generate a new map, but this is taking ages! On my desktop computer it only took two seconds, but after over half an hour it only got to 20% of "preparing spawn area". What could be wrong with this? Any reason why the Java virtual machine would have performance issues? I have no clue. I haven't tried copying over my SMP map from my Windows box and launching that yet, but I doubt performance will be any better. (My Windows 7 machine is hosting for about 10 people at the moment.)
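On the OpenVPN question above, a minimal routed server.conf sketch, assuming the certificates have already been generated with easy-rsa (file names and the subnet are assumptions, not a complete tutorial):
Code:
# /etc/openvpn/server.conf
port 1194
proto udp
dev tun
ca   /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key  /etc/openvpn/server.key
dh   /etc/openvpn/dh1024.pem
server 10.8.0.0 255.255.255.0    # address pool handed out to VPN clients
keepalive 10 120
persist-key
persist-tun
You would also need to forward UDP port 1194 on your router to the G3 and give each client a matching client config plus its own certificate.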
I'm running a Debian 2.6.26-2-amd64 web server with only Apache2 on an Intel Core 2 Duo E4500 @ 2.20GHz with 2 GB of RAM. I installed htop a few days ago and have been watching it since. The server idles at about 80 tasks, with around 20 Apache tasks/connections at all times and the CPU usage of the two cores at about 1% each. But as soon as more Apache2 connections/tasks get started and the server reaches 120-140 tasks, Apache2 times out if you try to go to the web page I host on the server. When it's back down to around 80 tasks you can reach the web page once again. Why is this? What's causing it to happen?
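One possible culprit, purely an assumption since the config isn't shown, is the prefork MPM limit: once Apache has spawned its maximum number of children, new requests queue until one frees up, which looks exactly like a timeout that clears when the load drops. The relevant Apache 2.2 section looks like this (values are illustrative and should be sized against the 2 GB of RAM):
Code:
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150     # hard cap on simultaneous request-handling children
    MaxRequestsPerChild 2000    # recycle children so their memory use stays bounded
</IfModule>
A long KeepAliveTimeout can also pin children to idle clients, so lowering it is worth testing alongside any MaxClients change.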
When trying to transfer files from my server to my desktop, the files never get fully transferred. I have Windows cmd running a ping to my server, and all is fine until I try to transfer files. Some will transfer fine, and then out of nowhere the server stops replying. I then have to restart the server to regain access to it. A 2 GB file will transfer about halfway, and then the server can no longer be pinged. I have static IPs on the server and my desktop. Running XP x64 and Ubuntu Server 32-bit 9.04. I've tried using Samba and NFS exports. Both my desktop and the server are connected to the same gigabit switch, and the switch is connected to my router.
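Purely as a guess, since a box that stops answering pings under sustained transfer load sometimes points at a NIC driver or offload problem, it may be worth watching the kernel log during a transfer and temporarily switching off the segmentation offloads (eth0 is an assumption):
Code:
dmesg | tail -n 50                       # look for NIC/driver errors right after the stall
sudo ethtool -K eth0 tso off gso off     # disable TCP/generic segmentation offload for this boot
If the transfers survive with the offloads disabled, the driver is the place to dig further.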
I have a server with a private IP and no public IP. I want to transfer files to that private-IP machine. I can log in to it through SSH. So I installed vsftpd on the server with the public IP and tried to FTP to the public IP from the private-IP machine, but it is not working.
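Since SSH to the private machine already works, scp and sftp run over that same SSH connection, so no FTP daemon is needed at all; a sketch with placeholder addresses and paths:
Code:
# push a file to the private-IP machine over its existing SSH access
scp /path/to/file user@PRIVATE_IP:/destination/dir/

# or pull a file from it
scp user@PRIVATE_IP:/path/to/file /local/dir/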
Here's the system: one server running regular Ubuntu, 40 km above the surface of the earth on a scientific balloon, behind an Iridium modem (RUDICS) connected to its serial port; one server on the ground running Ubuntu Server; and one intermediate server used for contacting the Iridium system from the ground server.
I'm not sure if all the above details are completely necessary but I included them for completeness. I would like to be able to log into the balloon server and transfer files in both directions. The procedure for connecting is to telnet to the intermediate server and then to issue some modem commands to call the balloon. The balloon server is set with getty running on the serial port connected to the modem. The way I have figured out to transfer files is to run kermit on the ground server and connect to the balloon server through the intermediate server, then run kermit on the balloon server, and set it as a file server with the server command.
However, there is some sort of timeout or something, and only a few kB of any file get transferred before the connection is broken. After that it seems like the ground server is trying to get the file from the intermediate server (which has no useful files on it at all). The file-transfer screen stays open and it keeps trying and trying to transfer until I type ^C. I don't know if there is a way of detecting, through a Kermit command, whether the connection is still open, or if there is some sort of switch to make the transfer stop automatically once it has stalled.
I have been reading about No Kermit Server (NKS) protocol, which seems to be designed for a system like this where the connection is across a third server. Is this likely to do a better job of keeping the connection open and the file transfer going? How can it be implemented? Is there any kermit command to determine from the ground server whether the connection is actually still open? Is there any way of telling whether the connection goes all the way to the balloon server or whether it ends at the intermediate server? I actually just learned about kermit today.
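Not an answer on NKS, but one hedged starting point on the ground-server side is to drop C-Kermit back to its most conservative protocol settings before connecting, since streaming and large windows assume a cleaner end-to-end path than a telnet-plus-Iridium relay may provide (the exact values are guesses to tune):
Code:
; in the ground-station kermit session, before connecting and transferring
robust                        ; preset of Kermit's most cautious protocol options
set streaming off             ; do not assume an error-free channel
set window 1                  ; one packet outstanding at a time
set receive packet-length 94  ; short packets survive line hits better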
On a related note, is it possible to have the balloon server running getty on the serial port but still have the port accessible for reading and writing by, say, a python script (which could use the modem to dial down to the ground when it isn't in use)? It doesn't seem to work but I'm wondering if there is a way. Is there a way to temporarily stop getty, then restart it, or is this potentially hazardous? Keep in mind there will be no way to contact it if something goes wrong since it will be 40km above the earth.
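On the getty question, a cautious sketch, assuming the balloon server starts getty either from an Upstart job or from /etc/inittab (the job name ttyS0 is an assumption; test all of this on the ground first, given that a mistake at 40 km is unrecoverable):
Code:
# if getty is an Upstart job (check for /etc/init/ttyS0.conf)
sudo stop ttyS0          # frees /dev/ttyS0 for the python script
# ... script uses the modem ...
sudo start ttyS0         # getty resumes listening for incoming calls

# if getty is spawned from /etc/inittab instead: comment out its line, then
sudo telinit q           # make init re-read inittab (uncomment and repeat to restore)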
I've found what I needed so far by searching this site, so I haven't had a need to sign up, since I can't really help anyone as of yet. With that said, here is my problem: I'm running a VPS with CentOS/RHEL 5 (host-in-a-box). I just did a rebuild of the server, and after a day or two pure-ftpd and sshd unexpectedly close out any incoming connections. I am the only one who uses SSH and FTP, so I'm not sure what the problem could be. I checked the logs and there is nothing about not being able to bind to the address.
I tried connecting through SSH in verbose mode and it connects to the server just fine, but drops the connection before it asks me for my key passphrase. If I enable password access it drops before it asks me for the password. I've tried restarting sshd and ftpd. I've tried rebooting the machine. I've tried Google, but this problem seems to need a little more specific troubleshooting. I can get in through console access, but that doesn't help me much when I need to transfer files.
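A few things that can be checked from the console, since a connection that completes the TCP handshake but is dropped before authentication often points at TCP wrappers or a brute-force blocker rather than sshd itself (paths assume CentOS/RHEL):
Code:
grep -i -e ssh -e ftp /etc/hosts.deny /etc/hosts.allow   # denyhosts / tcp-wrappers entries
tail -n 50 /var/log/secure                               # sshd's own reason for the drop
iptables -L -n                                           # fail2ban or other firewall rules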
I use Ubuntu Lucid and use the terminal to access my virtual server (GoDaddy - Red Hat Fedora Core 6). Using the terminal and entering ssh [account name]@IP gets me there, and I can then manipulate the server.
But how do I transfer files between my Ubuntu machine and the Fedora server? I want to email a file that is on the server to someone (using Evolution).
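scp uses the same login as the ssh session, so the file can be pulled down to the Ubuntu machine and then attached in Evolution like any local file; a sketch with placeholder paths:
Code:
# copy a file from the GoDaddy server into the local home directory
scp [account name]@IP:/path/to/file ~/

# or push a local file up to the server
scp ~/localfile [account name]@IP:/destination/path/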
I'm carrying out a project for my university (CIT in Cork, Ireland) and I'm using CentOS running under VMware. I have a server and a client. The server has no GUI (command-line only) while the client has one. I need to install the Simple Forum Machine application, and I'm told to FTP the files onto the server. I figured the best option is to get the files onto the client via the GUI and then FTP them to the server. How do I transfer the files from the client to the server using FTP? I'm totally new to Linux, so the more details the better. Also, I'm trying to mount a USB key on the server but have had no luck.
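A sketch of the FTP step from the client's command line; the host, login and target directory below are placeholders:
Code:
# hand typing (not a script), run from the directory holding the forum files
ftp 192.168.1.10
myusername
mypassword
binary
cd /var/www/html/forum
prompt
mput *
bye
The prompt command turns off per-file confirmation so mput can send everything in one go.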
I have logins for two FTP servers and I want to transfer data from one FTP server to the other. How can I do that without downloading to my local machine and then uploading to the other?
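If the two servers allow direct server-to-server (FXP) transfers, a client that supports FXP can move the data without it ever leaving the servers. Failing that, one workaround that at least avoids writing a local copy to disk is to pipe a download straight into an upload with curl (hosts, credentials and the file name are placeholders; the data still flows through your machine):
Code:
curl -s ftp://user1:pass1@ftp1.example.com/path/file.tar.gz \
  | curl -T - ftp://user2:pass2@ftp2.example.com/path/file.tar.gz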
I need to create a script that will compare two folders and copy only the updated and new files to another directory. I know I need to use rsync here, and I can write scripts, so it's not really how to create a script; it's how to accomplish the transfer of only new or changed files between two folders into a new directory. Do I need to link these two folders first and then use the "--compare-dest" switch?
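rsync can do the comparison and the selective copy in one call, so no prior linking of the folders is needed; --compare-dest just has to point at the reference tree. A sketch with placeholder paths (a relative --compare-dest is interpreted relative to the destination, so an absolute path is less confusing):
Code:
#!/bin/sh
# copy files that are new or changed in SOURCE relative to REFERENCE into DELTA
SOURCE=/path/to/current/         # the updated tree (trailing slash: copy its contents)
REFERENCE=/path/to/previous/     # the tree to compare against
DELTA=/path/to/changes-only/     # ends up holding only the new/changed files
rsync -av --compare-dest="$REFERENCE" "$SOURCE" "$DELTA"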
Is there a way of setting up a basic Apache web page that would allow users to view files and folders on a server and move them from one directory to another? I know it doesn't sound very secure, but this is an internal server, so security is not a big issue as it is already pretty locked down on its network. We have users who need to move files from one directory to another, and we'd like to give them some intuitive interface to do this rather than having them log in to the Linux system and run mv/cp commands. They are not very Linux savvy.
I am new to TCP/IP. I want to write a program in C for file transfer where an FTP client and an FTP server will be used, and the program should work for IPv4 as well as IPv6, with multiple clients able to connect simultaneously. I don't know how to start. Should I use a shell script or socket programming for the file transfer? Can an FTP client and FTP server be implemented with socket programming?
I'm trying to transfer a large .tgz file from a CentOS dedicated server to a Linux web host (unknown OS). The problem is the web host will not allow a 1.1 GB file to be uploaded; however, it will allow uploads in 149 MB chunks. I used the split command to segment my tgz into 7 segments under 150 MB. I then uploaded all segments via FTP, which worked. Then I tried to join the segments to recreate the original tgz. The join appears to work with no issues. However, when I try to extract the tgz it appears there is a problem: most, but not all, files are extracted and there is this error message:
Code:
gzip: stdin: Input/output error
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
It appears the join did not work and the tgz is slightly corrupt. What am I doing wrong? Here are the commands I'm using:
1. Create the original tgz on the dedicated server
Code:
tar -czf mysite.tgz ./myfolder
2. Split the tgz into segments
Code:
split -b 149m -d mysite.tgz seg
# using the -d switch so the segment files use a numerical suffix
# I now have these files: seg00 seg01 seg02 seg03 seg04 seg05 seg06 seg07
3. Transfer segments to the other webhost using FTP
Code:
# hand typing (not a script)
ftp ftp.mysite.com
myusername
mypassword
binary
cd somefolder
put seg00
put seg01
put seg02
# through to seg07
4. Join up the segments on the new webhost
Code:
# this is in a .sh script file
cd /full/path/to/somefolder
cat seg* > mysite.tgz
5. Extract the new tgz
Code:
# this is in a .sh script file
cd /full/path/to/somefolder
tar -xzf mysite.tgz
# the above error is now thrown
That's it. What am I doing wrong that's causing the above error?
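One way to narrow down where it goes wrong is to checksum the segments on both ends before joining; a segment whose checksum (or even size) changed in transit points straight at the FTP step, while matching checksums point at the join or the original archive. A sketch:
Code:
# on the dedicated server, after splitting
md5sum mysite.tgz seg* > checksums.txt

# on the webhost, after also uploading checksums.txt
md5sum -c checksums.txt     # flags any segment that differs from the original
ls -l seg*                  # sizes should match the originals exactly
cat seg* > mysite.tgz       # rejoin only once every segment checks out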
I'd like to know if connecting to my server for file transfer using gFTP is secure. I told gFTP to connect to the server using SSH2 and it works. It says it uses this command: "ssh -e none -l wordpress -p 1883 IPADDRESS -s sftp". Is this more or less secure than using FTPES or FTPS? What I thought was weird was that I could shut down vsftpd and still connect. Does SSH2 SFTP use its own FTP server?
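SFTP over SSH2 is not FTP at all: it is a subsystem of OpenSSH's sshd, which is why vsftpd can be stopped without affecting it, and it rides on the same encryption and authentication as the SSH login itself. The subsystem is the single line below in sshd_config (the binary's path varies by distro):
Code:
# /etc/ssh/sshd_config -- what gFTP's "-s sftp" is requesting
Subsystem sftp /usr/lib/openssh/sftp-server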
I have two questions. 1. How can I set up an FTP server for the first time on CentOS? 2. I want to give the FTP user full root access to the directory /var/www/html so he can upload or download files and folders without getting "FTP Critical file transfer error". From the command prompt, how can I give the user "test" that access to /var/www/html, with all its folders, subfolders and files, in one shot?
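Giving the FTP account actual root is best avoided; the usual route is to make the account (or a group it belongs to) the owner of the tree. A sketch, assuming the login is called test and with www-upload as a made-up group name:
Code:
groupadd www-upload
usermod -aG www-upload test
chown -R root:www-upload /var/www/html
chmod -R g+rwX /var/www/html                        # X = search bit on directories (and already-executable files)
find /var/www/html -type d -exec chmod g+s {} \;    # new files inherit the www-upload group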
I have an Ubuntu Server 10.04 machine with LAMP installed. I also have Ubuntu 10.10 on a laptop and can copy files to the server fine. To keep my website up to date I usually use FileZilla without any problems. I have just installed Fedora 14 on an old desktop and set everything up OK. The problem is that I cannot copy any files from Fedora to the server due to:
Response: 550 Permission denied. Error: Critical file transfer error
I have tried to change the permissions on the server directory /var/www using chmod -R 775 and chmod -R 777, but it makes no difference; the file transfer still fails.
I have a CentOS server installation running, and have installed and configured vsftpd. FileZilla works great: I am able to connect and transfer files both ways. I used this just for testing purposes.
What I need to do is get Fling File Transfer working. I can connect to vsftpd with Fling, but that is as far as it goes. The log shows:
Sep 20 11:18:44 ftp vsftpd[28286]: warning: can't get client address: Socket operation on non-socket
Sep 20 11:31:03 ftp avahi-daemon[2240]:
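The "can't get client address: Socket operation on non-socket" warning usually means vsftpd was started in a mode that doesn't match its configuration, e.g. launched standalone while the config expects (x)inetd, or started from both at once; treating that as the working assumption, the things to line up are:
Code:
# /etc/vsftpd/vsftpd.conf -- run as a standalone daemon
listen=YES

# and make sure nothing under xinetd also tries to spawn it
grep -ril vsftpd /etc/xinetd.d/
chkconfig --list vsftpd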
I'm trying to fully automate a Minecraft server, and I decided to use Ubuntu 11.04 on this computer. The computer I'm using is an old one (the old family PC, about 5 years old, and running amazingly now that I've formatted the hard drive, compared to how it used to run), so it won't run the Unity environment (or something like that) and runs the GNOME environment instead.
Anyway, the server auto-starts when my computer turns on (the computer also turns on automatically). Now all I need to do is have the computer shut down automatically at 12:00 AM, as well as enter the command "stop" into the running terminal. I plan to add more commands later (such as automatically welcoming people to the server when they connect), but that's not important right now and I'll likely be able to figure it out if I can get this working.
This is the command I'm using to shut down the computer:
sudo shutdown -P 24:00
However, it doesn't work. First of all, I need to enter my password, so I need some way to have the shell file enter my password for me whenever the terminal asks for it. Second of all, it just plain doesn't work even if I enter my password (the countdown doesn't appear). As for entering commands into the terminal, the only way I can think of is echo "stop", but the problem is that doesn't work because it won't enter into the running terminal (and I don't know how to do that).
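One common pattern (not the only one) is to run the Minecraft server inside a detached screen session and drive both the midnight "stop" and the power-off from root's crontab, which sidesteps the sudo password question entirely. Also note that shutdown wants a 24-hour hh:mm time, so midnight is 00:00, not 24:00. A sketch with an assumed session name and memory size:
Code:
# start the server in a detached screen session (e.g. from the same startup script, run as root)
screen -dmS minecraft java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui

# /etc/crontab entries (run as root, so no password prompt):
# 23:59 - type "stop" into the server console; 00:01 - power the machine off
59 23 * * * root screen -S minecraft -p 0 -X stuff "stop$(printf '\r')"
1  0  * * * root /sbin/shutdown -P now
The stuff command literally types into the server's console, so the same mechanism can later be used to send any other console command.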
I'm setting up an HTPC system (Zotac IONITX-F based) on a minimal install of Ubuntu 9.10, with no GUI other than XBMC. It's connected to my router (D-Link DIR-615) over a wifi connection configured for a static IP (ath9k driver), with the following /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

# The primary network interface
#auto eth0
[code]....
The network is fine and the Samba share to the media directory works, until I try to upload a large file to it from my desktop system. Then it transfers a couple of percent at a really nice speed, but then it stalls and the box becomes unpingable (Destination Host Unreachable), even after cancelling the transfer, requiring a restart of the network.
Same thing when I scp the file from my desktop system to the HTPC, and same thing when I ssh into the HTPC and scp the file from there. Occasionally (rarely) the file does go through, but most of the time the problem repeats itself. Transferring small text files causes no problems, and the same goes for the fanart downloads done by XBMC. I tried the solution proposed in this thread and set the MTU to 800 in the interfaces file, but the problem persists.
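As a long shot rather than a known fix, ath9k stalls under sustained load are sometimes worked around by disabling wifi power management and/or the card's hardware crypto; both are easy to test and revert (wlan0 is an assumption):
Code:
sudo iwconfig wlan0 power off                      # power saving off for this session

echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k-nohwcrypt.conf
# reload the ath9k module (or reboot) for the option to take effect, then retry the transfer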
I have a weird performance issue with a CentOS 5 machine running an NFS server and an RH8 client. I think the fact that it is an RH8 client should be downplayed; it is just that with the RH8 client the performance degradation seems clearer. See the test details below. The OS on the server is CentOS 5 x86_64, kernel 2.6.18-92.1.22.el5.
There is a 1 Gb connection between the machines, and the file used to test over NFS is a 1 GB file. First of all I wanted to measure how the network alone performs while using NFS. So on the server side I ran a "cat" of the 1 GB file to /dev/null; note that the disk read speed is about 98 MB/s. At this point the file system has the 1 GB file cached in memory. On the client side, a "cat" of the same file gives me a speed of about 113 MB/s. It seems then that the bottleneck in this instance is the network, and it is very close to nominal speed, so the network performance is really good. (BTW, I know the server served that file from cache because vmstat/iostat showed no disk activity.)
The second test is reading from disk with no caching involved. On the server I flushed the 1 GB file from memory, for instance by reading another 5 GB file, and I repeated the same thing as above on the client (a cat of the 1 GB file). Now the server has to go to disk (vmstat/iostat shows the disk activity). However, the performance now is about 20 MB/s; I was expecting something closer to 90 MB/s, since the read speed on the server in the first test showed 98 MB/s.
This second test was repeated for ext2, ext3 and xfs with no significant differences. A similar test using an RH8 NFS server and client gets me close to 60 MB/s for a 1 GB file not cached by the file system on the server. Since network speed and disk read speed are not the bottlenecks, what or where is the limiting factor?
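One cheap thing to rule out on the old client is the negotiated NFS transfer size: a small rsize mainly hurts uncached sequential reads, which matches the pattern seen here, while the cached test is limited by the wire either way. A sketch (mount point and export are placeholders):
Code:
# on the client: check what rsize/wsize the mount actually negotiated
grep nfs /proc/mounts

# remount explicitly asking for larger transfers
umount /mnt/nfs
mount -t nfs -o rsize=32768,wsize=32768 server:/export /mnt/nfs
If that changes nothing, the number of nfsd threads on the CentOS server is the next knob to look at.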
I want to make a script that specifies how much RAM to launch the Minecraft server with. So far I have:
Code:
read $ra
java -Xmx$raM -Xms$raM -jar minecraft_server.jar nogui
but I know that won't work because of the M at the end of the variable $ra. Is there a way I could make this work? I believe the M at the end is required to specify the measurement unit.
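Two small fixes should make it work: read takes the variable name without the dollar sign, and the braces in ${ra} stop the shell from looking for a variable called raM. A sketch:
Code:
#!/bin/sh
# ask how many megabytes of RAM to give the server, then launch it
printf "RAM in MB: "
read ra
java -Xmx${ra}M -Xms${ra}M -jar minecraft_server.jar nogui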
I have configured OpenSSH 5.8p2 on CentOS 5.6. SFTP is working fine in a chroot environment, but I am having a problem with SCP. I am dealing with multiple Red Hat servers. When I try to transfer data from another Linux server through scp, it gives "connection refused". For example, OpenSSH 5.8 is configured on the new server and I want to transfer files from an old server running OpenSSH 4.3. I created the same username and password on the new server as on the old server. My SFTP users on the new server have no shell access, only SFTP access. When I try to scp from the old server to the new server I get "connection refused". Is the configuration below only for SFTP and not for SCP? According to what I found on Google, these settings cover both scp and sftp. Do I need to generate SSH keys by giving the users on the new server shell access, create the keys, and then remove shell access again, since I don't want to give permanent shell access for security reasons? I do want to use SSH keys for extra security.
Code:
Port 22
PermitRootLogin no
# override default of no subsystems
[code].....
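"Connection refused" is normally returned before sshd ever sees the connection (wrong port, firewall, or sshd not listening on that address), so that is worth checking on its own. Separately, if the new server uses the usual chrooted sftp-only setup sketched below, scp will still fail for those users even once the connection works, because scp needs to run a command on the remote side while the forced internal-sftp allows only the SFTP subsystem. Key-based login does not require giving the users a shell; the public key just goes into their authorized_keys. This is an illustrative config, not the poster's actual file:
Code:
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp    # permits sftp, blocks scp and shell commands
    AllowTcpForwarding no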
I run a web server with CentOS 5 and would like to change hardware. I currently run a 1U Supermicro 6014T server with 4 x 500 GB in RAID 6 and would like to downgrade to a smaller but more efficient server. The problem is this: I'd like to transfer all the content and the whole OS to the new system, but how do I do that?
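One common approach is to boot the new machine from a rescue or live CD, partition and mount the target disks, and rsync the whole tree across, then fix up /etc/fstab, the bootloader, and anything hardware-specific (network device names, initrd). A sketch of just the copy step, run from the old server with the new one reachable over SSH (host and mount point are placeholders):
Code:
rsync -av --numeric-ids \
  --exclude='/proc/*' --exclude='/sys/*' --exclude='/tmp/*' \
  / root@NEWSERVER:/mnt/newroot/
After the copy, chroot into /mnt/newroot on the new machine to reinstall grub and adjust fstab before the first real boot.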
Is it normal to have several security updates week after week on a 10.04 Ubuntu LTS server? Some of them even need a system restart, which I consider truly bad for a web server...