Security :: Connect To Another Server (B) To Transfer Data Everyday?
Apr 13, 2010
I have a server A that needs to connect to another server (B) to transfer data every day: [A] ==SFTP==> [B]
I am using SFTP for the data transfer between A and B. I configured B to allow authentication only with a key, not with a password. However, anybody who accesses the filesystem of A could steal the key.
So I thought I could password-protect the private key on A. But in that case, I need to store the passphrase somewhere on A so that the server can unlock the key to connect to B. In the end it is endless: I always have to store a secret somewhere on A. Is there another solution that allows authentication between A and B without storing a plain-text secret on server A?
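One partial answer, sketched here on the assumption that A only ever needs the SFTP subsystem on B: keep the key on A unencrypted, but restrict it in B's ~/.ssh/authorized_keys with from= and command= options, so a stolen copy is only usable from A's address and only for SFTP. The IP address and key material below are placeholders:

```
# on server B, one line in ~/.ssh/authorized_keys (address and key are placeholders)
from="203.0.113.10",command="internal-sftp",no-port-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... serverA-transfer
```

This does not remove the secret from A, but it limits what an attacker can do with it; avoiding a stored secret entirely usually means an agent on an interactive session or a hardware token.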
I'm looking for the most secure possible solution to transfer data using rsync over the Internet between two Linux servers. I have three options: SSH, IPsec and Kerberos. Which one, in your opinion, is the most secure solution?
I have configured OpenSSH 5.8p2 on CentOS 5.6. My SFTP is working fine in a chroot environment, but I am having a problem with SCP. I am dealing with multiple Red Hat servers. When I try to transfer data from another Linux server through scp, it gives "connection refused". For example, OpenSSH 5.8 is configured on the new server, and I want to transfer files from an old server which is using OpenSSH 4.3. I created the same username and password on the new server as on the old server. My SFTP users on the new server have no shell access, only SFTP access. When I try to scp from the old server to the new server, it gives the error "connection refused".
Is the configuration below only for SFTP, so it can't do SCP? According to Google, the configurations I found are for both scp and sftp. Do I need to generate SSH keys by giving users on the new server shell access, and once the keys are created, remove shell access again? I don't want to give permanent shell access for security reasons, but I do want to use SSH keys for added security as well.
Port 22
PermitRootLogin no
# override default of no subsystems
[code].....
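For comparison, a chrooted-SFTP setup of the kind described usually ends with a Match block like the sketch below (the group name and chroot path are assumptions, not the poster's actual config). Note that ForceCommand internal-sftp serves only the SFTP protocol: classic scp from an older client needs to run a remote scp binary in a shell, so an SFTP-only account will refuse it even when sftp works fine:

```
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /home/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
```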
I have a server that I want to migrate to a newer one. Both run CentOS, but the newer one's kernel is more up to date. I wanted to know: is it possible just to copy some directory contents exactly to the other machine to transfer the server data (for example /var, /usr, /bin, /home, /etc)? I have one website on my server with its MySQL database.
I believe that the attacker somehow got in through the SSH daemon (OpenSSH 5.3p1) on June 12. From there, a user account named "crond" was created (can anyone confirm whether this is normal?) and, according to the log, this account was accessed several times between Jun 12 and Jun 18 from the same IP address. Also on Jun 12, the MOTD on the SSH server was messed up and remained that way until it was reinstalled. The default SSH client (OpenSSH 5.3p1) was made completely non-functional.
I became alerted to the problem when my ISP advised me to run a virus scan on the machines on my network. Not knowing of any Linux-based anti-virus software, I decided to check for suspicious files on my hard drive. I found some: in the /tmp directory was a subdirectory called ".popscan". Inside was a script and a list of about 40 very default-sounding usernames and passwords. There was also a file called "back.txt" in the root of my filesystem, which is a Perl script that apparently spawns a shell.
At this point I disconnected the server from the Internet and mirrored the drive. I found a suspicious home directory for "crond". I'm hesitant to set up the server again for fear that it might just get rooted again. I would also like to find out how he got in so it can be prevented for other people as well.
I am using a CentOS server. I delete and re-touch the maillog file at /var/log/maillog, but every other day it grows to about 3-4 GB. Please help: how can I stop this?
The logs look like this:
Jun 27 04:23:09 localhost postfix/smtpd: warning: not enough free space in mail queue: 0 bytes < 1.5*message size limit
Jun 27 04:23:10 localhost postfix/smtpd: lost connection after MAIL from unknown[220.127.116.11]
Jun 27 04:23:10 localhost postfix/cleanup: 3F8E110011: message-id=<20100627032310.3F8E110011@localhost.bgssuk.com>
Jun 27 04:23:10 localhost postfix/cleanup: warning: 3F8E110011: write queue file: No space left on device
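Deleting and re-touching the live file can leave syslog writing to a removed inode, and a mail log growing 3-4 GB every other day is itself a warning sign that the box may be relaying spam, which is worth checking first. The standard tool for the size problem is logrotate; a sketch of an entry, with the size limits and pidfile path as assumptions:

```
/var/log/maillog {
    size 100M
    rotate 4
    compress
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2>/dev/null` 2>/dev/null || true
    endscript
}
```

The HUP in postrotate tells syslogd to reopen the log file so it writes to the fresh one instead of the rotated inode.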
I'm using Comcast, and I set up my server; everything works fine. But later I found out that the server's inet address changes every day, which makes my server almost inaccessible. Is there any way to fix that problem?
Recently I had a situation where I had to transfer data from a PC to a Mac. Usually we simply convert the data as necessary (i.e. bookmarks, contacts, etc.), move the info to a Dropbox account, and use Windows networking to access the Dropbox from a Mac. Well, there was a network issue, and it was suggested that I boot to Ubuntu and do this. We do have Ubuntu on a flash drive, but I was lost from there. Can anyone point me in the direction of documentation or assist me in understanding how this would be accomplished? I have a Mac, PC and Linux box at home, so hopefully I have all the necessary tools to test this out.
I am quite a newbie with Linux networking, and I want to know whether there is a log in Linux where errors during transfers are stored. I mean, when I'm transferring data between two hosts, is there any way to log (and where?) the errors or warnings during the transfer, such as connection failures, TCP or UDP errors, etc.? Is there any "log level" to select which errors are displayed, so I don't have to work with a huge log file?
I'm going to need to copy over 2 terabytes from one drive to another in a couple of days, and I need something I can use to transfer the files with CRC checking, something comparable to Teracopy in Windows.
Open source is an absolute must. I'm running Debian and getting away from Windows to get away from all of the proprietary software. If it's going to be proprietary, I might as well just boot back into Windows, as far as I'm concerned.
Some features that would be nice but are not needed:
- GUI (preferably GTK)
- Pause/Resume (it's going to be a long transfer, so it would be nice)
Those would be nice, but CRC checking is an absolute must-have for so much data.
How can I transfer data from one user account to another? I wish to delete one of the accounts, but before that I want to transfer all the data in that account to the other account so that my data is saved.
I am running Ubuntu 10.04 LTS desktop and Ubuntu 9.10 server with the GUI active. On both of these machines the data transfer rates over USB and LAN have slowed down quite considerably, and across multiple devices, not the same hardware every time I try to transfer something.
For the past few days, I have been trying to fix a common error related to MTP devices such as the Creative Zen. Because of the newly updated libmtp8 library, it is impossible to transfer files to any MTP device. However, there is a fix that removes this issue.
1. Get the older Karmic version of libmtp8 from here: URL...
2. Install the package from a terminal using: sudo dpkg -i libmtp8*.deb
3. Lock libmtp8 in the Synaptic package manager so it doesn't get updated.
4. Uninstall and then downgrade any MTP packages (through packages.ubuntu.com) that become broken after step 2 to the Karmic version.
5. Enjoy.
I have got a laptop for the first time that I'll be taking to college. I have a lot of data on my PC which I want to transfer to the laptop. My computer has Lucid 32-bit while the laptop has 64-bit installed. The data is too much, and transferring via USB drives will take a lot of time. I do not know how to set up a network.
I need to make a copy of my image library on a second PC, but I can't transfer its data. It seems that the path to the images is stored absolutely, for example /home/john/images/holiday. If the second user's name is Mary, the path will be /home/mary/images/holiday, and Shotwell doesn't accept this.
I have a number of queries but will start with one. I presently run 11.04 Classic (no effects). The OS is NOT set up as dual boot. As I am considering purchasing a new desktop, I am wondering what the best way is to recover such things as folders, Firefox bookmarks, etc. for four users.
I do have an external HDD which I guess could be used for a backup if that is the way to go! The new computer will probably have Windows 7 installed, with which I assume I would probably dual boot.
I have installed Xubuntu 11.04. I want to see when and where data is transferred by Xubuntu (through the Internet). In the task manager I didn't find any such thing. In Fedora there is a system monitor where networking information is shown to the user; where is such a thing in Xubuntu? My main reason for asking is that I have a limited Internet connection, and I don't want Xubuntu to update itself or do anything else like that. You may also tell me how to stop updates in Xubuntu 11.04.
I am using the NCTUns simulator and am having a problem receiving a message from an OBU. Sending a message works properly, but receiving creates a problem and fails. I am using exactly the same function the developers use in the demos, but still no luck. I have copied the code and attached the screenshots for you to look at.
sendto() and recvfrom() are used for the message transfer. They both return a value greater than -1 if they execute successfully. Please have a look at the screenshots. agentClientReportStatus is the built-in packet format I am using here; the fields I filled in manually are in the code below.
agentClientReportStatus *message, *mssg;
int remainTime, n, n2, n3, i, sendingaddress, value;
sockaddr_in cli_addr;
timeval now;
I am doing a project where two clients connect to a server, communicate (chat), and transfer data one after the other using sockets. I have working code for this in C. Now our main aim is to create a communication link where the two clients transfer multiple streams of data in parallel. To be more precise, I want to transfer image files and audio files in parallel at the same time; is it possible to send data in parallel using one socket connection?
I am using RHEL4 with kernel 2.6.9-55.ELsmp. I have recently purchased a 250 GB Seagate portable USB HDD. I have already mounted the USB HDD with mode 755 and the root user. I can transfer files from the USB HDD to the RHEL system, but I am unable to make a directory or copy files from the RHEL4 filesystem to the USB HDD with NTFS. It gives a "permission denied" error even though the device has write permission. I have already installed ntfsprogs-2.0.0 as root, but there is no improvement. Is there anything I can do to transfer data from the Linux system to NTFS?
For example, I am copying data to a USB flash drive using some file manager. When the file manager shows that the transfer is complete, the flash drive indicator continues to light up. As far as I know, this is some kind of caching system...
1. Is it OK to close the file manager when the transfer window is closed but the flash drive indicator continues to light up (data is still being copied)?
2. Is it better if I turn off this caching technology?
I am building a new test machine where I need to bring data over from the live machine. The data consists of flat files plus a proprietary application, the Axigen mail server.
What I am struggling with is which commands are best practice. There is a remote site with a 1 Mbps wireless link between the live and test machines. On a daily basis, approximately 14 GB of .tar.gz files need to be moved.
I found scp, rcp, rsync, sftp, etc. Which is the fastest way to replicate or copy to remote machines?
The data is on the live machine in /var/opt/application, and in the same directory on the remote machine, /var/opt/application.
When I tried scp, it took approximately 8-10 hours to copy a single 14 GB file.
If possible, where can I see the logs and results of such commands, in case anything goes down or an error interrupts the copy?
I have a FC13 box that has both Gnome and KDE sessions installed.
I have noticed on the KDE session that data transfer rates are slower than when I use Gnome.
In Gnome, I can transfer files between my FC13 machine and my Ubuntu 10.04 pc at a rate of 6.5 MB/s (52 Mb/s if my maths is correct), but in KDE the rate is only 3.5 MB/s (28 Mb/s).
"ethtool eth0" shows my NIC speed as 100 Mb/s. Obviously I am not hitting anywhere near that speed in either session, (a separate article may be happen in the future to address that), but I am curious as to why KDE is that much slower for file transfer.