Software :: Best Free Software To Securely Send Large File To Friend Over The Net
Mar 10, 2010
I am looking for good free software to send large files to a friend. I have tried Skype, but it was too slow, so I am looking for something that basically only uses the two PCs, some software, and no third party or server. We can figure out our IPs, if that would be a step to eliminating a 3rd party or service.
I have searched the internet for terms like P2P, private P2P, F2F, SFTP, etc., and I'm just more confused! What's free and rocks? It's not for music or multiple users, just for 2 users and big documents.
Software Requirements:
* Secure
* Free or open source
* Both Linux (for me) and Windows versions (for my friend)
* Good GUI, easy to use (no command line)
I need to send large files from one Linux machine to another using cryptography. The sender machine knows the recipient's IP but not vice versa. I don't need strong cryptography and would prefer a faster, less secure solution.
Presharing crypto keys is not a problem, but I'd prefer to avoid creating SSH users.
I'm thinking of HTTP PUT over TLS, but I have never used it and would prefer to hear what the possible solutions are. I know it can listen as a daemon, but I don't know anything about the cryptography side, so piping through OpenSSL may be a solution.
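A minimal sketch of the "pipe through OpenSSL" idea, assuming netcat is installed on both machines and a key file has been preshared out of band; the port, filenames and cipher are placeholders, not a recommendation:

# On the recipient (whose IP is known), listen and decrypt to disk
nc -l -p 9000 | openssl enc -d -aes-128-cbc -pass file:shared.key > bigfile.bin
# (BSD-flavoured netcat uses "nc -l 9000" instead of "nc -l -p 9000")

# On the sender, encrypt with the preshared key and push to the recipient's IP
openssl enc -aes-128-cbc -salt -pass file:shared.key < bigfile.bin | nc RECIPIENT_IP 9000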
This machine is running Ubuntu 9.10 (yes, I know I really should upgrade). I keep a number of confidential files in a TrueCrypt container, which is a standalone file in my Documents folder. I'd like to delete some of these as securely as I can, but I believe that if I simply hit 'Delete' with the file selected it'll move the file to the Deleted Items folder. This, I assume, means that the file is taken out of the encrypted volume and stored unencrypted in the Deleted Items folder.
I've been reading a little about the Shred command, and there seems to be some question about whether it works effectively with a journalled file system; and since I have no idea whether I'm using a journalled file system, or how to find out, I'm treating Shred and other over-writing secure deletion tools as ineffective for now.
With this in mind, can anyone advise me how I can protect the file stored in the TrueCrypt volume, and delete it in place, without taking it out of the encrypted area? And, further to that, can anyone tell me whether in fact the file is actually secured while it's in the encrypted volume? For all I know, just opening the volume may result in copies being made somewhere (apart from RAM).
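For what it's worth, a rough sketch of the usual approach, assuming the volume is mounted at TrueCrypt's default /media/truecrypt1 and using a placeholder filename; whether shred's journalling caveat applies depends on the filesystem inside the container (TrueCrypt formats new containers as FAT by default, which is not journalled):

# Show the filesystem type of the mounted TrueCrypt volume
df -T /media/truecrypt1
# Overwrite the file in place and remove it, inside the encrypted volume,
# so it never lands unencrypted in the Trash
shred -u -n 3 /media/truecrypt1/confidential.odt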
I have a 58 MB PDF that needs to be sent to a person with Windows. I do not know what special software they have; I only know that I must send them the file in a way that they can use it. I tried yousendit.com and it tells me I need to install their "YouSendIt Express" package, but apparently it will not run on Linux.
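One server-less alternative (not something the post mentions, just a sketch) is to serve the PDF over plain HTTP from the Linux box using Python's built-in web server, so the Windows side only needs a browser; the port and paths are placeholders:

# Run from the directory containing the PDF (Python 2 syntax, as shipped in that era)
python -m SimpleHTTPServer 8000
# The recipient then opens http://YOUR_IP:8000/ in a browser and downloads the file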
I want to create a large buffer in my program so that I can send, and then receive on another PC, more than 4096 bytes (which seems to be the default for the serial port). I thought this might work:
" > logfile.txt : gives an error extra character after the "
2- logsave logfile.txt 'send "show command;
" ': error invalid command
3- i simply tried to send the output of the whole script to file logsave /home/logfile ./script : seems that logsave work under root only
4- ./script > logfile : the problem with this is that the output of echo or (read "enter your id") command will not be displayed on the screen (actually nothing will be displayed, i have to open the log file to see the output). is there any way to save the log of the "send" ? or to save the log of the complete script without hiding the output on the screen?
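A possible sketch for point 4, assuming the goal is simply to log everything the script prints while still seeing it on screen; the log path is a placeholder:

# tee duplicates the output: one copy to the terminal, one copy to the log file
./script 2>&1 | tee /home/user/logfile.txt
# Inside an expect script, the built-in log_file command can also record the session:
#   log_file /home/user/logfile.txt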
I am using RHEL 5. I have a very large test file which cannot be opened in vi. The file has some 8000 lines, and I need to view the lines from 5680 to 5690. How can I view these particular lines of a large file? What command and options do I need to use?
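For what it's worth, two standard ways to print just that range without loading the file into an editor (the filename is a placeholder):

# Print only lines 5680 through 5690, then stop reading the file
sed -n '5680,5690p;5690q' testfile.log
# The same range with head and tail
head -n 5690 testfile.log | tail -n 11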
I need to copy over my friend's ASP website. I just installed the mono module, but I have no idea how to use it. I found a website that directed me to do the following within httpd.conf:
I've tried using several programs to get this thing working, but I've had no luck. I'm trying to set up a computer for a friend in Spain; she has a Spanish Vodafone mobile broadband dongle, so I thought I would test with my UK dongle, assuming that the devices are going to be the same or similar.
I recently upgraded my file/media server to Fedora 11. After doing so, I can no longer copy large files to the server. The files begin to transfer, but typically after about 1 GB of the file has transferred, the transfer stalls and ultimately fails with the message:
"Error writing to file: Input/output error"
I've run out of ideas as to what could cause this problem. I have tried the following:
1. Different NFS versions: NFS3 and NFS4.
2. Copying the files to different physical drives on the server.
3. Copying the files from different physical drives on the client.
4. Different rsize and wsize block sizes when mounting the NFS share.
5. Copying the files via a different protocol, SSH in this case. The file transfers are always successful when I use SSH.
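For reference, item 4 above usually looks something like this; the export path, mount point and block sizes are placeholders, not the poster's actual values:

# Mount the share with an explicit NFS version and larger read/write block sizes
mount -t nfs -o vers=3,rsize=32768,wsize=32768 server:/export/media /mnt/media
# After a failed transfer, the kernel log often shows the underlying error
dmesg | tail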
Regardless of what I do, the result is the same. The file transfers always fail after approximately 1 GB.
Some other notes.
1. Both the client and the server are running Fedora 11 kernel 2.6.29.5-191.fc11.x86_64
I am out of ideas. Has anyone else experienced something similar?
I've got a large .tar.gz file that I am trying to extract. I have had a look around at the problem and it seems other people have had it, but I've tried their solutions and they haven't worked. The command I am using is:
I'm trying to copy a 6 GB file from my laptop to an external USB drive, but it quits at about 4.2 GB every time with a "file size limit exceeded" error. I have checked the output of ulimit -a and there is no limit there on the file size. I'm using the Slax live CD for this, as it always gets the job done.
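Since ulimit is clean, the likely culprit is the filesystem on the destination drive rather than the OS: FAT32 caps a single file at just under 4 GiB. A quick check, with a placeholder mount point:

# "vfat" in the Type column means the 4 GiB-per-file limit applies
df -T /mnt/usbdrive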
I am trying to copy a file of about 4.5 GB from a network resource on my Local Area Network. I copy this file through the GNOME copying utilities by first going to Places --> Network and then selecting the Windows Share on another computer on my network. I open it and start copying the file to my FAT32 drive with Ctrl+C and Ctrl+V. It copies well up to 4 GB and then it hangs.
After trying it almost half a dozen times I got really annoyed, left it hung, and went to bed. Next morning, when I checked, a message box saying "file too large to write" had appeared.
I am very annoyed. I desperately need that file. It's an ISO image and it is not damaged at all; it copies fine to any Windows system. Also, I have sufficient space on the drive to which I am trying to copy the file.
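Assuming the destination really is FAT32, its hard limit of just under 4 GiB per file means the copy cannot succeed as-is. One common workaround, with placeholder filenames, is to split the ISO into smaller pieces and rejoin them later:

# Split into 2000 MB chunks that each fit under the FAT32 per-file limit
split -b 2000m image.iso image.iso.part_
# Rejoin on the destination machine (on Windows the pieces can be joined with "copy /b")
cat image.iso.part_* > image.iso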
I have been having a recurring problem backing up my filesystem with tar, using bzip2 compression. Once the file reached a size of 4 GB, an error message appeared saying that the file was too large (I closed the terminal so do not have the exact message; is there a way to retrieve it?). I was under the impression that bzip2 can support pretty much any size of file. It's rather strange: I have backed up files of about 4.5 GB before without trouble.
At the same time, I have had this problem before, and it's definitely not a space problem: I am backing up onto a 100 GB external hard drive.
That reminds me, in fact (I hadn't thought of this), that one time I tried to move an archived backup of about 4.5 GB to an external drive (it may have been the same one) and it said that the file was too large. Could it be that there is a maximum size of file I can transfer to the external drive in one go? Before I forget, I have Ubuntu Karmic and my bzip2 version is 1.0.5 (and tar 1.22, though maybe this is superfluous information?).
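If that external drive is FAT32-formatted (common for drives sold pre-formatted), it would explain the ~4 GB wall, since neither tar nor bzip2 imposes such a limit. A sketch of how to check, and how to pipe the archive through split so no single piece hits the cap; the paths are placeholders:

# Check the drive's filesystem type
df -T /media/external
# Stream the compressed archive straight into chunks below 4 GiB on the drive
tar -cjf - /home/user/data | split -b 2000m - /media/external/backup.tar.bz2.part_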
I am attempting to burn the ISO for Lucid Lynx final onto a 700MB CD. The ISO file is 699MB, but Windows reports that the size on disk is 733MB and thus CD Burner XP refuses to burn the file, stating that it's too large for a CD.
Why this discrepancy in file sizes? I've noticed it with other files as well; suddenly it's a bit of a problem, as you can see!
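A likely explanation, sketched as a back-of-the-envelope check: the two figures are the same size expressed in binary and decimal megabytes (1 MiB = 1,048,576 bytes versus 1 MB = 1,000,000 bytes), and a "700MB" CD actually holds 700 MiB, so the image should still fit:

# 699 binary megabytes expressed in decimal megabytes (prints 732, i.e. roughly 733)
echo $(( 699 * 1048576 / 1000000 ))
# A 700 MiB CD expressed in decimal megabytes (prints 734)
echo $(( 700 * 1048576 / 1000000 ))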
I recently built a home media server and decided on Ubuntu 10.04. Everything is running well except when I try to transfer my media collection from other PCs where it's backed up to the new machine. Here's my build and various situations:
* Intel D945GSEJT w/ Atom N270 CPU
* 2GB DDR2 SO-DIMM (this board uses a laptop chipset)
* External 60W AC adapter in lieu of internal PSU
* 133x CompactFlash -> IDE adapter for OS installation
* 2x Samsung EcoGreen 5400rpm 1.5TB HDDs formatted for Ext4
Situation 1: Transferring 200+ GB of files from an old P4-based system over gigabit LAN. Files transferred at 20 MB/s (megabytes, so there's no confusion). It took all night, but the files got there with no problem. I thought the speed was a little slow, but didn't know what to expect from this new, low-power machine.
Situation 2: Transferring ~500 GB of videos from a modern gaming rig (i7, 6GB of RAM, running Windows 7, etc.). These files transfer at 70 MB/s. I was quite impressed with the speed, but after about 30-45 minutes I came back to find that Ubuntu had hung completely.
I try again. Same thing. Ubuntu hangs after a few minutes of transferring at this speed. It seems completely random. I've taken to transferring a few folders at a time (10 GB or so), and so far it has hung once and been fine the other three times. Now, I have my network MTU set from automatic to 9000. Could this cause Ubuntu to hang like this? When I say hang I mean it freezes completely, requiring a reboot. The cursor stops blinking in a text field, the mouse is no longer responsive, etc.
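One low-effort test, assuming the server's interface is eth0 (an assumption, adjust as needed): drop the MTU back to the standard 1500 before the next large transfer and see whether the hangs stop, since jumbo frames only work reliably if every device in the path supports them.

# Temporarily revert to the standard MTU (the setting resets on reboot)
sudo ip link set dev eth0 mtu 1500
# Confirm the change
ip link show eth0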
I'm trying to create an Ubuntu Server file server that will handle large file transfers (up to 50 GB) over the LAN with Windows clients. We've been using a Windows server on our LAN, but the file transfers will occasionally fail... though that server is used for other services as well.
The files will be up to 50 GB. My thought is to create a VLAN (or a separate physical switch) to ensure maximum bandwidth. The Ubuntu server will be 64-bit with 4 TB of storage in a RAID 5 config.
I don't know if this is a Slackware-related issue, but I have the following problem. I'm running slackware64-current on my system. For my private data I'm using a QNAP NAS (some ARM CPU with Linux kernel 2.6.22), with the file shares provided over NFS. I mount them with mount -t nfs 192.168.0.2:/Public /mnt/qnap. This works fine, no problems. But now, if I try to copy some large files (> 1 GiB) to the NAS share, sometimes the system completely freezes during the copy process. I have to do a hard reset to bring the system back to work.
I am trying to copy a 7.3 GB .iso file to an 8 GB USB stick, and I get the following error when it hits 4.0 GB:
Error while copying "xxxxxx.iso". There was an error copying the file into /media/6262-FDBB. Error splicing file: File too large.
The file is to be used by a Windows user, and I'm just trying to do a simple copy, not a burn to USB or anything fancy. Using 10.04.1 LTS, AMD dual core, all latest patches.
I am new to shell scripting. What I am trying to do is write a shell script which takes an input file and produces output as described below. The output file should have the data up to SOK (marked in red) from every second line, and then the selected data (marked in green) from the 4th line. So the selected data from the 2nd and 4th lines goes into one line of the output file, and then, similarly, the selected data from the 6th and 8th lines goes into the second line of the output file. Input File: