Right, just a quick question about rsnapshot over sshfs and encfs. I've set up an encfs filesystem, and when it is mounted directly on the remote machine:
Code:
touch foo.bar
Code:
cp -al foo.bar foo.car
Works as one would expect it to.
The same is true on the local machine (the EncFS has External IV chaining disabled). However, when the remote dir is sshfs-mounted on my computer here and then decrypted with encfs to a local mount, I can move files to it and they go over the network and get encrypted. However:
Code:
cp -al <file> <file>
No longer works; I get 'not implemented' errors...
I thought that since I don't have External IV chaining this shouldn't be an issue. I've tried without any of the file chaining options, again to no effect. Everything works remotely, or with both mounted locally, but not over sshfs. Is this a quirk of sshfs?
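For what it's worth, the failure can be reproduced without encfs in the picture at all; a minimal test directly on the sshfs mount (assuming it is at ~/mnt) would be:
Code:
touch ~/mnt/foo.bar
ln ~/mnt/foo.bar ~/mnt/foo.car   # 'Function not implemented' here means sshfs
                                 # (or the remote SFTP server) lacks hard links
If ln itself fails there, the missing hard-link support is in the sshfs layer rather than in encfs; older sshfs/OpenSSH combinations simply do not implement link().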
I have an NFS share hosted at a file server for several machines. I set up an encfs encrypted file tree in this. First, I created a directory in the NFS-mounted tree where I wanted the encrypted files to be stored (/home/nfs/phil/private). Second, I created a mount point where I wanted to access those files in the clear view (/home/phil/nfs-phil-private). Third, I mounted encfs with the simple command "encfs /home/nfs/phil/private /phil/nfs-phil-private". During this mounting, it asked me for a pass phrase to encrypt the files with. Fourth, I copied some files into "/phil/nfs-phil-private". I saw that files with cryptic names were created in "/home/nfs/phil/private", along with a file named ".encfs6.xml".
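Condensed, and assuming /phil/nfs-phil-private in the commands refers to the /home/phil/nfs-phil-private mount point created above, the sequence on the first machine was:
Code:
mkdir -p /home/nfs/phil/private          # ciphertext directory in the NFS tree
mkdir -p /home/phil/nfs-phil-private     # mount point for the clear view
encfs /home/nfs/phil/private /home/phil/nfs-phil-private
# encfs prompts for a passphrase and writes .encfs6.xml into the ciphertext dir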
That was on one machine named "lorentz". Then I switched to another machine named "euler". I created the same mount point here (/home/phil/nfs-phil-private). I verified that /home/nfs/phil/private already existed, as did "/home/nfs/phil/private/.encfs6.xml". So I tried the same "encfs /home/nfs/phil/private /phil/nfs-phil-private" command. This time it failed. Here is all the output up to the first prompt:
Code:
15:05:23 (FileUtils.cpp:375) Archive exception: stream error
15:05:23 (FileUtils.cpp:326) Found config file /home/nfs/phil/private/.encfs6.xml, but failed to load
Creating new encrypted volume.
[code]....
The first two lines certainly appear to be some kind of error. I can cat the .encfs6.xml file just fine, so I do have permission to read it. It had not even prompted me for a password yet. Does anyone know what the deal with this is? A possible cause is that the first encfs is version 1.6.1 (Ubuntu 10.10, packaged as 1.6.1-1) and the second encfs is version 1.5.2 (Ubuntu 9.10, packaged as 1.5.2-1).
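If the version-mismatch theory is right, it is at least quick to confirm which version each machine runs (hostnames from above):
Code:
ssh lorentz encfs --version
ssh euler encfs --version
# a .encfs6.xml written by 1.6.x may fail to load in 1.5.x, so upgrading the
# older package would be the first thing to try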
I just did my first rsnapshot backup of my /home/ to an external harddisk. When I am not at my computer for a couple of hours, I always shut it down. Therefore, there are no predictable hours of the day where I know that my computer is running. So, how should I schedule/crontab my rotating rsnapshot backups?
Is anyone using rsnapshot in combination with a schedule which is not based on exact times but rather on the time the computer is running?
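One tool that fits this pattern (a sketch, not something from the post) is anacron, which schedules by elapsed period in days rather than wall-clock time and catches up after the machine has been switched off:
Code:
# /etc/anacrontab entries; fields: period (days), delay (minutes), job id, command
1   15   rsnapshot.daily    /usr/bin/rsnapshot daily
7   30   rsnapshot.weekly   /usr/bin/rsnapshot weekly
Note that anacron's finest granularity is one day, so hourly snapshots would still need cron or a boot-time wrapper.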
I have a server with a postgres database, apache and a custom java application.
I am trying to run rsnapshot to back up the /home, /etc and /var folders.
But I am running into issues with rsnapshot and permissions, specifically these kinds of errors:
Code:
When I look at the permissions on these files with ls -la, I get:
Code:
The files are owned by the root and postgres users. I am using passwordless login to connect to the server as user XYZ. XYZ has root access to the server and to the database.
These files are all over the place, some in /etc and some in /var/lib for instance. How can I best copy these remaining files?
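One common workaround (a sketch, assuming XYZ can be given passwordless sudo for rsync on the server) is to have rsnapshot run the remote rsync under sudo so that root-owned files become readable:
Code:
# in /etc/rsnapshot.conf (fields must be separated by tabs); this appends
# --rsync-path to rsnapshot's default long arguments
rsync_long_args	--delete --numeric-ids --relative --delete-excluded --rsync-path="sudo rsync"
# and on the server, in sudoers:
# XYZ ALL=(root) NOPASSWD: /usr/bin/rsync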
I am trying to run rsnapshot from cron via root's crontab file (crontab -e). If I run rsnapshot from the command line with sudo it works perfectly, however, if I run it from cron:
Code:
* * * * * /usr/bin/rsnapshot hourly >/tmp/crontab.out 2>/tmp/crontab.err
This does not work. The crontab.err file shows:
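Whatever crontab.err contains, the classic culprit when a command works from a shell with sudo but fails under cron is cron's minimal environment: PATH holds only /usr/bin:/bin, so helpers like ssh or rsync may not be found. Setting PATH at the top of the crontab is a cheap test:
Code:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
* * * * * /usr/bin/rsnapshot hourly >/tmp/crontab.out 2>/tmp/crontab.err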
I am backing up my Debian server with rsnapshot, which uses rsync to perform the backup. The backups are located on an external storage device of size 1.4T.
[code]....
I tried to understand what this error message means, and I found that error code 12 is "Error in rsync protocol data stream". I understand that when rsync finds that a file on the target has changed, it sends only the block or blocks that contain the changes, and on the destination rsync creates a new file rather than updating the old one (new inode...). I want to know whether the error I get is due to a full disk or perhaps some other factor.
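A full destination is at least quick to rule out (the mount point below is a placeholder for wherever the 1.4T storage is mounted):
Code:
df -h /mnt/backup    # free space on the backup storage
df -i /mnt/backup    # exhausted inodes can produce similar failures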
rsnapshot is a backup utility written in Perl for making backups of local and remote filesystems. The well-proven rsync is behind it. rsnapshot does not need root user intervention to restore a normal user's data, it does not take much space on your backup server, and it can easily be automated (scheduled) to make life easier: a set-it-up-once-and-forget-it configuration. Basically it takes snapshots of a filesystem (or a part of one) at regular intervals such as hourly, daily, weekly and monthly.
This can be configured easily through a simple text-based configuration file, and the above task can be set up in a few easy steps in a few minutes. The two major tasks are configuring rsnapshot and OpenSSH automatic login. To run the backup automatically, we need to automate the remote login in a secure way, which can be done with the OpenSSH tools. This scenario depicts backing up desktop data (assuming the desktop's IP address is 192.168.0.100) to a backup server. My desktop runs Ubuntu 10.04 and the backup server runs Debian Squeeze. [URL]
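A minimal sketch of those two tasks on the backup server (the IP is from the scenario above; the user name and backup point are illustrative):
Code:
# 1) OpenSSH automatic login
ssh-keygen -t rsa                  # accept the defaults, empty passphrase
ssh-copy-id user@192.168.0.100     # push the public key to the desktop
# 2) rsnapshot: a backup point in /etc/rsnapshot.conf (tab-separated fields)
# backup	user@192.168.0.100:/home/	desktop/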
I have a machine on my network that is a mass storage server, which I will eventually use as a media server (to stream movies, video clips and music to my home theater system). I use Slackware 13 on ALL of my machines.
I am trying to automate the backup of the "/home" folder of my laptop onto the mass storage server. I currently use rsnapshot and it works great, but I would like to automate the whole process, even if I am not home or in front of my machines...
Here's what I imagined (in pseudo code):
1) Poll if the server is active (up)
1.1) If not:
1.1.1) Wake up the server (WOL)
1.1.2) Wait for the server to boot
1.1.3) Confirm the server has made it to the login prompt (normal boot)
1.1.3.1) If not, send an alarm via email
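A shell sketch of that pseudo-code (the address, MAC, mail address and the choice of wakeonlan as the WOL tool are all placeholders/assumptions):
Code:
#!/bin/sh
SERVER=192.168.1.10                 # storage server address (placeholder)
MAC=00:11:22:33:44:55               # its MAC, for the magic packet (placeholder)
up() { ping -c 1 -W 2 "$SERVER" >/dev/null 2>&1; }
if ! up; then
    wakeonlan "$MAC"                # or etherwake, ether-wake, etc.
    sleep 120                       # rough allowance for the server to boot
    if ! up; then
        echo "storage server failed to boot" | mail -s "backup alarm" me@example.com
        exit 1
    fi
fi
rsnapshot daily                     # server is up: run the backup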
I could not find details of what CryptKeeper was doing, so I worked it out myself. This shows how to open and close CryptKeeper folders using encfs from the command line. I hope this helps others.
Ubuntu Karmic 9.10, CryptKeeper 0.9.4-1, encfs 0.5.2-1ubuntu1; this also works in Mint 8. Tom Morton, the author of CryptKeeper, has a site at: [url]
How Gnome Cryptkeeper works with encfs
In CryptKeeper, create a new encrypted folder, for example /home/ian/aaaaaaxxxxTestCryptKeeper:
The directory above is created, together with another hidden one called /home/ian/.aaaaaaxxxxTestCryptKeeper_encfs, which contains one hidden file called .encfs6.xml. As you create additional folders and files in /home/ian/aaaaaaxxxxTestCryptKeeper, additional folders and files with encrypted names (such as 4L9KBI4IeoAKOoZ,IwzVyn2VPGysXt-JCbStUej5Ewnn90) are created in /home/ian/.aaaaaaxxxxTestCryptKeeper_encfs. These mirror any files and folders you create in the encrypted directory, except that their names and contents are totally encrypted.
The above CryptKeeper directory can be created anywhere within the Linux file system, for example on another partition. In each case two directories are created within the parent (in this example /home/ian/): one with the original directory name, the other preceded by a "." and followed by "_encfs".
How to open a directory created with CryptKeeper using encfs.
Provided you copy a directory like .aaaaaaxxxxTestCryptKeeper_encfs and all its contents, it can be opened anywhere using the following command (note that full path names are needed).
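Presumably, with the full paths from the example above, that command is:
Code:
encfs /home/ian/.aaaaaaxxxxTestCryptKeeper_encfs /home/ian/aaaaaaxxxxTestCryptKeeper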
If /home/ian/.aaaaaaxxxxTestCryptKeeper_encfs does not exist, you will be asked if you wish to create it, and you will be asked for a password twice. In this case it will not be in CryptKeeper unless you then import it.
If it is a CryptKeeper folder then it appears in the CryptKeeper file list as opened and can be closed from there. To close it from the command line, type:
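Since the mount is plain FUSE underneath, the standard FUSE unmount command should do it:
Code:
fusermount -u /home/ian/aaaaaaxxxxTestCryptKeeper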
Set up a few machines yesterday to test out some parallel code. Just for fun, I selected the "encrypt user's files" option when setting up Ubuntu (10.10); I had never used the option in years past. Now I'm finding it a pain. E.g., ssh requires me to already be logged in to the machine before it will let me log in w/o a password (e.g., using id_rsa.pub and authorized_keys), presumably because authorized_keys lives inside the encrypted home and is unreadable until the home is mounted.
Similarly, I have no reason to encrypt files on these machines; they're just crunching numbers. Is there an easy way to disable this? Or do I need to delete my original user and make another one (with all the sudo privileges, etc...) w/o an encrypted file system / home directory?
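The blunt but safe route matching that second idea would be a sketch like the following (user names are placeholders; on 10.10 membership of the admin group is what grants sudo):
Code:
sudo adduser crunch                        # new account, no encrypted home
sudo usermod -aG admin crunch              # sudo rights via 10.10's admin group
sudo cp -a /home/olduser/. /home/crunch/   # copy data while the old home is mounted
sudo chown -R crunch:crunch /home/crunch
# afterwards the old account can be removed with deluser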
My OS is openSUSE 11.4. I tried k-encfs, but failed. Installing the .rpm file reported success, but I can't find the program, and running the 'install' script gives me another error message.
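To see what the package actually put on disk (package name as given above), rpm can list its contents:
Code:
rpm -ql k-encfs    # lists every file owned by the installed package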
I am trying to get Encfs working on Ubuntu 10.10 with only partial success. I am using the Ubuntu package, which is version 1.6.1. I am also trying to build 1.7.4 from source on Ubuntu 10.10, which is failing.
First the problem with the Ubuntu package, which I realize may be fixed in 1.7.4. I am mounting a clear directory with the --reverse option to have an encrypted view of this data. This so far works, although I do not know if it really works correctly. I used rsync to copy all the encrypted data to a third directory outside of this first mounting. Then I do a second mounting (without --reverse) using that copy as the source, to make a mountpoint with a clear view of the copied encrypted files. This fails as no files show up at all.
I am doing it this way because my intended first use for Encfs is to copy an encrypted view of a local, physically secured backup directory containing clear data to another remote machine which at times is not physically secure. Transfer is by rsync over ssh, but that is not sufficient security for the remote machine, so the role of Encfs is to make sure the data is never in a clear state on that machine when the machine is not attended. This location is the home of the owner of the company, who is not always at home. The machine is, in theory, at risk of theft when no one is at home (this is the risk we want to address). The owner will personally have the Encfs password and may need access to some of these files, so it would be treated as an encrypted store, and Encfs would be used to view it in the clear by manually mounting it that way (i.e. not with --reverse).
I am doing the test entirely on my desktop at the moment, as described above. I am using a script to carry out the entire setup of my tests, so it is fully reproducible, and that configuration can be incrementally changed as desired. I have a suspicion that certain messages resulting from the setup may indicate the problem. This is from the first mount with --reverse:
Code:
Creating new encrypted volume.
Standard configuration selected.
--reverse specified, not using unique/chained IV
Configuration finished. The filesystem to be created has the following properties:
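One possible explanation for the empty second mount (an assumption, not a confirmed diagnosis): in --reverse mode the .encfs6.xml config lives in the clear source directory, so the rsync'd encrypted copy may not carry a usable config. encfs can be pointed at the original config explicitly:
Code:
# the three paths stand for the directories described in the test setup
ENCFS6_CONFIG=/path/to/clear-source/.encfs6.xml \
    encfs /path/to/encrypted-copy /path/to/clear-mountpoint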
Whenever I mount an encfs directory onto a regular directory, the regular directory disappears. This is the command I use:
Code:
encfs ~/encrypted ~/plain
When I try to access the folder from my Windows computer, I cannot see it. What should I do?
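A likely cause (an assumption, since the sharing setup isn't shown): FUSE mounts are visible only to the mounting user by default, so another process such as a Samba daemon sees nothing there. encfs can pass allow_other through to FUSE:
Code:
# requires user_allow_other to be enabled in /etc/fuse.conf
encfs ~/encrypted ~/plain -- -o allow_other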
I have two computers on my network, both running Ubuntu 10.10. I wish to access encfs-encrypted directories on a remote computer from my local computer. I used NFS to mount the remote encrypted directory onto my local machine, and then I used encfs to decrypt it. But because NFS maps file ownership by numeric UID rather than by user:group names, I have no access to the directory I just mounted. I want my local machine's software to access the files, so an ssh login is probably not a solution, and I would like to avoid using encfs' --public option if possible.
Just a warning / question about Encfs on Slackware current. It doesn't work, due to the upgrade to boost 1.4.2. I ran encfs on an old install of 13.0 to get at my data, but I'd prefer to access it right from current. A big warning: if you try to access your encrypted data on current, it will corrupt your .encfs6.xml file, and I don't know if it is recoverable (I had a backup of mine).
I've been trying to share a folder with samba. This folder is the decrypted version of an encfs encrypted folder. Mounting the decrypted folder on the server is done automatically on login using gnome-encfs. Exposing the folder locally works like a charm. Now where I get stuck is trying to access the samba share from a client (even with smbclient on the server itself). I can see the share with smbclient -L:
Now if a user (a member of the users group) creates a new document in the visible folder, that file will be
Quote:
-rwxrwx--- 1 root users 0 2010-03-02 14:19 new file
While I would like it to be
Quote:
-rwxrwx--- 1 user users 0 2010-03-02 14:19 new file
Mounting encfs without the option uid='0' gives the same result, the only difference being that the owner is the user who mounted encfs instead of root. Copying a file owned by a user other than root ends the same way: for example, having in my home a file like
Quote:
-rwxr-x--- 1 me users 0 2010-03-02 14:30 myfile
and trying to copy it to the encrypted shared folder with
Code:
sudo cp -a -v ~/myfile /somewhere/visible
will give something like
Quote:
cp: failed to preserve ownership for `~/myfile': Operation not permitted
And the copied file in the shared encrypted folder will be, as usual:
Quote:
-rwxrwx--- 1 root users 0 2010-03-02 14:30 myfile
Is there a way to mount encfs in order to preserve ownership?
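encfs does have an option aimed at exactly this multi-user situation; a sketch, assuming the mount is performed as root (the paths echo the example above):
Code:
# --public makes encfs behave like a typical multi-user filesystem: it implies
# allow_other and tries to honour the accessing user's ownership on new files
sudo encfs --public /somewhere/encrypted /somewhere/visible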
Is there a way for my home folder not to be automatically mounted when I log in? And for that matter, is there a way to change the password from my login password to something else?
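Assuming this is Ubuntu's eCryptfs encrypted home (rather than encfs), both are possible; a sketch:
Code:
# auto-mounting at login is triggered by this flag file; removing it disables it
rm ~/.ecryptfs/auto-mount
# the mount passphrase is wrapped with your login password; rewrap it with another
ecryptfs-rewrap-passphrase ~/.ecryptfs/wrapped-passphrase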
There are some encfs folders with private data on the server, and all data is exported via NFS to all the other computers in the house. I can mount the encfs folders on another computer (using the encfs command) to work with the data, but I have never dared to mount them on more than one computer simultaneously, because I fear the encrypted data might get corrupted if more than one computer mounts and accesses it at the same time.
So I want to ask about your experience: is it safe to mount an encfs folder on several computers at the same time? All computers use "hard" and "sync" as NFS mount options to minimize the risk of data loss. But can I access the folders simultaneously, or do I risk corrupting the encfs encryption and losing everything?
I started up my machine this morning and entered my password to encfs as I do each day and was greeted with a message telling me my password was incorrect. I tried several times, checked caps lock but no joy.
The message (which I didn't copy and paste, unfortunately) mentioned SSL, and I remembered that openssl was one of the security patches I applied at the weekend. So I removepkg'd the two openssl packages (v0.9.8m) and then installpkg'd the original ones that came with Slackware 13.0 (v0.9.8k).
I just updated to 10.04 from 9.10 and suddenly gedit is saying I don't have permission to save files in an sshfs-mounted directory. Nothing I've found through Google works.
* I'm mounting using `sshfs james@of1-dev-james:/home/james/projects $HOME/projects`
* `fuse` is listed in /etc/modules
* I'm a member of the `fuse` group
* Using `newgrp fuse` before mounting stops gedit from seeing the mounted directory at all.
* /dev/fuse belongs to `root:fuse` and has `crw-rw-rw-` permission.
* Other apps, e.g. `nano`, have no problem reading/writing this directory.
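A workaround often suggested for this gedit-on-sshfs combination (an assumption here, since the exact cause isn't shown): gedit saves through a temporary file followed by a rename, which sshfs refuses unless told to emulate it:
Code:
sshfs -o idmap=user -o workaround=rename james@of1-dev-james:/home/james/projects $HOME/projects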
Sometimes I edit files on a remote server. I normally mount the remote drive via sshfs and edit a configuration file or two and some text files using gedit. After I upgraded to 10.04, I cannot save the files that I edit anymore. I can rename a file, but the weird thing is that I cannot save it after editing. One of the files I was editing was a crontab.
I use this command to mount sshfs:
Code:
sshfs -o idmap=user user@ip:/home/user/public_html ~/Folder
Then I enter my password. I do this every time I start my computer.
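If the goal is to stop retyping the password, key-based login removes the prompt entirely (a sketch; user@ip as above):
Code:
ssh-keygen -t rsa     # accept the defaults; leave the passphrase empty
ssh-copy-id user@ip   # after this, the sshfs command mounts without prompting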
I've used wake-on-LAN and SSH on the local network for some time now. I also used SSH to mount a filesystem (SSHFS / sftp, same thing, right?) and I could forward X11; loved it. I used both these options for my convenience. So I decided it was time to open up some ports on my router (Linksys WRT320n running DD-WRT) and try to set up a remote connection. This actually worked after some time, so I'm now able to turn on my home computer from the Internet (school, in my case) and then log in to it through SSH. I set this up using ports other than the defaults. Something like this (these are not the actual ports I use, just examples):
port 2112 -> port 9 (for WOL, wake on LAN)
port 2113 -> port 22 (for SSH)
This information might be useful: I set this up using public and private keys. This is necessary for SSHFS to work properly, I think, and it also makes it more secure. And then I found (and had some presumption that this was going to happen) that both SSHFS and X11 were not working. I'd rather not open up more ports on the router for security's sake, so I'm asking for other solutions. And if there really aren't any other solutions, then which ports should I forward? And if forwarding really is necessary, how do I make the client use port 2114 for SSHFS and 2115 for X11, so I can forward those ports to the default ports?
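For what it's worth, both SSHFS and X11 forwarding are carried inside the SSH connection itself, so no additional router ports should be needed; both just have to be pointed at the already-forwarded SSH port (the host name is a placeholder, the port is the example value from above):
Code:
sshfs -p 2113 user@example.com:/home/user ~/mnt   # SFTP rides the SSH channel
ssh -X -p 2113 user@example.com                   # X11 is tunnelled the same way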
I mounted a remote directory using sshfs, and I can't save files using gedit, while saving the same file using vi works. Changing permissions with o-r (to 640) allows gedit to save files OK. Is there a way to change the sshfs connection to make gedit work without chmodding every file? (I use -o uid=`id -u` -o gid=`id -g`, so that remote files seem to be owned by me.)
Code:
$ touch test.txt
[!] test.txt appears
$ vi test.txt
[!] :wq -> saves just FINE
I am mounting a remote directory using sshfs over VPN. If the VPN connection is lost, the directory obviously can't be read. But when I try to "ls" in its parent directory, the command just stalls: no error messages, and ctrl-d, ctrl-c, ctrl-z don't do anything. The command I ran to mount the directory was:
Code:
sshfs -o workaround=rename bt@example.com:/dir1 /dir1
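Two things commonly suggested for hung sshfs mounts (a sketch, reusing the mount from above): a lazy unmount from another terminal to recover, and remount options that notice a dead connection instead of blocking forever:
Code:
fusermount -u -z /dir1    # lazy unmount: detaches even with requests stuck
sshfs -o reconnect -o ServerAliveInterval=15 -o workaround=rename \
    bt@example.com:/dir1 /dir1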