General :: How To Prevent Other Users From Seeing Contents Of Home Directory
Jul 8, 2010
I have a box with multiple users on it and I want everyone to have full access to their own home folders, but not be able to see the contents of /home/ or another user's home folder (i.e. bob has full access to /home/bob but cannot access or even see the contents of /home/john). Right now users can see other users' home folders but can't modify what's inside. How do I prevent them from seeing the contents at all?
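A common approach, sketched below on the assumption that the box uses plain Unix permissions, is to strip group and world access from each home directory so only its owner (and root) can enter or list it:
Code:
# Remove group/world access from every existing home directory
# (run as root; adjust the glob if /home contains non-home entries)
chmod 700 /home/*

# On Debian/Ubuntu, make adduser create future homes with mode 700
# by setting DIR_MODE=0700 in /etc/adduser.conf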
I've created other users on my machine. Now I want to add all of my home directory's contents and settings to the home directories of the other users. How can I do that? Can I do it from the /etc/skel directory?
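/etc/skel is only copied into a home directory at account-creation time, so for users that already exist you have to copy the files yourself. A minimal sketch, where alice and bob are hypothetical target accounts and myuser is the source account:
Code:
# Copy one user's files (including dotfiles, thanks to the trailing /.)
# into each target home, then hand ownership to the target user
for u in alice bob; do
    cp -a /home/myuser/. /home/$u/
    chown -R $u:$u /home/$u
done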
Is there anything special about a home directory before users' home directories are stored there, or is it just as typical as any other "empty" folder? Let me just cut to the chase, but please no lectures about the folly of messing around as root, particularly with directories at root level. I know it's considered stupidity, but I deleted my home directory.
Is there an easy way to restore a working home directory? I tried copying /etc/skel as root, but I'm not sure what a home directory should look like once it has been restored. Besides . & .., there were .screenrc & .xsession in my home directory when I copied /etc/skel. Are these files supposed to be in "/home", in "/home/<user>", or both?
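The skeleton files belong in the user's own home directory, not in /home itself; there is nothing else special about /home. A minimal restore, assuming the account is named myuser (a placeholder here):
Code:
# Recreate the home directory from the skeleton and fix ownership
mkdir -p /home/myuser
cp -a /etc/skel/. /home/myuser/
chown -R myuser:myuser /home/myuser
chmod 700 /home/myuser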
I am just coming to GNOME from KDE, where I used the folder view desktop widget to display the contents of the ~ directory (/home/<user>) rather than the "Desktop" directory itself, as that's where all the stuff I wanted to access from the desktop was. Is there any way I can do this in GNOME with the actual desktop (as opposed to a widget)?
When I booted up this morning, the contents of my Home directory were all showing up on my desktop, and there was no single Home folder. How did this change, and how can I change it back so that the Home folder is on my Desktop with the contents inside of *it*?
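Both of these situations come down to the same XDG user-dirs setting: the desktop shows whatever directory XDG_DESKTOP_DIR points at. A sketch of each direction, assuming the setting lives in ~/.config/user-dirs.dirs as on stock GNOME:
Code:
# In ~/.config/user-dirs.dirs:

# Show the whole home directory on the desktop (the KDE-widget-like setup)
XDG_DESKTOP_DIR="$HOME"

# Or restore the normal behaviour with a separate Desktop folder
XDG_DESKTOP_DIR="$HOME/Desktop"

# Log out and back in (or restart the file manager) for the change to apply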
I installed Fedora 12 x64. Now every time I start Linux, the .gvfs directory in my /home/Razorblade directory is corrupted, so I have to reboot into a Linux live CD, mount my home partition and delete this folder. After that I can log in normally. Symptoms: I am able to log in normally, start a browser, start my mail client, and list the contents of subfolders of /home/Razorblade/... - everything fine. But as soon as I want to list the contents of my /home/Razorblade folder - nothing but this spinning blue thing around the cursor. The command line does nothing after "ll /home/Razorblade", and sometimes even crashes and closes. As root I am able to do "ll /home/Razorblade". And this is what I get:
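Whatever the root listing shows, this is usually a stale FUSE mount rather than real on-disk corruption: ~/.gvfs is a GVFS mount point, and if the gvfs daemon dies uncleanly, anything that stats it (including ls on the parent directory) hangs. A possible fix that avoids the live-CD round trip, sketched on that assumption:
Code:
# Detach the stale GVFS mount point instead of deleting it from a live CD
fusermount -u ~/.gvfs    # or: fusermount -uz ~/.gvfs if it reports busy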
I have an interdependent collection of scripts in my ~/bin directory as well as a developed ~/.vim directory and some other libraries and such in other subdirectories. I've been versioning all of this using git, and have realized that it would be potentially very easy and useful to do development and testing of new and existing scripts, vim plugins, etc. using a cloned repo, and then pull the working code into my actual home directory with a merge.
The easiest way to do this would seem to be to just change & export $HOME, e.g.:
Code:
cd ~/testing
git clone ~ home
export HOME=~/testing/home
cd ~
screen -S testing-home
# start vim, write/revise plugins, edit scripts, etc.
# test revisions
However, since I've never tried this before, I'm concerned that some programs, environment variables, etc., may end up using my actual home directory instead of the exported one. Is this a viable strategy? Are there just a few outliers that I should be careful about?
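One way to reduce the risk of leakage, offered as a sketch rather than a tested recipe, is to run the test session in a child process so the modified HOME never escapes it, and to start it as a login shell so dotfiles are re-read from the cloned home:
Code:
# The parent shell keeps its real HOME; exiting the child restores normality
env HOME="$HOME/testing/home" bash -l

# Caveats: programs that resolve the home directory from /etc/passwd
# (e.g. via getpwuid()) ignore $HOME, as does anything already running
# (ssh-agent, dbus, other daemons)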
I have two partitions on my HD: partition1 with mount point / and partition2 with mount point /home. I had Ubuntu 11.04 32-bit installed and wanted to switch to 64-bit, so I reinstalled Ubuntu and chose the same mount points. Since I reinstalled, I had to create a new user, and it created a new home folder. Now I want to replace my current user's home folder with the previous home folder I had. Would a simple rename work?
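A rename can work, but ownership has to match the new account's UID/GID, which may differ from the old install. A sketch, assuming the account is named myuser and the old folder was preserved as /home/myuser.old:
Code:
# Swap the directories while the user is logged out (run as root)
mv /home/myuser /home/myuser.fresh
mv /home/myuser.old /home/myuser

# Re-own everything to the new account's numeric IDs
chown -R myuser:myuser /home/myuser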
How do I change the placement of the user's home directory when running the "adduser" or "useradd" command? I have tried editing the /etc/default/useradd file with no results.
I want it to be placed in /var/www. I would also like to know how more folders and files can be created in the home directory automatically.
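Two mechanisms cover this, sketched below: useradd's -d/-b flags (note that /etc/default/useradd only affects useradd, not Debian's adduser, which reads /etc/adduser.conf), and /etc/skel, whose contents are copied into every newly created home. The name newuser is a placeholder:
Code:
# Create a user whose home lives under /var/www
useradd -m -d /var/www/newuser newuser

# Or change the default base directory for all future useradd calls
useradd -D -b /var/www

# Anything placed in /etc/skel is copied into new homes automatically
mkdir -p /etc/skel/public_html
touch /etc/skel/README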
I need to add another user besides the one set up during the installation procedure but I also need to limit all users to use only their own /home/user directory.
I have an SFTP server using OpenSSH on a server running Fedora 12. I want to chroot my sftponly users into their home directory but I want to let them have write access to their upload/ folder. Right now users can log in and view & download items, but for some reason I can't get write access to work. Here's some info:
username: testuser
group: sftponly
from /etc/passwd: testuser:x:501:501::/home/testuser/:/bin/false
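With OpenSSH's ChrootDirectory, the chroot target itself must be owned by root and not writable by the user, which is exactly why a user-writable home breaks write access. The usual pattern is a root-owned home with a user-owned subdirectory; a sketch under that assumption:
Code:
# In /etc/ssh/sshd_config:
#   Match Group sftponly
#       ChrootDirectory %h
#       ForceCommand internal-sftp
#       AllowTcpForwarding no

# The chroot target must be root-owned and not group/world-writable
chown root:root /home/testuser
chmod 755 /home/testuser

# Give the user a subdirectory they own for uploads
mkdir -p /home/testuser/upload
chown testuser:sftponly /home/testuser/upload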
I see this question asked a lot, so I figured I'd write this tutorial. It explains how to create an SFTP server which confines (chroots) users to their own home directory and denies them shell access.
As I regularly move between Mac and PC, I thought it would be a good idea to put all my data on an external drive. As Windows 7 and OS X have similar home folder layouts, I just simply put all the folders I need for both on the root of the external drive and changed a few settings so that the Home folder for my user is on the external drive on both Windows and OS X.
Whilst Ubuntu also has a similar structure, I cannot work out how to make my user's home folder live on the external drive. I have done a little research and all I can find is how to put the /home directory on another partition. a) This is not what I'm trying to do, just the folder for my user, and b) this would mean formatting the external drive to an extX format, which just wouldn't work for me.
I am using 9.10 (or will be once the upgrade is complete)
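Pointing a single account's home at the external drive only requires changing that account's home path, sketched here assuming the drive mounts at /media/external and the account is myuser (both placeholders):
Code:
# Run from another account while the target user is logged out;
# add -m if the existing home's contents should be moved as well
sudo usermod -d /media/external/myuser myuser

# Caveat: NTFS/FAT cannot store Unix permissions, so such a drive is
# usually mounted with fixed ownership, e.g. in /etc/fstab:
#   /dev/sdb1  /media/external  ntfs-3g  uid=1000,gid=1000  0  0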
I am using a 10.04 Ubuntu server. I configured the LDAP server, and I configured the client machine to contact the LDAP server for authentication. But if I try to ssh john@localhost, it says: could not chdir to home directory /home/john: no such file or directory.
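LDAP supplies the account but nothing creates the home directory on first login; the usual fix is pam_mkhomedir. A sketch, assuming Ubuntu's stock PAM layout:
Code:
# Append to /etc/pam.d/common-session so a missing home is created at login
session required pam_mkhomedir.so skel=/etc/skel umask=0022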
I have to create a script to identify those users who have un-sanctioned (forbidden) files in their home directory. I tried something like this (it is a first attempt and I need some opinions):
Code:
#!/bin/bash
# Return 0 (success) if user $1 belongs to group $2
user_belongs() {
    if groups "$1" | grep -qw "$2"; then
        return 0
    else
        return 1
    fi
}
.....
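For the surrounding scan, a minimal sketch that walks /home and reports files matching a forbidden pattern; the pattern list and report format are assumptions to be adapted to local policy:
Code:
#!/bin/bash
# Hypothetical list of forbidden patterns
forbidden='*.mp3 *.avi *.torrent'

for dir in /home/*/; do
    user=$(basename "$dir")
    for pat in $forbidden; do
        # Print the owner and path of every match
        find "$dir" -type f -name "$pat" -printf "$user: %p\n"
    done
done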
What is the best way to prevent certain users from running certain commands? For example, everybody can run the at and batch commands, but 3 or 4 particular users should be prevented from running them.
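For at and batch specifically there is a built-in mechanism: when /etc/at.allow does not exist, any user listed in /etc/at.deny is refused. A sketch, with alice and bob as placeholder usernames:
Code:
# Deny at/batch to specific users (checked only when /etc/at.allow is absent)
echo alice >> /etc/at.deny
echo bob   >> /etc/at.deny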
Is it possible to have a user in Ubuntu/Debian that does not have access to synaptic, apt-get, dpkg and cannot even download anything from the Web, but has root privileges otherwise?
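Full root minus specific commands is hard to make watertight (a root-capable user can generally reach the same binaries another way, and can undo firewall rules meant to block downloads), but the commonly cited approach uses sudoers negations; treat the sketch below as illustrative, not as a security boundary:
Code:
# In /etc/sudoers (edit with visudo); 'restricted' is a hypothetical user
restricted ALL=(ALL) ALL, !/usr/bin/apt-get, !/usr/bin/dpkg, !/usr/sbin/synaptic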
I would like to copy the contents of a directory into another. I don't want to copy the directory and all files and directories under it, but just the contents of the directory, just as if it were a regular file. Doing cp -r target dest copies the directory and the entire hierarchy rooted in it. I get an error if I do not include the -r option. (I am calling cp from within a C program.)
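A common trick is to name the directory's contents rather than the directory itself; the trailing /. form also picks up dotfiles, unlike a * glob. Sketched below, with the C call shown as a comment since the question invokes cp from C:
Code:
# Copy the contents of target into dest, not target itself
cp -r target/. dest/

# From C, the same effect comes from passing that argument, e.g.
#   execlp("cp", "cp", "-r", "target/.", "dest/", (char *)NULL);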
Is there a way to copy a directory (retaining the permissions and owners) without copying the contents of the directory?
If there is no such thing... then I need a way to determine if a target path is a file or a directory, and if it is a directory I need to make a new directory elsewhere that has the same name, owner and permissions.
Basically, I'm trying to write a script to copy 200 GB of files over a network to a new server, and I'd like to do it by generating a list with the find command. That way, I can migrate large chunks of the files over the course of a week, and on the day of the migration generate a new list of the files that changed in the last week and copy just those changed files over, minimizing the downtime. However, the list will contain directories that I can't just use the 'cp' command on, because it would copy all the contents of the directory.
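rsync handles both halves of this: it can copy just the directory skeleton (owners and permissions included), and it can consume a find-generated list directly, creating any directories it needs. A sketch, where newserver, /srcdir and the /tmp/last-sync timestamp file are placeholders:
Code:
# Copy only the directory tree, no file contents, preserving owner/perms
rsync -a --include='*/' --exclude='*' /srcdir/ newserver:/dstdir/

# Or feed rsync the list produced by find (leading slashes are stripped,
# so the paths are taken relative to the source argument "/")
find /srcdir -newer /tmp/last-sync -print > filelist.txt
rsync -a --files-from=filelist.txt / newserver:/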
This is the command I'm running: tar tf some.tar somefolder_insidetar. The output is a list with all folders, files, and files inside subdirectories. The only thing I need is to show the contents (folders and files) of the chosen directory, without listing the files in subdirectories, or subdirectories inside subdirectories.
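tar itself has no depth limit for listings, but the output can be filtered; a sketch that keeps only entries exactly one level below the chosen folder (the folder name is taken from the question):
Code:
# List only the immediate children of somefolder_insidetar:
# one path component after the prefix, with an optional trailing slash
tar tf some.tar somefolder_insidetar | grep -E '^somefolder_insidetar/[^/]+/?$'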
I have a secondary disk which holds a /home directory structure from a previous install of Linux. I installed a new version on a new primary drive and mounted this secondary drive as the new /home. Problem is, even though the users have the same names and I can access the home directories for the users, I cannot log in directly to their home directories, as I get the following error:
Code:
login as: [me]
[me]@[machine]'s password:
Last login: Wed Jan 6 18:34:33 2010 from [machine]
Could not chdir to home directory /home/[me]: Permission denied
[[me]@[machine] /]$
Now, since the usernames are correct and the users are in the passwd file with the correct home directory paths, could it be that the user IDs are different, or something else? It's not as though I cannot access the home directories for the users, simply that I cannot log directly into them from a login prompt.
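A numeric-ID mismatch is the usual culprit: accounts with the same names on the new install may have received different UIDs/GIDs than the ones recorded on the preserved /home. A quick check and two possible fixes, sketched with myuser as a placeholder and 1001 standing in for the old UID/GID:
Code:
# Compare the directory's numeric owner with the account's current IDs
ls -ln /home | grep myuser
id myuser

# If they differ, either re-own the directory...
chown -R myuser:myuser /home/myuser

# ...or move the account back to the old numeric IDs
usermod -u 1001 myuser
groupmod -g 1001 myuser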
I hacked together this script to rsync some files over ssh. The --remove-source-files option of rsync seems to remove the files it transfers, which is what I want. However, I also want the directories those files are placed in to be gone as well. The current part of the find command, -exec rmdir -p {} ; tries to remove the parent directory (in this case, /srv/torrents), but fails because it doesn't have the right permissions. What I'd like to do is stop rmdir from traversing above the directory find is run in, or find another solution to get rid of all the empty folders. I've thought of using some kind of loop with find and running rmdir without the -p switch, but I thought it wouldn't work out. Essentially, is there an alternative way to remove all the empty directories under the parent directory?
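find can do the pruning itself, depth-first, without ever touching the parent; a sketch assuming the transfer root is /srv/torrents as in the question:
Code:
# Delete empty directories below /srv/torrents, never /srv/torrents itself
# (-delete implies -depth, so children are removed before their parents)
find /srv/torrents -mindepth 1 -type d -empty -delete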
I am attempting to use the zip command with the '-x' option to exclude a folder, e.g. 'zip upload.zip public_html -x public_html/jquery/*'. However, parts of this folder are still being added to the archive. I made a shell script (saved as 'compress.sh' and run as '. compress.sh') to do the archiving so I could test adding nested wildcards for multiple subfolder levels.
Code:
#!/bin/bash
rm -f upload.zip
zip -r upload.zip public_html -x public_html/jquery
[code]....
Each new line I added here that has the nested wildcards made the archive file size a bit smaller. Adding more /*'s than this didn't affect the file size. Even after all this though, there were still a couple megabytes of files and folders from the 'jquery' directory that were added to the archive.
Here are some examples of files and folders that were created after I unzipped the archive:
public_html/jquery/js/tablesorter/addons/pager/icons [folder]
public_html/jquery/js/tablesorter/addons/pager/.svn/entries [file]
public_html/jquery/js/tablesorter/build/.svn/text-base/js.jar.svn-base [file]
Why is it that despite all the -x lines, the files and folders like these were still being added to the archive? How can I simply recursively exclude the entire public_html/jquery folder from the archive?
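Two details commonly cause this, and the sketch below addresses both: the pattern must be quoted so the shell does not expand the * before zip sees it (an unquoted glob only matches one level deep at expansion time), and it needs the /* suffix because excluding the bare directory name only skips the directory entry itself. zip's own wildcards match across / by default, so one quoted pattern covers the whole subtree:
Code:
#!/bin/bash
rm -f upload.zip
# Quote the pattern so zip, not the shell, interprets the wildcard
zip -r upload.zip public_html -x 'public_html/jquery/*'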
I have, for example, a folder called "MyFolder" and it contains 3 files: MyFile1, MyFile2, MyFile3. The only file that I do NOT want a particular user/group to even see exists is, for example, MyFile2. So, when they do a directory listing on MyFolder, they should only see MyFile1 and MyFile3. How can this be done in Linux? The important thing is that it is not just preventing them from "executing" MyFile2, but preventing them from even knowing that it exists by not including it in a directory listing.

This is a simplified example using one file, but in reality I have lots of files, and some of those that I want to block are also subfolders. It is very important for me to hide the existence of certain files/folders when the user does a directory listing. It's also important that the files stay in their current folder (that is, I can't use a workaround which requires moving all the files into a separate folder and then securing that folder).
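Standard Unix permissions cannot hide individual names inside a readable directory: listing is an all-or-nothing right on the directory itself. What they can offer is a compromise, sketched below, where the directory is traversable but not listable, so only users who already know a name can open anything in it:
Code:
# Execute (traverse) without read: names cannot be listed, but files can
# still be opened by exact path by anyone who already knows the names
chmod 711 MyFolder

# Per-file permissions still guard the "hidden" file itself
chmod 600 MyFolder/MyFile2   # owner-only access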
When I run "ls -al somedir*" (I use the "ll" shortcut, actually), Linux not only list files that match, but also the contents of directories whose name also happens to match.Is there a way to limit "ls" so that it will only show names (files and directories) and ignore the contents of the directories?