I'm planning an NFS share for a small enterprise (25 NFS clients). I need to create a directory structure, but I'll need to set up different permissions (rw/ro) on some directories of the tree. I wonder if it's possible to grant access using group IDs, as that would be ideal for this application. Is it possible? I was thinking that I would need some kind of centralized user info, such as NIS or LDAP. Is that necessary?
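Group-based access does work over NFS, for what it's worth: the server enforces ordinary Unix permissions, but it matches clients by numeric UID/GID, so those numbers must agree on all 25 clients. NIS or LDAP makes that easy, though at this scale a manually synchronized /etc/group works too. A minimal sketch, with assumed paths, subnet and group name:
Code:
# /etc/exports on the server
/srv/share   192.168.1.0/24(rw,sync,no_subtree_check)
# per-directory control via ordinary group permissions on the server:
chgrp engineering /srv/share/projects   # 'engineering' is an assumed group
chmod 2770 /srv/share/projects          # rw for the group, nothing for others
chmod 755  /srv/share/docs              # read-only for everybody else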
I want to make a webserver with multiple users allowed to log in through SFTP to a specific folder, www. Multiple users are added, let's say user1 and user2, all of them belonging to the www-data group. The www directory has owner www-data and group www-data.
I have used chmod -R 775 on the www folder, but after I create a folder test through my SFTP server (using FileZilla), the group of the created directory has only r and x permissions, and I am not able to log in as the second user, user2, and create a directory within www/test due to the lack of w permission for the group.
I also tried using chmod 2775 on the www directory, but without luck. Can somebody explain to me how I can make a newly created directory inherit the root directory's group permissions?
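One approach that may work here, assuming the filesystem supports ACLs: the setgid bit (the 2 in 2775) only makes new entries inherit the www-data group; it is each user's umask that strips the group w bit. A default ACL overrides the umask for group permissions. A minimal sketch (path assumed):
Code:
chmod 2775 /var/www                  # new files/dirs inherit the www-data group
setfacl -R -d -m g::rwx /var/www     # default ACL: new entries are group-writable
Alternatively, if the users connect via OpenSSH's internal-sftp, a ForceCommand internal-sftp -u 0002 line in sshd_config forces a group-friendly umask for SFTP sessions.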
After what feels like weeks of tinkering around trying to get a Samba file server set up, I've finally given up! I have 4 drives and 2 groups:
1) Dev - available to all users in both groups (normal and admin)
2) Misc - available to users in the admin group only
3) Admin - available to users in the admin group only
4) Accounts - available to users in the admin group only
Drives 1 and 2 are working fine, with the correct access rights. Drives 3 and 4 can be browsed by admins only, but no changes can be made at all - files & directories can't be renamed/moved/deleted. What is most confusing is that Drive 2 is set up exactly the same as Drives 3 and 4. The process I went through to get them working:
I tried using Nautilus - nada (under root, no less). Tried using the file browser (nada again). Tried going to "Places" and the directory I wanted - right click, Permissions won't let me change squat. The folders I want to change are shared folders on my network at home, and sometimes I transfer files between computers to different places. Can't do it, though, because of the permissions. Is chmod the answer? If so, how do I do it? For instance, in the terminal I issued the command (as root) chmod 777 movies. I thought this would allow any device in the house to write to this directory, but the permissions didn't change at all. So what do I have to do?
On my Ubuntu machine I simply run Nautilus as root and it allows me to do this. So what's different in Fedora?
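One detail that may explain chmod 777 having no effect: if 'movies' sits on a FAT/NTFS volume or a mounted network share, the Linux permissions are fixed at mount time and chmod is silently ignored. A quick check, with an assumed mount point:
Code:
mount | grep movies                           # note the filesystem type
# for a vfat volume, permissions come from mount options, e.g.:
sudo mount -o remount,umask=000 /mnt/movies   # world-writable at mount time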
I have an NTFS mount where I wish to change the permissions of individual directories. I have mounted many NTFS volumes successfully; mounting is not the issue. The issue is that when mounting, I need to specify 'blanket' permissions, owner, group, etc. I have no idea how to change permissions for individual folders.
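One option, assuming the volumes are mounted with ntfs-3g: its 'permissions' mount option stores real per-file ownership and modes instead of the blanket fmask/dmask, after which chmod and chown work per directory. A sketch with an assumed device and mount point:
Code:
sudo mount -t ntfs-3g -o permissions /dev/sdb1 /mnt/data
sudo chown bob:bob /mnt/data/music   # now applies to this directory only
sudo chmod 750 /mnt/data/music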
I try to use rsync for backing up some directories and I have the following problem: some files have permissions that prevent me from running rsync under my own user ID. So I run it as root using the option "-a", which according to the man page should preserve the permissions, owner and group information:
However, when I run this as root, the directories created in the backup location get user root and group root, while ordinary files keep the original user and group. What am I missing here? How can I get rsync to preserve the user and group for all files, including directories?
Here is a command to illustrate my problem:
Code: sudo rsync -a /home/youruser /tmp
If you try that and terminate with Ctrl-C after a few seconds, there will be a directory /tmp/youruser where the directories contained within are owned by user root, group root.
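A likely explanation: rsync creates each directory first and only applies its ownership, permissions and times once that directory's contents have been transferred, so killing it with Ctrl-C leaves the not-yet-finalized directories as root:root. Letting the same command run to completion (or simply re-running it) should fix them up:
Code:
sudo rsync -a /home/youruser /tmp   # run to completion this time
ls -ld /tmp/youruser                # directory ownership is now preserved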
I recently accidentally deleted the wrong directory of images from a USB memory stick, which I managed to successfully recover using PhotoRec. PhotoRec created a couple of directories (which I asked it to put in Documents), which I was then able to copy back onto the USB stick. My problem now, however, is that I do not need those almost 4 GB of files on my PC, but I cannot delete them since the files are all owned by root, and being somewhat new to Ubuntu I am not sure how to go about changing the permissions and deleting these files.
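A minimal sketch of the usual cleanup, assuming the recovered files sit in PhotoRec's recup_dir.* folders under Documents:
Code:
sudo chown -R $USER:$USER ~/Documents/recup_dir.*   # take ownership back
rm -r ~/Documents/recup_dir.*                       # then delete normally
# or in one step: sudo rm -r ~/Documents/recup_dir.*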
I am running Ubuntu 9.04 and wish to share a folder to be accessed from Windows Vista without logging in. If I set up the share through the Nautilus right-click menu and enable "Guest Account", the share is inaccessible. The folder shows up, but it fails to mount. Vista says that it can see the computer, but not the shared folder.
The folder is
/home/william/shared
The only way I can get it to work is if I change the permissions of the folder /home/william to allow Others to access files.
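That matches how Unix path traversal works: to reach /home/william/shared, the guest account needs execute (traverse) permission on /home/william, though not read. A narrower fix than opening up the whole home directory:
Code:
chmod o+x /home/william              # traversal only; others still can't list the home dir
chmod -R o+rX /home/william/shared   # readable share (add o+w if guests may write)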
I have installed Linux 4 on VMware, and now I am able to copy any file but not able to paste it into any directory. When I check the permissions, there is no write permission on any of the directories, and I am not able to use chmod to change the directory permissions.
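One thing worth checking, as a guess: if even chmod fails everywhere, the filesystem may be mounted read-only. A quick diagnostic:
Code:
mount | grep ' / '            # look for 'ro' in the root filesystem's options
sudo mount -o remount,rw /    # if so, try remounting it read-write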
Is there a way to restrict users who are logged into the shell via SSH/Telnet/SFTP from using the 'cd' command to move into certain directories, without using the chmod command to do it? For instance, restrict logged-in users from accessing the /var/www/ folder but have it still accessible through a web browser. Also, would this defeat the purpose, since they could just wget from it if it's still web-accessible through a browser?
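One approach that avoids chmod, assuming the filesystem supports ACLs: deny a dedicated group all access to /var/www while the web server user keeps its own. And yes, the caveat stands: if the content is served over HTTP, those users can still wget it through a browser; this only blocks shell-level browsing.
Code:
sudo groupadd restricted                    # assumed group for shell users
sudo usermod -aG restricted someuser
sudo setfacl -m g:restricted:--- /var/www   # members get no filesystem access
# the web server account (e.g. www-data) is unaffected and keeps serving pages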
I'm just wondering: I know that umask sets the default permissions for files; however, I want to know if there is any way to set default permissions for newly created directories.
For example, I want my user to create new directories that anyone can access and modify (777), but I want the new files the user creates to be 755 (readable by everyone, writable only by the user).
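For reference, umask already applies to both, but from different bases: new directories start from 777 and new files from 666, so one mask cannot give new files an execute bit. A small illustration:
Code:
umask 000            # dirs: 777 & ~000 = 777, files: 666 & ~000 = 666
mkdir d; touch f
ls -ld d f           # d -> drwxrwxrwx, f -> -rw-rw-rw-
Files can never come out 755 from umask alone, because programs request mode 666 at creation; the execute bit has to be added afterwards with chmod.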
Need help maintaining permissions across multiple directories. I have Ubuntu 8.04 Hardy Heron. O/S installed, updated and running with no problems. Why is it that my administrator user ID doesn't seem to have root permissions to create directories? I am trying to set up hosting for 3 separate websites, and therefore to create 3 separate directories to manage all associated files for the 3 websites. Also, I am attempting to read through the tutorials located at: URL...
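On Ubuntu the administrator account is not root itself; it only gains root rights through sudo, which is probably why the directory creation fails. A sketch with an assumed layout:
Code:
sudo mkdir -p /var/www/site1 /var/www/site2 /var/www/site3
sudo chown -R $USER:www-data /var/www/site1   # repeat for the other two sites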
I've got a small issue: when a Windows user creates a new folder through Windows Explorer (from the menu or by right-clicking), the new folder is only accessible to that particular user. Example: user SABKAR (a member of the HR group) creates a new folder called MarcTestMenu in a shared Samba directory through Windows Explorer:
[Code]....
At this point user MORAMY cannot copy a file or open the directory MarcTestMenu. MORAMY gets a 'not accessible' error message in Windows. If I su to the Samba box and issue this command:
[Code]...
How can I get the correct default permissions when users create directories through Windows?
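One common fix, sketched with assumed values: set masks and a forced group on the share so folders created from Windows come out group-accessible.
Code:
[shared]
   path = /srv/shared
   writable = yes
   force group = hr          ; assumed group name
   create mask = 0664
   directory mask = 2775     ; setgid, so subfolders keep the group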
I have a fileserver running openSUSE 11.2 and Samba services for file access from MS Windows based workstations. My question relates to changing the default permissions on files and directories created from the Windows clients.
Following are extracts from the /etc/samba/smb.conf file:
Even with the above entries, sometimes files and directories created by the Windows clients end up with unexpected permissions.
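If masks alone don't help, note that create mask and directory mask can only remove permission bits; force create mode and force directory mode OR bits in after the mask is applied, which is usually what is wanted here. A hedged excerpt:
Code:
   create mask = 0775
   force create mode = 0664       ; bits added after the mask
   directory mask = 2775
   force directory mode = 0775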
I want to know the use or benefit of the s and t permissions. I have used them but could not understand their uses; please explain with a suitable example. Also tell me how the umask command relates to the s and t flags.
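In brief: s on an executable (setuid/setgid) makes it run with the file owner's or group's identity; s on a directory (setgid) makes new entries inherit the directory's group; t (the sticky bit) on a world-writable directory means only a file's owner can delete it, as on /tmp. umask has no flag for s and t; it only masks the regular rwx bits at creation time, and the special bits are added explicitly with chmod. A few examples:
Code:
ls -l /usr/bin/passwd    # -rwsr-xr-x : setuid root, so it may edit /etc/shadow
chmod g+s /srv/shared    # setgid directory: new files inherit its group
chmod +t /srv/dropbox    # sticky: users can't delete each other's files
ls -ld /tmp              # drwxrwxrwt : the classic sticky-bit example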
I'm trying to rsync files and directories from a Red Hat Linux host (v4.5 & 4.7) to a Windows Server 2003 R2 Standard Edition machine with Cygwin running. I'm executing the rsync command from the Cygwin shell. The transfer involves rsync'ing approximately 1 TB of data from the Linux server to the Windows server. After about 280+ GB of data transfer, the transfer just dies.
There seems to be no particular file or directory that the transfer stops at. I'm able to rsync GBs of data from other Linux hosts to this Cygwin server with no problem; files and directories rsync fine. The network infrastructure is essentially the same regardless of the server being rsync'ed, in that it is GB Ethernet running through Cisco GB switches. There appear to be no glitches or hiccups across the network path.
I've asked the folks at rsync.samba.org if they know of any problems or issues. Their response has been neutral: if the version of rsync that Cygwin has ported is within standards, then there is no rsync reason this problem should happen. I've asked the Cygwin support site if they know of any issues, and they have yet to reply. So, my question is whether the version of rsync ported to Cygwin is standard. If so, is there any reason Cygwin & rsync keep failing like this?
I've asked the local rsync-on-Linux gurus and they can't see any reason this should fail from a Linux perspective. Apparently I am our company's Cygwin knowledge base by default.
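Not a diagnosis, but a possible workaround for a 1 TB transfer that dies midway: make the run resumable, so each retry picks up where the last one stopped (standard rsync flags; host and paths assumed):
Code:
rsync -a --partial --progress --timeout=300 \
    user@linuxhost:/data/ /cygdrive/d/backup/
# re-running the same command after a failure resumes the transfer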
I am in need of Linux help. I am at college and I need this backup/restore script to pass the final part of an assessment. I require a backup script that will not only back up but also restore files to the relevant directories. E.g. users are instructed to store all word-processor files in a directory named wp. So I need to create a backup directory and 3 directories within that, and some files within the 3 directories, and then back them up and restore them. I know I should do this myself, but I've been trying to get/understand info for the last few days and came up with zero.
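A minimal sketch of such a script, with assumed directory and file names (only wp comes from the assignment itself):
Code:
#!/bin/bash
# backup.sh - create a sample tree, back it up, then restore it
BASE=backupdir
mkdir -p "$BASE"/{wp,db,misc}                  # backup dir with 3 directories
touch "$BASE"/wp/letter.txt "$BASE"/db/data.txt "$BASE"/misc/note.txt
tar czf backup.tar.gz "$BASE"                  # back up
rm -r "$BASE"                                  # simulate the loss
tar xzf backup.tar.gz                          # restore to the relevant directories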
I'm having trouble setting up a vsftpd server correctly. What I want to do is allow a number of users to log on (no anonymous user), with each of them taken to their own "top-level directory" from which they cannot escape.
I've got most of this working, but I can't find a way to automatically send each user to *their* working area. The "local_root" directive doesn't quite do what I want, as everybody has to share the same working area (potentially, users could interfere with each other). On the other hand, I don't want each user to work from their home directory, because there are loads of special files there that I don't want users playing with.
To add one extra complication, I'm also running an HTML server on the same machine. One of the directories the HTML server can see is one of the ftp area root directories. (So what I'm trying to do is give one special user the ability to ftp files onto the HTML server. Other users must *NOT* have this ability.)
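One approach that may fit, assuming a reasonably recent vsftpd: user_sub_token lets local_root expand per user, so each account is locked into its own tree without touching home directories (base path assumed):
Code:
# /etc/vsftpd.conf (excerpt)
chroot_local_user=YES
user_sub_token=$USER
local_root=/srv/ftp/$USER    # e.g. /srv/ftp/alice, /srv/ftp/bob
The special user's tree can then be the one directory the HTML server exposes, while everyone else's trees stay outside it.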
I am writing a script; my requirement is that if files of all types are stored in one directory, we need to separate them into different directories based on the file types.
For example, in a directory (anish) there are 5 files of different types: 1 directory, 2 .txt files and 2 .sh files.
My requirement is that the directory is moved to a new directory (dir) given in the script, the 2 .txt files are moved to another new directory (test) given in the script, and the 2 .sh files are moved to another new directory (bash) given in the script; finally, the directory anish should be empty. How is this possible using a bash script?
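A minimal bash sketch of that, using the names given (anish, dir, test, bash):
Code:
#!/bin/bash
src=anish
mkdir -p dir test bash
for f in "$src"/*; do
    if   [ -d "$f" ];         then mv "$f" dir/
    elif [[ "$f" == *.txt ]]; then mv "$f" test/
    elif [[ "$f" == *.sh ]];  then mv "$f" bash/
    fi
done                          # afterwards 'anish' is empty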
I'm running an Ubuntu 9.10 Linux server. I'm trying to find a way to back up the machine while it is running, and from what I see, this eliminates the disk-clone utilities. All of the disk-clone tools I have seen for Linux require that you reboot into a special live CD. So my question is this: what is the best solution for backing up the system while it is running? Also, I don't really care about the OS config too much; I just want to be able to keep my stored files and the programs that I have installed.
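The usual answer for a live system is file-level rather than disk-level backup: rsync the data while running, and record the installed packages so the programs can be reinstalled later. A sketch with assumed paths:
Code:
sudo rsync -a --delete /home /etc /var/www /mnt/backup/
dpkg --get-selections > /mnt/backup/packages.txt
# restore later with: sudo dpkg --set-selections < packages.txt
#                     sudo apt-get dselect-upgrade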
I have had a server running for a very long time using Ubuntu Server 7.10, and I think it's past time that I upgraded. I'll be installing fresh, and I've already backed up /var/www (as well as a home directory with a few files). I've only used this as a Web / SFTP / file server. Might there be any other directories that would be good to back up? I set it up so long ago and have made a few changes along the way.
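Beyond /var/www, the usual suspects on a web/SFTP/file server are the service configs and scheduled jobs; a hedged checklist:
Code:
sudo tar czf extras.tar.gz /etc /var/spool/cron   # apache/ssh/ftp configs, crontabs
crontab -l > my-crontab.txt                       # per-user crontab, if any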
I want to get all the directories from a remote server using FTP. I know how to use mget for files; I would like to know if there is a similar way to get a whole directory, with the files included, obviously.
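Plain ftp's mget doesn't recurse, but two common tools do (host, credentials and paths assumed):
Code:
wget -r ftp://user:password@ftp.example.com/pub/dir/
# or, with lftp:
lftp -u user,password -e 'mirror /pub/dir ./dir; quit' ftp.example.com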
I am setting up an SVN server (svn+ssh) that will be used by students at the university where I work. In the beginning I was considering one single repository, eventually creating directories for each project inside it. It seems to me now that this is not a very secure way of doing things. The directory on the server will have rights 770, and this means that every student can come onto the server and sweep out the whole repository.
Also, mistakenly or not, every student can 'svn delete' the whole repository, which could be a nightmare to recover from. An option might be to create groups, assign users to groups, and then create many repositories, each assigned to a group. But this means that I will have to manage tens or hundreds of repositories, maybe not a very common task. What is an optimal solution for this working environment?
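If you do go the many-repositories route, the per-repository setup is mostly mechanical and easy to script; a sketch for one project group (names assumed):
Code:
groupadd projA
usermod -aG projA student1
svnadmin create /srv/svn/projA
chgrp -R projA /srv/svn/projA
chmod -R g+rw /srv/svn/projA
find /srv/svn/projA -type d -exec chmod g+s {} \;   # new files keep the group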