General :: Run The Script From A Different Directory The Paths Are Not Read?
Nov 29, 2010
I have written a script which reads a text file and takes out absolute and relative paths embedded in it. The script then looks for a string in some text files mentioned in those paths. The problem I am facing is that since these paths are relative to my working directory, if I run the script from a different directory the paths are not read.
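If the paths in the file are relative, one common fix is to resolve them against the script's own location instead of the caller's working directory. A minimal sketch, where pathlist.txt and "somestring" are placeholders:
Code:
#!/bin/bash
# Anchor all relative paths to the directory the script lives in
scriptdir=$(cd "$(dirname "$0")" && pwd)
while IFS= read -r p; do
    case "$p" in
        /*) file="$p" ;;               # absolute path: use as-is
        *)  file="$scriptdir/$p" ;;    # relative path: resolve against script dir
    esac
    grep -H "somestring" "$file"
done < "$scriptdir/pathlist.txt"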
I am using RHEL 4.4. The last time I rebooted my server it generated an error and told me to run the fsck command in repair mode. Running fsck fixed some problems, but after that it produced errors from the gdm and X11 services after showing the login screen and taking a username and password. I can still log in via PuTTY from a remote system, but when I try to make changes, such as creating a directory or file, or editing any file, I get the error "you can not make changes in read only file system".
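If fsck left the disk clean, the root filesystem can usually be flipped back to read-write without another reboot. A hedged sketch:
Code:
mount -o remount,rw /
# if the remount fails, the kernel log should show the error that forced read-only mode
dmesg | tail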
I tried compiling a simple Hello World with gcc but didn't have any luck. I got this message: Code: junk.c:1:19: error: stdio.h: No such file or directory. gcc was installed and configured correctly at one point, but I think I have changed .bash_profile since then. I checked where stdio.h lives. The path is:
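One hedged first check, assuming a standard /usr/include layout, is to confirm the header exists and that nothing exported from .bash_profile shadows the search path:
Code:
ls /usr/include/stdio.h                 # confirm the header is actually present
env | grep -E 'CPATH|C_INCLUDE_PATH|GCC_EXEC_PREFIX'   # stale exports here can break the search
gcc -I/usr/include junk.c -o junk       # workaround that confirms a search-path problem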
I have a CMS that has a brilliant backup option with one flaw: it can only create a full backup in a directory inside the web root, in this case /var/www/site/backups. This is not practical for security, as the resulting tar.gz file contains a full mysql backup as well as other items that the general public shouldn't be downloading. What permissions do I need to set so that the directory /var/www/site/backups cannot be browsed to in a browser but can be read/written by the CMS when a PHP script calls it?
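If the site runs under Apache with AllowOverride enabled (an assumption), web access can be denied at the directory level while the filesystem stays writable for PHP:
Code:
# /var/www/site/backups/.htaccess  (Apache 2.2 syntax)
Order deny,allow
Deny from all
The PHP process then only needs ownership or group write on the directory, e.g. chown www-data /var/www/site/backups (the www-data user is a Debian/Ubuntu assumption) and chmod 770.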
I have created directories in root. I am looking for the chmod command to allow all users read and write permissions to a specific directory. I have done chmod 775 for a file, but I need this for a directory, including permissions on all files and subdirectories.
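The -R flag applies the mode recursively; a sketch with a placeholder path:
Code:
chmod -R 775 /path/to/dir
# or, to keep execute on directories (needed to enter them) without making every file executable:
chmod -R u=rwX,g=rwX,o=rX /path/to/dir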
I'm under Linux. By default, other users can't read anything under my home directory. Let's say my home directory is /home/superman, and I tried to use
chmod +r /home/superman
to let others access files under my home directory, but it does not work.
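Reading files requires search (execute) permission on every directory leading to them, so +r alone is not enough. A sketch:
Code:
# capital X adds execute on directories only, letting others traverse the tree
chmod -R o+rX /home/superman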
I was wondering, what is the difference between directory execute and read permission? Also, how do I recursively remove executable permission within a directory, but only from normal files?
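Directory read lets you list the names inside; directory execute (search) lets you enter it and reach the files. To strip execute from regular files only, a sketch with a placeholder path:
Code:
find /path/to/dir -type f -exec chmod a-x {} +
# or in a single chmod: drop all execute bits, then re-add them to directories only
chmod -R a-x,a+X /path/to/dir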
Does anyone know why files in my /tmp directory cannot be removed with rm, even when logged in as root? Not only that, I can't even chmod or do anything else to files in the /tmp directory; it always gives the warning "read only file system".
I want to populate an array of strings with the files that exist in a directory at file-system mount time. I have an entry point for the file system mount: I can use either the do_mount function or the fill_superblock function, but I am not clear on how to call readdir from there. I need inputs on calling readdir from fill_superblock.
I am new to writing shell scripts, so please bear with me. I am currently trying to write a shell script which will read a directory path as input from the user and traverse the directory tree to find all available audio and video files. I have written as much as I could, but I don't know where I am making a mistake, as some files reported as audio are actually tarballs. On a second note, there are some files which are video but the script shows them as audio, and some video files are completely skipped. I am giving the shell script below so that you can see. I am using two external files as source, which I am attaching.
Code:
#!/bin/bash
# Let's load the extensions that we want to search for
vdExt=$(cat vdExtList)
adExt=$(cat adExtList)
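Extension lists will misclassify mislabeled files; the file utility inspects content instead. A sketch of the traversal using MIME types, which would replace the extension lists entirely:
Code:
#!/bin/bash
# Classify files by content rather than by extension
read -r -p "Directory to scan: " dir
find "$dir" -type f | while IFS= read -r f; do
    case "$(file -b --mime-type "$f")" in
        audio/*) echo "audio: $f" ;;
        video/*) echo "video: $f" ;;
    esac
done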
We currently have an NFS shared directory mounted read-only on our server. This directory contains multiple sub-directories and files, and it being read-only is a requirement. Now we need a directory underneath it to be read-write. Is there a graceful way to make that happen, like a special mount option? Basically the objective is: /u01 is mounted read-only and has 3 directories: dir1, dir2, dir3. dir3 has 2 sub-directories, sub1 and sub2. /u01/dir3/sub2 needs to be read-write, while all the others stay read-only.
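A single NFS mount cannot mix read-only and read-write per subdirectory, but a second mount can be layered on top of the first. A sketch for /etc/fstab, assuming the server also exports sub2 (or the whole tree) read-write:
Code:
server:/u01            /u01            nfs  ro  0 0
server:/u01/dir3/sub2  /u01/dir3/sub2  nfs  rw  0 0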
I have not been on in about a month or two, and I have no idea how to title this thread. I am hoping that someone like tex can help out. Being lazy with Ubuntu seems to have been badong. I had 2 physical machines and 3 VMs. The VMs ran under VirtualBox OSE via a bridge; the OSes are two Ubuntu boxes and one CentOS box. I installed Webmin to make things easy (I thought). However, since setting up Webmin, I have been randomly losing PATHs. I mean, one minute I can run sudo apt-get and the next the whole PATH is gone. I compared my "home" box's bashrc and bash_profile against the other machines, and outside of some aliases for color and the like, nothing stands out. Even if I su - to root I am not getting access to the needed paths. Now, while I could export the correct PATH, I am more concerned with the why of it all.
I would have thought that as long as the group settings on my ssh users were all correct AND the environment had not been changed, all would be good. I can provide more info if someone wants to help me out with this. However, it drove me to a six pack. I have read the man pages, used Google, and checked the logs. The logs, by the way, showed a little hammering on one of the boxes for root access. It wasn't me. However, I don't seem to be able to see a timestamp.
Is there some sort of standard file path convention for installing software that I could follow? For example, I just learnt how to build Nginx from source, but the default binary path set by nginx is "/usr/local/nginx/sbin". I have seen a couple of tutorials which specify the location of the installed binary, and it is very different from those usual default paths. That got me thinking: is there some form of file path convention that I should follow? Is there some kind of list which states where packages from the Debian.org repository usually install to?
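The convention is the Filesystem Hierarchy Standard (FHS): distribution packages install under /usr, locally built software under /usr/local. For nginx the layout can be spelled out at configure time; a sketch with illustrative values:
Code:
./configure --prefix=/usr/local \
            --sbin-path=/usr/local/sbin/nginx \
            --conf-path=/usr/local/etc/nginx/nginx.conf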
I've got a script to recursively create symlinks in my home directory to my settings directory, to keep the files under version control. I would like it to skip files which are already symlinked via a parent directory. That is, if I have these files/directories:
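One approach is a helper that walks up each candidate's ancestors and reports whether any component is already a symlink; the function name here is hypothetical:
Code:
# Succeed if any parent directory of $1 is a symlink
has_symlinked_parent() {
    local p
    p=$(dirname "$1")
    while [ "$p" != "/" ] && [ "$p" != "." ]; do
        [ -L "$p" ] && return 0
        p=$(dirname "$p")
    done
    return 1
}

has_symlinked_parent "$HOME/.config/app/settings" && echo "skip it"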
I'm trying to understand my Ruby folder structure: why my gems are scattered all over and why they aren't recognised as commands. I'll explain how my installation looks first:
/usr/bin/ruby
/usr/bin/ruby1.8
/usr/bin/ruby1.9.1
The first is a soft link to ruby1.9.1, because the "ruby" command didn't work in the terminal. I did the same with "gem". I installed RubyGems by downloading, extracting, and then running setup.rb here (I created the "ruby" folder):
/home/pc/ruby/rubygems-1.7.2/setup.rb
/usr/bin/gem
/usr/bin/gem1.8
/usr/bin/gem1.9.1
I installed a few gems with "sudo gem install":
> gem list
*** LOCAL GEMS ***
compass (0.10.6)
haml (3.0.25)
mustache (0.99.3)
rake (0.8.7)
So far so good? Well, not quite: as it turns out, the command "compass version" doesn't seem to exist. My confusion grows with each folder I look into. The following path doesn't make any sense to me, for example. Why would it be hidden? Why is mustache the only gem inside this folder?
/home/pc/.gem/ruby/1.9.1/cache/mustache-0.99.3.gem
First of all, here is "gem environment", which makes even less sense, because I definitely installed rubygems-1.7.2 as I said in the first paragraph, but here it shows an ancient version, 1.3.7. Why? I installed Ubuntu the day before yesterday.
RubyGems Environment:
- RUBYGEMS VERSION: 1.3.7
- RUBY VERSION: 1.9.2 (2010-08-18 patchlevel 0) [x86_64-linux]
- INSTALLATION DIRECTORY: /var/lib/gems/1.9.1
- RUBY EXECUTABLE: /usr/bin/ruby1.9.1
- EXECUTABLE DIRECTORY: /var/lib/gems/1.9.1/bin
- RUBYGEMS PLATFORMS: .....
And ruby --version returns "ruby 1.8.7".
Also, as it turns out, all gems are installed into this folder (mustache too, even though it is already inside the other folder), just as "gem environment" claims: /var/lib/gems/1.9.1/gems. But none of these gems work; I can't call any of them except rake. So here is where I probably made the mistake: I think I used "apt-get install rake" in addition to "gem install rake", because the command "rake" wasn't recognised and the command prompt suggested it. I may have done so with RubyGems too. I'm new to Linux, and I figured that the command prompt knew how to install this stuff properly. It can't be normal that I have to create symlinks all over, right? In Windows I didn't run into this problem.
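The gems did install; their executables just aren't on the PATH. The EXECUTABLE DIRECTORY line from gem environment is the clue. A sketch for ~/.bashrc, using the directory your own output reported:
Code:
export PATH="$PATH:/var/lib/gems/1.9.1/bin"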
When I asked about filesystems with compression I got a recommendation to try ZFS. It looks worth trying; however, I find the tools that manage ZFS (zfs, zpool) quite overcomplicated: you need to create some volume, then add it, then create a filesystem on it. And finally it suddenly created things in the root directory like /qqq/test, and it uses /var/run/zfs/zfs_socket (strange for a filesystem).
How can I use ZFS (with FUSE) without its complicated volume layers, just as a good filesystem with compression, something like mount -o loop image.zfs /mnt/qqq -t zfs-fuse?
How do I set up ZFS as non-root? FUSE usually means "a user can use it too" (example: ntfs-3g). I expect something like this:
Can ZFS be a more usual FUSE filesystem that I can add to /etc/fstab and that a user can install and use on their own?
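There is no loop-mount shortcut in zfs-fuse, but a file-backed pool gets close. A hedged sketch, run as root, with the image path and pool name as placeholders (zpool wants an absolute path to the file):
Code:
dd if=/dev/zero of=/tmp/image.zfs bs=1M count=1024
zpool create qqq /tmp/image.zfs
zfs set compression=on qqq
zfs set mountpoint=/mnt/qqq qqq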
I want to print, using locate, all the paths that contain the element /bin/, but only one instance of each. If I issue 'locate /bin/' then I get many screens of text with, for example,
/usr/bin/foo1
/usr/bin/foo2
/home/me/bin/foo3
whereas I want to see only
/usr/bin/foo1
/home/me/bin/foo3
That is to say, if /bin/ appears in two lines with the same prefix (in the example above the prefixes would be /usr and /home/me), I only want to print the first line. Can I pipe locate to grep to do this? I mention locate because it does not scan the whole disk.
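awk can deduplicate in one pass by keying on everything before the first /bin/:
Code:
locate /bin/ | awk -F'/bin/' '!seen[$1]++'
# -F'/bin/' makes $1 the prefix, so only the first line per prefix is printed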
How do I find paths on Ubuntu? I have installed Redcar (an IDE written in Ruby, similar to TextMate) and RVM for Ruby. However, I cannot locate where the executables are, so that I can update my .bashrc.
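A hedged starting point; the ~/.rvm location is RVM's usual default rather than something verified here:
Code:
command -v redcar                      # prints the full path if it is on $PATH
ls ~/.rvm/bin                          # RVM normally keeps its wrappers here
find ~ /usr -name redcar 2>/dev/null   # fall back to searching likely trees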
The Linux file system uses file path notation to abstract how data is accessed. Surely there must be some application that converts the path name to an inode, so what is this application's or daemon's name?
Is it possible, and how can I do it? I have two directories, "money" and "assets", and two users, "john" and "jean". I "need" john and jean to share a login called "post". I want John and Jean to use this "post" login to be able to write a file to the folders called "money" and "assets", but I don't want them to be able to "read" files in either directory. I tried chmod 400 and that didn't work: they couldn't write a file to the directory.
What permissions can I assign so that they can write a file but can't read any other files? Remember, there are different directories that they will "write" the files to.
No doubt you can change file and directory permissions on files such that when you upload a file via ftp, it uploads fine but isn't visible to the uploader.
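Directory bits are the lever here: write plus execute (search) on a directory allows creating files, while removing read prevents listing; chmod 400 failed because it granted neither write nor search. A sketch, assuming the directories are owned by an admin account with group post:
Code:
chown admin:post money assets
chmod 730 money assets   # owner rwx; group wx = create files but no listing
# caveat: group members can still open a file if they guess its exact name,
# unless the files themselves deny group read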
I mostly do .Net development in a Windows 7 VirtualBox. I use the host for simple things such as web browsing, Skype, chat, etc., all things that are fantastically available on Ubuntu, which I in many ways prefer. So this has been begging the question for a while: why even use Windows on the host? It seems like a Linux host would use fewer resources (untested) and allow my Windows VMs to run better, while letting me do my non-development work in an interface I prefer.
So, the easiest way to do this: I downloaded Wubi and installed Ubuntu. I installed VirtualBox in it, then tried to add and start my VM, and got this message: Failed to open a session for the virtual machine VS2010. Could not open the medium '/host/Users/George Mauer/Virtualbox VMs/VS2010/C:/Users/George Mauer/.VirtualBox/HardDisks/VS2010.vdi'. VD: error VERR_FILE_NOT_FOUND opening image file '/host/Users/George Mauer/Virtualbox VMs/VS2010/C:/Users/George Mauer/.VirtualBox/HardDisks/VS2010.vdi' (VERR_FILE_NOT_FOUND).
You see what's going on? With Wubi, the Windows drive gets mounted at /host/, but VirtualBox is for some reason appending an absolute Windows path! I would very much like to use the same exact VM file, since it would retain snapshots and I would be able to use it in either Windows or Ubuntu mode. However, even if I try to simply mount the drives into a new VM I get an error: Failed to open the hard disk /host/Users/George Mauer/.VirtualBox/HardDisks/VS2010.vdi. Cannot register the hard disk '/host/Users/George Mauer/.VirtualBox/HardDisks/VS2010.vdi' {guid...} because a hard disk '/host/Users/George Mauer/VirtualBox VMs/VS2010/C:/Users/George Mauer/.VirtualBox/HardDisks/VS2010.vdi' with UUID {guid...} already exists.
This is especially odd since this worked fine with my recently created Android VM, though that might have something to do with the fact that VirtualBox recently changed its default VM storage locations. Any idea how to fix this? My Linux-fu is weak, but I seem to remember something from CS class about symbolic links that might be relevant here.
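The stale path lives in VirtualBox's media registry, which still holds the Windows-side entry. VBoxManage can drop that entry so the disk registers cleanly under its /host/... path; a hedged sketch where the controller name and ports are placeholders:
Code:
VBoxManage list hdds                 # find the UUID of the broken entry
VBoxManage closemedium disk <uuid>   # remove it from the registry
VBoxManage storageattach VS2010 --storagectl "SATA" --port 0 --device 0 \
    --type hdd --medium "/host/Users/George Mauer/.VirtualBox/HardDisks/VS2010.vdi"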
I am using this command: Code: sed -i 's?,$HTTP_USER_AGENT,?,$HTTP_USER_AGENT."\nFile: ".__FILE__."\nLine: ".__LINE__,?g' *.php to modify a line in my PHP files. I want to do this recursively from the directory I am in, but I get this message in response: sed: can't read *.php: No such file or directory. The sed version is 4.1.2. It's important that I only change *.php files, and do so recursively.
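The shell only expands *.php in the current directory; for a recursive pass, let find feed the files to sed:
Code:
find . -type f -name '*.php' -exec sed -i 's?,$HTTP_USER_AGENT,?,$HTTP_USER_AGENT."\nFile: ".__FILE__."\nLine: ".__LINE__,?g' {} +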
I have no ACLs in place yet, but I want to use a user called ldap-auth-user to bind to the LDAP server's directory from the client servers. However, I keep getting the error ldap_bind: Invalid credentials (49). I know the userPassword is correct because I can log into a server using that ID and password through the LDAP directory. I am guessing it has something to do with the way I created the account.
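Error 49 usually points at the bind DN rather than the password, so binding by hand with the full DN is worth a try. A sketch where the DN and base are placeholders for your directory layout:
Code:
ldapsearch -x -H ldap://ldap-server \
    -D "uid=ldap-auth-user,ou=People,dc=example,dc=com" -W \
    -b "dc=example,dc=com" "(uid=ldap-auth-user)"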
Not sure what the problem is, but my home directory or /home partition is acting up so that I cannot see the hidden directories in my home directory.
If I type "ls" I get the display of all my files and directories.
If I type "ls -l" I get the display of files and directories.
If I type "ls -a" or "ls -la", the terminal hangs.
Any thoughts? I have tried creating myself a new account, moving all my files over, and changing the ownerships to the new account. However, the new account now acts the same way.
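Since plain ls works, the hang is probably one specific dotfile that ls -a has to stat. Tracing narrows it down; the ~/.gvfs suspect is an assumption, but stale FUSE mounts are a classic cause:
Code:
strace -f ls -a ~ 2>&1 | tail -20   # the last syscall shows what it blocks on
fusermount -u ~/.gvfs               # if a stale FUSE mount is the culprit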
On RedHat 5 64-bit. I have a group that requires read-only access to the /var directory. I believe someone mentioned SGID and ACLs, and I've been researching this solution, but I wanted to check with you all first to ensure there isn't an easier way. Basically, I just need the folks in this group to be able to read the contents of any file/directory contained within /var.
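Default POSIX ACLs cover both existing and future files; a sketch with varread standing in for the group name (the filesystem must be mounted with ACL support):
Code:
setfacl -R -m g:varread:rX /var      # grant read, plus search on directories
setfacl -R -d -m g:varread:rX /var   # default ACL so new files inherit it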
I have a problem from time to time, and now is such a time. Nautilus is not able to read/enter my own home directory. It can enter/read ANY other directory, but not my own home directory. Killing the Nautilus process doesn't help. Logging out doesn't help. I need to reboot to get Nautilus to read my home directory. Sometimes it suddenly appears after a couple of minutes, but not always. What is taking so long, or causing the hang? What should I do?
I want Samba to keep my Windows attributes exactly as the user set them in Windows. I mean, if a file is read-only on the Windows box and gets copied to the Samba share, Samba should keep it read-only, and the same for the other attributes. But it does not do that now with my configuration: Quote:
[global]
workgroup = DOMAIN
server string = File Server
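Samba only preserves the DOS attribute bits if told to store them. A sketch of the relevant options, assuming the share's filesystem supports extended attributes:
Code:
# add to the share definition (or [global])
store dos attributes = yes   # persist read-only/hidden/system/archive in xattrs
map archive = no
map hidden = no
map system = no
map readonly = no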
I've created a guest user in the group "user". I'd like to limit its read access to its own home directory. However, by navigating through File System > home, it's able to read my home directory. I was under the impression that users were limited to their own home directories. Am I missing something, or is there a group I can assign this guest to that limits its read access to its own home directory? I've read about Pessulus (I use Gnome), but that seems geared toward limiting access to applications, not directories.
Ideally, I'd like to create a group that cannot navigate any files except its own home directory. But it seems that if I try to do that, the guest user cannot execute any applications. I've read all the posts (and other forums) I could find about creating such a limited account, but the chroot jail is beyond my understanding; I get the feeling it's geared toward networks.
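A lighter alternative to a chroot: home directories on many distributions default to world-readable, so tightening yours is enough to keep the guest out while leaving system applications executable. A sketch:
Code:
chmod 750 /home/yourname   # group may still enter; use 700 to exclude everyone
chmod o-rx /home/*         # or tighten every existing home at once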