My current working directory is sub1link, a symlink at /dir2/sub1link pointing to /dir1/sub1. Is there a quick way to either: change directory to the link source's parent (i.e. something similar to cd .., but taking me to /dir1/), or change directory to the link source itself (i.e. switch from /dir2/sub1link/ straight to /dir1/sub1)?
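A hedged sketch, assuming bash and the symlink layout above: cd -P resolves the physical directory instead of the logical (symlinked) path.
Code:
# from inside /dir2/sub1link (a symlink to /dir1/sub1):
cd -P .     # re-enter the physical path, i.e. /dir1/sub1
cd -P ..    # go to the physical parent, i.e. /dir1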
I have a FAT32 SD card with a file on it whose name, viewed in Windows, is a long string of nonsense. Viewed in my Android phone's Linux terminal, ls -a shows nothing in the directory. When I try to delete the parent directory with rm -rf deleteme, it fails with "Directory not empty". When I try to delete/move it in Windows 7, it says the filename would be too long and/or Explorer crashes. Windows disk check doesn't find anything wrong. How can I delete this?
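One hedged approach when the directory entry itself is corrupt: run a filesystem check on the unmounted card with dosfstools, which can often repair or drop the broken entry. The device name /dev/sdb1 is an assumption; check with lsblk or fdisk -l first.
Code:
# identify the SD card's partition first (e.g. lsblk), then unmount and check it
umount /dev/sdb1
fsck.vfat -a /dev/sdb1    # -a: automatically repair what it can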
I am using Fedora 13. When I list the root directory with the command ls -la, I see the parent directory entry '..'. So which is the parent directory of the root directory?
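As an illustrative check (an assumption about what you might want to verify, not from the original post): '/' and '/..' refer to the same directory, which comparing inode numbers shows.
Code:
ls -di / /..
# both entries report the same inode number: the root directory is its own parent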
Create the following directories: parent/child. Navigate to child and create a file named child (this is an executable file in my case; not sure if that makes a difference). I need to create two "link to executable" links in the parent.
I had assumed that this would work:
Code:
ln -sf ./child ../child1
ln -sf ./child ../child2
But that creates a "link to folder" (./child) in the parent directory. If I change it to:
Code:
ln -sf -t.. ./child child1
ln -sf -t.. ./child child2
I get an error: "ln: '../child': cannot overwrite directory".
If I do it from the parent directory (which I cannot do; this is part of a Makefile recipe):
Code:
ln -sf ./child/child ./child1
ln -sf ./child/child ./child2
It works. Note that I cannot alter the names of any directories or files. How do I create the links when the current directory is the child?
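The catch is that a relative symlink target is resolved against the directory containing the link, not the directory you run ln from, so the ./child stored in ../child1 points at the child directory rather than the executable. A minimal sketch of one way around this from inside parent/child, assuming GNU coreutils:
Code:
# run from inside parent/child; the stored target must be relative to parent/
ln -sf child/child ../child1
ln -sf child/child ../child2
# or store an absolute target so there is no ambiguity
ln -sf "$PWD/child" ../child1
Newer GNU ln (coreutils 8.16+) also offers -r/--relative, which computes the relative target for you.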
I'm able to use the following to remove the target directory and, recursively, all of its subdirectories and contents:
Code:
find '/target/directory/' -type d -name '*' -print0 | xargs -0 rm -rf
However, I do not want the target directory itself to be removed. How can I remove just the files in the target directory, its subdirectories, and their contents?
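A hedged sketch of keeping the top-level directory but emptying it, assuming GNU find:
Code:
# delete everything below /target/directory but not the directory itself
find /target/directory -mindepth 1 -delete
# or, sticking with the xargs style of the original command:
find /target/directory -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -rf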
I have a directory called data. I am running a script under the user ID 'robot'. robot writes to the data directory and updates files inside it. The idea is that data is open for both me and robot to update.
So I set up the permissions and owning group like this:
drwxrwxr-x 2 me robot-grp 4096 Jun 11 20:50 data
where both me and robot belong to 'robot-grp'. I changed the permissions and the owning group recursively, like the parent directory.
I regularly upload new files into the data directory using rsync. Unfortunately, newly uploaded files do not inherit the parent directory's permissions as I had hoped. Instead they look like this:
-rw-r--r-- 1 me users 6 Jun 11 20:50 new-file.txt
When robot tries to update new-file.txt, it fails due to lack of file permission.
I'm not sure if setting the umask helps. In any case, the new files do not really follow it.
$ umask -S u=rwx,g=rx,o=rx
I'm often confounded by Unix file permissions. Do I even have the right plan? I'm using Debian lenny.
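A hedged sketch of the usual approach: make the directory setgid so new files inherit the robot-grp group, and have rsync force group-writable permissions on upload. The --chmod values and paths below are assumptions, not from the original setup.
Code:
# let new files in data/ inherit the robot-grp group
chgrp -R robot-grp data
chmod -R g+w data
chmod g+s data
# on upload, force group read/write regardless of the local umask
rsync -av --chmod=Dg+rwxs,Fg+rw ./new-files/ me@server:/path/to/data/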
I am trying to move all the .txt files from multiple directories to one directory with a script, adding the files' parent directories to the file names. It's a little complicated to explain, but I hope the script I have so far explains what I'm trying to do better:
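A minimal sketch of one possible approach, with hypothetical ./src and ./all directories standing in for the real ones:
Code:
#!/bin/bash
# collect every .txt under ./src into ./all, prefixing each file name
# with its parent directories (slashes replaced by underscores)
find ./src -type f -name '*.txt' -print0 | while IFS= read -r -d '' f; do
    rel=${f#./src/}          # path relative to ./src
    flat=${rel//\//_}        # dir1/dir2/note.txt -> dir1_dir2_note.txt
    mv -- "$f" "./all/$flat"
done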
I've recently installed virtualenv + virtualenvwrapper on Linux Mint 10 LXDE. For convenience I've added the standard WORKON_HOME settings to my ~/.profile
Then I've noticed that workon does not work after login, which means the above commands were not run. If I source ~/.profile it works. I'm really not sure what could cause .profile not to be run. I've checked and I don't have .bash_profile or .
When I SSH to a certain Linux host, although my default shell is tcsh, the .cshrc file under my home directory is not sourced at all. I can't understand why this happens because from my understanding, if I'm using the tcsh, the .cshrc should be sourced anyway!
The problem is I can't use $0 as a reference because the script is only sourced, not executed. I also don't want to hardcode the path because the location might change and there will be more copies. Is there an easy way to obtain this information from within the sourced bashrc file? I use GNU bash 2.05b on SuSE Linux 9.
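A hedged sketch using BASH_SOURCE, which holds the path of the file currently being sourced; note it was introduced in bash 3.0, so whether it exists in 2.05b is an open question to verify.
Code:
# inside the sourced bashrc file:
# BASH_SOURCE[0] is the path of this file even when it is sourced, not executed
script_dir=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
echo "this file lives in: $script_dir"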
I've found several posts discussing how to do this with the terminal, but none exactly fits what I am trying to do. And since I'm still very new, I was hoping for some help.
I have a parent directory called "Music". The subdirectories all start with "artist", and some go further, as in "artist/album/cd1". So right now the structure varies in the following ways:
How can I move all the files (or the file types that I choose) to the parent directory "Music"?
(By the way, for anyone who is interested, this is so that I can use an external HD with a PS3 ("PlayStation 3", for anyone who was in my predicament searching the threads).)
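A hedged sketch of one way to flatten the tree, assuming the chosen type is .mp3 and the command is run from the directory that contains Music (both assumptions):
Code:
# move every .mp3 found anywhere under Music/ up into Music/ itself
find Music -mindepth 2 -type f -name '*.mp3' -exec mv -n -- {} Music/ \;
# then remove the now-empty artist/album directories
find Music -mindepth 1 -type d -empty -delete
The -n keeps mv from overwriting anything if two albums happen to contain files with the same name.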
I am running Ubuntu 9.10 desktop edition as a server. I am using an FTP client program to upload some files (index.html, background.png, etc.) and everything is fine with that. Currently all my files are in the /home/myname/ folder. The problem is that whenever I log in with my Ubuntu account in the FTP client program, I can actually see the listing and contents of the very root directory.
In other words, I can see every folder like /bin, /boot, /etc, /root, and so on in the FTP software, and I can download them. I don't want to allow access to the parent (or root) directory. Is there any possible way to set this up?
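One hedged approach, assuming the server is vsftpd (the original post doesn't say which FTP daemon is in use): chroot local users into their home directories so they cannot browse above /home/myname.
Code:
# /etc/vsftpd.conf - jail local users to their home directory
local_enable=YES
write_enable=YES
chroot_local_user=YES
# then restart the daemon, e.g. sudo service vsftpd restart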
I am facing a problem in Windows due to a virus called Newfolder.exe, which creates files with the same name as their parent directory plus an .exe extension, and this happens for every directory in the entire hierarchy on the infected pen drive. The antivirus detects them, but it is painfully slow. So I thought this is a good opportunity to use the concepts of the almighty shell script to remove them, as they follow the same pattern. Say my complete path is
Code:
/home/pkd/fol1/
The virus would have created a file with the complete path
Quote:
/home/pkd/fol1.exe
If fol1 has two more directories, fol11 and fol12, then there would be two more virus-created .exe files at the corresponding paths, following the same pattern.
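A hedged sketch of a cleanup pass: for every directory on the mounted pen drive, delete the sibling .exe file named after it. The mount point /media/pendrive is an assumption.
Code:
#!/bin/bash
# for each directory DIR under the pen drive, remove DIR.exe if it exists
find /media/pendrive -depth -type d -print0 | while IFS= read -r -d '' dir; do
    exe="${dir}.exe"
    [ -f "$exe" ] && rm -v -- "$exe"
done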
I keep getting a segfault in Compiz. I've tried everything! This is from a fresh install without any accelerated drivers (I have ATI).
Quote:
Code:
**Switching to Compiz window management**
/usr/local/bin/compiz-indicator:99: GtkWarning: Can't set a parent on widget which has a parent
  menu.append(kill)
/usr/local/bin/compiz-indicator:100: GtkWarning: Can't set a parent on widget which has a parent
  menu.append(start)
I've mucked through and figured out how to mount a Windows share. I can access the folders I was looking for, but the Windows share was not what I thought it would be. I was looking for the specific shared folder. Instead I got a root-level parent directory that included the folder I wanted, plus a couple of others.
smbclient -L <ipaddress> gives me a parent directory on the root
First question: can I mount a specific folder within a share? Second question: could somebody define "share"? I thought it was the specific shared folder, but that doesn't seem to be the case.
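On the first question, a hedged sketch: mount.cifs accepts a path below the share, so you can mount just the folder you want. The server address, share, and folder names here are examples, not from the original post.
Code:
# mount only the "Projects" folder inside the "Public" share
sudo mount -t cifs //192.168.1.10/Public/Projects /mnt/projects \
    -o username=myuser,password=mypass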
I have directory a and directory b. They are big. b is almost identical to a; "almost" means that 4-5 files differ, and I don't know which they are. I want to copy b over a, but only the files that differ. I'm in bash. (No, I can't simply delete a and replace it with b, because 1) a is version-controlled and 2) a full copy (or a mv) would take too long. I want to copy only the files that differ.)
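A hedged sketch using rsync with checksums, so only files whose contents differ get copied; the --exclude for version-control metadata is an assumption to adjust or drop.
Code:
# dry run first: shows which files would be transferred
rsync -avn --checksum b/ a/
# copy b over a, transferring only files whose content differs
rsync -av --checksum --exclude='.git/' b/ a/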
I'm trying to set up an alias so that when I change to another directory, any directory, it will also display all of its contents like ls -al. Well, that doesn't work. I guess it's an issue with the use of wildcards. Maybe I should define a new, so far unused, name for the alias, like cdl for example. It would be great if someone could help me. I searched through several examples of bash aliases but couldn't find the right solution.
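A hedged sketch: an alias can't easily take the directory argument here, so a small shell function named cdl (the name floated in the post) is the usual route.
Code:
# in ~/.bashrc - change directory, then list everything in it
cdl() {
    cd "$@" && ls -al
}
Then cdl /some/dir changes into the directory and lists it.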
After installing CentOS 5.4 on a Dell OptiPlex 280 desktop, I configured sendmail, squid, and all the extras I needed for a proxy and email server. All was fine until I switched the machine off. I had changed the resolution before and all was working fine. Now I have lost my GUI: the system starts OK but goes blank after "starting udev". I can use webmin to administer the server from another desktop/laptop, but the monitor is totally blank. I tried most configurations in the /etc/X11/xorg.conf file, and even tried to create a new one using X -configure, but still nothing changed. I can't afford to reinstall because of the time it takes to update the box; our broadband connections are very expensive in this end of the world (Zimbabwe) and not so efficient. I have looked at the logs and tried to Google, but the solutions won't work for me. I suspect there could be an issue with the Intel graphics adapter?
I am curious: "Run command as a login shell" is UNTICKED (I think for all new users) under Gnome Terminal -> Menu Bar -> Profiles -> Edit -> Title and Command, BUT .bash_profile is sourced. I thought .bashrc should be sourced instead?
I was doing a tutorial on scripting in bash. I saved my file on the desktop and I cannot seem to get to that file to execute it. Here is what I have been using:
I try cd Desktop, but it says that there is no such directory.
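A hedged guess at the cause: cd Desktop only works if the current directory is the home directory, and directory names are case-sensitive. Using the full path avoids both issues.
Code:
cd ~/Desktop    # works from anywhere, if the folder really is named "Desktop"
ls ~            # check the exact spelling and case of the folder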
So I am using Komodo Edit, and it thinks that my current directory is /home/username1/, yet my files are in /home/username1/workspaces/ruby/project1.
Why won't Komodo recognize that my working directory is that of the current file? The same happens when I use Komodo Edit to launch a command ("ruby [complete path+file name]"): ruby can't find its require dependencies unless I give a complete path, because it too seems to think it is working out of /home/username1/.
When I add an external device, it is automatically mounted by Fedora under /media. Does anyone know if it is possible to change the default mount directory to something else (like /mnt)?
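A hedged workaround for a single known device (rather than changing the automounter's default, which depends on the HAL/udisks version Fedora ships): pin the device to a fixed mount point in /etc/fstab. The label, filesystem type, and paths below are hypothetical.
Code:
# /etc/fstab - mount the USB disk labelled "BACKUP" under /mnt/backup instead of /media
LABEL=BACKUP  /mnt/backup  vfat  noauto,user,rw  0 0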