General :: Check If Two Paths Are Pointing To The Same File
Oct 6, 2010
I've got a script to recursively create symlinks in my home directory to my settings directory, to keep the files under version control. I would like it to skip files which are already symlinked via a parent directory. That is, if I have these files/directories:
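A minimal way to test whether two paths point to the same file is the shell's -ef operator, or comparing device and inode numbers from stat; a sketch (the path names are hypothetical):
Code:
#!/bin/bash
# Hypothetical paths for illustration
a=~/.vimrc
b=~/settings/.vimrc

# -ef is true when both paths resolve to the same device and inode
if [ "$a" -ef "$b" ]; then
    echo "same file"
fi

# Equivalent check with GNU stat: compare device:inode pairs
[ "$(stat -c '%d:%i' "$a")" = "$(stat -c '%d:%i' "$b")" ] && echo "same file"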
Is there some sort of standard file path convention for installing software that I could follow? For example, I just learnt how to build Nginx from source, but the default binary path set by Nginx is "/usr/local/nginx/sbin". I have seen a couple of tutorials in which the specified location of the installed binary is very different from those usual default paths. This got me thinking: is there some form of file path convention that I should follow? Is there some kind of list which states where the packages in the Debian.org repository usually get installed?
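The convention being hinted at here is the Filesystem Hierarchy Standard (FHS), which is what Debian packages follow. As a rough sketch, nginx's configure script lets you override its /usr/local/nginx default with more FHS-style locations (the flags are nginx's own; the particular paths chosen below are just one common layout, not the only correct one):
Code:
# Build nginx with FHS-style paths instead of /usr/local/nginx
./configure \
    --prefix=/usr/local \
    --sbin-path=/usr/local/sbin/nginx \
    --conf-path=/usr/local/etc/nginx/nginx.conf \
    --pid-path=/var/run/nginx.pid
make && sudo make install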
The *.dbf files (dBase III Plus files) have a header (metadata) followed by n fixed-length records. I'd like to make a directory entry (like a symbolic link) that points to the fixed-record area inside a .dbf file. Is that possible in Linux? The request is motivated by accessing a .dbf file from a Firebird SQL database using CREATE TABLE EXTERNAL FILE '/tmp/mydbf.dbf' ( ... ); but this command only works on fixed-length records.
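A symlink cannot point into the middle of a file, so this is only a hedged workaround sketch: copy the record area into a separate file that Firebird can read. HEADER_SIZE is a placeholder; the real header length is stored in the .dbf header itself (bytes 8-9, little-endian).
Code:
# HEADER_SIZE is hypothetical; read the real value from the .dbf header
HEADER_SIZE=353

# Copy only the fixed-length record area into a separate file
dd if=/tmp/mydbf.dbf of=/tmp/records.dat bs=1 skip=$HEADER_SIZE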
The Linux file system uses file path notation to abstract how data is accessed. The path must surely be handed to some application or daemon that converts the path name to an inode, so what is this application's/daemon's name?
Vista Recovery | Windows 7 | GRUB | Extended --> Fedora 12 (ext4)
So I shrank my recovery partition in Windows 7 successfully, booted into my Fedora 12 live CD to run GParted, and moved the partitions so that the free space could go towards Fedora. I did that, but then, to my dismay, I couldn't expand the partition. Next, I woke up this morning and tried to boot into Fedora to run SSH; GRUB loaded, but when I tried to boot Fedora I got the "File system check failed" error, and when I tried Windows 7, it just went to a blank screen with a single "_" in the top left-hand corner.
My test server has been going well for the past 2 months. I have learned a lot from searching the net for how-tos and forums on questions I have. The next task on my to-do list is enabling SSL on my mail server. I have SSL set up with an automatic redirect from HTTP to HTTPS. It is working fine apart from a minor issue. I have 2 domains and several subdomains on the server. Since I enabled SSL, it seems that any of the domain/subdomain links I type in with https:// takes me to my mail server site. How can I set it up so that only one secure link goes to my mail server?
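This behaviour typically means only one SSL virtual host is defined, so every HTTPS request falls through to it. A hedged sketch of the Apache side (domain names, paths and certificates below are hypothetical; name-based SSL vhosts also need SNI support in Apache and the clients):
Code:
# Only this name should reach the mail site over HTTPS
<VirtualHost *:443>
    ServerName mail.example.com
    DocumentRoot /var/www/mail
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/mail.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/mail.example.com.key
</VirtualHost>

# A default catch-all vhost so other HTTPS names don't land on the mail site
<VirtualHost *:443>
    ServerName www.example.com
    DocumentRoot /var/www/default
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/default.crt
    SSLCertificateKeyFile /etc/ssl/private/default.key
</VirtualHost>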
How do I enable/disable a stick pointing device (TrackPoint) in Ubuntu 9.10? How can I configure clicking and scrolling with the stick pointer? (At the moment I can't click with the stick or scroll the page.)
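With the evdev input driver that Ubuntu 9.10 uses, the stick can usually be poked at with xinput; a sketch (the device name "TPPS/2 IBM TrackPoint" is a common example and may differ on your hardware):
Code:
# List input devices to find the stick's exact name or id
xinput list

# Disable / enable the device (evdev's "Device Enabled" property)
xinput set-prop "TPPS/2 IBM TrackPoint" "Device Enabled" 0
xinput set-prop "TPPS/2 IBM TrackPoint" "Device Enabled" 1

# Scroll by holding the middle button while moving the stick
xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation" 1
xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation Button" 2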
I've got an old SuSE box that is an SVN server using Apache for LDAP authentication. The users are not able to log on for some reason, so I am trying to find out where this box points to authenticate. I do not know SuSE or LDAP well enough to know where to look to find this info.
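With Apache's LDAP auth modules, the server being queried is named in an AuthLDAPURL directive, so grepping the Apache configuration tree is a reasonable first step (the config path varies by SuSE version; /etc/apache2 is typical):
Code:
# Find the LDAP server Apache authenticates against
grep -ri 'AuthLDAPURL\|ldap://' /etc/apache2/

# Subversion-specific config often lives in its own include file
grep -ri 'ldap' /etc/apache2/conf.d/ 2>/dev/null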
For some reason, my Slackware 13.0 system has multiple problems. When I do ldd /usr/bin/X | grep libpixman, it shows a libpixman which is NOT in /usr/lib.
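To see which libpixman copies exist and which one the dynamic linker actually prefers, something like this sketch may help:
Code:
# Every libpixman the linker cache knows about
ldconfig -p | grep pixman

# Every copy on disk (can be slow)
find / -name 'libpixman*' 2>/dev/null

# What the X binary actually resolves to
ldd /usr/bin/X | grep libpixman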
If I have two domains, [URL] and [URL], can I point them to the same IP address in DNS? I have already added NameVirtualHost in my Apache. If it is possible, are there any risks or disadvantages?
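Pointing two domains at one IP is normal: on the DNS side you simply give both names an A record with the same address. On the Apache side, name-based virtual hosts then pick the right site by Host header; a sketch with hypothetical domains:
Code:
# One IP serves both domains via name-based vhosts
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
</VirtualHost>

<VirtualHost *:80>
    ServerName example.org
    DocumentRoot /var/www/example.org
</VirtualHost>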
I'm trying to crawl a directory on a website and basically download everything in it. The structure is simple enough (though there are also multiple folders), but there is one thing that makes wget choke up. Both of the links work, but they are both the same thing, so wget will download the same file twice. How can I make wget ignore the first one? But this doesn't seem to actually do anything; it will still download the duplicate URLs.
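Since the duplicate URL pattern was trimmed from the post, this is only a hedged sketch of wget's reject options, which skip URLs matching a pattern (the pattern shown is a guess at the common Apache index "sort" links; --reject-regex needs wget 1.14 or newer):
Code:
# Skip URLs with query strings like ?C=N;O=D, which Apache directory
# indexes emit as sort links duplicating the plain listing
wget --recursive --no-parent \
     --reject-regex '\?C=' \
     http://example.com/files/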
I have a scenario where I first have to check whether the files exist, then that the record count is not zero, and that each file is from the current day (not the previous day); only then should my next task be processed. There are three files in total. Please let me know how to write a script to achieve this.
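A minimal sketch with hypothetical file names, using GNU date's -r option to read a file's modification date:
Code:
#!/bin/bash
# Hypothetical file names
files="/data/a.txt /data/b.txt /data/c.txt"
today=$(date +%F)

for f in $files; do
    # 1) file must exist
    [ -f "$f" ] || { echo "missing: $f"; exit 1; }
    # 2) record count must not be zero
    [ "$(wc -l < "$f")" -gt 0 ] || { echo "empty: $f"; exit 1; }
    # 3) file must be from the current day
    [ "$(date -r "$f" +%F)" = "$today" ] || { echo "stale: $f"; exit 1; }
done

echo "all checks passed; run the next task here"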
I have not been on in about a month or two, and I have no idea how to list this thread; I am hoping that someone like tex can help out. Being lazy with Ubuntu seems to have been badong. OK, I had 2 physical machines and 3 VMs. The VMs ran under VirtualBox OSE via a bridge; the OSes are Ubuntu x2 and one CentOS box. I installed Webmin to make things easy (I thought). However, after setting up Webmin, I have been randomly losing PATHs. I mean, one minute I can run sudo apt-get, and the next the whole PATH is gone. I tried comparing my "home" box's bashrc and bash_profile against the other machines, and outside of some aliases for color and the like, nothing seems to stand out. Even if I su - to root I do not get access to the needed paths. Now, while I could export the correct path, I am more concerned with the why of it all.
I would have thought that as long as the group settings on my SSH users were all correct AND the environment had not been changed, all would be good. I can provide more info if someone wants to help me out with this. However, it drove me to a six pack. I have read the man pages. I have used Google. I have checked the logs. The logs, by the way, showed a little hammering on one of the boxes for root access. It wasn't me. However, I don't seem to be able to see a time stamp.
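On the Ubuntu boxes, failed root-login attempts and their timestamps should be in /var/log/auth.log; a sketch of two quick checks (log paths differ by distro; the CentOS box would use /var/log/secure instead):
Code:
# Failed root logins, with timestamps, on Ubuntu
grep 'Failed password for root' /var/log/auth.log

# Recent logins and reboots, also timestamped
last | head -20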
I am writing a file upload program in Perl where I need to upload a WAV or a GSM file and save it as a GSM file. How can I make sure that the uploaded file is a WAV or GSM sound file and not an executable, a malicious script, or something similar?
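One hedged approach (shown as shell commands, which Perl can call via system or backticks) is to inspect the file's content with file(1): WAV files start with a RIFF/WAVE signature it recognises. Note that raw GSM 06.10 data has no magic number, so file(1) may not identify GSM reliably and that case would need an application-level check:
Code:
# MIME type from content, not from the file name
mime=$(file --mime-type -b /tmp/upload.bin)

case "$mime" in
    audio/x-wav|audio/wav)
        echo "looks like a WAV file" ;;
    *)
        echo "rejecting: $mime" ;;
esac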
I have a site that I log in to to check for updates. It does not have RSS, because users need to authenticate before getting access to the page. Is there a way to write a script that can log in to the page, check whether the HTML has changed, and then send me an email?
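A hedged sketch with curl: the login URL, form field names and email address are all hypothetical and depend on the site. The idea is to keep a session cookie, fetch the page, and compare it against the previous run (the first run has nothing to compare against, so it always mails):
Code:
#!/bin/bash
# Hypothetical login form and page URLs
curl -s -c /tmp/cookies -d 'user=me&pass=secret' https://example.com/login > /dev/null
curl -s -b /tmp/cookies https://example.com/updates > /tmp/page.new

if ! cmp -s /tmp/page.new /tmp/page.old; then
    echo "the updates page changed" | mail -s 'page changed' me@example.com
fi
mv /tmp/page.new /tmp/page.old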
I have a small doubt regarding assembly file compilation. I have two .s files. When I compile the two .s files I get the corresponding .o files, but when I compare the two .o files with the diff command, it reports that the files differ. What commands should I use to understand the difference between the two .o files' contents?
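A raw diff of object files is rarely informative; disassembling both with objdump and diffing the text output is a common approach (file names are hypothetical):
Code:
# Disassemble both object files, then compare the listings
objdump -d a.o > a.dump
objdump -d b.o > b.dump
diff a.dump b.dump

# Section headers and symbol tables can differ too
readelf -a a.o > a.elf
readelf -a b.o > b.elf
diff a.elf b.elf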
Occasionally Ubuntu runs a file system check, and I assume a repair if necessary, at start-up. What do I type into the Terminal if I want to run a file check without waiting for the automatic one? The reason I ask is that my system wouldn't boot last week, and after several attempts to reboot, the automatic file check came into play and corrected whatever was wrong. This process of rebooting my system several times before Ubuntu fixed itself was very time-consuming and frustrating. I dare say there is a command line to trigger this file check.
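Two common ways on Ubuntu of that era (the device name is hypothetical; fsck must not be run on a mounted filesystem, so the second form is for a live CD or recovery shell):
Code:
# Ask Ubuntu to run the check at the next boot
sudo touch /forcefsck
sudo reboot

# Or check a specific, unmounted filesystem directly
sudo fsck -f /dev/sda1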
I want to create one tar.gz file that contains my /home, /etc and /root directories (a creation and integrity-check sketch follows after question d below).
a) The process ended with an 88GB file (which is OK) but with the following message:
Code:
tar: Exiting with failure status due to previous errors.
I have searched a little but I could not find what went wrong.
b) What are the limitations of tar and gzip for backups? Of course, I fully understand that they cannot be used for differential backups (if that is what it is called).
c) Let's say that my backup will be a file of 100GB and I want to see the contents of the .tar.gz. In KDE there is a program called Ark. Can Ark handle such big files? Does it use my hard disk (e.g. /tmp) to uncompress the file in order to show me its contents? It might be the case that the uncompressed contents are much bigger than the space left on the hard disk.
d) How can I do an integrity check once my tar.gz file is created?
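A sketch covering the creation (with errors logged, so the failure from question a can be tracked down) and the integrity checks from question d:
Code:
# Create the archive; keep stderr so the files tar failed on are visible
sudo tar -czf /backup/full.tar.gz /home /etc /root 2> /backup/tar-errors.log

# Integrity check 1: the gzip layer
gzip -t /backup/full.tar.gz

# Integrity check 2: the tar layer (reads every member; discards output)
tar -tzf /backup/full.tar.gz > /dev/null

# Optionally compare archive contents against the live filesystem
sudo tar -dzf /backup/full.tar.gz -C /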
I'm trying to understand my Ruby folder structure: why my gems are scattered all over and why they aren't recognised as commands. I'll explain what my installation looks like first: /usr/bin/ruby /usr/bin/ruby1.8 /usr/bin/ruby1.9.1
The first is a soft link to ruby1.9.1, because the "ruby" command didn't work in the terminal. I did the same with "gem". I installed RubyGems by downloading, extracting and then running setup.rb here (I created the "ruby" folder): /home/pc/ruby/rubygems-1.7.2/setup.rb /usr/bin/gem /usr/bin/gem1.8 /usr/bin/gem1.9.1
I installed a few gems with "sudo gem install":
Code:
> gem list
*** LOCAL GEMS ***
compass (0.10.6)
haml (3.0.25)
mustache (0.99.3)
rake (0.8.7)
So far so good? Well, not quite: as it turns out, the command "compass version" doesn't seem to exist. My confusion grows with each folder I look into. The following path doesn't make any sense to me, for example. Why would it be hidden? And why is mustache the only gem inside this folder? /home/pc/.gem/ruby/1.9.1/cache/mustache-0.99.3.gem
First of all, here is "gem environment", which makes even less sense, because I have definitely installed rubygems-1.7.2, like I told you in the first paragraph, but here it shows an ancient version, 1.3.7. Why? I installed Ubuntu the day before yesterday.
Code:
RubyGems Environment:
- RUBYGEMS VERSION: 1.3.7
- RUBY VERSION: 1.9.2 (2010-08-18 patchlevel 0) [x86_64-linux]
- INSTALLATION DIRECTORY: /var/lib/gems/1.9.1
- RUBY EXECUTABLE: /usr/bin/ruby1.9.1
- EXECUTABLE DIRECTORY: /var/lib/gems/1.9.1/bin
- RUBYGEMS PLATFORMS: .....
And ruby --version returns "ruby 1.8.7"...
Also, as it turns out, all gems are installed into this folder (mustache too, even though it is already inside the other folder), just as "gem environment" claims: /var/lib/gems/1.9.1/gems. But none of these gems work; I can't call any of them, except rake. So here is where I probably made the mistake: I think I used "apt-get install rake" in addition to "gem install rake", because the command "rake" wasn't recognised and the command prompt suggested it. I may have done so with RubyGems too. I'm new to Linux, and I figured that the command prompt knew how to install this stuff properly. It can't be normal that I have to create symlinks all over, right? In Windows I didn't run into this problem.
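One detail in the quoted "gem environment" output points at the likely fix: the EXECUTABLE DIRECTORY is /var/lib/gems/1.9.1/bin, which Debian/Ubuntu do not put on PATH, so gem-provided commands such as compass are never found. A sketch of how to check and fix that (no symlinks needed):
Code:
# Which interpreters are actually being picked up?
which -a ruby gem
ruby --version
gem environment | grep -i 'executable directory'

# Put the gem executable directory on PATH (appends to ~/.bashrc)
echo 'export PATH="$PATH:/var/lib/gems/1.9.1/bin"' >> ~/.bashrc
source ~/.bashrc
compass version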
When I asked about filesystems with compression, I got a recommendation to try ZFS. It looks like it's worth trying; however, I find the tools that manage ZFS (zfs, zpool) quite over-complicated: you need to create some volume, then add it, then create a filesystem on it. And finally it suddenly created things in the root directory, like /qqq/test, and it uses /var/run/zfs/zfs_socket (strange for a filesystem).
How can I use ZFS (with FUSE) without its complicated volume handling, just as a good filesystem with compression, something like mount -o loop image.zfs /mnt/qqq -t zfs-fuse?
How do I set up ZFS as non-root? FUSE usually means "a user can use it too" (example: ntfs-3g). I expect something like this:
Can ZFS be a more usual FUSE filesystem that I can add to /etc/fstab and that a user can install and use on their own?
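The closest ZFS gets to the loop-mount workflow is a file-backed pool; a hedged sketch (pool name and paths are hypothetical; with zfs-fuse this still needs root and the zfs-fuse daemon running, so it doesn't fully answer the non-root question):
Code:
# Create a 1GB file to act as the pool's backing store
dd if=/dev/zero of=/var/tmp/image.zfs bs=1M count=1024

# One pool on that file, mounted where we want instead of /<poolname>
zpool create -m /mnt/qqq qqq /var/tmp/image.zfs

# Turn on compression for everything in the pool
zfs set compression=on qqq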
I want to print, using locate, all the paths that contain the element /bin/, but only one instance for each prefix. If I issue 'locate /bin/' then I get many screens of text with, for example,
/usr/bin/foo1 /usr/bin/foo2 /home/me/bin/foo3 whereas I want to see only /usr/bin/foo1 /home/me/bin/foo3
That is to say, if /bin/ appears in two lines with the same prefix (in the example above the prefixes would be /usr and /home/me), I only want to print the first line. Can I pipe locate to grep to do this? I've mentioned locate because it does not scan the whole disk.
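grep alone can't remember which prefixes it has already seen, but awk can; a sketch that splits each line on the string /bin/ and keeps only the first line per prefix:
Code:
# $1 is everything before the first /bin/ (the prefix);
# seen[] lets a line through only the first time its prefix appears
locate /bin/ | awk -F'/bin/' '!seen[$1]++'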
How do I find paths on Ubuntu? I have installed Redcar (an IDE written in Ruby, similar to TextMate) and RVM for Ruby. However, I cannot locate where the executables are in order to update my .bashrc.
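A few standard ways to hunt executables down; note that RVM normally installs under ~/.rvm and is wired into the shell by sourcing its script rather than by adding each binary to PATH by hand:
Code:
# Where does the shell currently find things?
which -a ruby gem redcar
type rvm

# RVM's usual layout, and the line its installer adds to ~/.bashrc
ls ~/.rvm/bin
echo '[ -s "$HOME/.rvm/scripts/rvm" ] && source "$HOME/.rvm/scripts/rvm"' >> ~/.bashrc

# Fall back to searching by name
locate redcar | grep bin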
I have written a script which reads a text file and pulls out the absolute and relative paths embedded in that text file. The script then looks for a string in some text files mentioned in those paths. The problem I am facing is that, since the relative paths are relative to my working directory, if I run the script from a different directory those paths no longer resolve.
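A common fix is to anchor every relative path to a known base directory instead of to wherever the script happens to be run from; a sketch (the base chosen here is the directory containing the path-list file, which is an assumption about the intended layout, and SEARCH_STRING is a placeholder):
Code:
#!/bin/bash
list=$1                                  # the text file containing paths
base=$(cd "$(dirname "$list")" && pwd)   # its directory, as an absolute path

while read -r p; do
    # Leave absolute paths alone; anchor relative ones to $base
    case "$p" in
        /*) abs=$p ;;
        *)  abs=$base/$p ;;
    esac
    grep -H 'SEARCH_STRING' "$abs"
done < "$list"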
I have a text file with a long list of RPMs. I need to check if each RPM is installed. I'm sure I can cat out this file and run "rpm -qa" against it, but I'm having trouble with the syntax right now...
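rpm -q already exits non-zero for packages that aren't installed, so a read loop is enough (rpms.txt, with one package name per line, is a hypothetical file name):
Code:
while read -r pkg; do
    if rpm -q "$pkg" > /dev/null 2>&1; then
        echo "installed:  $pkg"
    else
        echo "MISSING:    $pkg"
    fi
done < rpms.txt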
I have been using Linux for a bit of time but would still class myself as a noobie. I don't know exactly what I am looking for, but I can best describe it in a scenario. I have started creating bash scripts which loop indefinitely and check for files to process. My problem is: is there any way to check whether a file is complete? If the file is large and is being copied from a different volume, it may still be copying, or it may still be being uploaded by a user over Samba/NFS; if the file is still copying, the process will most likely fail.
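There is no universal "copy finished" flag, but two common heuristics are to see whether any process still has the file open, or whether its size is still growing; a sketch (neither is bulletproof, e.g. NFS writes arrive via kernel threads that lsof may not show):
Code:
f=/incoming/bigfile          # hypothetical file being watched

# Heuristic 1: is any process (smbd, cp, ...) still holding it open?
if lsof -- "$f" > /dev/null 2>&1; then
    echo "still open, skipping"
    exit 0
fi

# Heuristic 2: has the size stopped changing?
s1=$(stat -c %s "$f"); sleep 5; s2=$(stat -c %s "$f")
if [ "$s1" = "$s2" ]; then
    echo "size stable, processing"
fi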
I have a problem during boot on my F11 system; the problem is:
Code:
Checking file systems
/dev/sda7: superblock last time ( etc... )
/dev/sda7: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY (i.e., without -a or -p options)
*** An error occurred during the file system check
*** Dropping you to a shell:
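From that recovery shell, the usual next step is to run the check by hand on the partition the message names (-y answers yes to every repair prompt; if the data matters, imaging the partition first is safer):
Code:
# Repair the filesystem the boot check complained about
fsck -y /dev/sda7

# Then reboot
reboot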
So I want to run a command through ssh but also run an if check in bash to see if a file exists. I know that to run ssh commands you do ssh user@server YOURCOMMAND, but if I need to run an if statement, how would this work?
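The test can run on either side of the connection; two sketches (host and path are hypothetical):
Code:
# Run the test remotely: ssh's exit status is the remote command's status
if ssh user@server '[ -f /path/to/file ]'; then
    echo "file exists on server"
fi

# Or put the whole if statement in the remote shell snippet
ssh user@server 'if [ -f /path/to/file ]; then echo yes; else echo no; fi'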