General :: Compare Directory Structure Without Matching Data In Files
Jul 22, 2010
What is the best and simplest way to compare two directory structures without actually comparing the data in the files? This works fine:
diff -qr dir1 dir2
But it's really slow because it's comparing file contents too. Is there a switch for diff, or another simple CLI tool, that can do this?
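One way (a minimal sketch, assuming bash and that dir1/dir2 are the directories from the question) is to diff just the sorted path listings, so no file contents are ever read:
Code:
# Compare only the tree of names, never the file data.
diff <(cd dir1 && find . | sort) <(cd dir2 && find . | sort)
Lines prefixed with < exist only under dir1, and lines prefixed with > only under dir2.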
I have two files to compare, and I then need to print out the information that matches a certain pattern. I know basic scripting and was heading down the path of merging the two files together, but this is the wrong approach. I would really appreciate a script that can do what is required:
File 1 contains dates, times and IDs:
2010-10-28 10:42 5939697357
2010-10-28 11:56 5919543491
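The post doesn't describe the second file, so this is only a hypothetical sketch: assuming a file2.txt that holds one ID per line, the following awk one-liner prints every line of file1.txt whose third field (the ID) also appears in file2.txt (both file names are illustrative):
Code:
# First pass collects the IDs; second pass prints matching lines from file 1.
awk 'NR==FNR { ids[$1]; next } $3 in ids' file2.txt file1.txt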
I am writing a shell script that finds all files named <myFile> in a directory <dir> or any of its subdirectories, recursively. I also need to take care of symbolic links that may form cycles, to avoid infinite loops. I am not supposed to use the find command for this.
I started writing the code but got stuck. I thought using recursion might be a smart way, but it's not working.
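A minimal sketch of one way to do it (assuming bash 4 for associative arrays and GNU stat; search_dir is an illustrative name): record each directory's device:inode pair, so any directory reached a second time through a symlink cycle is skipped.
Code:
#!/bin/bash
# Usage: ./findfile.sh <dir> <myFile>
shopt -s nullglob dotglob
declare -A visited                 # device:inode keys of directories already seen

search_dir() {
    local dir=$1 target=$2 key entry
    key=$(stat -Lc '%d:%i' -- "$dir") || return
    [[ -n ${visited[$key]} ]] && return    # already visited: symlink cycle, stop
    visited[$key]=1
    for entry in "$dir"/*; do
        if [[ -d $entry ]]; then
            search_dir "$entry" "$target"  # recurse, following directory symlinks safely
        elif [[ ${entry##*/} == "$target" ]]; then
            printf '%s\n' "$entry"         # report a matching file
        fi
    done
}

search_dir "$1" "$2"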
I have some data (separated by semicolons), close to 240 rows, in a text file, temp1; temp2.txt stores 204 rows of data (also semicolon-separated). I want to: sort the data in both files by field 1, i.e. the first data field in every row, then compare the data in both files and print out the rows that are not equal to separate files. I was trying to do this with Excel using VLOOKUP, without a great deal of success; hence, I'm exploring the shell script option.
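Because field 1 opens every row, a plain whole-line sort already orders the rows by it, and comm can then separate the matching rows from the rest (a sketch; the output file names are illustrative):
Code:
sort temp1.txt > temp1.sorted
sort temp2.txt > temp2.sorted
comm -23 temp1.sorted temp2.sorted > only_in_temp1.txt   # rows present only in temp1
comm -13 temp1.sorted temp2.sorted > only_in_temp2.txt   # rows present only in temp2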
How do I download all the files from here: [URL]? I am on FreeBSD 7.0 and I tried wget with the -r switch, but it gives me URLs only. Maybe this is simply not an FTP site, I don't know. How can I download all those files with the same directory structure?
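If it really is an FTP site, wget can mirror it with the directory structure intact (a sketch; the URL is a placeholder, as in the post):
Code:
# -r recurse, -np never ascend above the start path, -nH skip the hostname directory
wget -r -np -nH ftp://example.com/path/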
I have uShare 1.1a setup to talk to my XBox 360. If I share a directory that has no subdirectories, the video files display on the XBox. However, most of my files are in sub-directories on a different partition - I don't really want to copy them to the share, but uShare doesn't seem to recognise any sub-directories or files contained therein.
I have tried setting up symbolic soft links directly to the video files (although this is a pain, it is better than moving the files)...
Code:
ln -s /home/jonftp/TV-Shows/Buffy/Season-1/Buffy-101.avi /home/share/Buffy-101.avi
...but these don't show up on the XBox either.
How can I get uShare to "drill down" the directory structure to list the files or how can I get uShare to follow symbolic links?
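If uShare ignores symlinks, one workaround worth trying (a sketch using the paths from the post) is a bind mount, which makes the other partition's directory appear as an ordinary subdirectory of the share:
Code:
mkdir -p /home/share/TV-Shows
mount --bind /home/jonftp/TV-Shows /home/share/TV-Shows
# To make it permanent, add this line to /etc/fstab:
# /home/jonftp/TV-Shows  /home/share/TV-Shows  none  bind  0  0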
I have a drive with an NTFS partition where all the files were deleted. What I'm looking for is a way to rebuild the directory structure and recover the files. I really, really want the directory structure, as the partition contains 460 GB of data. Normally I would use the tools here: [URL], but I've never dealt with this much data before, and everything there that I've used creates a pretty messy dump.
I have used ntfsundelete before, but only for a few files at a time; I have no idea what would happen if I tried to run it on a partition of that size. I'm comfortable with data recovery, but this amount of data is beyond me. I've run ntfsundelete with no args, and from what I can tell by skimming the pages of output, all the files are fine. The partition has not been written to.
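For what it's worth, ntfsundelete can be run across the whole partition (a sketch; the device name is an assumption), though as far as I know it recovers files flat rather than rebuilding paths, so a tool such as TestDisk may fit the directory-structure requirement better:
Code:
# Recover every file reported 100% recoverable into /recovery (written without paths).
ntfsundelete /dev/sdb1 --undelete --match '*' --percentage 100 --destination /recovery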
I have a file that I am attempting to use to restore a website for a client. The file is a Drupal folder structure where all of the files in it (3000+) have each been individually compressed into their own .gz file.
I have figured out how to uncompress all the files using WinRAR, but it puts them all into a single directory; I need the original folder structure maintained.
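Since gunzip replaces each .gz with its decompressed file in the same directory, running it across the whole tree keeps the folder structure intact (a sketch; the path is illustrative):
Code:
find /path/to/drupal-root -type f -name '*.gz' -exec gunzip {} +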
I have the following file structure that I would like to add to git.
Code:
These are big directories and I don't need them all checked out; I only need the src directory. After I commit the files in /app/src, it must be pushed to a remote site.
If I want to check out only the src directory to work on, do I need to create a special file structure in git? For example, instead of doing git init on the general app directory, should I do git init in every subdirectory?
Is it possible to check out only part of a file structure in git?
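Yes. The usual arrangement is a single repository at the top, not a git init in every subdirectory; sparse checkout (git 1.7+) then limits the working tree to src while commits and pushes still cover the whole repository. A sketch, with <remote-url> as a placeholder:
Code:
git clone <remote-url> app && cd app
git config core.sparseCheckout true
echo "src/" > .git/info/sparse-checkout
git read-tree -mu HEAD      # re-apply the checkout through the sparse filter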
We copied the directories/files from filesys1 to filesys2, and since that initial copy we have added more directories/files to filesys2 from another source. How can we compare filesys1 to filesys2 to make sure all files/directories were copied successfully? I would like a way to check filesys1 against filesys2 that only reports when something that exists in filesys1 is missing from filesys2, without reporting the additional/extra directories/files that have since been copied to filesys2.
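An rsync dry run does exactly this (a sketch; the trailing slashes matter): -n copies nothing, and --ignore-existing suppresses anything already present on filesys2, so the output lists only what filesys2 is missing and never the extras that exist only there:
Code:
rsync -rvn --ignore-existing /filesys1/ /filesys2/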
The clearly outdated Linux course I'm using tells me that the directory structure for building RPMs is in /usr/src/redhat, but on my Red Hat system, /usr/src contains only the debug and kernels folders.
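Newer Red Hat releases moved RPM building into a per-user tree instead of /usr/src/redhat. Assuming the rpmdevtools package is available, a sketch:
Code:
yum install rpmdevtools
rpmdev-setuptree        # creates the per-user build tree
ls ~/rpmbuild           # BUILD  RPMS  SOURCES  SPECS  SRPMS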
I am using RHEL 4.4. The last time I rebooted my server it generated an error and told me to run the fsck command in repair mode. Running fsck fixed some problems, but after that the gdm and X11 services error out after showing the login screen and taking my username and password. I can still log in via PuTTY from a remote system, but when I try to make changes, such as creating a directory or file, or editing any file, it generates the error "you can not make changes in read only file system".
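A common first step, assuming it is the root filesystem that came up read-only, is to remount it read-write from the PuTTY session; if that fails, dmesg usually says why the kernel forced it read-only:
Code:
mount -o remount,rw /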
Amarok is nice, and currently the only thing I've tried that will actually play audio from network shares and not lock up. The only problem is that it doesn't seem to list my library by directory/folder structure, only by album/artist/genre or some variation thereof. Is there anything that does list by folder structure?
I would like to find the command that copies my Eclipse options to another workspace.
Code: ...
It doesn't work, and writing the .metadata/.plugins path by hand could be a source of error. Would it be a better idea to create a complete script?
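A minimal sketch of such a script, assuming the workspace-level preferences live in the usual .metadata/.plugins/org.eclipse.core.runtime/.settings directory (the two workspace paths are illustrative):
Code:
#!/bin/sh
SRC="$HOME/workspace-old/.metadata/.plugins/org.eclipse.core.runtime/.settings"
DEST="$HOME/workspace-new/.metadata/.plugins/org.eclipse.core.runtime/.settings"
mkdir -p "$DEST"
cp -a "$SRC"/. "$DEST"/     # copy every preference file, keeping attributes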
Is there a way to force rsync to not make directories in its destination directory; i.e., to simply dump all of the files from the source directory directly into the destination without recreating any of the folders that the files were originally in? I tried --no-dirs, but that seems to apply only to empty directories.
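I don't know of an rsync switch that flattens a tree, but feeding find's file list into cp achieves it (a sketch assuming GNU cp for -t; note that files sharing a name will overwrite one another):
Code:
find src/ -type f -exec cp -t dest/ {} +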
I will be doing actual development and testing on the same machine as the server. It is a single-user machine in the sense that I will be the only one working on it. There will be multiple hosted languages, specifically PHP and RoR, while possibly expanding later. I'd like the setup to translate well to a production environment. With those three things in mind, there are a couple of things I've had in the back of my mind. Seeing as it's a single-user machine, I haven't been able to decide whether I should be working on things out of my home directory or whether they should be located outside of it. I'm feeling that outside of a user directory would be better, as it would translate better to a production environment, but I'm also not sure if that will come with any permission annoyances or concerns, seeing as I'll be working on the same machine. Hosting multiple languages seems like it may be a bit quirky: with PHP you're generally just dumping the project somewhere in the document root, whereas with something like a Rails app you have the entire project and you only want the public directory in the document root.
I have to set up my development environment under OEL 5.4. The internal directory structure is more or less fixed; the question is where to put the whole application tree in the standard FHS for the development phase. For production, the system will be deployed on a server where data is kept in an Oracle 11g database. Development is done with JDeveloper, which has its own internal application/project directory structure; however, I would like to be independent from this. Unfortunately, after carefully studying the web for the FHS, if I adhere to it I only find ready-made packages to put under /opt, for example; no development aspects are mentioned. Please make some proposal, possibly with respect to best practices.
I have a directory tree like dir1/subdir1/subdir2/etc., and at the lowest level the directories contain all of these jpegs that I need. The problem is that I only need some of them. They're named like this:
pic1.jpg pic1_med.jpg pic1_small.jpg pic2.jpg pic2_med.jpg etc.
I want to just grab the ones without the size suffix and copy them all to another set of folders, while preserving the directory structure. The numbering all starts at 1 for each low level subdirectory, so I think that the directory structure is the only way to not get them mixed up.
I know that cp has a recursive option, -r, but how do I grab just the ones without the underscore? And then how do I preserve the directory structure when I move them over?
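A sketch assuming GNU cp (for --parents), with placeholder paths: run from the top of the source tree, select only the jpegs without an underscore, and let cp recreate each file's directory path under the destination:
Code:
cd /path/to/source
find . -type f -name '*.jpg' ! -name '*_*' -exec cp --parents -t /path/to/dest {} +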
I jumped into a Linux class in college with only 3 weeks left in the course. I thought I would be able to catch on, and, go figure, it didn't exactly happen that way. I was given an assignment to do, and I am so far lost it isn't even funny. I need to create a directory structure, set up file security, create a step-by-step instruction manual on how to copy/delete said files, and create a guide to common Linux commands. How would I create these files in root and share them with the other users? And where can I find a list of common commands and their functions?
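For the sharing part, one common pattern (a sketch; the group and path names are made up) is a group-owned directory with the setgid bit, so files created inside stay accessible to the whole group:
Code:
groupadd coursework
mkdir -p /srv/shared
chgrp coursework /srv/shared
chmod 2775 /srv/shared          # setgid keeps new files in the coursework group
usermod -aG coursework alice    # repeat for each user who needs access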
I'm working with a dual-boot laptop running Ubuntu 10.0/Windows 7 and a Debian 5 VPS, though the OSes shouldn't have much impact on my question.
What I would like to do is create an HTML page that I can upload to my VPS which lists all of the files/folders on my local 2TB hard drive (specifically media such as Movies, Music, TV Shows...). The media obviously will not reside on the server, but I would like to at least have a list which will allow me to select, for instance, a band/artist so that it redirects me to the albums in the directory below.
Ultimately, I'm looking for open directory browsing without actually having the media on my server. I have been attempting to create something to this effect using lynx; however, I'm not sure if it can be done with that command, or if it's even possible for that matter.
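The tree utility can generate exactly this kind of static listing (a sketch; the media path and page title are illustrative): run it locally against the drive, then upload the resulting file to the VPS:
Code:
# -H emits HTML with the given base href, -T sets the page title, -o names the output file.
tree -H . -T "Media Library" -o index.html /path/to/media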
I have a requirement to list files using the find command. My folder contains the list of files below, without extensions. I need to exclude only ABC.123.* type files and list the others. Even though some file names containing MNO also contain this pattern, I should not exclude them. And even if a file ends with .txt or .doc it should not be excluded; that is, ABC.123.1234.txt should not be excluded. But I am not getting what is required. Can anyone please let me know if I am doing anything wrong anywhere? As per my requirement, I cannot use grep or the -regex attributes of the find command.
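A sketch using only -name tests (no grep, no -regex): a file is excluded only when it matches ABC.123.* and does not end in .txt or .doc. Because -name globs match the whole filename, names that merely contain the pattern, such as the MNO files, are kept automatically:
Code:
find . -type f ! \( -name 'ABC.123.*' ! -name '*.txt' ! -name '*.doc' \)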