Is there a way to compare an array in a while condition?
I have one array that contains the results of some search, and once the script has found all the items it should stop, so my idea is to have a while loop à la:
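A count-based condition is one way to do it. A minimal sketch, with hypothetical items and found arrays standing in for the real search:

Code:
#!/bin/bash
items=(alpha beta gamma)   # hypothetical search targets
found=()

# keep looping until every item has been found
while [ "${#found[@]}" -lt "${#items[@]}" ]; do
    # ... the real search step goes here and appends hits to found ...
    found+=("${items[${#found[@]}]}")   # stand-in so this sketch terminates
done
echo "all ${#items[@]} items found"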
I am fairly new to Linux and need some help comparing more than two files. I am trying to come up with something that would compare at least 10+ different files to a master file and give me an output of what is missing.
Example: compare a.txt, b.txt, c.txt, d.txt each against the master.txt file, then output the missing text for each file into a new file.
I came across the comm and diff commands; am I looking in the right place, or is there a much easier way of doing this?
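comm is a good fit here. A sketch that, for each file, writes out the master.txt lines missing from that file (comm requires sorted input; the file names follow the post):

Code:
for f in a.txt b.txt c.txt d.txt; do
    # -23 suppresses lines unique to $f and lines common to both,
    # leaving only the master.txt lines missing from $f
    comm -23 <(sort master.txt) <(sort "$f") > "missing_from_$f"
done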
Is there a way, besides writing a Perl program, to read each line one by one in file A and tell whether that line also exists in file B? Can this be done via a shell script?
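It can. A sketch of a plain shell loop around grep (fileA and fileB are placeholder names):

Code:
while IFS= read -r line; do
    if grep -Fxq -- "$line" fileB; then   # -F fixed string, -x whole line, -q quiet
        echo "in both: $line"
    else
        echo "only in fileA: $line"
    fi
done < fileA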
1. Lines in both file 1 and file 2 > output = file 3
2. In file 1 but not in file 2 > output = file 4
3. In file 2 but not in file 1 > output = file 5
The sdiff command marks its output with symbols (>, <, |, etc.), so that output file is not clean and ready to print. I want to print the output files directly. Also, where do I write awk programs and how do I run them?
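All three outputs fall out of comm directly, since comm's three columns are exactly these three sets (both inputs must be sorted first):

Code:
sort file1 > file1.s
sort file2 > file2.s
comm -12 file1.s file2.s > file3   # lines in both
comm -23 file1.s file2.s > file4   # in file1 only
comm -13 file1.s file2.s > file5   # in file2 only

As for awk: a program can be given inline (awk '{print $1}' input) or saved in a file and run with awk -f program.awk input.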
I'm trying to compare two files and I only want to display the user names that are in the first file and not the second.
So I have one file named final.txt (which contains every user name, and only the user names, in a list with no other information).
Then I have another file, Over1.txt, which only contains certain users that have different permissions. This file is also set up differently, with the user name followed by some information about that user.
I need a way to compare final.txt to Over1.txt so that I only display the names that are in final.txt but not Over1.txt.
I've tried using diff and comm but just can't seem to get it to work correctly. I'm not sure if I'm missing an option or what.
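One way, assuming the user name is the first whitespace-separated field of each Over1.txt line (the post doesn't show the exact layout), is to strip Over1.txt down to bare names first and then let comm do the subtraction:

Code:
comm -23 <(sort final.txt) <(awk '{print $1}' Over1.txt | sort)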
I have two text files I want to compare the differences between, but I don't want all of them; there are only about 30 lines of relevant text I want to compare.
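If the relevant text sits at a known line range, sed can carve it out of both files before diffing. A sketch, with the range 100-130 purely as a placeholder:

Code:
diff <(sed -n '100,130p' file1.txt) <(sed -n '100,130p' file2.txt)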
I've got an interesting challenge for the shell scripting wizards here. I've got a MySQL dump of three files from my Amarok database, with the intention of copying some files (cover art) to my media server so that I can keep the server as the server and not rely on my local machine.
Step 1: Identify any cover art files on my local machine.
I did this with:
Code:
mysql -u amarok -p amarok -e "SELECT * FROM images WHERE path like '%.kde%'" > cover_art.txt

The output is a list of image IDs and their paths.
What I have here now is the ENTIRE album list in my collection -- and something to compare the IDs in Step 1 against. I'm going to stop here and will update the thread as I get past this stumbling block. "ID" in cover_art.txt = "image" in albums.txt... straightforward enough, right?
So the question is this: how do I create a simple shell script that will loop through the IDs in cover_art.txt (i.e. characters 0-4 -- it will always be a 4-digit ID) and then search for that ID in the Albums.txt file?
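A sketch of that loop, taking the first four characters of each cover_art.txt line as the ID and grepping for it in Albums.txt (file names follow the post):

Code:
#!/bin/bash
while IFS= read -r line; do
    id=${line:0:4}                      # characters 0-4: the 4-digit ID
    if grep -q -- "$id" Albums.txt; then
        echo "$id: match in Albums.txt"
    else
        echo "$id: no match"
    fi
done < cover_art.txt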
Code:
 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
3861 user      20   0  904m 128m  33m S  0.7  6.4  1:11.52  xulrunner-bin
1323 user      20   0 1555m  95m  31m S 13.5  4.8  4:06.87  gnome-shell
3494 user      20   0 1028m  50m  21m S 12.8  2.5  1:43.32  evolution
I'm just wondering what the difference is between RES, SHR, and VIRT.
1) VIRT always seems to be the highest. Is this using the paging file system (virtual memory on the hard disk, i.e. swap)?
2) Is RES the actual physical RAM in use?
3) Is SHR memory shared with other processes?
4) Just a final question: as I am running on an HP Mini 210, memory and CPU are resources I don't have an abundance of. If I were to compare, for example, two different browsers, i.e. Firefox and Midori, what should I benchmark between the two to find which one uses fewer resources?
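For a quick comparison, ps can report the same figures per process (RSS corresponds to RES, VSZ to VIRT); midori is assumed here as the second browser's process name:

Code:
ps -C firefox -o rss=,vsz=,comm=   # RSS ~ RES (physical), VSZ ~ VIRT
ps -C midori  -o rss=,vsz=,comm=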
I'm trying to write a script that takes two arguments: the first argument is a number, and the second is a filename. The shell script should indicate whether the file's size is BIGGER or SMALLER than the number provided. This is what I have so far; am I on the right track? I'm hoping it's just a problem with my if command:
size=$(stat -c %s "$2")      # file size in bytes (GNU stat)
if [ "$size" -gt "$1" ]; then   # -gt compares numbers; the original -h tests for a symlink
    echo "$2 is BIGGER than $1"
else
    echo "$2 is SMALLER than $1"
fi
I have recently backed up all my documents, photos, etc. to an external hard drive. What is the best way to check that everything has copied? I have tried diff, but it was not very clear.
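One option is an rsync checksum dry-run: -c re-reads and compares file contents, and -n changes nothing, so anything it lists either differs or is missing (the paths here are placeholders):

Code:
rsync -avnc /path/to/originals/ /path/to/external/backup/

A rougher alternative is diff -rq originals/ backup/, which just names differing or missing files.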
This is a really odd bug I can't seem to figure out. Basically, commands like ls can see all the files in the current directory, but when I go to execute a file it gives errors like "file not found", even when it most obviously exists. If you look at my command history in the screenshot, you can see I can ls into a directory and see its contents. When I try to run the file, I get the "no such file or directory" error.
However, if I type simply 'vm', I can't use tab completion to complete the directory name; my third command is me typing 'vm' and hitting Tab twice, and it lists a bunch of VMware-specific tools instead of the subdirectory name. I can then ls and see my current directory contents, and it lists only the single subdirectory. I then tried using the full path from root to run the file, still to no avail. If anyone has any insight...
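A few diagnostics worth running (./program stands in for the real file name): "no such file or directory" on an executable that plainly exists often means the interpreter in its shebang line, or the 32-bit loader for a 32-bit binary on a 64-bit system, is what's actually missing:

Code:
file ./program        # script or binary? 32-bit or 64-bit?
head -1 ./program     # if a script, does the shebang path exist?
ldd ./program         # for a binary, are its libraries resolvable?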
I want to compare two file names, for instance: I've got the packages foo1.tgz and foo2.tgz, and I want a bash script that detects that foo2 is newer than foo1 and deletes foo1. Can this be done for managing a collection of Slackware packages?
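A sketch using GNU sort's version ordering (-V), with the file names from the post; extending it to a whole directory of packages would mean grouping files by package name first:

Code:
newest=$(printf '%s\n' foo1.tgz foo2.tgz | sort -V | tail -n 1)
for f in foo1.tgz foo2.tgz; do
    [ "$f" = "$newest" ] || rm -- "$f"   # keep only the newest version
done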
I am currently writing some convenience methods for my terminal in my bash_profile and am not sure whether what I am writing is "the best way". I figure a good way to verify whether what I'm doing is right would be to find the source code of more established programs and see how they do it. My question then is: where can I find this code on my Mac? An example: with MacPorts installed, where is the source code that opens the port interactive console when I type nothing but "port" in my shell? (I added Linux in the title even though I am on a Mac because I assume the answer would be the same for both.)
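For the MacPorts example specifically, port is itself a script, so its source is readable right where the command lives:

Code:
which port             # typically /opt/local/bin/port
file "$(which port)"   # confirms it is a script rather than a compiled binary
less "$(which port)"   # read the source directly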
I have a file with wildcard patterns: ./include/*, ./src/*, etc. From the current directory I would like to recursively get the list of files that do not match these patterns.
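One approach is to translate the glob patterns into regular expressions and filter a find listing through grep -v. A sketch, assuming the pattern file is named patterns.txt (a hypothetical name):

Code:
# escape literal dots, turn * into .*, then anchor each pattern at line start
find . -type f | grep -v -f <(sed 's/\./\\./g; s/\*/.*/g; s/^/^/' patterns.txt)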
I use Linux. I coded a screenshot program some time ago, and now I have 9 GB of screenshots -- 60,000 JPEGs, most of them looking pretty similar -- and 300 MB of disk space remaining.
What are some good ways to start compressing batches of them (or all of them) in the background, given the limited space? The problem with compressing the folder all at once is that I wouldn't have enough disk space for that, so the process needs to be broken down into chunks. Maybe something like:
1. Get a list of all the files.
2. Add a chunk of the files (say, 20) to a compressed archive.
3. Once the archive is done and saved successfully, delete that chunk of files.
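A sketch of that chunked approach, assuming the screenshots sit in the current directory and deleting each batch only after its archive is written successfully:

Code:
#!/bin/bash
files=( *.jpg )          # 1. the list of all files
chunk=20
i=0
while [ "$i" -lt "${#files[@]}" ]; do
    batch=( "${files[@]:i:chunk}" )           # 2. the next chunk of 20
    archive="screens_$((i / chunk)).tar.gz"
    if tar -czf "$archive" -- "${batch[@]}"; then
        rm -- "${batch[@]}"                   # 3. delete only on success
    else
        echo "failed at $archive" >&2
        exit 1
    fi
    i=$((i + chunk))
done

One caveat: JPEGs are already compressed, so gzip will barely shrink them; re-encoding at a lower JPEG quality would reclaim far more space.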
I'm trying to write a script to process some images and rename them, or more specifically, renumber them so that pg_0001.png becomes pg_0.png, pg_0002.png becomes pg_1.png, etc. I've looked at the rename command and sed, but I'm not really very familiar with them. It should also be part of a bash script that I've written for processing these files; this is what I have so far:
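Whatever the rest of the script looks like, the renumbering step itself can be done with bash parameter expansion alone, no rename or sed needed. A minimal sketch, assuming every file matches pg_*.png:

Code:
#!/bin/bash
for f in pg_*.png; do
    num=${f#pg_}       # strip the pg_ prefix
    num=${num%.png}    # strip the extension
    mv -- "$f" "pg_$(( 10#$num - 1 )).png"   # 10# stops leading zeros being read as octal
done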