I have a folder named Pictures that contains a bunch of .jpg files. My problem is that they all have randomly numbered names, and each one has a duplicate whose name is the same random number followed by the letter "a" right before the .jpg. For example, there would be 123.jpg and 123a.jpg, where 123a.jpg is just a resized version of 123.jpg. What I'd like to do, but have NO clue how to, is have a script or something go through my Pictures folder, copy the ones that end in a.jpg to a folder called Resized, and the ones that don't to a folder called Originals. That way my Pictures folder will stay intact, and I'll have copies of them all separated out. I have to do this all through the CLI on the machine, so maybe I don't even need a script and can just do it with a slick command?
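A minimal sketch of the "slick command" approach, assuming the new folders sit next to Pictures and every original name ends in a digit before .jpg:

Code:
mkdir -p Resized Originals
# copies whose names end in a.jpg go to Resized
cp Pictures/*a.jpg Resized/
# everything else (names ending in a digit before .jpg) goes to Originals
cp Pictures/*[0-9].jpg Originals/

The second glob relies on the originals ending in a digit; if some names end in other characters, a small loop with a case statement testing *a.jpg would be more robust.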
I need a script that will take all the files in a given directory, create new monthly sub-directories, and sort all the files into the appropriate directory based on their creation date. For example, all files created between 01/01/09 and 01/31/09 would be placed in 'JAN-2009'.
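A rough sketch, with the caveat that most Linux filesystems don't record a creation date, so this keys off the modification time instead (the target path is a placeholder):

Code:
#!/bin/bash
cd /path/to/dir || exit 1        # hypothetical target directory
for f in *; do
    [ -f "$f" ] || continue
    # e.g. "Jan-2009" -> "JAN-2009"
    sub=$(date -r "$f" +%b-%Y | tr '[:lower:]' '[:upper:]')
    mkdir -p "$sub"
    mv "$f" "$sub/"
done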
I was hoping to get some pointers on how to rename files based on database entries. I have hundreds of thousands of files with GUID names, and the only way to find out a file's real name is to look it up in a database table. Obviously this is not efficient. I couldn't find any tutorials on how to do this, so please point me in the right direction. A starting point would be very helpful.
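One possible starting point, assuming a MySQL database; the table and column names below are made up, so adjust the query to match the actual schema:

Code:
#!/bin/bash
# hypothetical table "files" with columns guid and real_name
mysql -N -B -e 'SELECT guid, real_name FROM files' mydb |
while IFS=$'\t' read -r guid name; do
    [ -e "$guid" ] && mv -n "$guid" "$name"
done

The -N flag suppresses the header row and -B gives tab-separated batch output, which the read loop splits on.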
Originally Posted by Kenny_Strawn: "Please wrap [CODE] tags around any code posted here. The full source that way could still be posted." I am trying to copy all the files in a directory based on the modification date (i.e. created on Dec 29), but I'm not able to find the proper command for this. This is what I have tried.
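One hedged possibility with GNU find, using -newermt to bracket the modification date (the dates and destination path are placeholders for whatever Dec 29 you mean):

Code:
# files last modified on Dec 29, copied to /some/dest
find . -maxdepth 1 -type f -newermt "2010-12-29" ! -newermt "2010-12-30" \
    -exec cp {} /some/dest/ \;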
I'm writing a bash shell script that, among various other things, will traverse a directory with hundreds of files and rename those that match a pattern found in a config file. It's expected that only about one in ten files will actually match; those that don't will simply be ignored for this purpose.
This should, for instance, cause the file "dBase program file December 1987.prg" to be renamed "Clipper source code December 1987.prg", and conversely "C++ source August 1996.cpp" to be renamed "C source code August 1996.cpp", etc. A sample file such as "Random Data File.dat" should not be renamed here, since it's not mentioned in the config file. What is the quickest, most elegant way to do this in bash? I am thinking of using bash's built-in regex matching combined with the /bin/rename utility, but don't quite know how to get started. I guess there are plenty of ways of doing this in perl and elsewhere as well, but since this has to integrate into a pre-existing bash script, that's what I'm looking for. Anyone out there with a spare moment to offer a hint in the right direction?
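A minimal sketch using bash's own substring replacement rather than /bin/rename, assuming the config file holds one "pattern|replacement" pair per line (that file format is an assumption, not from the original post):

Code:
#!/bin/bash
# config.txt lines look like:  dBase program file|Clipper source code
while IFS='|' read -r pat rep; do
    for f in *"$pat"*; do
        [ -e "$f" ] || continue            # glob matched nothing
        mv -n -- "$f" "${f/$pat/$rep}"     # bash substring replacement
    done
done < config.txt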
I have very little Linux experience and need some help with a bash script. I need a script that cron can run to sort files out of a holding folder into final folders. It doesn't necessarily have to be bash, but I think bash would be sufficient for this. File names are formatted as such when created: Dest-Date-Time-CID-Destination#. I want the files to be moved from an all-in-one holding folder into a folder structure like this.
So the script will need to make directories based on information in the file name, which is delimited by single dashes, then move the files from the holding folder into the newly created "sorted" folders.
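A rough sketch along those lines, assuming the Dest-Date-Time-CID-Destination# layout and a hypothetical target structure of Dest/Date/ (the paths and which fields become folders are assumptions):

Code:
#!/bin/bash
holding=/path/to/holding          # hypothetical paths
sorted=/path/to/sorted
for f in "$holding"/*; do
    [ -f "$f" ] || continue
    # split the name on dashes into its five fields
    IFS=- read -r dest date time cid destnum <<< "$(basename "$f")"
    mkdir -p "$sorted/$dest/$date"
    mv "$f" "$sorted/$dest/$date/"
done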
I have a directory tree with many subdirectories containing many files. I want to recursively find the oldest 5 files starting from the base directory, not 5 from each subdirectory. I am writing a shell script which sorts them using ls -lRtur | egrep "txt|jpg" > /tmp/file1. Now from this /tmp/file1 I want to sort the files the same way the ls -ltr command does, oldest file time to newest file time. How do I sort based on the Linux time stamp? The files themselves also have Linux timestamps embedded in them, so I can sort on those after extracting them if that is easier. My /tmp/file1 has entries like below.
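With GNU find this can be done without parsing ls output at all; a minimal sketch:

Code:
# print epoch mtime plus path, oldest first, keep the first 5
find /base/dir -type f \( -name '*.txt' -o -name '*.jpg' \) -printf '%T@ %p\n' |
    sort -n | head -n 5

%T@ is the modification time in seconds since the epoch, so a plain numeric sort orders the whole tree oldest-to-newest regardless of which subdirectory a file lives in.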
I have set up an SSL site for my default Apache server, but I want to set up multiple SSL sites for multiple IP-based as well as name-based virtual hosts. Is there a way I can include definitions for SSL certificates and keys within the VirtualHost directive in httpd.conf, so that I can specify a separate key and cert file for every virtual host?
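The SSL directives can live inside each VirtualHost block; a sketch with placeholder names and paths (note that multiple name-based SSL vhosts on a single IP generally require SNI support in Apache and the clients):

Code:
<VirtualHost 192.168.1.10:443>
    ServerName site1.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/site1.crt
    SSLCertificateKeyFile /etc/apache2/ssl/site1.key
</VirtualHost>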
I'm extracting data from an XML file, writing it to separate files, then combining the results into a csv file. The problem is keeping the separate files in sync line by line. When a grep finds nothing I would like to put in a blank line or something to keep the lines in order. When the "<title>" is missing, as in the first "<programme> </programme>", that's where I need something to write dummy data to the file to advance the line.
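One hedged way to do this is to lean on grep's exit status, so a miss still produces exactly one line ($record is a placeholder for whatever file holds one programme element):

Code:
# emit the title, or a placeholder when grep matches nothing,
# so every record contributes exactly one line to the output file
grep -m1 -o '<title>[^<]*</title>' "$record" || echo "NO_TITLE"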
I have a file that contains a couple of email addresses and I want to extract the usernames (the letters before the @ symbol). How can I do that using sed/awk?
I know cut would work, but the current environment doesn't allow me to use the cut command. I can use either awk or sed.
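Either tool handles this in one line; a quick sketch (the file name is a placeholder):

Code:
awk -F'@' '{print $1}' emails.txt
# or with sed, deleting everything from the @ onward
sed 's/@.*//' emails.txt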
I want to save streaming music to my hard disk. I know mplayer/vlc can do that, but mplayer/vlc stores it as a single large file. What I want is to save every song separately. Is that possible?
I've got a quick grep question. I'm trying to work out a command I can use to locate all of the files in a directory that have SQL database connection details. I want to do it by looking for the strings "localhost" and the name of the database. This is what I have so far: find . -type f -exec grep -l -E '^(localhost|DATABASE_NAME)' {} ;
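Two small fixes: the semicolon has to be escaped so the shell passes it through to find, and the ^ anchor will miss matches that aren't at the start of a line. A hedged version, where DATABASE_NAME stands in for the real name:

Code:
find . -type f -exec grep -lE 'localhost|DATABASE_NAME' {} \;
# to require BOTH strings in the same file:
grep -rlZ 'localhost' . | xargs -0 grep -l 'DATABASE_NAME'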
I'm trying to do something here: I'm writing a bash script, and inside the script I want to open a new terminal and run a bash command in it. I tried to use this, but apparently I get syntax errors.
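A common pattern, assuming xterm is installed (any terminal emulator with a similar "execute" flag works; the command shown is just an example):

Code:
# -hold keeps the window open after the command finishes
xterm -hold -e bash -c 'ls -l /tmp' &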
I ran GDB on a program and am receiving the following errors.

Code:
anisha@linux-uitj:~/junk> g++ -g jk.cpp
anisha@linux-uitj:~/junk> gdb a.out
GNU gdb (GDB) SUSE (6.8.91.20090930-2.4)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later [URL]
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-suse-linux".
For bug reporting instructions, please see: [URL] ...
Reading symbols from /home/anisha/junk/a.out...done.
(gdb) b readline
Breakpoint 1 at 0x400b90: file jk.cpp, line 19.
(gdb) r
Starting program: /home/anisha/junk/a.out
Missing separate debuginfo for /lib64/ld-linux-x86-64.so.2
Try: zypper install -C "debuginfo(build-id)=591af1afa33f255704fb6a60859b93d00e205302"
Missing separate debuginfo for /usr/lib64/libstdc++.so.6
Try: zypper install -C "debuginfo(build-id)=62220ad5c8941afb5d332c0c47d32f8beec8ac50"
Missing separate debuginfo for /lib64/libm.so.6
Try: zypper install -C "debuginfo(build-id)=57fc1891d8d9f419fb8c7fc06a8285563b53a47e"
Missing separate debuginfo for /lib64/libgcc_s.so.1
Try: zypper install -C "debuginfo(build-id)=0206e11fa8ca0db0633073adcbf1349a7871e1dc"
Missing separate debuginfo for /lib64/libc.so.6
Try: zypper install -C "debuginfo(build-id)=c5a3dfd66bf61fcdec9bc22153b2fbd0d6697960"
can't open input file (null)
Program exited with code 01.
(gdb)
Using: openSUSE 11.0 64-bit, KDE 3.5.9 (release 49) and KDevelop 3.5.1. Problem: a singleton was created in subproject A, and so an object file is created in subproject A. In subproject B, I want to use that object file, but we have not been able to find a way to link the object file created in subproject A with subproject B. Also, the subprojects are in different directories. We created symbolic links to the ".h" and ".cpp" files in directory A, and the project compiles and links just fine after adding the symbolic links to the header and cpp files in subproject B. My concern is that two objects of that singleton will be created; the whole idea of making a singleton is so that there is only one instance at a time.
Now I would like to create a third file which contains only those packages which are present in package-a.txt but NOT in package-b.txt. The file should look like this:
Code:
package2
package4
Note: The world "install" is also to be removed for all packages. Using diff command I could get something like this:
I have 2 routers, each assigning IPs with DHCP on. One router is plugged into the cable modem; the second router is downstairs, plugged into the first router, with the wire running into the WAN port of the second router. Each router has its own IP subnet: the first router assigns IPs in 192.168.1.xxx, the second in 10.0.0.xxx.
I know I can use the second router as an AP with DHCP off. BIG BUT though: my Verizon wifi phone got no IP assigned when running like that and connecting wirelessly to the second router. Laptops were just fine. So I reconfigured the second router to hand out its own IP subnet, and now the Verizon phone is perfect.
How can I share files between the connected PCs using it this way?
I streamed video through my computer with mediatomb yesterday. The problem is that now I've got these huge log files and I'm running out of disk space (less than 1 GB left) as we speak. They're filled with ufw entries, but my question is:
I read somewhere about a program called logrotate that is supposed to keep logs from getting too big. Is that wrong, and should mediatomb really generate 3 separate log files with 5 GB of data each for just 2 hours of streaming?
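logrotate is the right tool for capping log growth; a minimal sketch of a drop-in rule (the log path is a guess, so check where mediatomb actually writes):

Code:
# /etc/logrotate.d/mediatomb
/var/log/mediatomb.log {
    size 50M
    rotate 3
    compress
    missingok
    notifempty
}

This rotates whenever the file exceeds 50 MB, keeps 3 compressed generations, and silently skips missing or empty logs.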
cat file1.txt
15 this is a sentence containing various words and spaces
34 this is a another sentence containing various words and spaces
cat file2.txt
2 this is sentence1file2
6 this is sentence2file2
54 this is sentence3file2
I would like to join these 2 files. The result should look as follows :
cat joinedfile.txt
2 this is sentence1file2
6 this is sentence2file2
15 this is a sentence containing various words and spaces
34 this is a another sentence containing various words and spaces
54 this is sentence3file2
==> so the joined file must be sorted on the first number. Any ideas how this can be achieved?
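Since the lines start with the sort key, sort can do the whole job; a one-line sketch:

Code:
sort -n file1.txt file2.txt > joinedfile.txt
# or, strictly merging the two already-sorted files:
sort -m -n file1.txt file2.txt > joinedfile.txt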
I have a problem with snmp answers being empty or having spaces.
What I already have:
#get all interface indexes (if you wonder - I'm working for a cable company and different cablemodems have different numbers and types of interfaces):
The problem is that the physical address is sometimes empty and the description has spaces, so I'm doing 2 snmpgets, which is slower than 1 snmpget (sometimes I have up to 18 interfaces).
Let me try to explain it a bit more simply.
Interface 5 gives me back the following lines:
Ethernet CPE Interface
Now the first line should go into variable ifadm, the 2nd into ifoper, the 3rd into ifspeed, the 4th into iftype, the 5th (which is empty) into ifphys, and finally the 6th (which has spaces) into ifdescr.
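mapfile preserves both empty lines and embedded spaces, so a single snmpget can feed all six variables; a sketch with placeholder OIDs, community string, and host:

Code:
# -Oqv prints one bare value per line; the OIDs below are placeholders
mapfile -t v < <(snmpget -v2c -c public "$modem" -Oqv \
    OID.ifAdminStatus.5 OID.ifOperStatus.5 OID.ifSpeed.5 \
    OID.ifType.5 OID.ifPhysAddress.5 OID.ifDescr.5)
ifadm=${v[0]} ifoper=${v[1]} ifspeed=${v[2]}
iftype=${v[3]} ifphys=${v[4]} ifdescr=${v[5]}

mapfile needs bash 4; an empty value simply becomes an empty array element rather than shifting the later lines up.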
I'm trying to create a separate thread for my program which basically polls using the read command. However, this new thread seems to block the main thread; does anyone know why this could happen?
In main I call:

Code:
pthread_create(&mainEventThread, NULL, GenericEventThread, NULL);

which calls the new thread's start function:

Code:
/* New thread's start function -- pthread start routines
   should take and return void* */
void *GenericEventThread(void *arg)
{
    short int i, nError = -1;
[Code]...
I've used pthread_self to check that a new thread is being created, so why is the while loop in one thread blocking the main thread from running? I haven't used the join function anywhere in my code.
Is there any way to filter the output of a command based on the values in the output columns? For example, I execute du -h on a directory with many files. Now I want to filter the output based on the size suffix (i.e. M, G or K). The filtered output should contain only M (megabytes) or G (gigabytes) entries, with all columns intact.
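awk can test just the first column while still printing whole lines; a short sketch:

Code:
# keep only entries whose human-readable size ends in M or G
du -h | awk '$1 ~ /[MG]$/'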
I'm looking for a version of Getopts for Java that isn't licensed under the GPL and accepts long options (i.e. both -h and --help). My code is licensed under BSD and I don't really want to change that just because a module uses the GPL...
I'm currently trying to organize a media server so that things will be in some kind of logical order rather than the current setup of dumping everything of a certain content type into a single folder. However, the size and diversity of content within these disorganized folders precludes me from doing things manually. Does anyone know of a program or script that could sort the files into folders based on part of a filename?
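A starting-point sketch, assuming the useful part of each name is everything before the first dash (that naming pattern and the path are assumptions to adapt):

Code:
#!/bin/bash
cd /path/to/media || exit 1        # hypothetical folder
for f in *; do
    [ -f "$f" ] || continue
    prefix=${f%%-*}                # text before the first dash
    mkdir -p "$prefix"
    mv -n -- "$f" "$prefix/"
done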
I want to find a bunch of log files and delete the ones that are older than, say, 5 days. Ideally I would then like to add this to my crontab to run once a day.
The log files are in /var/log and are owned by root. They have a standard naming convention: [date]RootCronRsync-backupHOME.log. An example file is 20100621RootCronRsync-backupHOME.log. Trying to put together a bash script to do this, I think I need something like
Code:
find /var/log/ -name *RootCronRsync-backupHOME.log -mtime +5 -exec rm {} ;

However, if I try this without the -exec rm (i.e. to see if I can find the right files first), I get the following error:

Code:
find: paths must precede expression: 20090405RootCronRsync-backupHOME.log
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
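That error comes from the unquoted glob: the shell expands *RootCronRsync-backupHOME.log against the current directory before find ever sees it. Quoting the pattern (and escaping the semicolon) should fix it; a sketch, with the cron line shown as a hypothetical example:

Code:
find /var/log -name '*RootCronRsync-backupHOME.log' -mtime +5 -exec rm {} \;
# hypothetical root crontab entry to run it daily at 3am:
# 0 3 * * * find /var/log -name '*RootCronRsync-backupHOME.log' -mtime +5 -delete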