In my code printf seems to have a problem with elements that have the same letters but a space in between. For instance, "new foo" and "newfoo" are the same for printf.
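Without seeing the code it is hard to be sure, but a common culprit is an unquoted expansion: the shell splits "new foo" into two words before printf ever sees it. A minimal sketch (the variable names here are invented):
Code:
a="new foo"
b="newfoo"
printf '%s\n' $a $b      # unquoted: the space splits "new foo" into two separate arguments
printf '%s\n' "$a" "$b"  # quoted: the space survives and the two values stay distinct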
Although I have used Ubuntu for a good couple of years, I'm still a total beginner when it comes to scripting. However, what I need to do should be fairly straightforward:
When importing images from my digital camera, both RAW "originals" and JPG "copies" end up in the same folder. I typically flip through the JPGs in Image Viewer and remove those that I'm not interested in. That leaves me with the tedious job of going through all the RAW files in the folder manually to get rid of those too! It sure would be wonderful to get Ubuntu to do the work for me...
The script would simply need to go through all the RAW files in a folder one by one, check for a corresponding JPG file, and if there isn't one, remove the RAW file. How could I accomplish that?
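A hedged sketch of one way to do it, assuming the RAW files end in .cr2 and the JPGs in .jpg and that both sit in the current folder (adjust the extensions to whatever your camera produces):
Code:
#!/bin/bash
for raw in *.cr2; do
    jpg="${raw%.cr2}.jpg"
    if [ ! -e "$jpg" ]; then
        echo "no $jpg found, removing $raw"
        rm -- "$raw"
    fi
done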
I'm using libxml2 to handle/manipulate some XML files. In order to check the consistency of an XML file, I have a DTD and I'm using the xmlValidateDtd method to perform the check.
However, when an error occurs during the check (for example an attribute is missing from an XML tag), libxml2 writes the error to stdout/stderr. For example:
Code:
/home/XML/FreeFour.xml:18: element CA: validity error : Element CA does not carry attribute maxlength
The method returns the right result (true or false depending on the check), but the errors that occur are still written to stdout/stderr, and I actually don't want that.
I have a script that generates a bunch of output, including the expansion details provided by set -v -x. I am trying to pipe everything that is displayed to a file, in addition to displaying it on the screen. I've managed to get stderr and stdout into the file, but the expansions are only printed to the screen. Here is what I have so far:
Code:
sudo -u <user> source my_job.sh | tee my_log.txt 2>&1
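One likely fix: source is a shell builtin, so it cannot be handed to sudo directly, and the 2>&1 has to come before the pipe so that stderr (which is where the set -x trace goes) is merged into the stream that tee sees. A hedged sketch, keeping the <user> placeholder from the question:
Code:
sudo -u <user> bash -x my_job.sh 2>&1 | tee my_log.txt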
So that when I grep the local file again later, the lines can be printed out as the original log lines. Otherwise the line breaks are dropped and the lines get concatenated into a single line; e.g., if I rewrite the script that way, echoing $result is not a good idea.
Is there some workaround so that I can save it to a variable rather than a file but still keep the EOLs? That would simplify my script, and I wouldn't need to do all that I/O!
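Command substitution itself keeps the newlines; they only disappear when the variable is later expanded without quotes. A small sketch (the grep is just a stand-in for whatever produces the log lines):
Code:
result=$(grep 'pattern' logfile)
echo "$result"   # quoted: the original line breaks are preserved
echo $result     # unquoted: everything collapses onto one line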
I have a bash script that calls a Java class method. The method returns a string to the Linux console when run independently. How can I assign the value from the Java method to a variable in a bash script? Running it looks like: java -cp /opt/my_dir/class.method [parameter]
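Command substitution captures whatever the Java program prints to stdout. A sketch reusing the invocation from the question ([parameter] stands for whatever argument you normally pass):
Code:
result=$(java -cp /opt/my_dir/class.method [parameter])
echo "java returned: $result"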
PU12829,24869;PD15733,24869;PD15733,19785;PD12829,19785;PD12829,24869;
PU4599,20915;PD9924,20915;PD9924,18898;PD4599,18898;PD4599,20915;
PU12829,24869;PD15733,24869;PD15733,19785;PD12829,19785;PD12829,24869;
PU4599,20915;PD9924,20915;PD9924,18898;PD4599,18898;PD4599,20915;
PU1723,3423; #this line is ignored, too short
[Code]...
What I'm trying to do is, in a loop, cut each line from the file that begins with PU and that is longer than 12 characters, and write each one to an increasingly numbered file, starting with object1 and so on.
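A hedged sketch, assuming each PU record really is on its own line in the input file (called plotfile.txt here) and that the output files should be named object1, object2, and so on:
Code:
#!/bin/bash
n=1
while IFS= read -r line; do
    if [[ $line == PU* && ${#line} -gt 12 ]]; then
        printf '%s\n' "$line" > "object$n"   # one record per numbered file
        n=$((n + 1))
    fi
done < plotfile.txt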
I am trying to process a column-separated data file with a few bash commands. For example, I have
Code:
file1 aaaa yes
file2 aaaa no
file3 bbbb yes
Let's say I want to create a new file with the output of the first column and do something else with the output of the 3rd column. Of course there are many ways to process this data file, but I wish to know how I could do it using awk. I'm trying:
Code:
awk '{system("touch $1")}' datafile
but the shell command is not able to get the awk '$1' output. How do I get this done? And another question: if the data file contains the name of a shell variable, how could I make use of it in the awk output? For example I have a datafile1:
Code:
server1 yes
server2 no
And in another server declaration data file, I got this datafile2:
Code:
server1=xxx1
server2=yyy1
And in my awk script, I want to achieve something like the following (the syntax is definitely wrong, it is just to demonstrate what I assume it would look like):
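For the first part: awk does not expand $1 inside a quoted shell string, so the command has to be built by concatenation (or the names printed and handed to the shell). For the second part, one option is to source datafile2 so the variables exist in the shell, then pass them into awk with -v. A hedged sketch using the file names from above:
Code:
# run touch on the first column of every record
awk '{ system("touch " $1) }' datafile

# or let awk print the names and xargs do the touching
awk '{ print $1 }' datafile | xargs touch

# make the definitions in datafile2 visible, then hand them to awk
. ./datafile2
awk -v s1="$server1" -v s2="$server2" '{ print $1, s1, s2 }' datafile1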
I have got a script with an outer and an inner loop. The inner loop issues loads of echos which need to be redirected to a log file determined by the outer loop. The obvious solution is to redirect every echo with >$LOG and set LOG in the outer loop.
Code:
for f in $FILES ; do
    LOG=<logfile>
    for l in $LINES ; do
[code]....
Is it possible to map stdout to $LOG in the outer loop without having to redirect every subsequent individual command's output?
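Yes: redirect the inner loop as a whole with a single redirection on its done, so every echo inside inherits it. A sketch along those lines (the log path is a made-up stand-in for whatever <logfile> was):
Code:
for f in $FILES ; do
    LOG="/tmp/run_$f.log"            # hypothetical log name per outer iteration
    for l in $LINES ; do
        echo "processing $l"         # ends up in $LOG without its own >>
    done > "$LOG"
done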
I am not sure if that subject really explains it. Basically I have a script that executes a CLI Java applet that requires a passphrase from the user. I can easily execute this by issuing the -p argument followed by the passphrase, however that shows up in possible logs or at least in the output of the 'ps' command. If you do not supply the -p argument, it prints "Enter Passphrase: " on a new line and asks for input.
How can I provide input for the passphrase request, and is it still possible to throw this application into the background with '&' following the command? I have seen a few examples that use /bin/expect, which waits for a prompt and sends a response, but I would like to refrain from any extra dependencies. Example of regular execution of the application:
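If the applet reads the passphrase from standard input when -p is omitted (rather than directly from the terminal), it can be fed through a pipe and still be backgrounded; the jar name below is made up. If it insists on reading from the terminal, expect really is the usual answer.
Code:
read -r -s -p "Enter Passphrase: " pass; echo
printf '%s\n' "$pass" | java -jar my-applet.jar &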
I have written a one-line command that parses a file, locates the IP addresses in it, trims the output the way I want it, then sorts numerically and by uniqueness and >> appends to output.txt.
I can get all the IPs into the one file "output.txt", but what I am really looking for is some way to create a text file for each IP it finds, labelled xxx.xxx.xxx.xxx.txt, and to put that IP address into that file.
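Assuming output.txt already holds one address per line (and is already de-duplicated, as described), awk can fan them out into per-IP files in one pass:
Code:
awk '{ print > ($0 ".txt") }' output.txt   # each IP goes into a file named after itself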
I am writing a bash script that utilizes the output of another script (which I will refer to as script#2.) Script#2 is not owned by me, I cannot modify it. All of the output from script#2 is blue, which makes it difficult for me to read.
I would like to have the output of it changed to grey. Is there a way I can do that in my script? A command I can pipe the output to?
Edit: One other question related to this. I put a trap function in my script that works well. Script#2 essentially runs a tail -f. When I ctrl+c to stop it, it stops script#2 and never calls the trap in my script. Is there any way I can work around that?
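For the colour problem, one option is to strip the ANSI escape sequences that script#2 emits so the text falls back to the terminal's default colour. The script name below is a placeholder, the pattern assumes the usual CSI colour codes, and -u (unbuffered, GNU sed) keeps the tail -f output flowing:
Code:
./script2.sh 2>&1 | sed -u 's/\x1b\[[0-9;]*m//g'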
I have a problem with snmp answers being empty or having spaces.
What I already have:
#get all interface indexes (if you wonder - I'm working for a cable company and different cable modems have different numbers and types of interfaces):
The problem is the physical address, which is sometimes empty, and the description, which contains spaces. So I'm doing 2 snmpgets, which is slower than 1 snmpget (sometimes I have up to 18 interfaces).
I'm trying to explain it a bit simpler.
Interface 5 gives me back the following lines:
Ethernet CPE Interface
Now the first line should go into variable ifadm, 2nd line should go into variable ifoper, 3rd line should go into variable ifspeed, 4th line should go into variable iftype, 5th line (which is empty) should go into variable ifphys and finally 6th line (which has spaces) should go into variable ifdescr
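One way to keep the empty line and the embedded spaces is to read the six lines in order with IFS cleared, straight from a single snmp call; snmp_output below is only a stand-in for the real snmpget invocation:
Code:
{
    IFS= read -r ifadm
    IFS= read -r ifoper
    IFS= read -r ifspeed
    IFS= read -r iftype
    IFS= read -r ifphys    # stays empty if the line is empty
    IFS= read -r ifdescr   # spaces are preserved
} < <(snmp_output)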
I have a set of bash scripts that I'm running that automatically build a set of packages for me and redirect their output into logs. Basically, I have a bunch of lines that are something like this: ${CONFIGURE_DIR}/configure &> ${LOG_DIR}/log or cd ${CONFIGURE_DIR} && make &> ${LOG_DIR}/log, etc.
This is supposed to make the entire process silent. However, sometimes with some packages some output leaks to my console (either stdout or stderr). I'm thinking that maybe the configure scripts/make are executing commands within new shell instances that don't inherit my redirect, or something to that effect.
Another reason for thinking this is that in another part of my script I detect errors when running make by testing with "if [ $? -ne 0 ]", and if the redirect leaks to my console and also the leaked output indicates that the build failed ("make: Error" and so on), then my $? test fails (i.e., it thinks that $? == 0, whereas a failed make should return a non-zero value). It's as if my original script can't "see" the results from child commands executed from later scripts.
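One possible explanation, offered only as a guess: &> is a bash-only shortcut, and if any of these lines end up being run by /bin/sh it is parsed as "run the command in the background, then truncate the file", which both leaks output and leaves $? at 0. Spelling the redirection out portably avoids that:
Code:
cd "${CONFIGURE_DIR}" && make > "${LOG_DIR}/log" 2>&1
if [ $? -ne 0 ]; then
    echo "make failed" >&2
    exit 1
fi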
I want to access the timestamp field of the packet being sent or received. I am not getting a clear idea of which ioctl I should use, or how it should be used in the program. Can anyone explain the rough flow of a program for accessing the timestamp?
I've implemented a Python script in conky that shows my stock portfolio. But in the output of the last-updated timestamp, I get a time several hours in the past. The URL for fetching stock data is: [URL]. This is a Norwegian stock, and I also live in Norway, so the timestamp is not being translated from the stock market where it came from. I can't find any 'localizing' stuff in the URL either. Now my question is this: the script puts the time into a variable, and the variable now contains e.g. 11:23. Is there any way I can add 6 hours or so to this variable?
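If post-processing the value in the shell is acceptable, GNU date can shift an HH:MM string by a fixed offset; "11:23" below just stands in for whatever the variable holds, and the 6-hour figure is only an example:
Code:
stamp="11:23"
shifted=$(date -d "$stamp + 6 hours" +%H:%M)   # requires GNU date
echo "$shifted"                                 # e.g. 17:23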
I got a directory with files in it like 2006-07-01.foo, 2007-08-04.foo, and so on. I need to update the timestamps on these files using "touch -t 200607010000 2006-07-01.foo" on each file in the directory, so I came up with the following one-liner:
for i in `ls -1`; do touch -t `ls -1 | sed -n 's%\([0-9]\{4\}\)-\([0-9]\{2\}\)-\([0-9]\{2\}\)\(.*\)%\1\2\30000%p'` $i; done
My goal was to use sed to get the timestamp for touch and then loop through each file and touch it with that timestamp. However, the script is not giving me the results I intended. Can anyone chime in on what I am doing wrong? I have been banging away at this for a couple of hours now and am clueless as to what it could be. I also tried another variant, such as:
for z in $(ls -1 *.foo); do echo $z $(for i in `ls -1 *.foo | sed 's%\([0-9]\{4\}\)-\([0-9]\{2\}\)-\([0-9]\{2\}\)\(.*\)%\1\2\30000%p'`; do echo "$i"; done); done
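A hedged rework: derive the timestamp from each file's own name instead of piping the whole listing through sed on every iteration, so each touch gets exactly one timestamp:
Code:
for f in *.foo; do
    # 2006-07-01.foo  ->  200607010000 (touch -t wants CCYYMMDDhhmm)
    ts=$(printf '%s\n' "$f" | sed -E 's/^([0-9]{4})-([0-9]{2})-([0-9]{2}).*/\1\2\30000/')
    touch -t "$ts" "$f"
done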
I am using a makefile to compile all my C source files. But certain files are not getting compiled, and hence their object files are not getting generated. This seems to happen because those files haven't been modified for a long time: the build acts as though the object file is already there and there is no need to compile it, when actually it is not there.
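If the goal is simply to force those files to build again, GNU make can ignore the timestamps entirely, or the sources can be touched so they look newer than any existing objects:
Code:
make -B            # --always-make: rebuild every target regardless of timestamps
# or
touch *.c && make  # mark the sources as modified, then build normally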
I have a list of about 425 servers that are mostly redundant. I need to weed out the duplicate names so that I have a count of only the unique server hostnames. What is a good command to do this?
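Assuming the list sits in a file with one hostname per line (servers.txt is a made-up name):
Code:
sort -u servers.txt | wc -l                # count of unique hostnames
sort -u servers.txt > unique_servers.txt   # the de-duplicated list itself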
I have a directory listing with many subdirectories containing many files. I want to recursively search for the oldest 5 files starting from the base directory, not 5 from each subdirectory. I am writing a shell script which sorts them using ls -lRtur | egrep "txt|jpg" > /tmp/file1. Now, from this /tmp/file1 I want to sort the files the same way the ls -ltr command does, that is, oldest file time first through to newest. How do I sort based on the Linux timestamp? The files themselves also have Linux timestamps embedded in them, so I can sort on those after extracting them if that is easier. My /tmp/file1 has entries like below.
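Rather than parsing ls output, GNU find can print each file's modification time as an epoch number, which sorts cleanly; the egrep filter from the question is kept as-is:
Code:
find . -type f -printf '%T@ %p\n' | egrep 'txt|jpg' | sort -n | head -5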
How can I remove the (X) part from the end of all those? For example, when I cat the file which contains those, I just want to see those lines without the (X)...
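Assuming "(X)" is literal text at the end of each line (possibly preceded by spaces), sed can trim it; the second form covers the case where X stands for arbitrary text inside the parentheses:
Code:
sed 's/ *(X)$//' file        # remove a literal (X) at end of line
sed 's/ *([^)]*)$//' file    # remove any trailing (...) group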
I am trying to grep multiple numbers from a file; grep does have the -f option for that.
Code:
grep -f <`seq 500 520` /etc/passwd
I know this could be done with
Code:
for i in `seq 500 520`; do grep "$i" /etc/passwd; done
But my question goes far beyond this example. Is it possible to redirect one command's output so that it is treated as the content of a file for another command?
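Yes, with process substitution (a bash feature): <(command) gives grep a file-like name whose contents are the command's output, which is exactly what -f expects:
Code:
grep -f <(seq 500 520) /etc/passwd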