We have a script that FTPs files 3 times a day, at 02:30, 04:00 and 13:00. Once the process runs it puts a copy of the sent file in the processed folder. What I'm trying to do is check whether the files are there and, if not, send an alert/email. The file names are IVF_20100806_*.150, PLAZ_20100806_*.151, TRAN_20100806_*.152 and TRAN_20100806_*.151.
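Roughly the shape of what I have in mind, as a sketch (the processed-folder path and the alert address are placeholders):
Code:
#!/bin/bash
# Sketch: after each run, check that today's copies landed in the
# processed folder and mail an alert for any pattern with no match.
PROCESSED=/data/ftp/processed        # placeholder path
ALERT_TO=ops@example.com             # placeholder recipient
TODAY=$(date +%Y%m%d)
for pattern in "IVF_${TODAY}_*.150" "PLAZ_${TODAY}_*.151" \
               "TRAN_${TODAY}_*.152" "TRAN_${TODAY}_*.151"; do
    # ls exits non-zero when the glob matches nothing
    if ! ls $PROCESSED/$pattern >/dev/null 2>&1; then
        echo "Missing: $pattern in $PROCESSED" |
            mail -s "FTP transfer check failed" "$ALERT_TO"
    fi
done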
When we enter a folder it takes some time to load, depending on the number of entries in it: more entries take longer to load, fewer entries take correspondingly less time. The delay in loading varies because the folder entries are read in advance. So what I want to know is: what is the MAX number of entries read in advance when opening a folder in Linux, and how can this be calculated?
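A sketch of one way to observe the batches as the folder loads (this assumes ls goes through getdents64, as it does with current glibc; the path is a placeholder):
Code:
# Sketch: each getdents64 line in the trace is one batch of directory
# entries handed back by the kernel, so the batch sizes are visible here.
strace -e trace=getdents64 ls /path/to/big/dir > /dev/null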
I'm trying to find a proper command to move a certain set of files according to a date/time range. I am thinking that the command should be something like:
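A minimal sketch of one possibility, using GNU find's -newermt (the dates and the destination are placeholders, not the command I had in mind):
Code:
# Sketch: select files last modified inside a window and move them;
# -newermt compares the modification time against a date string.
find . -type f -newermt "2010-08-01 00:00" ! -newermt "2010-08-07 00:00" \
    -exec mv {} /path/to/dest/ \;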
I am trying to check whether a nightly backup was successfully copied over, then rename it and curl it, but the check always passes even if the file is older than specified. From the command line it does as it should. Example is here:
Code:
find /backup -type f -mmin +4440 -exec echo "found" {} \;
Nothing is returned (good). Then I change the time
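For the in-script version, this is the shape of the check I'm after (path as above):
Code:
# Sketch: -mmin +4440 matches files modified more than 4440 minutes ago,
# so any output means a stale backup. Note the semicolon after -exec has
# to be escaped (\;) or quoted, or the shell eats it before find sees it.
if [ -n "$(find /backup -type f -mmin +4440)" ]; then
    echo "backup is stale"   # send the alert / skip the rename and curl here
fi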
What I wanted to do was find all the files with a specific name in a tree, sort them by modification time, and have their directory appended to them so that I knew where they were (because they all have the same name). I tried a whole bunch of different things and finally did this:
This did the trick pretty well, but as you can see it is far from elegant, and I think I'm doing some things in a wrong and kludgy way.
The first thing I tried was "ls -lRt | grep world.sav", which worked except that I couldn't distinguish the files because no directories were shown. It took a lot of looking before I accepted that I couldn't make ls print the directories as well and attach them to the files in a way that made their relationship clear. I tried piping ls to find, doing it in reverse, passing them from grep, etc., until I read some more stuff online that got me using gawk and sort. The questions:
1. Is there some other, more elegant and simple way to do this kind of detection and sorting? (See the sketch after these questions.)
2. Is there any way to use a pipe after using exec? The semicolon seems to prevent this entirely, forcing me to use an intermediate file as above. I could just remove it later, but I'd prefer straight piping.
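On both counts, a minimal sketch of the kind of one-liner I was hoping existed, using GNU find's -printf so no exec, gawk, or intermediate file is needed:
Code:
# Sketch: print each world.sav as "mtime path", sort newest first, then
# drop the timestamp column, leaving full paths in modification order.
find . -name world.sav -printf '%T@ %p\n' | sort -rn | cut -d' ' -f2-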
I wrote a hack script that outputs the following every so often:
Code:
01/04/11 10:33:02: 97,1413,1447,2860
I must leave the data format the same, but I want a specific number from it. In this case it's 97, and it's always going to be the first of the 4 comma-delimited items. I can extract it with this:
Code:
cat datafile | awk -F" " {'print $3'} | awk -F"," {'print $1'}
But that's really sloppy. Can someone point out a better way of doing this (with awk) and tell me why it's better?
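A single awk invocation can do it, as a sketch:
Code:
# Sketch: split the third whitespace-separated field on commas and print
# the first piece; one process instead of three, and no needless cat.
awk '{ split($3, a, ","); print a[1] }' datafile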
I am trying to do a find/grep/wc command combination that finds matching files, prints each filename, and then prints the count of a specific pattern per file. Here is my best (non-working) attempt so far:
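The shape of what I'm after, as a sketch (the name pattern and search pattern are placeholders):
Code:
# Sketch: grep -c prints "file:count" for each file handed to it, and the
# trailing grep hides files with zero matches. This counts matching
# lines, not total occurrences.
find . -type f -name '*.log' -exec grep -c 'PATTERN' {} + | grep -v ':0$'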
Is there a way to specify to find that I only want text files (and not binary files)? Grep has an option to exclude binary files, so I thought find probably has a similar feature, but I've been unable to find it.
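As far as I can tell find has no built-in text/binary test, but here is a sketch of a workaround that lets file(1) do the classifying:
Code:
# Sketch: -exec doubles as a test here; file -b describes each candidate
# and grep -q turns "is it text?" into an exit status, so -print only
# fires for files that file(1) calls text.
find . -type f -exec sh -c 'file -b "$1" | grep -q text' _ {} \; -print
Another route, when you're searching for something anyway, is grep's -I (skip binary files) together with -l (list matching files).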
I know how to search for normal files, but can you let me know how to search for 5 setuid files on the system, and also explain, for each file, why the setuid mechanism is necessary for the command to function properly?
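A sketch of the search half:
Code:
# Sketch: -perm -4000 matches files with the setuid bit set; errors from
# unreadable directories are discarded, and head keeps the first five.
find / -type f -perm -4000 2>/dev/null | head -5
A typical hit is /usr/bin/passwd, which has to run setuid root so that an ordinary user can update /etc/shadow.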
I have to write a shell script where I have to find one word in the most recently generated log. The log name has a specific format like 'NAME_DDMMYY_HHMMSS.log'. Each time, I have to go and check for the word in the newly generated log. How can I pass the newly generated log name to my script?
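The shape of what I'm after, as a sketch (the directory and the word are placeholders):
Code:
# Sketch: ls -t sorts by modification time, newest first, so head -1
# picks the most recently generated log matching the name pattern.
newest=$(ls -t /path/to/logs/NAME_*.log | head -1)
grep 'WORD' "$newest"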
I am currently struggling with one of my tasks. I was asked to find a way to determine how much time an _already running_ process is spending in user and kernel space. E.g.:
<some tool> <pid>
[Control] + [c]
<pid> spent 12.1 seconds in user and 1.52 seconds in kernel space.
Does something like this exist? Basically I guess I am looking for something similar to time, except that the process is already running. So:
a) Is there a tool which fulfills this task?
b) Is there a way to write your own software which does the job? Is it even possible to code something like what I am looking for?
I recently found strace -c -p <pid>, but that is not exactly what I was looking for.
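For (b), a minimal sketch of the idea, reading the counters the kernel already keeps:
Code:
# Sketch: fields 14 (utime) and 15 (stime) of /proc/PID/stat hold user
# and kernel CPU time in clock ticks; this naive field split assumes the
# process name contains no spaces.
pid=$1
hz=$(getconf CLK_TCK)
read -r utime stime < <(awk '{ print $14, $15 }' "/proc/$pid/stat")
echo "user: $(echo "scale=2; $utime/$hz" | bc)s kernel: $(echo "scale=2; $stime/$hz" | bc)s"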
I finally went open source at home, but I must be able to load QB to work with the data files I already have. Will my system at home allow me to load QB Pro Premier 2009? Or are there alternative apps that will let me load and work with the data?
In Windows you can restore your OS to a previous good state, kind of: if you messed up your vital files you could go back in time and restore your computer to a selected point. I was wondering if you can do that in Ubuntu.
I'd like to measure network latency for an SNMP GET request. There is a free command-line tool, time, which can be used to find timing statistics for various commands. For example, it can be used with snmpget in the following way:
Code:
$ time snmpget -v 2c -c public 192.168.1.3 .1.3.6.1.2.1.2.2.1.10.2
IF-MIB::ifInOctets.2 = Counter32: 112857973
real 0m0.162s
user 0m0.069s
sys 0m0.005s
According to the manual, the statistics consist of:
the elapsed real time between invocation and termination, the user CPU time (the sum of the
Currently I am using the following command to copy a file and add a date/time stamp to its name:
Code:
cp /home/work/file.grn /home/xfer/rename_`date +%Y%m%d%H%M%S`.grn
If I have five files, for example file_1.grn, file_2.grn, file_3.grn ..., can I copy those five files to a different directory, with a different file name and with a date/time stamp added? The output filenames would be rename_1_yyyymmddhhmmss.grn, rename_2_yyyymmddhhmmss.grn, rename_3_yyyymmddhhmmss.grn ...
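The shape of what I'm after, as a sketch (same paths as above):
Code:
# Sketch: copy each numbered file with one shared timestamp; the number
# is peeled out of the source name with parameter expansion.
stamp=$(date +%Y%m%d%H%M%S)
for f in /home/work/file_*.grn; do
    n=${f##*file_}; n=${n%.grn}
    cp "$f" "/home/xfer/rename_${n}_${stamp}.grn"
done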
I have two bash script files, namely "a.sh" and "b.sh". Both files contain some commands to run. How can I actually run both files at the same time, instead of running them one by one?
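A minimal sketch:
Code:
# Sketch: & launches each script in the background so they run
# concurrently; wait blocks until both have finished.
./a.sh &
./b.sh &
wait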
I have 2 files to compare, and I then want to print out the information that matches a certain pattern. I know basic scripting and was heading down the path of merging the 2 files together, but this is the wrong approach. I would really appreciate a script that can do what is required:
file 1 contains dates, times and IDs:
2010-10-28 10:42 5939697357
2010-10-28 11:56 5919543491
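A sketch of the kind of thing that might work, assuming file2 holds the matching IDs somewhere among its fields (its layout is a guess here):
Code:
# Sketch: load every field of file2 as a lookup key, then print the lines
# of file1 whose third column (the ID) is among them.
awk 'NR == FNR { for (i = 1; i <= NF; i++) ids[$i]; next } $3 in ids' file2 file1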
A directory /test/test1 was created, and under /test1 there are some files and subdirectories with data in them. I had copied the files (text and script files), as per the requirement, with this command:
cp -irv /test/test1/.* /test
But what I see in the destination, i.e. /test, is that no files or directories have been copied there, and the files/directories have also been removed from the source, i.e. /test/test1.
So my query is: how can I get the files/directories back, with their data?
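For what it's worth, the likely culprit is that /test/test1/.* also expands to /test/test1/.. (i.e. /test itself), so the recursive copy went somewhere unintended. A safer way to copy everything, dotfiles included, would have been:
Code:
# Sketch: "dir/." means the contents of dir, hidden files included, and
# never expands to the parent directory.
cp -irv /test/test1/. /test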
We have a few log files. We want to roll the log files over during our portal downtime, and we'd like to keep four generations of the log files on the system. For example: the name of the log file is /opt/IBM/activity.log, and we want to cut it off and keep 4 generations of the activity log on the system via a script.
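A minimal sketch of the rotation (run during the downtime; it assumes the application reopens the log afterwards):
Code:
# Sketch: retire the oldest copy, shift activity.log.1..3 up by one slot,
# then move the live log into slot 1 and recreate it empty.
LOG=/opt/IBM/activity.log
rm -f "$LOG.4"
for i in 3 2 1; do
    [ -f "$LOG.$i" ] && mv "$LOG.$i" "$LOG.$((i+1))"
done
[ -f "$LOG" ] && mv "$LOG" "$LOG.1"
: > "$LOG"
logrotate is the standard tool for exactly this, if dropping in a config file is an option.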
There are many time zone files accessible from the command line that don't show up in the GUI ("system-config-time"). How do I add these time zones to the GUI?
What is the best and simplest way to compare two directory structures without actually comparing the data in the files? This works fine:
Code:
diff -qr dir1 dir2
But it's really slow because it's comparing the files too. Is there a switch for diff, or another simple CLI tool, to do this?
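A minimal sketch that compares only the tree layout:
Code:
# Sketch: diff two sorted path listings; the files themselves are never
# opened, so this stays fast even on large trees.
diff <(cd dir1 && find . | sort) <(cd dir2 && find . | sort)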
I wonder about the capability of awk to manipulate data in consecutive multiple files by reading one batch file. For example, I have the files data1.dat, data2.dat, data3.dat and listfile.txt.
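A minimal sketch of the mechanics, assuming listfile.txt names one file per line (the per-file action shown is just a placeholder):
Code:
# Sketch: hand awk every file listed in listfile.txt in one run; FNR
# resets at each new file, so FNR==1 is a per-file hook. Here it simply
# prints each file's name and its first field.
xargs awk 'FNR == 1 { print FILENAME ":", $1 }' < listfile.txt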
I copied a backup of my Windows 'My Documents' folder and all of its subfolders into my Linux (Mint Debian) Documents directory. I found that many of my files can be found in more than one directory, so what I want to do is find all the dups and deal with them. Is there a good Linux application to resolve this 'duplicates' problem? (I don't want to touch the Linux system files.)
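One widely packaged option, as a sketch (fdupes compares file contents, not just names, and only looks where you point it):
Code:
# Sketch: recurse through Documents and list sets of identical files;
# -r recurses and -S shows the size of each duplicate set.
fdupes -rS ~/Documents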