CentOS 5 Server :: Crontab Creating Blank Files In Root Directory?
Jul 10, 2009
I've just discovered that crontab is creating a new file in the root directory every time it executes a cron job, and it doesn't overwrite the old file, so there are now thousands of files in the root directory. They have the same name as the script file (appended with a numeral) but are all blank. Here is what one of the cron jobs looks like: [URL]
The script works fine - it runs every 5 minutes and checks whether certain environment variables are set in place - if they are, it prints some output. If they're not, it dies.
The problem is that when it dies, it still sends me a blank email - one every 5 minutes, every time the cron job runs. Is there a way of telling it not to send the email if the output is blank?
I'm assuming some type of dirty awk script could do this, but I don't really know my shell well enough to do that.
I'm not good at putting these things into words, so let me write some pseudo code to explain what I mean:
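Something along these lines, as a minimal sketch (the wrapper name and script path are placeholders, not from the post): cron only sends mail when a job produces output, so capturing the script's output and printing it only when it is non-empty should stop the blank emails.
Code:
#!/bin/sh
# check_env_wrapper.sh -- hypothetical wrapper called from the crontab entry
# instead of the real script; /path/to/check_env.sh is a placeholder
OUTPUT=$(/path/to/check_env.sh 2>&1)
# only produce output (and therefore mail) when there is something to report
if [ -n "$OUTPUT" ]; then
    printf '%s\n' "$OUTPUT"
fi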
When I set up crontab as a local user (settings shown below), it can pop up an xmessage window:
* * * * * DISPLAY=:0.0 /usr/bin/xmessage -center "warning message here"
But when I switch to root and set up the same thing in crontab, why can't it pop up an xmessage window? Is there any special limitation on root using crontab?
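For what it's worth, a sketch of what root's entry usually needs (the user name and the .Xauthority path are assumptions): root normally has no X authority cookie for the local user's display, so pointing XAUTHORITY at the session owner's cookie file is the common workaround.
Code:
# hypothetical root crontab entry; "user" and the cookie path are assumptions
* * * * * DISPLAY=:0.0 XAUTHORITY=/home/user/.Xauthority /usr/bin/xmessage -center "warning message here"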
I'm using openSUSE 11.4 (x86_64) with KDE 4.6.00 (4.6.0) "release 6". Every time I visit a directory, a hidden file ".directory" is created. How do I disable that? Is there a possibility to disable that behaviour only for the public_html directory?
I have a question about using crontab with /etc/crontab...
I had a cron job that I needed to run as root. At the time I thought that sticking it in /etc/crontab would be a good idea. However, I used the crontab command to edit /etc/crontab, which I guess is not standard procedure? Specifically, I configured /etc/crontab as my local user's crontab (i.e. sudo crontab /etc/crontab) then added my cron job as I would a local user crontab (i.e. sudo crontab -e).
Originally, my cron job looked like this:
30 * * * * root /my/batch/script &> /dev/null
After adding the new cron job I started seeing errors. Something to the effect of "can't find command root" or something similar. So I removed the 'root' user definition from the cron job and the job started running fine. However, because this is /etc/crontab, there are other system related cron jobs that have been defined to run under the root account (e.g. "17 * * * * root cd / && run-parts --report /etc/cron.hourly" runs as root, etc.). So these pre-existing system cron jobs, which up until now have been running smoothly, are now generating "can't find command root" errors. But I think that the system cron jobs _are_ successfully being run someplace because logrotate seems to be working.
So what I _think_ is happening is that /etc/crontab is being run twice: once as the system crontab, and once as my sudoed local user's crontab. When I run crontab -l I see nothing, but when I run sudo crontab -l I can see the contents of /etc/crontab. I am reluctant to delete my sudoed local user's crontab, because then in the process I would be deleting the system crontab, and I do not know how I should restore the system crontab's contents. (I am still not sure as to the most appropriate way to edit the system crontab).
How can I get out of this mess? I want /etc/crontab to go back to the way it was before--running _once_ as the system crontab. As for my new cron job, I'm willing to reconfigure it anywhere so long as I am still able to run it as root. Any ideas? (I am using Ubuntu 8.04 Server LTS)
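For reference, a sketch of the two formats involved and of clearing the accidental copy (this assumes, as described above, that "sudo crontab /etc/crontab" loaded a copy into root's per-user spool; removing that copy does not touch /etc/crontab itself):
Code:
# /etc/crontab (system crontab) -- has a sixth "user" field:
30 * * * *  root  /my/batch/script &> /dev/null

# per-user crontab edited with "sudo crontab -e" -- no user field;
# everything in root's own crontab already runs as root:
30 * * * *  /my/batch/script &> /dev/null

# remove root's per-user crontab (the spool copy only; /etc/crontab is left alone)
sudo crontab -r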
Actually, I am keeping a watch on one directory into which files are continuously arriving. Is there any command which can give a listing of all the files that have arrived in the last 24 hrs?
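A minimal sketch with find (the directory path is a placeholder):
Code:
# list regular files modified within the last 24 hours
find /path/to/watched -type f -mtime -1 -ls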
I'm setting up a new server and have edited the crontab to run a script, but nothing is happening. Is there anything I need to set up to get the crontab working?
I can cause the kernel to panic immediately with the following command:
lvcreate --snapshot --name Snap --extents 100%FREE VolGroup00/LogVol00
The last line of the panic message is "<0>Kernel panic - not syncing: Fatal exception". If I create a snapshot of any other volume it works just fine. It only panics on LogVol00, which is my root fs.
I'm running 5.4 after update from 5.3. It didn't work with 5.3 either. This is a 32-bit guest running in VMWare Server 2.0.1 which is running on FC10 x86_64. I've tried the guest in both UP and SMP (2 cores) modes and observed no difference.
I'd like to know how to set up a fake ROOT DNS server that I would be using inside my virtual test environment with a private address range. Basically, I want to create a ROOT server that will only contain information about two separate DNS servers that are authoritative for their respective domains/zones.
I can't seem to find anything on the net about this subject. Basically, how is a ROOT server configured, and how should the fake hint file for the authoritative DNS servers be configured?
Basically, I want domain.xx and domain.yy to be able to "find each other" by using responses from the ROOT server. I know I could set up forwarders to the respective domain on each of the virtual DNS servers instead, but I want to experiment a bit the way I stated above, with a "real" (fake) ROOT server, if it is possible!
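A sketch of the idea, assuming BIND is the name server (all host names, file names and 10.x addresses below are made up for illustration): the fake root masters the "." zone and holds only the delegations for the two test domains, while every other server replaces its root hints with a file that points at the fake root.
Code:
// named.conf on the fake root server: serve the "." zone as master
zone "." {
    type master;
    file "db.fakeroot";
};

; db.fakeroot -- the fake root zone, delegations only
$TTL 86400
.               IN SOA  fakeroot.test. admin.fakeroot.test. ( 1 3600 900 604800 86400 )
.               IN NS   fakeroot.test.
fakeroot.test.  IN A    10.0.0.1
domain.xx.      IN NS   ns1.domain.xx.
ns1.domain.xx.  IN A    10.0.0.2
domain.yy.      IN NS   ns1.domain.yy.
ns1.domain.yy.  IN A    10.0.0.3

; fake.root.hints on the other two servers, declared in their named.conf as
; zone "." { type hint; file "fake.root.hints"; };
.               3600000 IN NS   fakeroot.test.
fakeroot.test.  3600000 IN A    10.0.0.1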
I'm quite new to Linux, but I've managed to grasp some basics. My intention here is to create a virtual directory, so I resorted to creating an image file that I can mount so that the folder has dedicated storage. I will mount this image as a loop device. It's not much of a problem, but I would like to know whether this is suitable. Say I want to create a 25GB image.
Is this recommended? I'm using a block size of 1G, which is really huge, so I was wondering if this is actually recommended. From what I read, some said it's only advisable to use 4096k or lower, but what I found was that those suggestions are very dated (from 2003), and it is now 2010, so I would like to know if it makes any big difference.
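A sketch of one common way to do this (file and mount point names are placeholders): with dd, bs= mainly affects how fast the image is written, not the resulting image, so the block size choice is not critical.
Code:
# create a 25 GB image file filled with zeros (bs=1M, count=25600)
dd if=/dev/zero of=/srv/storage.img bs=1M count=25600
# put a filesystem on it (-F because the target is a regular file, not a block device)
mkfs.ext3 -F /srv/storage.img
# mount it as a loop device
mkdir -p /mnt/storage
mount -o loop /srv/storage.img /mnt/storage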
I've been using Ubuntu for over 5 years. This time I decided to upgrade UNR to the latest 10.10. I am now running it from USB to try it before installing. Excuse my ignorance, but whatever happened to the Terminal? I cannot find it anywhere! I think this release is not going in the right direction if one of the most important tools in Ubuntu is hidden from an average user.
Also, how do I change to the root directory in Files and Folders, or at least move to a higher level of the directory structure? I won't be installing UNR 10.10 unless I figure out these BASIC things.
I am running WHM and cPanel on CentOS. I would like to upload a file to the root user's directory. To be honest, my only experience uploading and downloading files with FTP has been with domain-related accounts that were set up under WHM to be managed under cPanel. This is quite simple, because all you do is set FileZilla or Dreamweaver up with the FTP address of the domain account and the username and password. How can I do something similar to FTP a file into the root or home directory?
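A sketch using SFTP over the existing SSH service (this assumes root logins over SSH are permitted, which many setups disable; the host and file names are placeholders):
Code:
sftp root@server.example.com
sftp> put localfile.tar.gz /root/
sftp> quit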
I'm using OpenSSH 5.5p1 on Fedora 15. I'm trying to get a ChrootDirectory to work. Specifically, I'm trying to figure out why I can't write files to a sub-directory of the chroot directory. I created a user test_user and created a group called sftp. I added test_user to the sftp group. I edited /etc/ssh/sshd_config as follows:
Code:
Subsystem sftp internal-sftp
Match group sftp
    ChrootDirectory /home/sftp_users/%u
    X11Forwarding no
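A sketch of the ownership layout sshd generally insists on (the "upload" subdirectory name is an assumption): the chroot directory itself must be owned by root and not writable by the group or others, so writes have to go into a user-owned subdirectory inside it.
Code:
# the chroot target must be root-owned and not writable by anyone else
chown root:root /home/sftp_users/test_user
chmod 755 /home/sftp_users/test_user
# give the user a writable subdirectory inside the chroot ("upload" is an assumed name)
mkdir -p /home/sftp_users/test_user/upload
chown test_user:sftp /home/sftp_users/test_user/upload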
I am using 64-bit Red Hat Linux. I am trying to set up a simple crontab as follows:
1. Edited the crontab file using crontab -e.
2. Listed the file once to verify it using crontab -l. This displays: 18 5 * * 2-3 ksh $HOME/testScript.sh > $HOME/testscript.out
3. Logged in as root and restarted the cron daemon using "/etc/init.d/crond restart".
As per my understanding, my testScript should now start running at 5:18 am on Tuesday.
I got a task assigned to me: I have to create new SSL key, CSR and CRT files using openssl. But the file name must be of this kind (*.aaa.xx.aa). When I tried a file name starting with *, it's not accepting the file name. But when I tried a file name starting with ., it gets generated.
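A sketch of the usual workaround (the key size is an assumption): the leading * is being expanded by the shell as a glob, so quoting the file names makes the shell pass them through literally.
Code:
# quote the names so the shell does not treat the leading * as a wildcard
openssl genrsa -out '*.aaa.xx.aa.key' 2048
openssl req -new -key '*.aaa.xx.aa.key' -out '*.aaa.xx.aa.csr'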
I am looking for a Windows Search equivalent that looks for file name patterns (not file contents, but file names)...
I am aware of "globbing" and the wildcard recursive search functionality in ls, but I am still not able to find files under directories.
For example: I want to find all files starting with the string lsnr* under the root directory / and any sub-directories.
i.e. I want to look for files like lsnr*.* anywhere under / and any sub-directories under /, such as /dir1/dir2/dir4 and dir1/other/dir/someotherdir/sub-dir, etc.
So if I have /dir1/lsnrcontrol and also /dir1/dir/2/dir3/lsnr-tinit.dat, then I want to list those file names, etc.
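A minimal sketch with find (2>/dev/null just hides permission-denied noise when searching from /):
Code:
# search the whole tree under / for file names beginning with "lsnr"
find / -type f -name 'lsnr*' 2>/dev/null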
I have installed CentOS 5.2 on a couple of machines. They work fine, but the screen goes black (blank) after several minutes. It's a screensaver of some sort. I'd like to disable it. I searched the CentOS forums, searched Google, searched Yahoo, and I can't find any way to disable this blank screensaver.
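A sketch of the two usual sources of this blanking (both commands are standard; which one applies depends on whether the machines are sitting at a text console or in an X session):
Code:
# text console: turn off console blanking and power-saving
setterm -blank 0 -powersave off -powerdown 0
# X session: disable the X screensaver and DPMS
xset s off
xset -dpms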
I have created an FTP user in CentOS 5, but it has permission to delete files in other locations, view the entire directory tree and create folders anywhere. How do I deny these permissions for this particular user, and give it permission only for a specific location assigned by root?
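A sketch of the common fix, assuming the FTP daemon is vsftpd (the post does not say which server is in use): chrooting local users confines each of them to their own home directory; to confine only selected users instead, vsftpd's chroot_list_enable/chroot_list_file options can be used.
Code:
# /etc/vsftpd/vsftpd.conf  (vsftpd is an assumption)
# confine every local user to their own home directory
chroot_local_user=YES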
The filesystem is (or was) 500GB ext3. We had a small electrical power failure yesterday; the server did not stop, but the disk array (SCSI RAID 5 disk system) restarted. This morning the filesystem was not available (I/O error), so I rebooted the front end. The fsck failed with the message: root inode is not a directory. There are nearly 400GB of data on this filesystem. Any idea how to solve the problem? Google always points to commercial software or Windows software...
I installed PHP 5.3 from the remi repository and now some PHP pages come up blank. Joomla pages load with no problem, but iDevAffiliate pages end up as blank pages. Could it be something in the php.ini file? I have no idea where to look. Any ideas?
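A sketch of a first debugging step (these are standard php.ini directives; leaving them on in production is not advisable): a blank page is very often a fatal error with display_errors switched off, so enabling error output, or checking the web server's error log, usually reveals the cause.
Code:
; php.ini -- temporary settings while debugging
display_errors = On
error_reporting = E_ALL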
I am setting up my cron job. I am in the root account. So first I type vi crontab -e. Then it asks me to type "visual" for normal mode, and I do that. Then I type the following, as below: 1 * * * * root usr/local/testClient/runClient.sh > /usr/local/testClient/cron1.log, then press Esc and type wq. Then I restart the cron service with /etc/init.d/crond stop and /etc/init.d/crond start. Lastly, when I type crontab -l it tells me there is no crontab for root.
I had installed PHP using yum install php. I am trying to use the pdf_new function to create PDFs from existing text files, but I get this error: PHP Fatal error: Call to undefined function pdf_new(). I have noticed that when I run the phpinfo() command, I cannot find the PDF section at all. My php.ini file does not even have these two lines: extension=php_pdf.dll and extension=php_cpdf.dll. What should I use if I need to build PDFs using PHP on CentOS 5?
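A sketch of a quick check (php -m is a standard CLI option; note that .dll extension lines are Windows-style and would not apply on CentOS anyway): pdf_new() is provided by the PDFlib extension, so listing the modules PHP actually has loaded shows whether anything PDF-related is present.
Code:
# list the PHP modules actually loaded; look for a pdf entry
php -m | grep -i pdf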
I am new to linux, running a brand new centos 5.2 server. One application I want to use it for is to serve as a network host for a game my friends and I enjoy. Normally, to run the game in host mode you call the binary and pass it a port number (along with other options). To host a second instance of the game, same thing different port, you get the idea.
After doing that the binary runs in your window and dumps to stdout, so if you want it to run 24/7 you have to come up with your own strategy like nohup. Fair enough, now, I'm trying to coax the game into restarting automatically upon reboot. The most correct way to do this seemed to be to write a script for init.d so that's the road I traveled down. Now, to strain the metaphor, the pavement has ended and I'm stuck in the sand.
Here begin my questions: I've been following the structure of other init.d scripts and I notice they all seem to call the function daemon() (contained in /etc/init.d/functions) to start their services. Looking at the structure of daemon() I see that you can pass it a user and a pidfile. The user part seems to work fine, but no pidfile is created. Let me be more specific.
Like the other scripts, I explicitly touch /var/lock/subsys/game-port on startup, which works fine. However, all of those other services seem to have a pidfile in /var/run and mine doesn't. They don't create it explicitly in their init.d script therefore I assume that some other process is creating the pidfile. At first I thought it would be the call to daemon(), since you have the option of passing it --pidfile, but that doesn't seem to work.
Are the services themselves creating the pidfile? If that's the case then I have more complications because the game binary apparently doesn't do this. Second question but probably related to the first. None of the other init.d scripts I looked at seem to do anything special to detach their services from a particular terminal session, therefore I didn't think that I would need to either.
Again I thought this was something the call to daemon() might accomplish, but either I'm wrong or I'm doing something wrong. I can probably work around this with nohup or appending '&' or something, but I'm just curious that other services like crond, sshd, named, etc., don't seem to do this. Are they determining this behavior from within the binaries themselves? I hope this is clear, as I said I'm new so I may not be getting all of the terminology correct.
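A sketch of the pattern many CentOS init scripts fall back on, with assumed names throughout (the gameserver binary path, gameuser, the port): daemon() from /etc/init.d/functions neither backgrounds a program nor writes a pidfile for it, so a service that does not daemonize itself is usually backgrounded explicitly and its PID recorded by the script.
Code:
#!/bin/bash
# /etc/init.d/game-port -- minimal sketch; binary path, user and port are assumptions
. /etc/init.d/functions

prog=game-port
pidfile=/var/run/${prog}.pid

start() {
    echo -n "Starting $prog: "
    # background the server ourselves and record its PID, since the binary
    # does not daemonize or write a pidfile on its own
    daemon --user gameuser "nohup /usr/local/bin/gameserver --port 27015 \
        >/var/log/${prog}.log 2>&1 & echo \$! > $pidfile"
    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
    echo
    return $RETVAL
}

stop() {
    echo -n "Stopping $prog: "
    killproc -p $pidfile $prog
    RETVAL=$?
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
    echo
    return $RETVAL
}

case "$1" in
    start) start ;;
    stop)  stop ;;
    *)     echo "Usage: $prog {start|stop}"; exit 1 ;;
esac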
Wondering if the internal network would benefit from connecting to a local DNS server rather than my ISP's DNS server. Can I create this local DNS server without having an external domain pointing to my server?
All I want is faster lookups of known hostnames, both internally (hostnames) and externally (cnn.com etc.).
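A sketch of a caching-only setup, assuming BIND (named) on the internal box; the forwarder addresses are placeholders for the ISP's resolvers. No external domain is needed, because the server only recurses/forwards and caches answers for the LAN.
Code:
// named.conf fragment -- caching-only, no zones of its own required
options {
    recursion yes;
    allow-query { localhost; localnets; };
    // forward queries to the ISP resolvers and cache the answers locally
    forwarders { 203.0.113.1; 203.0.113.2; };
    forward first;
};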