I am running Ubuntu 10.04 as a web/file server. I have set up several cron jobs in the past, which until recently were executing normally. However, a few days ago, cron jobs stopped executing after a restart. I was able to fix the problem by deleting and re-entering the cron jobs, but only until the next restart. ps -ef lists cron.
Summary:
1.) Cron was working -- jobs executed without error and restarts worked fine.
2.) Cron jobs stopped executing after a restart about a week ago -- ps -e still lists cron.
3.) Removing all cron jobs and re-entering them by copying and pasting fixes the problem until the next restart.
4.) The cron job syntax is known to be correct, since the jobs were executing before this problem arose.
I have an entry in a crontab for my user (appadmin) that does not start with the proper PATH when it executes. It needs to run as the appadmin user, since appadmin owns all the directories for glassfish. However, once glassfish restarts, the hudson application cannot find the default JDK and I get an error. If I initiate the restart via the command line, everything works as it should. I believe it has something to do with PATH in the crontab, but I am not sure what I need to set PATH to in the crontab.
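Something along these lines may illustrate the idea -- a minimal sketch of appadmin's crontab with PATH and JAVA_HOME set explicitly. The JDK path, the glassfish path, and the asadmin command are assumptions for illustration; substitute whatever you normally run by hand:
Code:
# appadmin's crontab (crontab -e as appadmin) -- paths below are guesses, adjust to your install
JAVA_HOME=/usr/lib/jvm/java-6-sun
PATH=/usr/lib/jvm/java-6-sun/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# restart glassfish nightly at 02:00
0 2 * * * /opt/glassfish/bin/asadmin restart-domain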
I am trying to get a process (the SARG access log report tool) to run every 10 minutes, but I cannot get it to work. Here is the content of the crontab file:
Code:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
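For reference, an every-10-minutes entry in that file (which uses the system-crontab format with a user field) would look something like the sketch below; the path to the sarg binary is an assumption, so check it with `which sarg` first:
Code:
# run sarg every 10 minutes as root -- /usr/bin/sarg is a guess, adjust to your install
*/10 * * * * root /usr/bin/sarg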
I have a question about using crontab with /etc/crontab...
I had a cron job that I needed to run as root. At the time I thought that sticking it in /etc/crontab would be a good idea. However, I used the crontab command to edit /etc/crontab, which I guess is not standard procedure? Specifically, I configured /etc/crontab as my local user's crontab (i.e. sudo crontab /etc/crontab) then added my cron job as I would a local user crontab (i.e. sudo crontab -e).
Originally, my cron job looked like this:
30 * * * * root /my/batch/script &> /dev/null
After adding the new cron job I started seeing errors. Something to the effect of "can't find command root" or something similar. So I removed the 'root' user definition from the cron job and the job started running fine. However, because this is /etc/crontab, there are other system related cron jobs that have been defined to run under the root account (e.g. "17 * * * * root cd / && run-parts --report /etc/cron.hourly" runs as root, etc.). So these pre-existing system cron jobs, which up until now have been running smoothly, are now generating "can't find command root" errors. But I think that the system cron jobs _are_ successfully being run someplace because logrotate seems to be working.
So what I _think_ is happening is that /etc/crontab is being run twice: once as the system crontab, and once as my sudoed local user's crontab. When I run crontab -l I see nothing, but when I run sudo crontab -l I can see the contents of /etc/crontab. I am reluctant to delete my sudoed local user's crontab, because then in the process I would be deleting the system crontab, and I do not know how I should restore the system crontab's contents. (I am still not sure as to the most appropriate way to edit the system crontab).
How can I get out of this mess? I want /etc/crontab to go back to the way it was before--running _once_ as the system crontab. As for my new cron job, I'm willing to reconfigure it anywhere, so long as I am still able to run it as root. Any ideas? (I am using Ubuntu 8.04 Server LTS)
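For what it's worth, the only difference between the two formats is the extra user field, so a sketch of the two places the same job could live looks like this (reusing the script path from above):
Code:
# /etc/crontab (system crontab) -- has a user field
30 * * * * root /my/batch/script &> /dev/null

# root's per-user crontab (edited with sudo crontab -e) -- no user field
30 * * * * /my/batch/script &> /dev/null
Removing the accidentally installed per-user copy with `sudo crontab -r` only deletes root's spool crontab (/var/spool/cron/crontabs/root), not /etc/crontab itself, so the system crontab should keep running as before.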
I have been having trouble setting up a daily backup script with cron. It basically never worked. I searched the net for answers but didn't find anything. I finally figured it out! When the root crontab is edited, the execute flag is removed from /var/spool/cron/crontabs/root. I changed it with chmod a+x /var/spool/cron/crontabs/root and all is good.
I am using 64-bit Red Hat Linux. I am trying to set up a simple crontab as follows:
1. Edited the crontab file using crontab -e.
2. Listed the file to verify it using crontab -l. It displays as: 18 5 * * 2-3 ksh $HOME/testScript.sh > $HOME/testscript.out
3. Logged in as root and restarted the cron daemon using "/etc/init.d/crond restart".
As per my understanding, my testScript should now start running at 5:18 am on Tuesday.
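For reference, the field order in a crontab entry is minute, hour, day of month, month, day of week, so the entry above reads roughly like this (just an annotated restatement, not a change):
Code:
# m  h  dom mon dow  command
  18 5  *   *   2-3  ksh $HOME/testScript.sh > $HOME/testscript.out
# i.e. at 05:18 on Tuesday and Wednesday (dow 2-3), every day of month, every month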
Either there is no error and it doesn't run (the scripts all run fine manually and in the cron.daily, cron.hourly, etc. directories), or, if it does try to run, it shows this: Feb 16 12:59:01 buddha crond[3638]: (*system*) BAD FILE MODE (/etc/crontab). Here's my crontab:
In RHEL6, in root's account I have this crontab job: 30 6 18 4 1 /sbin/init 6. It worked fine on the 18th of April and properly restarted my system, BUT it also restarted my OS at 6:30 on the next Monday, the 25th of April.
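This matches the documented cron behavior: when both the day-of-month and day-of-week fields are restricted (neither is *), the job runs when either one matches, so 30 6 18 4 1 fires on April 18th and also on every Monday in April. A sketch of an entry restricted to the 18th only:
Code:
# run only on April 18th at 06:30, regardless of weekday
30 6 18 4 * /sbin/init 6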
I want XAMPP to run and start every time I start the PC. I don't know the correct command to use in crontab. When I use it, do I have to delete the "# m h dom mon dow command" line or not?
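That header line is just a comment (lines starting with # are ignored by cron), so it can stay. A minimal sketch of a root crontab entry that starts XAMPP at boot, assuming the default XAMPP-for-Linux install path /opt/lampp:
Code:
# start XAMPP at every boot -- /opt/lampp/lampp is the usual XAMPP path, adjust if yours differs
@reboot /opt/lampp/lampp start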
Cron seems to be running the script below (according to /var/log/syslog), but I'm not receiving the email it should send. It does work when I invoke it manually. checkraid (-rwxr-xr-x 1 root root)
Code:
#!/bin/bash
echo `/sbin/dmraid -s > /etc/check-raid/check_state`
index=0
while read line; do
I'm new to Ubuntu; I've only had it for about a month, and I've had no problem putting audio or blank CDs in. Suddenly, the icon stopped showing up and the Banshee music player won't show that there is a CD inserted. I've looked everywhere for a way to solve this with no luck, so that's why I'm asking now.
I'm running Ubuntu 10.10 and Firefox crashes randomly. It doesn't appear to be connected to Flash or anything in particular. I attached a screenshot of the crash report, and here is what it said when I clicked the details button:
I have written a simple backup script and added it to crontab, but it doesn't execute at all. Here is my script: [URL]... And my crontab entry: 0 */2 * * * root /home/server/Scripts/backup.sh
How do I set crontab not to send a mail notification to the script's owner when the script runs successfully? I'm monitoring a mail server, and notifications from cron are not necessary for me. I'm using Ubuntu 10.04 Server.
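Two common ways to silence it, as a sketch: setting MAILTO="" suppresses cron mail for every job in that crontab, while redirecting stdout and stderr silences just one job. The script path below is a placeholder:
Code:
# option 1: disable cron mail for every job in this crontab
MAILTO=""

# option 2: silence a single job by discarding its output
*/5 * * * * /path/to/monitor-script.sh > /dev/null 2>&1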
Is it possible to disable a crontab job without deleting the crontab entry (via crontab -e)? I could also accept changing the entry itself. Right now it's: 0 0 * * 0-6 /home/me/cron/script.csh
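One simple approach is to comment the line out, since cron ignores lines beginning with #, and remove the # again when you want the job back:
Code:
# disabled for now; remove the leading # to re-enable
#0 0 * * 0-6 /home/me/cron/script.csh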
I'm on CentOS 5.2 with PHP 5.2.10. I compiled PHP from source with ./configure --with-config-file-scan-dir=/etc/php.d, but PHP doesn't load the modules. phpinfo() says:
Scan this dir for additional .ini files: (none)
Additional .ini files parsed: (none)
The paths are set correctly and the user has permission to read the directory; just to be sure, I changed the permissions to 777, but nothing changed.
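As a quick check (a sketch, assuming the PHP 5.2 CLI was built from the same source tree), the command-line binary can report which scan directory and .ini files it actually uses, which helps confirm whether the configure option made it into the build you are running:
Code:
# show the loaded php.ini, the scan dir, and every additional .ini file the CLI parses
php --ini

# or grep the same information out of the full info dump
php -i | grep -i "additional .ini"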
I'm using Ubuntu Server 10.10. The system randomly restarts each day. Here are the logs: http://paste2.org/p/1499309 Everything seems normal except that "UDP: short packet" thing, but even Google didn't help me find out what it is. By the way, I think the reboots started after I configured this PC as a gateway between the internet and my local network.
As root, I use crontab to run mirrordir to backup directories. Everything gets copied over properly, but owner information isn't preserved and root is the owner of all the backed up files. I can deal with that, but crontab reports tons and tons of chown/chgrp errors for mirrordir every time I do back ups--which is every day--and the multiple emails to root of thousands of chown/chgrp errors is very annoying. The error is "Operation not permitted," but that doesn't make sense to me because the job runs as root (right?) and clearly the job is permitted to create the backup files, so why would it fail to chown and chgrp?
I've had the exact same setup on another server for years, and crontab has always run mirrordir without error. Any suggestions how to clear the errors on my new server?
When I SSH to another machine and try to execute a GUI app, it works. But when I try to execute an app as root on the remote PC, it doesn't, and it says:
Code:
X11 connection rejected because of wrong authentication.
some-GUI-app]: Cannot open display:
How do I make it work?
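The usual cause is that root on the remote box doesn't have the X authority cookie belonging to the user who opened the SSH session. A sketch of merging the cookie over -- the username, home directory, and display number are placeholders; use whatever `echo $DISPLAY` shows in your session:
Code:
# as the user who logged in with ssh -X (or ssh -Y)
echo $DISPLAY              # e.g. localhost:10.0
xauth list                 # shows the magic cookie(s) for that display

# as root on the same remote machine: merge the user's cookie, reuse the same display
xauth merge /home/youruser/.Xauthority
export DISPLAY=localhost:10.0
some-GUI-app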
Seemingly on a whim, vsftpd stops working randomly, and any attempted connection times out while waiting for the welcome message. It takes a couple of reboots and a couple of "sudo service vsftpd restart"s before it eventually starts working again. AFAIK everything is at the most current version. It just started doing this a few weeks ago. Here is my config file:
The NFS share is up and set to r/w. This is Ubuntu 10.10, NFSv4 I believe. The exports file looks like this:
/data/feeder 10.10.10.1/255.255.255.0(rw,async,no_root_squash,no_subtree_check)
On the client, /etc/fstab looks like this:
10.10.10.1:/data/feeder /opt/feeder nfs rw 0 0
When I ls -la in /opt/feeder, I see all the ownership shown as 4294967294:4294967294 instead of root:root.
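4294967294 is -2, the "nobody" fallback, which on NFSv4 usually means the ID-mapping daemon isn't translating names -- most often because the Domain setting differs between server and client. A sketch of the relevant part of /etc/idmapd.conf; the domain name is just an example, but it must be identical on both ends:
Code:
[General]
Domain = example.lan

# after editing on both machines, restart the mapping daemon / NFS services,
# e.g. on Ubuntu 10.10 something along the lines of:
#   sudo service idmapd restart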
Running a LAMP server with CentOS as the OS. The site has always been slow, but now that I've optimized it with the MySQL cache, gzip compression, and some other things, it's really fast. Except that pages sometimes seem to randomly 'time out' while loading. The browser sits on 'waiting for x.com'. Closing the browser and/or the tab and opening a new one fixes it, but then it'll happen again eventually. Clicking further links while it's 'waiting for x.com' does nothing; basically the site becomes unusable until you close the tab and reopen it.
This happens on all 3 virtual hosts we're running within Apache, mainly noticeable on the phpBB forums, probably because they are visited the most. It's not a slow MySQL query; I turned on slow query logging over 2 seconds, and the only two hits I got are, as far as I know, unrelated. I've turned off some optimizations thinking they might be the cause, but no dice.
I have a Dell PowerEdge 2850 running Ubuntu 10.04 Server with SSH and Samba. My problem is that I am unable to execute my adduser.sh script, which reads from a text file and adds users to the box and to Samba. I have run chmod a+x to make it executable and placed it in /usr/local/bin.
When I run sudo adduser.sh I get "sudo: unable to execute /usr/local/bin/adduser.sh: No such file or directory".
When I run adduser.sh I get "-bash: /usr/local/bin/adduser.sh: /bin/bash^M: bad interpreter: No such file or directory". I have been using Kubuntu as my home workstation for some time now, and I have managed Windows servers, but since I was given the freedom to set up this server I chose Linux.
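The ^M in the error suggests the script was saved with Windows-style CRLF line endings, so the kernel is looking for an interpreter literally named "/bin/bash" followed by a carriage return. A sketch of stripping the carriage returns; either command should do, and dos2unix may need to be installed first:
Code:
# strip carriage returns in place
sed -i 's/\r$//' /usr/local/bin/adduser.sh
# or, if the dos2unix package is installed:
dos2unix /usr/local/bin/adduser.sh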
how to bind a script to a F key (F12) that will run as root even when not logged in. I have a headless server on client premises where it'd be easier for them to press F12 to run this script that will be rarely needed than to give them SSH instructions etc. I know this must be do-able, but I can't get my Google-fu on for this question. The only way that I can possibly think of doing it is to touch a file whenever that key is pressed and have the script idly checking for that file every few seconds in a loop.
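For what it's worth, the polling idea from the last sentence could look roughly like the sketch below, started as root from a boot script; the flag file path and the payload script name are placeholders:
Code:
#!/bin/bash
# watcher.sh -- run as root at boot; polls for a flag file and runs the task when it appears
FLAG=/var/run/run-my-task          # placeholder path touched by the F12 binding
while true; do
    if [ -e "$FLAG" ]; then
        rm -f "$FLAG"
        /usr/local/bin/my-task.sh  # placeholder for the real script
    fi
    sleep 5
done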