General :: Setup A Cronjob It Only Runs If Logged In?
May 6, 2010
I wrote this little script and it runs very well; however, when I set up a cronjob, it only runs if I am logged in. I would like to have this script run even when I am not logged in. I logged in as denis, became root in a terminal (su root), and edited the crontab as shown below. What do I need to change or set up to have this script run when no one is logged in to the computer? I run Fedora 12 (64-bit).
backup script:
cd /home/denis/Documents
tar -czvPf /home/denis/Backups/docsbackups_$(date +%Y%b%d_%HH%MM).tar.gz /home/denis/Documents
# the line above produces a file named in the format "docsbackups_date&time.tar.gz" in the /home/denis/Backups folder
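For reference, a job placed in the system-wide /etc/crontab (which, unlike per-user crontabs, has a sixth field naming the user to run as) is executed by crond whether or not anyone is logged in. A minimal sketch, assuming the script above is saved as /home/denis/bin/docsbackup.sh (a hypothetical path) and should run daily at 18:00:
# in /etc/crontab -- min hour dom month dow user command
0 18 * * * denis /home/denis/bin/docsbackup.sh >> /tmp/docsbackup.log 2>&1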
I'm trying to set up a cronjob that needs to run every 40 minutes between 10am and 3.30pm. I've generally used cron for simple schedules, but now I'm a bit lost. Is there a way to set up a cronjob for that schedule? (Better if it's all on one line, not multiple lines.)
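Cron has no step syntax that carries a 40-minute interval across an hour boundary (*/40 would fire at :00 and :40 of every hour), so a true one-liner isn't really possible. One common approach is to spell out the start times across two lines. Assuming the runs should start at 10:00 (/path/to/job.sh is a placeholder), these cover 10:00, 10:40, 11:20, 12:00, 12:40, 13:20, 14:00, 14:40, and 15:20:
0,40 10,12,14 * * * /path/to/job.sh
20 11,13,15 * * * /path/to/job.sh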
I know crontab -e sets a cronjob in /var/spool/cron, but how do I set a cronjob to run from /etc/crontab? Is there a command used for this, or would I have to manually edit a certain file?
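There is no dedicated command for /etc/crontab; you edit the file directly as root. Note its format differs from a per-user crontab in having an extra user field. A sketch with a hypothetical script:
# edit as root, e.g.: vi /etc/crontab
# min hour dom month dow user command
30 2 * * * root /usr/local/sbin/nightly_task.sh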
I'm working on a bash script that will be run regularly via a cronjob. As there is no easy way for the user to see that it's running in the background, I'd like to have some kind of pop-up window that says something like "Process started" and "Process finished" when it's complete. I've read through some similar threads and came across xmessage. It almost does what I want, with two exceptions: I'd like the "Process started" message to stay up for the whole process, but it looks like the script won't continue until the xmessage window is closed; and I can't get any xmessages to appear when the script is run from a cronjob. They only appear if I run the script manually from a terminal window, which is no good for me. If anyone has any suggestions that would be great, and just to add, it doesn't have to be xmessage; any text box would do. I'm working with Red Hat 4 and 5, if that makes a difference.
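One common workaround for both problems, assuming the X session is on display :0 and the cron job runs as the user who owns that session (otherwise XAUTHORITY/xhost need sorting out too): launch xmessage in the background so the script continues, then kill it when the work is done.
#!/bin/bash
# sketch only -- cron jobs have no DISPLAY, so point at the running X session
export DISPLAY=:0
xmessage -center "Process started" &
msg_pid=$!
# ... the long-running work goes here ...
kill "$msg_pid"
xmessage -center -timeout 10 "Process finished" &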
Currently we have to comment out an rsync task every now and then. Is there a way to have a PHP script on the server so we can simply browse to it to uncomment the rsync task, and another one to comment it out again?
The rsync job is the only job in the crontab, so even if it means deleting and re-creating the job, that wouldn't be a problem. I just want to avoid connecting via SSH and VPN every time I am asked to disable it.
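One way to avoid editing the crontab from PHP at all: leave the cron entry alone but point it at a wrapper that checks a flag file, which the PHP pages simply create or delete (e.g. with PHP's touch() and unlink()). A sketch, with /var/run/rsync.disabled as a hypothetical flag path and placeholder rsync arguments:
#!/bin/bash
# rsync_wrapper.sh -- skip the sync whenever the flag file exists
if [ -e /var/run/rsync.disabled ]; then
exit 0
fi
rsync -a /source/ remotehost:/dest/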
I am a PHP developer. I have always used XAMPP (on Windows 7) for development purposes. I am driven to turn to Linux now. I have installed Ubuntu 10.04 rev189 (Wubi) under Windows 7 recently. I have a PHP script which requires the following tasks to be performed:
1. Give "777" permissions to the following files and folders: _cache, media, images, configs.php, images/config_photo_preview.jpg
2. Set up cron jobs on your site like this: /usr/local/bin/php -q /your-site-root-dir/cron/index.php
I need to get an environment like XAMPP and perform the above-mentioned tasks.
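On Ubuntu, the stock LAMP packages (apache2, mysql-server, php5 at the time) play the XAMPP role, and the two tasks are just a chmod and a crontab entry. A sketch, assuming the site lives under a hypothetical /var/www/mysite, and noting that on Ubuntu the PHP CLI binary is usually /usr/bin/php rather than /usr/local/bin/php:
cd /var/www/mysite
chmod -R 777 _cache media images
chmod 777 configs.php images/config_photo_preview.jpg
# then open the crontab editor and add the line the app asks for:
crontab -e
# 0 * * * * /usr/bin/php -q /var/www/mysite/cron/index.php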
I wanted to ask you for help in setting up a cronjob that restarts a game server. So far I have made a script that kills the server's screen session and then restarts it.
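A sketch of that kind of restart script, where the session name "gameserver" and the paths are hypothetical placeholders:
#!/bin/bash
# kill the old detached screen session (ignore the error if it is already gone)
screen -S gameserver -X quit 2>/dev/null
cd /home/games/server || exit 1
# start a fresh detached session running the server binary
screen -dmS gameserver ./start.sh
A matching crontab entry, e.g. for a daily 5am restart, would be: 0 5 * * * /home/games/restart_server.sh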
I have a site where I need to run a couple of PHP scripts every day, so I was thinking of setting up a cronjob to do this. Unfortunately, I get a whole bunch of errors when trying to do so.
I can execute the PHP scripts via the command line without problems; one example would be
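The example command itself isn't shown here, but the usual culprits when a PHP script works on the command line and fails from cron are PATH and relative paths, since cron starts with a minimal environment. A hedged sketch of an entry with everything absolute and errors captured somewhere visible (paths are placeholders):
0 6 * * * /usr/bin/php /var/www/mysite/scripts/daily.php >> /tmp/daily_php.log 2>&1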
I am new to bash scripting (not programming in general).
I am writing a bash script that will run a Python script I have written.
I want to be able to do the following:
Pass parameters to the bash script via the cronjob (so I can have two cron jobs: one run with parameter 'foobar' and the other with 'foo'), and switch based on the parameter passed to the bash script (by switching, I mean an if/else based on the parameter passed to the bash script).
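A minimal sketch of that pattern; the script name, the Python script path, and the flags passed on are all hypothetical, and a case statement is used here though a plain if/else works the same way:
#!/bin/bash
# switch.sh -- pick behaviour based on the first argument
case "$1" in
foobar) /usr/bin/python /home/user/myscript.py --mode=full ;;
foo) /usr/bin/python /home/user/myscript.py --mode=quick ;;
*) echo "usage: $0 {foobar|foo}" >&2; exit 1 ;;
esac
The two cron jobs then just pass different arguments:
0 1 * * * /home/user/switch.sh foobar
0 13 * * * /home/user/switch.sh foo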
I would like to make a cronjob that makes a tar.gz of everything inside a directory, recursively. BUT there is a HUGE directory full of jpgs, and I don't want that one in the backup. Additional points if it can back up symbolic links.
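GNU tar's --exclude covers the jpg directory, and tar already stores symbolic links as links by default (it only follows them if you add -h). A sketch with hypothetical paths, run relative to the source directory so the exclude pattern matches cleanly:
cd /data
tar -czf /backups/docs_$(date +%F).tar.gz --exclude='./photos' .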
I am trying to import an .exp file into an Oracle database. I have written a script for this, and I am able to run it manually without any errors, but it is failing from the cronjob (even though I am using absolute paths everywhere). Following is the command I am trying to execute from the shell.
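Since the command itself isn't shown, only a general note: cron starts with almost no environment, so an Oracle import that works interactively typically fails until ORACLE_HOME and PATH are exported inside the script itself. A sketch where every path and credential is hypothetical:
#!/bin/bash
# cron provides a minimal environment, so set Oracle's variables explicitly
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
imp scott/tiger@ORCL file=/backups/data.exp log=/tmp/imp.log full=y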
Logging in to a server through PuTTY on the same network, when I executed the last command it showed the system IP with logged-in time and logged-out time. For my own system the output was like: root pts/1 xx.xx.xx day month date in-time out-time. Besides this I am also getting entries like: root :0 day month date time still logged in. It has been showing as still logged in for more than 3 days now.
After I set up some modules to be used, say pam_access.so, I want to know: is there a way for the administrator to check whether any invalid attempt happened and was blocked by that module?
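pam_access reports denials through syslog, so on a Red Hat style system they typically land in /var/log/secure (on Debian/Ubuntu, /var/log/auth.log). A quick check:
grep pam_access /var/log/secure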
I installed Ubuntu Hardy Heron about six months ago, and it has gradually slowed down; CPU utilization has crept up to the 50-60% range. The /dev directory contains a large number of similar files: tty0 through ttyzf, or ptya0 through ptyzf. I suspect that something is not configured right, but I don't know where to look.
I connect to the internet through a G4 wireless transceiver connected to the ethernet port of an IBM ThinkCentre 8187.
I want to run a cronjob every 15 minutes that checks a directory for files. If the directory contains more than ten files I want it to send an email to me.
All I have is this...
*/15 * * * * ls -l | wc -l | [filename] | mail -s "This is just a test" [email address]
I would rather not write a bash script. Is there an easier way to do this? I was looking into some commands like find and grep.
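Without a full script, a single crontab line can still run the test inline. A sketch, where /path/to/dir and the address are placeholders; note plain ls is used rather than ls -l, since ls -l adds a "total" line that inflates the count by one:
*/15 * * * * [ $(ls /path/to/dir | wc -l) -gt 10 ] && echo "More than 10 files in /path/to/dir" | mail -s "This is just a test" you@example.com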
I run a Linux file server for my office, and we use SFTP for remote partners to log in and download files. Is there a way to see if there are any active connections or logins, so I can know when it is safe to perform maintenance on the machine?
Since the machine is almost constantly serving large files, scheduled maintenance is often bumped off because someone is either uploading or downloading.
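Two quick checks, since SFTP sessions run over sshd: look for established connections on port 22, or for running sftp-server processes (the bracket in the grep pattern keeps grep from matching itself):
netstat -tn | grep ':22 ' | grep ESTABLISHED
ps aux | grep '[s]ftp-server'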
I am trying to log in to a Red Hat server via VNC. This used to work until I reloaded the box. Previously, once I had logged in directly to the box, I could then VNC to it remotely. The service is running, and netstat shows the ports are open and listening. I can SSH to the box, and I ran the usual commands to start the services. So my question is: do I need to have a local user logged in before I can VNC, and if so, how can I do that via the command line? If a local user does not need to be logged in, what am I missing? Other than VNC, which other services do I need to start?
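No local login is needed if the box runs the standalone vncserver service (as opposed to desktop-sharing tools like vino, which do require an active session). On Red Hat that service is configured in /etc/sysconfig/vncservers; a sketch for a hypothetical user:
# /etc/sysconfig/vncservers
VNCSERVERS="1:someuser"
VNCSERVERARGS[1]="-geometry 1024x768"
# then, as someuser, set a VNC password and start the service:
# vncpasswd
# service vncserver start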
The problem is that I need the PHP program to send the member an email confirmation which contains a confirmation link. Running it every minute may still make the member wait, so I would like to make it run every 20 or 30 seconds.
I don't want to put the code that sends the email on my sign-up page, as that's no good.
But I also don't want to put a 30-second sleep in my PHP script and run it in a loop; if it fails in the middle, it may wait a while before starting again.
What can be done to achieve my goal, and what's the best way?
Making the PHP script run as a daemon process? Is that possible and okay?
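Cron itself cannot schedule below one minute, but a common trick is two entries for the same script, one delayed by 30 seconds, which avoids both the daemon and the in-script loop; paths are hypothetical:
* * * * * /usr/bin/php -q /var/www/mysite/cron/send_confirmations.php
* * * * * sleep 30; /usr/bin/php -q /var/www/mysite/cron/send_confirmations.php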
Once I deleted a log file and recreated it. After I did so, the new log file was not being written to. I was told that this is because the file I deleted is still open and being used in memory. What is the memory area called (such as "user space") that the filesystem runs in?
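The filesystem code and its caches live in kernel space; the deleted file survives there because the kernel keeps its inode alive while some process still holds it open. lsof flags such files, so you can find the daemon that needs a restart or a reload:
# list open files that have been unlinked from the filesystem
lsof | grep '(deleted)'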
I am attempting to recover a deleted text file. I used dd to make an image of the sectors of the hard drive which contained the data. Since I am not getting good results with foremost, and I know all of the lines I'm looking for contain "Style", I want to grep the .img, but when I do, it runs out of memory. I have tried grep's -D option set to skip, and tried adding 3GB of swap to account for the 2.7G image. It still "exhausts" its memory, and it seems to happen really quickly now. This is the output of ulimit -a
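grep holds each whole "line" in memory, and a raw disk image is effectively one enormous line with almost no newlines, which is what exhausts it regardless of swap. A common workaround is to extract printable strings first, with their byte offsets, and grep those instead (disk.img is a placeholder name):
strings -t d disk.img | grep 'Style' > hits.txt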
I'm using CentOS 5.3. After booting up, where can I find the log file that records whether all services were successfully loaded or not? For example, when the computer boots you get a list of starting services, and each can be OK or FAILED. Is there a log file where this information is kept? I had a look in /var/log/ but I'm not sure which file contains the information that I need.
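On CentOS 5 the boot-time service messages (the OK/FAILED column) are captured in /var/log/boot.log, with the rest of the boot chatter going to /var/log/messages. A quick check for anything that failed:
grep -i failed /var/log/boot.log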