Ubuntu :: Script Fails In Cron But Runs Fine In Regular Shell?
Jul 11, 2011
I have an Ubuntu server running Couch Potato, Sick Beard and Sabnzbdplus. Everything "works" pretty well, in the sense that CP and SB push the NZBs to Sabnzbdplus, but Sab crashes regularly (I haven't found the cause or a solution for this, so any advice on that is welcome). To work around the crashes, I wrote a script that checks whether Sab is running and, if it isn't, starts it:
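A minimal watchdog along those lines can be written as a small function. This is only a sketch: the process name "sabnzbdplus" and the start command are assumptions, so adjust them to your install.

```shell
#!/bin/sh
# Watchdog sketch: start a daemon when no process by that name is running.
# The name "sabnzbdplus" and its start command are assumptions.
ensure_running() {
    name=$1; shift
    # pgrep -x matches the exact process name; if nothing matches,
    # run the remaining arguments as the start command.
    if ! pgrep -x "$name" > /dev/null 2>&1; then
        "$@"
    fi
}

# Cron can call this every few minutes, e.g.:
# ensure_running sabnzbdplus sabnzbdplus --daemon
```

Running it from cron every few minutes keeps the restart logic out of the daemon itself.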
I have a cron job that runs a shell script. But it only runs the first line of that shell script and not the rest of the file. I'm a little stumped as to why. If I run the shell script manually, it runs and executes every single line as it should. I think I must need some additional syntax to make this run correctly?
Here is the crontab ...
Code:
root@kchlinux:~/macs# crontab -l
# m h dom mon dow command
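A common cause of "only the first line runs" is a missing shebang or commands that rely on the interactive shell's PATH: cron runs jobs with /bin/sh and a minimal environment. A sketch of a cron-safe script header (the PATH value is just an example):

```shell
#!/bin/bash
# Cron-safe header sketch: declare the interpreter explicitly and set
# PATH yourself, since cron supplies only a minimal environment.
PATH=/usr/local/bin:/usr/bin:/bin
export PATH

# ...the rest of the script's commands follow, using absolute paths
# for anything not covered by PATH...
echo "script started"
```

Also check that the script is executable (chmod +x) and redirect the job's output in the crontab line (e.g. `>> /var/log/myjob.log 2>&1`) so any error message becomes visible.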
Ubuntu 9.10, Wine 1.1.39, ATI HD5750, Catalyst 10.2 (proprietary)
For whatever reason, Wine won't run any games (though I've only tried two), but runs MS Office and Notepad perfectly fine. When I click Borderlands in the Wine menu, the computer loads something for a while and then stops. When I try Civilization 4, the game starts loading and crashes immediately after showing "Init Engine" on the screen.
If I run them from the console, I get the same error for both:
Code:
wine Borderlands.exe
fixme:heap:HeapSetInformation 0x4296000 0 0x36ffda8 4
err:ole:CoGetClassObject class {9a5ea990-3034-4d6f-9128-01f3c61022bc} not registered
I was able to run sshfs successfully, but I am not able to access the mounted folder. For example, if I execute the following commands, I will get stuck at the last step "cd folder" forever.
username@username-laptop:~$ sshfs username@IP_address: folder
username@IP_address's password:
username@username-laptop:~$ cd folder
Similarly, if I try an "ls" in the parent folder, it also gets stuck.
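When an sshfs mount hangs like this, the usual recovery is to force-unmount the stale mount point and remount with keep-alive options. A sketch (the mount point and host mirror the post; the option values are examples):

```shell
# Unmount the stale mount point (lazy umount as a fallback):
fusermount -u folder || sudo umount -l folder

# Remount with automatic reconnects and periodic SSH keep-alives, so a
# dropped connection does not leave the mount point wedged:
sshfs -o reconnect -o ServerAliveInterval=15 username@IP_address: folder
```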
I just installed Arch (x86_64). I am trying to set up Xorg. I have not installed a DE or WM yet; I just want to make sure Xorg is 100% first.
I had to edit the xorg.conf myself to direct it to my Intel graphics driver, but now it seems to work perfectly.
Except after exiting X, my terminal reads:
Code:
xinit: connection to X server lost
waiting for X server to shut down
xterm: fatal IO error 11 (Resource temporarily unavailable) or KillClient on X server ":0"
XIO: fatal IO error 4 (Interrupted system call) on X server ":0"
after 1011 requests (1011 known processed) with 3 events remaining.
xterm: fatal IO error 104 (Connection reset by peer) or KillClient on X server ":0"

I'm pretty new to Linux and have no idea what this means! But I'm assuming it's not normal, because it says 'error'.
I have had Slack 12 on this computer for quite a while with never a problem. Slack 13.1 was installed a few weeks ago without a problem, and it seems to run forever as long as it is root. When I go to a user account it will crash 2 or 3 times a day. When it crashes it is a total lockup that only the off/on switch will reset. There is no particular time or action that causes it; it will just lock up while surfing, switching on the command line, or opening a new file. Any suggestions will be appreciated! I have googled and searched with no info found. I do have 13.1 on another computer that works fine; it has an nvidia video card and is 64-bit.
I am trying to migrate an XP laptop to FC 11. I downloaded the latest KDE-Live-x86 image and burned it successfully. When I boot with it, however, I get the splash screen and no further. Any option I choose, including the CD diagnostics and Memory Test, results in the following:

invalid compressed format (err=1) -- System Halted

I'm certainly no expert, but am not a Linux noob either, yet am at a total loss. Google has yielded many theories, but no solutions. It seems that this error is not exclusive to FC 11 either; older Fedoras, as well as non-RPM distros, seem to be intermittently affected as well. I have verified the MD5 of the iso, and have verified the CD against the iso. The disc gives the same error in another computer. I cannot access the on-disc diagnostics. XP runs fine.
I have a script that is basically a series of rsync commands, called bkup_all.sh. The script is located in the /root/ dir. From the command line (su'd as root), I can run the script like this:

/root/bkup_all.sh > /var/log/bkup/bkup_$(date +%Y%m%d).log

This executes perfectly, and all the rsync and script output is saved in a log file in the intended destination. However, I want this command to run automatically, so again, su'd as root, I enter crontab -e and then add the following:

00 02 * * * /root/bkup_all.sh > /var/log/bkup/bkup_$(date +%Y%m%d).log

I want the script to run each night at 2:00 am. But the script does not run: no log file is generated, and I see nothing in the syslog or system messages to indicate an error.
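One likely culprit, assuming the crontab line is exactly as written: in a crontab, an unescaped % ends the command and starts its stdin, so everything from the first % onward is cut off. Escaping each % (and redirecting stderr so failures become visible) gives:

```shell
# crontab entry: % must be escaped as \% inside cron command lines
00 02 * * * /root/bkup_all.sh > /var/log/bkup/bkup_$(date +\%Y\%m\%d).log 2>&1
```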
I have a cron job that I'm running once per minute. I don't want /var/log/crond.log to get updated 60 times per hour. How can I suppress the logging of the job? I've tried adding the following to the cron line, but they just get logged right along with it!
The problem is I need the PHP program to send the member a confirmation email which contains a confirm link. Running it every minute may still make the member wait, so I'd like it to run every 20 or 30 seconds.
I don't want to put the code to send email on my sign up page as that's no good.
But I don't want to put a 30-second sleep in my PHP script and run it in a loop. If it fails in the middle, it may take a while to start again.
What can be done to achieve my goal and what's the best way?
Making a php script to run as a daemon process? Is that possible and okay?
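A common workaround for sub-minute scheduling, without turning the PHP script into a daemon, is to register the job twice and delay the second copy by 30 seconds. The script path below is a placeholder:

```shell
# crontab entries: the same job every minute, one copy offset by 30s
* * * * * /usr/bin/php /path/to/send_confirmations.php
* * * * * sleep 30; /usr/bin/php /path/to/send_confirmations.php
```

With this layout the script should guard against overlapping runs (e.g. with a lock file), since two copies can be alive at once if one run takes longer than 30 seconds.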
I am trying to write a bash script in order to have backups done by cron on the webhosting server. I want all backup runs to have an incremental number in front of them. I came up with the idea of storing the incremental number in a text file on the server, so when the next run starts it can check the number in the file. However, I am having terrible issues. My script:
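For the counter itself, a small function that reads, increments, and rewrites the number is enough. A sketch (the counter file location and the tar command in the usage comment are assumptions):

```shell
#!/bin/sh
# Read the last backup number from a file, increment it, store it back,
# and print the new number. Starts at 1 if the file does not exist yet.
next_backup_number() {
    counter_file=$1
    n=$(cat "$counter_file" 2>/dev/null || true)
    n=$((${n:-0} + 1))
    echo "$n" > "$counter_file"
    echo "$n"
}

# Example use in a backup script:
# num=$(next_backup_number "$HOME/.backup_counter")
# tar czf "${num}_site_backup.tar.gz" public_html/
```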
I am running Ubuntu 10.04 on an AMD64 computer with an ATI Radeon X1250 graphics card.
Everything runs fine for about 3 hours, then it will randomly stop working.
Symptoms: Mouse stops moving for 10 seconds. Screen goes white. Speakers start repeating last 10 seconds of whatever was playing. Screen goes orange or purple with lines across it and random symbols...
At that point I hardware restart my computer and all is fine again
Within a VMware ESX virtual machine, I am running CentOS 5.2. (Actually, it is kind of a virtual appliance to run CollabNet's TeamForge, which I have installed for a trial.) I've been dabbling with Linux for a year or so, but I know I have much to learn.
I'm attempting to run a cron job that runs a backup script at 11pm. It works great, but unfortunately it runs at 11:30 am.
I created the cron job using 'crontab -e', while logged in as root. My cron job line is : 0 23 * * 1,2,3,4,5 /etc/tjt_backup/collabnet_backup.sh
If I type 'date', I get the correct date/time in my timezone: Tue Mar 9 16:27:12 CST 2010
If I type 'clock', I also get the correct date/time: Tue 09 Mar 2010 04:26:57 PM CST -0.463330 seconds
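One frequent cause, assuming the machine's timezone was changed after boot: crond keeps the timezone it saw when it started, so 'date' can be correct while cron still schedules in UTC or the old zone. Restarting the daemon makes it reread the zone:

```shell
# CentOS 5: restart crond so it picks up the current /etc/localtime
service crond restart
```

If the offset persists after a restart, comparing `date` with the timestamps crond writes to /var/log/cron should show which zone the daemon is actually using.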
My .jar file needs and uses some files in the same directory it's in (everything, including the jar, was unzipped into that directory). It runs perfectly when I do java -jar file.jar on the command line, but there's trouble when I double-click the file in the file manager. I've tried a custom command under properties, i.e. java -jar, but the .jar file doesn't seem to be able to use any of the files in its directory: when running, the jar can't find any of the files it needs.
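The usual explanation is the working directory: a file-manager launch does not cd into the jar's folder, so relative paths inside the jar miss their files. A wrapper sketch the custom command can point at (the jar name and path are placeholders):

```shell
#!/bin/sh
# Launch a jar from its own directory so its relative paths resolve.
run_jar() {
    jar=$1
    cd "$(dirname "$jar")" || return 1
    java -jar "$(basename "$jar")"
}

# Point the file manager's custom command at this script, e.g.:
# run_jar /home/user/apps/file.jar
```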
I am running Slackware and I can't get my terminal to show the regular shell prompt. When I open a terminal I see something like bash-4.1$ instead of hostname and username. How can I change this? I use more than one terminal, so I'd like to make the change for all terminals.
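A bash-4.1$ prompt usually means the shell never ran a startup file that sets PS1. A sketch of a fix, assuming bash (put it in each user's ~/.bashrc, or in a file under /etc/profile.d/ to cover all users; the exact prompt format is just an example):

```shell
# Prompt of the form user@host:cwd$
PS1='\u@\h:\w\$ '
export PS1
```

On Slackware it may also be necessary to create a ~/.bash_profile that sources ~/.bashrc, so login shells pick the setting up too.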
I like to keep my pymol molecular viewer as recent as possible. With svn and compiling it used to be easy, up until ubuntu natty. Now I get the following error every time I try to compile:
Code:
layer1/Scene.c: In function 'SceneClick':
layer1/Scene.c:4557:11: internal compiler error: in set_jump_prob, at stmt.c:2319
Please submit a full bug report, with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-4.5/README.Bugs> for instructions.
error: command 'gcc' failed with exit status 1
This happens for EVERY version I have tried. I have even downloaded the source code from the Ubuntu repositories (the current version for 64-bit is pymol 1.2r2) and tried to compile that one, just as a proof of concept. I still get the same error. I think I might need something beyond what I already have, because if the package maintainers could build the older version for Natty, I should be able to as well, with the proper packages, shouldn't I?
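An internal compiler error is a bug in gcc itself rather than in the pymol source, so missing packages are probably not the issue. Two sketches of workarounds, assuming an older gcc (gcc-4.4 here is an assumption) is installed alongside 4.5 and that pymol builds via distutils:

```shell
# Use an older compiler for this build only:
CC=gcc-4.4 python setup.py build

# Or stay on gcc 4.5 but drop the optimization level that triggers the ICE:
CFLAGS="-O0" python setup.py build
```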
I want to run /etc/acpi/actions/blank.sh as a regular user, but it will only run as root. I am trying to set up a keyboard shortcut to run the above script, without success. I can run the file blank.sh as root, but not as a regular user. Basically I went to System > Preferences > Keyboard Shortcuts and added a shortcut to blank the screen. I used the name Blank Screen, the command /etc/acpi/actions/blank.sh, and the shortcut XF86Launch1, which corresponds to the extra "Access IBM" key found on my keyboard; xev confirms that pressing the "Access IBM" key gives the keycode XF86Launch1. I can launch other programs such as firefox using this method. Here is the actual file blank.sh:
The better method would be to get /etc/acpi/events/blank to accept a hotkey sequence, but this seems broken in Fedora 11. The file blank:
Code:
event=ibm/hotkey HKEY 00000080 00001001
action=/etc/acpi/actions/blank.sh

acpi_listen for the keys Fn+F1 reports ibm/hotkey HKEY 00000080 00001001, but the above file is not being executed.
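For the first approach (the keyboard shortcut), one commonly suggested workaround is a sudoers rule so the script can be called passwordless as root; a sketch, to be added with visudo ("username" is a placeholder):

```shell
# /etc/sudoers entry: let one user run just this script without a password
username ALL = NOPASSWD: /etc/acpi/actions/blank.sh
```

The shortcut's command then becomes `sudo /etc/acpi/actions/blank.sh`. Restricting the rule to a single script keeps the security exposure small.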
System: Tri-head, dual-card: GeForce 9500 GT, GeForce 8400 GS
Dual-boot: openSUSE 11.4, Ubuntu Natty
Driver: nvidia proprietary (260.19.44 in openSUSE, 270.30 in Ubuntu due to kernel version)
xorg.conf: same for both
Results: All three heads work just fine in Natty; secondary screen fails in openSUSE:
Code:
[25.164] (EE) NVIDIA(1): Failed to initialize the NVIDIA GPU at PCI:2:0:0. Please
[25.164] (EE) NVIDIA(1): check your system's kernel log for additional error
[25.164] (EE) NVIDIA(1): messages and refer to Chapter 8: Common Problems in the
[25.164] (EE) NVIDIA(1): README for additional information.
[25.164] (EE) NVIDIA(1): Failed to initialize the NVIDIA graphics device!
[25.164] (II) UnloadModule: "nvidia"
[25.164] (II) UnloadModule: "wfb"
[25.164] (II) UnloadModule: "fb"
But nothing is jumping out at me in the output of dmesg. I also don't see any additional system or kernel logs in /var/log. I'll google some more on that front. One other fun fact: nvidia-settings fails to run in openSUSE, unless I launch it under gdb; then it starts up and runs as expected. (And the second screen ain't there, as expected.) Here are (what I think are) the relevant items:
Xorg.0.log - Pastebin.com
dmesg - Pastebin.com
xorg.conf - Pastebin.com
Additional output available upon request.
I'd like to set up a shell script that will send various emails at regular intervals through gmail. I'm sure that there is some sort of text-based email client out there, but I'd like to do this with cron or anacron. I know a little bit of shell scripting but am definitely not great at it.
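One way to do this from cron, assuming the msmtp package is installed and ~/.msmtprc is configured for smtp.gmail.com (the addresses, subject, and cron schedule below are placeholders):

```shell
#!/bin/sh
# Assemble a minimal mail message; msmtp then submits it through Gmail.
build_message() {
    # args: to, from, subject, body
    printf 'To: %s\nFrom: %s\nSubject: %s\n\n%s\n' "$1" "$2" "$3" "$4"
}

# Example, run from cron (e.g. "0 7 * * * /home/user/bin/report.sh"):
# build_message you@gmail.com you@gmail.com "nightly df" "$(df -h /)" \
#     | msmtp you@gmail.com
```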
Note: I have made a thread similar to this before, but the title/contents were too botched to repair. I know that with C-r you can search for past bash commands containing a particular string, but how would you search for past bash commands matching a particular regular expression? Is there a keyboard shortcut for that, or do you have to use a shell command?
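There is no default readline binding for regex search (C-r is plain substring matching), but it is easy from the command line with the history builtin and grep (the pattern here is just an example):

```shell
# Show numbered past commands matching an extended regular expression:
history | grep -E 'ssh +[a-z]+@'
```

The matched entry can then be rerun by number with `!N` history expansion.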
I have a script that I would like to run using 'cron'. I want to use 'scp' to transfer files from one machine to another. I have set up the SSH keys on both machines. When I run the script from bash terminal, it works flawlessly. But when I schedule a 'cron' job to run the same script, 'scp' does not transfer the files.
'Return value' is 0 when the script is run from bash directly. But when it runs from 'cron', the 'Return value' is 1. That means, surely, that 'scp' is throwing an error. I don't know which error is being encountered. Could anybody let me know how to make it work?
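Before fixing it, it helps to capture what scp actually reports under cron, since the usual cause is cron's minimal environment (no agent, different HOME, so the keys are not found). A sketch with placeholder key path, file, and host; BatchMode stops scp from hanging on a password prompt when keys fail:

```shell
#!/bin/sh
# Log scp's stderr and exit status so the cron failure becomes visible.
# The key path, source file, and destination below are placeholders.
log=/tmp/scp_cron.log
status=0
scp -o BatchMode=yes -o ConnectTimeout=5 -i /home/user/.ssh/id_rsa \
    /data/file.txt user@remote:/backup/ 2>> "$log" || status=$?
if [ "$status" -ne 0 ]; then
    echo "scp exited $status at $(date)" >> "$log"
fi
```

Whatever scp prints to the log (e.g. a "Permission denied (publickey)" line) then points at the real problem.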
I'm creating a script all worked fine in the command line. But not work in the cron. Below you could see the script
[Code]...
So far I have found that when I use cron, the following part does not work; nothing goes into the processedfiles file:

ls -l /var/lct/mou2/processed | grep $TODAY | awk '{print " " $8}' > /home/trans/mou/processedfiles
ls -l /var/lct/mou2/processed | grep $YESTERDAY | awk '{print " " $8}' >> /home/trans/mou/processedfiles

This works perfectly on the command line. The cron job and the command line are run by the same user.
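A likely explanation is that the `ls -l` column layout (and date format) changes under cron's locale, so `$8` is no longer the filename. One sketch of a fix is to stop parsing ls and let find match recently modified files directly; this assumes GNU find (for -daystart and -printf) and that "today or yesterday" by modification time is what the grep on $TODAY/$YESTERDAY was meant to select:

```shell
#!/bin/sh
# List names of files modified today or yesterday with find, whose
# output format does not shift between cron and an interactive shell.
list_recent() {
    find "$1" -maxdepth 1 -daystart -mtime -2 -type f -printf '%f\n'
}

# Cron-safe replacement for the two ls|grep|awk pipelines:
# list_recent /var/lct/mou2/processed > /home/trans/mou/processedfiles
```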
If I make changes to my shell environment as a regular user, will they apply to builds run under sudo? I posted a thread similar to this regarding a build with TOR; however, this is applicable to all programs.
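By default, no: sudo resets the environment (the env_reset default in /etc/sudoers), so variables exported in the regular user's shell do not reach the build. sudo's -E flag asks it to preserve the caller's environment, subject to the sudoers policy:

```shell
# Preserve the invoking user's environment for one command:
sudo -E make install
```

Alternatively, specific variables can be whitelisted permanently with an `env_keep` entry in /etc/sudoers (edited via visudo).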
Setup: 10.04 server with "bash" as /bin/sh. When I run "ls -l" in a shell I get the following format:
Code:
-rw-r----- 1 syslog adm 0 2010-06-13 06:53 /var/log/user.log

Whereas if "ls -l" executes from a cron job the format is:

Code:
-rw-r----- 1 syslog adm 0 Jun 13 06:53 /var/log/user.log

Notice the different time format. Now I could fix this by changing the cron job to

Code:
ls -l --time-style=+"%Y-%m-%d %H:%M"

... but I'm interested in knowing why this behavior occurs. What's different between the cron job and the shell?
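The short answer is the environment: cron jobs run without the login environment, so variables that influence ls's time format (LANG, LC_TIME, or coreutils' TIME_STYLE) are unset or different. An easy way to see exactly what differs is to dump cron's environment and diff it against the shell's (the schedule and output path are examples):

```shell
# Temporary cron entry: capture the environment cron actually provides
* * * * * env > /tmp/cron_env 2>&1

# Then, in an interactive bash shell:
#   diff /tmp/cron_env <(env)
```

Whatever shows up only on the interactive side is what the cron job would need to set explicitly to reproduce the shell's output format.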