I have been trying to set up ssmtp so I can send email using Gmail's SMTP servers. However, when I try to send mail (using mailx), I get the following message:
Code:
Can't send mail: sendmail process failed
Here's the last line from dmesg (the only one applicable, according to the timestamps and message content):
Code:
[484114.608378] sendmail[17975]: segfault at 0 ip b7dbbbf3 sp bfb0dc4c error 4 in libc-2.11.2.so[b7d44000+14e000]
Here's my ssmtp.conf:
Code:
#
# /etc/ssmtp.conf -- a config file for sSMTP sendmail.
#
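For comparison, a minimal Gmail relay setup in ssmtp.conf usually looks something like the following (the account details are placeholders, and exact option support can vary between ssmtp versions):
Code:
# /etc/ssmtp/ssmtp.conf -- minimal Gmail relay example (credentials are placeholders)
root=your.address@gmail.com
mailhub=smtp.gmail.com:587
AuthUser=your.address@gmail.com
AuthPass=your-password
UseSTARTTLS=YES
hostname=localhost
FromLineOverride=YES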
I tried Google and browsed the known bugs in the sticky before posting this thread. On boot the machine is pegged at 100%. It is a dual-core with 4GB of RAM and is running 10.04, 64-bit. Also, as time goes on it eats up all the RAM until there's maybe 32MB free. What happens is there are multiple (4-10+) identical processes running; they seem to be GNOME-related. The machine sits idle on my floor and runs Samba and Apache/MySQL/Gallery for my test website that no one but myself visits.
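A first step that might help pin this down is listing the duplicated processes along with their memory use, for example:
Code:
# Biggest memory consumers first.
ps aux --sort=-%mem | head -n 20
# How many copies of each command are running.
ps -eo comm= | sort | uniq -c | sort -rn | head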
I am using Webmin for my daily tasks. I have Fedora 13; whenever I click on "Sendmail M4 Configuration" or "Outgoing Addresses (generics)" I get the following error message:
Quote:
The Sendmail M4 configuration base directory /usr/share/sendmail-cf was not found on your system, or is not the correct directory. Maybe it has not been installed (common for packaged installs of Sendmail), or the module config is incorrect.

I read the documentation at sendmail.org, and it seems that the directory structure for sendmail has changed in the sendmail-8.1.4 version shipped with FC13. In the Webmin module config we have
Quote:
Sendmail M4 base directory = /usr/share/sendmail-cf
which is not there. I ran locate sendmail-cf on the command line; it finds nothing.
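On Fedora the m4 configuration tree comes from a separate package, so assuming yum access, the first thing I would check is whether that package is installed at all:
Code:
# Is the m4 configuration package present? If not, install it.
rpm -q sendmail-cf
yum install sendmail-cf
# Afterwards the directory Webmin expects should exist:
ls /usr/share/sendmail-cf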
I have a server that runs Tomcat. This server sends mail with localhost as the MTA; the local MTA is sendmail (with the default settings). From time to time I have this strange thing where the emails it sends never reach the destination. The log shows the mail left the server, but looking at the log I see that the timestamp is wrong: sometimes it's correct and sometimes it's +2 hours. I guess these emails are bounced at the destination for being sent at a future time. All the emails that didn't reach the destination have this in common.
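One thing that may be worth checking (this is just a guess based on the +2 hour offset) is whether the JVM running Tomcat is using a different timezone than the system clock that sendmail stamps the mail with:
Code:
# Compare the system time/timezone with any override the Tomcat JVM was started with.
date
cat /etc/sysconfig/clock                                  # RHEL-style timezone file, if present
ps -ef | grep -i tomcat | grep -o 'user.timezone=[^ ]*'   # -Duser.timezone override, if any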
I have a cron job that executes a rake task in Rails. I noticed in the log that it was running the task 4 times every time it was executed. The problem is that there are 4 instances of cron running.
I ran code...
So my question is how do I stop the other instances. When I run the stop command now I get this:
Stopping crond: cannot stop crond: crond is not running. [FAILED]
Any ideas? Do the other instances have different names? Is there a way to kill all instances at once?
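Assuming the duplicates really are stray crond processes (and not something respawning them), one way to clean up is to kill them all and start a single one through the init script; the names are usually identical, so killall catches every copy:
Code:
# See every cron daemon with its PID and parent.
ps -eo pid,ppid,cmd | grep '[c]rond'
# Kill them all, then bring exactly one back.
killall crond
service crond start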
In Windows XP, I created another user account. Now I can run a second instance of any program by right-clicking on it and selecting Run As. Is such a thing possible in Linux (CentOS or Ubuntu) in a graphical environment?
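Yes. The graphical equivalents are gksu/gksudo on GNOME or kdesu on KDE (e.g. gksu -u otheruser firefox), and you can also do it by hand by letting the second account talk to your X display. A rough sketch, assuming an account named otheruser already exists and your display is :0:
Code:
# Allow the local user "otheruser" to connect to the current X display,
# then start the program under that account.
xhost +si:localuser:otheruser
su - otheruser -c "DISPLAY=:0 firefox"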
I recently modified sendmail.cf to use a third party SMTP server to send emails. It works great. But when I run sendmail from the command line, I have to specify the -C flag and force feed it the location of my sendmail.cf, or else it doesn't work.
So in other words, the following works great:
However, if I don't specify the -C flag, sendmail doesn't consider what's in the sendmail.cf and barfs:
I don't run sendmail as a daemon. I'm only using it to send emails. I know my modifications of sendmail.cf are correct because it works perfectly when I use the -C flag. I searched my disk to see if I could find another sendmail.cf on the machine and only the one in /etc/mail came up.
Why is sendmail not reading my sendmail.cf?
I'm running Sendmail version 8.14.2 on Fedora Core 8.
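One possibility worth ruling out: since sendmail 8.12 the command-line sendmail acts as a mail submission program and, for ordinary submissions, reads /etc/mail/submit.cf rather than sendmail.cf. That would explain why your edits only take effect when you force the file with -C. A quick look at what the submission side is doing:
Code:
# Is there a separate submission config, and where does it hand mail off to?
ls -l /etc/mail/submit.cf
grep -iE 'MTAHost|127\.0\.0\.1' /etc/mail/submit.cf
If submit.cf is relaying to 127.0.0.1 and nothing is listening there, command-line mail sits in the client queue instead of using the settings in your sendmail.cf.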
I wrote/cobbled together this nifty sendmail script to read some logs and take some disk stats. Basically I'm reporting on rsnapshot. When I run it as
Code:
sudo /etc/rsnapshot/mailSta.sh
everything works wonderfully: the script fires off an email to two accounts at a remote server and the emails arrive as expected.
I have installed Postfix and Dovecot on my server and thought Postfix would not only take SMTP connections from my e-mail client (like Outlook), but also handle "mailx" commands from the server. However, it looks like sendmail is still responsible for sending mail from "mailx". I tested this by turning it on/off using "service sendmail stop" and "service sendmail start": mails sent using "mailx" are only sent when sendmail is up. When I did "yum info sendmail", it listed sendmail as an installed package. Is it safe to remove sendmail by running "yum erase sendmail" and let Postfix handle "mailx" also?
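Before erasing anything, it may be enough to switch the system's default MTA so that /usr/sbin/sendmail (which is what mailx actually calls) points at Postfix. On Red Hat style systems that is handled by the alternatives system, roughly:
Code:
# See which MTA currently provides /usr/sbin/sendmail, then hand it to Postfix.
alternatives --display mta
alternatives --set mta /usr/sbin/sendmail.postfix
# Stop and disable the real sendmail so it no longer competes with Postfix.
service sendmail stop
chkconfig sendmail off
Once Postfix owns that alternative, mailx should be delivered through Postfix, and removing the sendmail package becomes a matter of preference rather than necessity.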
How can I have multiple independent instances of Mozilla Firefox 3.5 on the same X server, but started from different user accounts (consequently, different profiles)?
I had limited success only with Xephyr :1 and DISPLAY=:1 /usr/local/bin/firefox, but Xephyr has nothing like Cygwin/X's "rootless" mode, so it's not comfortable (see my other question).
The idea is to have one Firefox instance for various "Serious Business" things and the other for regular browsing with dozens of add-ons securely isolated.
On Windows, if I run Firefox as user jack and then try to start another instance of Firefox, I am unable to, as one is already running. If I choose to run Firefox as administrator, then I can have two instances of Firefox, separate from each other, side by side, because they are under different user accounts. This does not seem to be true on Linux. As user jack, if I start Firefox, then just like on Windows I am unable to start a new instance. If I open a terminal, change to root, set XAUTHORITY to jack's .Xauthority, and try to start Firefox as root... I get the error that Firefox is already running.
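Two things normally have to line up for this to work on Linux: the second account needs access to the X display, and Firefox has to be told not to hand the request over to the already-running copy (the -no-remote flag). A sketch, assuming the second user is root as in your test and jack's display is :0:
Code:
# From jack's session: let root talk to the display.
xhost +si:localuser:root
# As root: point at jack's display and start an independent instance with its own profile.
export DISPLAY=:0
firefox -no-remote -P root-profile   # "root-profile" is a placeholder; create it first with firefox -ProfileManager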
Okay, this issue is kinda difficult to understand without context. When we run Evince, it checks whether there is any other instance running. If there is one, the evince command exits immediately, right after passing the parameter to the running instance. If no other instance is running, a new one is started, and the evince command waits until this new instance exits. While that behavior is quite nice, it is not helpful for shell scripts. Why? Because I have a script that writes a temporary .ps file, calls a PS/PDF viewer, and automatically deletes the temporary .ps file after the viewer exits. Unfortunately, this script only works if evince was not previously running (if evince was running, the file is deleted too soon).
I don't want to add extra complexity to this script. It should be kept simple, because I may want to replace evince with xpdf, gv, or anything else. I was expecting some kind of command-line parameter to evince (similar to -f to vim and gvim), but I fear there is no such option. Writing a wrapper script around evince might be a good solution, but this script should work correctly in all cases (if evince was running and if it wasn't).
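One way to keep the calling script simple and still work whether or not the viewer hands the file to an existing instance is to delete the temporary file only once nothing has it open any more. A rough sketch (assumes lsof is available; the sleep values are arbitrary):
Code:
#!/bin/sh
# show-and-clean.sh: open a temporary file in a viewer, delete it when nothing holds it open.
tmpfile="$1"
evince "$tmpfile" &
sleep 2                                 # give an already-running instance time to open the file
while lsof "$tmpfile" >/dev/null 2>&1; do
    sleep 1
done
rm -f "$tmpfile"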
I'm using Ubuntu 11.04. What command can I run that will shut down all Firefox instances? Here's what I get when I scan for processes with "firefox" in their names:
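Assuming the processes are actually named firefox (on some releases the binary shows up as firefox-bin instead), either of these should do it:
Code:
# Politely terminate every process whose name matches firefox.
pkill firefox
# If any of them refuse to die, force-kill them.
pkill -9 firefox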
I have a line of text with multiple web links in it. I'd like to replace the actual links with the text "<web_link>" so I don't accidentally hit them while reading on my iPhone. I've tried many versions of the following sed command, sed 's/(http.*)/<web_link>/g', but it simply replaces everything between the first instance of "(http" and the last instance of ")" with <web_link>, or does nothing at all.
Ex:
This line has a link to a web page (http://www.webpage.com/file.html) then some more text (extra text) and then another link (http://www.nextwebpage.com.index.html) to a website.

$ echo "This line has a link to a web page (http://www.webpage.com/file.html) then some more text (extra text) and then another link (http://www.nextwebpage.com.index.html) to a website." | sed 's/(http.*)/<web_link>/g'
What I get is:
This line has a link to a web page <web_link> to a website.

What I'd like is:
This line has a link to a web page <web_link> then some more text (extra text) and then another link <web_link> to a website.

What am I doing wrong with my sed command?
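The .* in your pattern is greedy, so it grabs everything from the first "(http" to the last ")". Limiting the match to characters that are not a closing parenthesis should give you what you want:
Code:
# [^)]* stops at the first ")" instead of running to the last one.
sed 's/(http[^)]*)/<web_link>/g'
Piping your echo example through that version produces exactly the second line you listed.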
I need to run multiple Squid instances on my server. I am running Squid version squid-2.7.STABLE5-1.el4 on RHEL 4.7; kindly tell me how to do so. By the way, I need to run two instances because I need to configure my proxy to act as both a reverse proxy and a forward proxy, and people told me that you cannot run a forward and a reverse proxy on the same instance.
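It can be done by giving each instance its own config file (squid's -f option) and making sure the two configs use different ports, PID files, cache directories, and logs. A rough sketch of the idea; the file names here are just examples:
Code:
# /etc/squid/squid-forward.conf would contain, among other things:
#   http_port 3128
#   pid_filename /var/run/squid-forward.pid
#   cache_dir ufs /var/spool/squid-forward 1000 16 256
#   access_log /var/log/squid/forward-access.log
# /etc/squid/squid-reverse.conf would use a different port, pid_filename, cache_dir
# and access_log, plus the reverse-proxy (accelerator) settings.

# Build each cache, then start each instance against its own config:
squid -f /etc/squid/squid-forward.conf -z
squid -f /etc/squid/squid-forward.conf
squid -f /etc/squid/squid-reverse.conf -z
squid -f /etc/squid/squid-reverse.conf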
I have to run multiple instances of Apache on the same physical machine, as we have different OAM policies for different domains. In the httpd.conf file, can I have the same ServerName in two instances of Apache, like
ServerName: prod_machine (actual machine name)
In the vhconf files I do have different ServerNames for the virtual hosts. The Apache instances are running on the same IP but different ports. I am including the various vhost files in the main httpd.conf file. Can I skip ServerName in the main httpd.conf file and include different ServerNames in the virtual host configs? OS: Solaris 10.
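Having the same host in ServerName is fine as long as the two instances Listen on different ports; a common convention is to include the port so each instance identifies itself unambiguously. A sketch of what the two main configs might contain (paths and ports are placeholders):
Code:
# Instance 1 httpd.conf
Listen 80
ServerName prod_machine:80
Include /opt/apache1/conf/vhosts.conf

# Instance 2 httpd.conf
Listen 8080
ServerName prod_machine:8080
Include /opt/apache2/conf/vhosts.conf
You can also leave ServerName out of the main file and rely on the per-vhost entries; Apache will only warn at startup that it had to guess the server's fully qualified name.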
If I have, say, a ..... tab open and I click on one of the related videos (while it's still playing), Firefox terminates. The same happens if I open ..... in one tab and MySpace in another - Firefox just shuts down.
I want to know why this happens. Is it a bug, and how can I fix it?
I have a machine with 8 cores. I have a directory of input files that I need to process using a non-parallelized program. I would like to write a script that works its way through a list of commands, executing 8 commands at a time (i.e., instances of this program, each with specific flags having to do with the current input file) until the list of commands has been exhausted. Is there an easy way to do this?
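One fairly painless way, assuming GNU xargs and a hypothetical commands.txt holding one complete command line per input file, is to let xargs act as the scheduler:
Code:
# commands.txt (example contents):
#   ./process --threshold 0.5 input01.dat
#   ./process --threshold 0.7 input02.dat
# Run them with at most 8 going at once; a new one starts whenever a slot frees up.
xargs -d '\n' -n 1 -P 8 sh -c < commands.txt
If GNU parallel happens to be installed, parallel -j 8 < commands.txt does the same job with tidier output handling.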
I currently have sendmail installed. It starts as a daemon, but I want to avoid that; I want to start it manually. Also, I am on a dynamic host, so every time I start my computer my IP changes. I use ddclient to update my records at dyndns.com, but how do I configure sendmail in the case of dynamic hosts, since it looks at the file /etc/hosts, which contains information about static hosts?
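For the start-it-manually part, on Red Hat style systems something like this works (adjust to your distro's init tools):
Code:
# Keep the daemon from starting at boot.
chkconfig sendmail off
# When you want to deliver, either flush the queue once...
sendmail -q
# ...or start the daemon by hand with a queue interval, and kill it when you're done.
sendmail -bd -q15m
As for the dynamic IP: for outgoing mail sendmail mainly needs the local hostname to resolve (an /etc/hosts entry pointing it at 127.0.0.1 is usually enough); the changing public address mostly matters for receiving mail, which is what the dyndns record takes care of.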
If I only want to let a user log in via telnet a maximum of 2 times, how would I go about doing this? I have found this little tidbit: per_source = 2, but that only allows 2 connections from the same source (i.e., network), and that would not work. For some reason our telnet sessions are not dying off after a user has shut down their PC, and then the next time they log in it adds another telnet session.
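If your telnet logins go through PAM (most Linux telnetd setups do, provided pam_limits.so is listed in the PAM config for the login service), you can cap concurrent logins per user in /etc/security/limits.conf rather than in xinetd:
Code:
# /etc/security/limits.conf -- cap a user (placeholder name) at 2 concurrent logins
jsmith          hard    maxlogins       2
# Or apply it to everyone in a group:
@telnetusers    hard    maxlogins       2
Note that stale sessions still count against the limit, so you would also want to chase down why they are not timing out when the PCs are switched off.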
I have a serious problem with my Ubuntu installation (in a VM). In virtually every case where I try to access the command line (Terminal, logging directly into a shell), I do not get a command line. I get a blank line, and anything I type results in the following text being replied:
"WinSCP: this is end-of-file:0"
I know what WinSCP is, and I'm not using it at all. And the result is that I can't run any command-line programs or commands anywhere within the VM anymore.
Both the VM and the host computer have been rebooted multiple times. I actually hadn't edited anything in the VM in a while, and I came across this issue while needing to fix another issue...
Somehow I'd been locked out of my OpenSSH access on the host computer (Windows Vista), and while troubleshooting that, I found that someone was trying to hack into SSH with random password attacks. I haven't noticed any suspicious activity otherwise (no missing files, no malicious programs, no traces of a hacker) but after looking at the logs on the Ubuntu VM, it looks as if someone was trying to get into that computer as well. The only fishy thing I've noticed is the command-line shell issue on the VM.
So, now that I've turned off SSH and I'm trying to get Ubuntu fixed... what can I do? Did some hacker get into the Ubuntu VM and set up a shell redirect or something? Or is this a more innocent or easily fixed issue?
I have multiple hard drives with videos on them, and I can only get my Xbox to read the first in my list of directories. So I wanted to know: can I run multiple instances of uShare and use one for each hard drive? Is it possible?
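Running one uShare per drive, each with its own name, port, and content directory, generally works; the exact flags vary between uShare builds, so verify them with ushare --help, but the idea is roughly:
Code:
# One instance per drive (flag names are assumptions -- check "ushare --help" on your build).
ushare -n Xbox-Videos-1 -p 49152 -c /media/disk1/videos &
ushare -n Xbox-Videos-2 -p 49153 -c /media/disk2/videos &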
I finished a program that connects to a remote server and does some stuff there for a while. What I want to do is run several instances of the program to have different connections at the same time, and be able to manage those processes automatically (for example, if the connection drops, detect it and reconnect).
I guess I will have to use the PIDs of the processes and handle them somehow, but I don't really know what to do because I haven't used C++ in a while and I never went that deep. All this will be running on SUSE, Debian, and/or Yellow Dog distros. Maybe there's a tutorial, or you guys know of some tool that I can use.
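If you would rather not write the supervision logic in C++ at all, a small shell wrapper per instance can handle the restart-on-exit part; this is a simple alternative technique rather than a change to your program (the program name and option below are placeholders):
Code:
#!/bin/sh
# keepalive.sh: run one instance of the client and restart it whenever it exits.
# Usage: ./keepalive.sh <instance-id>
ID="$1"
while true; do
    ./myclient --id "$ID"                       # placeholder for your program and its options
    echo "instance $ID exited, restarting in 5s" >&2
    sleep 5
done
Start one of these per connection (for example in the background or under screen); each instance then gets restarted independently. If you do want to manage it from C++, fork()/exec() plus waitpid() on the child PIDs is the standard pattern.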