I've installed PostgreSQL on Arch Linux and also generated self-signed certificates in the /etc/ssl/ directory. My PostgreSQL data directory is /var/lib/postgres/data, and I've edited my postgresql.conf to use SSL; however, I'm having permission/access problems starting my database with SSL. It can't access the certificates and errors out when I try to start the database engine:
Code:
LOG: autovacuum launcher shutting down
LOG: shutting down
LOG: database system is shut down
FATAL: could not load server certificate file "server.crt": No such file or directory
I don't know what I need to chown or chmod in order to get PostgreSQL to access my self-signed certificates.
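A minimal sketch of the usual fix, assuming the certificate and key were generated as /etc/ssl/certs/server.crt and /etc/ssl/private/server.key (adjust to your actual paths). PostgreSQL of this vintage looks for server.crt and server.key inside the data directory, and it refuses a key file readable by anyone but the postgres user:
Code:
cp /etc/ssl/certs/server.crt /etc/ssl/private/server.key /var/lib/postgres/data/
chown postgres:postgres /var/lib/postgres/data/server.{crt,key}
chmod 644 /var/lib/postgres/data/server.crt
chmod 600 /var/lib/postgres/data/server.key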
I run a web server on Fedora 12, principally using Apache, MySQL, and PHP. I host a variety of sites, one of which is a family website that contains semi-sensitive personal data for several hundred extended family members, who all have access to the database-driven site.
Until now, I have been using a self-signed SSL certificate to encrypt the data as it is read from and written to my database. Family members have simply had to put up with clicking past certificate warnings as they enter the site, since most browsers flag self-signed certificates as bad. It hasn't really been that much of a bother, but I'd love to do it more professionally. I have looked into buying SSL certificates, but it's a site I host for free, and I would rather find a cheap or free alternative if possible.
So I'm just fishing for ideas to work with. What are some alternatives to using SSL certificates for moderately strong website encryption? So far, I run only one host on the domain, but may eventually need encryption that would support multiple hosts. Or does anybody know a way to make self-signed certificates work on most popular browsers without being flagged as suspicious?
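One hedged option, sketched below: run a small private CA and have each family member import the CA certificate into their browser once; after that, every certificate you sign with it is trusted and the warnings disappear. All names here are illustrative, and this assumes OpenSSL:
Code:
# create the CA key and a long-lived CA certificate
openssl genrsa -out ca.key 2048
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt -subj "/CN=Family Site CA"
# create the site key and a signing request
openssl genrsa -out site.key 2048
openssl req -new -key site.key -out site.csr -subj "/CN=family.example.com"
# sign the site certificate with the CA
openssl x509 -req -days 365 -in site.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out site.crt
Distribute ca.crt to the family for a one-time import; site.crt and site.key go into the Apache SSL config.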
I am using Red Hat 5, and I want to create X.509 certificates for an IPsec VPN. I need help creating the certificates; I have not been able to create them. Also, where is the standard location for certificates?
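A sketch, assuming OpenSSL is installed. On RHEL 5 the conventional locations are /etc/pki/tls/certs and /etc/pki/tls/private, though your IPsec software (Openswan, racoon, etc.) may expect its own directory, so treat these paths as illustrative:
Code:
openssl req -new -x509 -days 365 -nodes -newkey rsa:2048 \
  -keyout /etc/pki/tls/private/vpn.key \
  -out /etc/pki/tls/certs/vpn.crt \
  -subj "/CN=vpn.example.com"
chmod 600 /etc/pki/tls/private/vpn.key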
I'm trying to set up a second SSL certificate for a different domain on a server; each domain has its own IP address. The problem is that the web developer who configured the first domain specified SSL keys for the primary domain both in the vhost config in httpd.conf AND in ssl.conf. If I attempt to remove the keys from ssl.conf, the server will not start; and with them there, it will not start if I specify keys for the secondary domain.
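One hedged approach: the SSLCertificate* lines in ssl.conf belong to the default <VirtualHost _default_:443> block that ships with mod_ssl, so rather than deleting just those directives (which leaves a broken SSL vhost behind), comment out that entire default vhost, or leave it intact and give each domain its own complete vhost. A sketch with hypothetical names and paths:
Code:
<VirtualHost 192.0.2.10:443>
    ServerName www.primary.example
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/primary.crt
    SSLCertificateKeyFile /etc/pki/tls/private/primary.key
</VirtualHost>

<VirtualHost 192.0.2.11:443>
    ServerName www.secondary.example
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/secondary.crt
    SSLCertificateKeyFile /etc/pki/tls/private/secondary.key
</VirtualHost>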
I have installed Ionix vCM onto a Red Hat Linux box. It correctly communicates with the collection server if I use the Ionix certificate. However, if I use a self-generated certificate, communication fails.
(1) How do I determine which PKI certificates are resident on the Red Hat box?
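A sketch for taking inventory, assuming the conventional Red Hat locations (individual applications may keep certificates elsewhere):
Code:
ls /etc/pki/tls/certs /etc/pki/tls/private
# inspect any one of them:
openssl x509 -in /etc/pki/tls/certs/localhost.crt -noout -subject -issuer -dates
# broader sweep for PEM certificates anywhere under /etc:
grep -rl "BEGIN CERTIFICATE" /etc 2>/dev/null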
I have vsftpd running as the FTP server on Ubuntu 9.04 Jaunty. Login works correctly with a password for local users (those with a login account on the server) and without a password for anonymous.
I want to tighten security further by requiring local users to provide a client certificate. But even if I include "require_cert=YES" and "validate_cert=YES" in /etc/vsftpd.conf, clients without a certificate are still allowed to log in; require_cert seems to be simply ignored.
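A hedged sketch of the options that usually have to accompany require_cert in /etc/vsftpd.conf: it only applies to SSL sessions, so plain-FTP logins bypass it unless SSL is forced, and validation needs a CA file (the filename below is an assumption):
Code:
ssl_enable=YES
force_local_logins_ssl=YES
force_local_data_ssl=YES
require_cert=YES
validate_cert=YES
ca_certs_file=/etc/ssl/certs/client-ca.crt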
I run a couple of sites in a virtual hosting environment, and I need to add an additional SSL certificate for a different domain name. From what I read, some forum topics indicate that an SSL cert requires a dedicated IP address, meaning one cert per IP. Is this true? If so, I'm having some difficulty understanding the benefit of running virtual hosts if a server can't host multiple secured sites through a single IP. Is there any way to run multiple SSL sites within a virtual host environment? I'm hoping for a possible workaround.
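The one-cert-per-IP rule was true historically, but SNI (Server Name Indication) lifts it if both ends are new enough. A sketch, assuming Apache 2.2.12+ built against OpenSSL 0.9.8j+, with hypothetical names and paths:
Code:
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName site1.example
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/site1.crt
    SSLCertificateKeyFile /etc/pki/tls/private/site1.key
</VirtualHost>

<VirtualHost *:443>
    ServerName site2.example
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/site2.crt
    SSLCertificateKeyFile /etc/pki/tls/private/site2.key
</VirtualHost>
One caveat: very old clients (e.g. IE on Windows XP) don't send SNI and will always get the first vhost's certificate.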
I am having problems creating SSL certificates for use with OpenLDAP. Does anyone know a good CentOS tutorial? I am having trouble finding one by searching Google and the forums.
To clarify further: I have a small network I'm trying to set up to use LDAP for auth; given its size, I figured Kerberos would be a bit of overkill.
I have the server up and running fine. At the moment, however, all auth is done in clear text (which is fine, as the network currently has no connection to the internet), but in the future it will, so I am trying to use SSL. I am confused about which certificates to point to where in the slapd.conf file.
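A sketch of the three TLS directives slapd.conf expects; the filenames are assumptions, and clients additionally need TLS_CACERT in their ldap.conf pointed at the same CA certificate:
Code:
TLSCACertificateFile  /etc/openldap/cacerts/ca.crt
TLSCertificateFile    /etc/openldap/certs/slapd.crt
TLSCertificateKeyFile /etc/openldap/certs/slapd.key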
My sendmail server makes use of TLS_SRV_OPTIONS, which is set to `V', meaning it shouldn't verify certificates. As a server it doesn't, and the {verify} macro shows "NOT" in the logs, indicating that no certificate request was sent out.
Acting as a client, though (and I'm talking both about the server acting as a client towards other mail servers and about the local mail submission agent), it always verifies certificates. My mail submission agent, when contacting my own mail server, verifies the mail server's certificate; and still, the mail server has not initiated any exchange of certificates, since it still says "verify=NOT" in the logs (whereas the same entry for the submission agent reads OK or FAIL depending on what I use).
So, do mail servers ALWAYS send out their certificates? And when they do, does the "client" in question (whether it's the mail server acting as a client or the mail submission agent) validate it because the TLS_SRV_OPTIONS setting only applies when running as a server? Or is there a setting to tell Sendmail not to send out certificates, since I'm not in the business of certificate verification anyway?
I recently moved into a new place, and when I hooked up my webserver I wasn't able to bring up my page, even from localhost. With some digging, it seems that I can't access the database that housed my posts (a WordPress installation). I looked for the datadir in MySQL, and that directory shows the WordPress directory that should be holding the database; all the files are still there. 1) Why does the database no longer show up? 2) How can I restore the database from the files?
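A sketch of first checks, assuming a stock layout (paths are illustrative); file ownership commonly breaks after a datadir has been copied around:
Code:
grep datadir /etc/my.cnf                 # confirm MySQL points where you think it does
ls -l /var/lib/mysql/wordpress           # table files should be owned by mysql
chown -R mysql:mysql /var/lib/mysql
tail -50 /var/log/mysqld.log             # look for table-open or permission errors
mysql -u root -p -e "SHOW DATABASES;"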
I've recently been asked to set up our FTP server to accept connections from a remote host. They sent me a file, "id_dsa.pub", with instructions to add this key to the xfer user.
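Since id_dsa.pub is an SSH public key, the remote side almost certainly means SFTP/SCP rather than plain FTP. A sketch of installing it for the xfer account, assuming that account already exists:
Code:
mkdir -p ~xfer/.ssh
cat id_dsa.pub >> ~xfer/.ssh/authorized_keys
chown -R xfer:xfer ~xfer/.ssh
chmod 700 ~xfer/.ssh
chmod 600 ~xfer/.ssh/authorized_keys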
I have a server with Webmin, Usermin, and Sendmail using POP3S. I created a self-signed certificate using Webmin, exported it, and imported it into the trusted root certification authorities on my client. This fixes the warning message from Internet Explorer when making an SSL connection to Webmin. But when using Usermin or retrieving mail, I still get the warning that the site's certificate is self-signed; I look at the certificate, and it's not the same as the one I created with Webmin. My question is: is it possible to have the same certificate used by all three?
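A hedged sketch, assuming Webmin keeps its combined certificate/key in the default /etc/webmin/miniserv.pem: Usermin can be pointed at the same file through its own miniserv.conf, and Sendmail's confSERVER_CERT/confSERVER_KEY macros can reference the same material once the PEM is split into separate cert and key files.
Code:
# in /etc/usermin/miniserv.conf:
keyfile=/etc/webmin/miniserv.pem
# then restart Usermin:
/etc/init.d/usermin restart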
I have one physical dedicated server. The name of the server is 'mail.iamghost.tld', which is my Postfix mail server for my users. I generated self-signed SSL certificates with OpenSSL for 'mail.iamghost.tld'. I also have Apache installed on the same server to access my webmail application, and I created a pointer record for 'url' pointing to the same static IP as 'mail.iamghost.tld'. So my question is: if I also want to encrypt site logins for 'url', do I need to generate a unique SSL certificate for 'url', or can I use my existing SSL certificates that are assigned to 'mail.iamghost.tld'? It's the same server, but when people browse to my 'url' site, I don't want an issue with the certificate saying it's for 'mail.iamghost.tld' when they're really communicating with 'url'.
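One certificate can cover both names via a subjectAltName extension. A sketch, assuming OpenSSL 1.1.1+ for the -addext flag (older releases need a subjectAltName section in openssl.cnf instead); 'webmail.iamghost.tld' below is a hypothetical stand-in for the 'url' name:
Code:
openssl req -new -x509 -days 365 -nodes -newkey rsa:2048 \
  -keyout server.key -out server.crt \
  -subj "/CN=mail.iamghost.tld" \
  -addext "subjectAltName=DNS:mail.iamghost.tld,DNS:webmail.iamghost.tld"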
I'm trying to move a MySQL 5.1.50-community InnoDB database from one location to another. When starting the service I get: "Starting MySQL. Manager of pid-file quit without updating file [FAILED]". I've searched for a way to do this, but I can only find people who describe what I've just done.
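A sketch of the usual datadir move, assuming an init-script install; the pid-file message is generic, so the real cause is in the error log. Paths are illustrative:
Code:
/etc/init.d/mysql stop
rsync -a /var/lib/mysql/ /new/location/mysql/
chown -R mysql:mysql /new/location/mysql
# update datadir (and socket, if set) in /etc/my.cnf, then:
/etc/init.d/mysql start
tail -50 /new/location/mysql/$(hostname).err   # the actual error lives here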
I'm having trouble with my database server, running PHP & MySQL; I just installed this system on SUSE Linux Enterprise Server. The server itself is an HP ProLiant ML370G with 4 GB RAM. The system was fine for about three weeks until it recently started slowing down. Everything seems normal when you are simply accessing the server, for example browsing the database web interface; the problem is when you start working in the database. When you try submitting into the database, processing is really slow, and I have to restart MySQL first. Could it be the RAM, or SQL itself?
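A sketch of first diagnostics, assuming shell access and MySQL credentials, to separate memory pressure from a stuck query:
Code:
free -m                                        # is the box swapping?
mysqladmin -u root -p status                   # uptime, slow queries, open tables
mysql -u root -p -e "SHOW FULL PROCESSLIST;"   # what is actually stalling on submit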
I'm looking for a way to put a MySQL database in a server's memory. The disks aren't fast enough to keep up with the usage, and I don't feel like going for a split web/DB server setup yet because of the costs.
Because this involves risks (unless there's a way to read from memory and write to both memory AND disk?), I'd prefer that the DB gets copied automatically to the local disks every hour or so.
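A sketch of one way to do this, assuming a tmpfs mount for the datadir and an hourly rsync back to disk; all paths are illustrative, and anything written since the last sync is lost on a crash or power failure:
Code:
mount -t tmpfs -o size=2G tmpfs /var/lib/mysql-ram
rsync -a /var/lib/mysql-disk/ /var/lib/mysql-ram/   # seed from the on-disk copy
# point datadir in my.cnf at /var/lib/mysql-ram, start MySQL, then in cron:
# 0 * * * * rsync -a /var/lib/mysql-ram/ /var/lib/mysql-disk/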
I am now able to create the database using PHP and SQL, but it seems I can only do it as the MySQL root user ($dbuser = 'root';). Is there a way to do that as a regular MySQL user?
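Yes, provided the regular account holds the CREATE privilege on the databases in question. A sketch run once as the MySQL root user, with a hypothetical 'webuser' account allowed to create any database matching webuser_%:
Code:
mysql -u root -p -e "GRANT ALL PRIVILEGES ON \`webuser_%\`.* TO 'webuser'@'localhost';"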
I am using the Zabbix open-source solution for systems monitoring. I am facing a problem and discussed it on the Zabbix forum. My post was: "My Zabbix server is behaving abnormally; approximately daily from 9 to 12, the server stops accumulating logs. I observed that the server reports as RUNNING, but it does not accumulate log values, and the machine has no extra load. This is shown in the attached graph image." I got the following reply: "Database performance? Are you monitoring database I/O and available database threads?" So does anyone have an idea how I can do this? I am using MySQL as the backend database on RHEL 3.
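A sketch of manual spot checks for the 9-to-12 window, assuming shell access to the DB box and the sysstat package for iostat; intervals are illustrative:
Code:
mysqladmin -u root -p extended-status | grep -E 'Threads_(connected|running)'
iostat -x 60 3        # per-device utilisation, three 60-second samples
Both values could also be fed into Zabbix itself as UserParameter items in the agent config, though the exact item keys are up to you.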
I switched from Windows 7 to Linux, and when I try to import my database I get error 1044. It used to work when I was using Windows 7. I was able to export the database from my localhost; however, when I try to import it to the other server I get this: "#1044 - Access denied for user ''@'%' to database". I have been trying for hours to upload my site from localhost to the other server without any luck. I'm using phpMyAdmin to import it.
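Error 1044 usually means the account doing the import has no privileges on the database named in the dump (or the database doesn't exist yet on the target). A sketch with hypothetical names, using the command line instead of phpMyAdmin:
Code:
mysql -u root -p -e "CREATE DATABASE mysite; GRANT ALL ON mysite.* TO 'siteuser'@'localhost' IDENTIFIED BY 'secret';"
mysql -u siteuser -p mysite < mysite_dump.sql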
I installed Fedora 14 on my machine, then installed PostgreSQL in it. I started and configured it following [URL]. I am able to do su - postgres, but when I try to create a database I get an error: it asks for a password, and I give my root password:
Code:
createdb company
Password:
createdb: could not connect to database postgres: FATAL: password authentication failed for user "postgres"
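The prompt is asking for the postgres database role's password, not the system root password. A sketch of setting one, assuming the default ident/peer auth still allows local connections as the postgres system user:
Code:
su - postgres
psql -c "ALTER USER postgres WITH PASSWORD 'newpassword';"
createdb company      # now enter that password at the prompt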
I have SUSE Linux 10.1, which came with what looks to be an incomplete SQL server. If it works, I don't know how to use it, because the w3schools instruction guide does not work with it. Anyway, I was wondering whether there is any database software for Linux that could be installed in my webpage directory and used by HTML files, or any free CGI database software for Linux that I could use with HTML to create database-driven web pages.
I am having a problem restoring a database. The server is Ubuntu 8.04.4 LTS. It appears to wig out at a certain point; when this happens, the output on the screen shows:
Well, I have a problem connecting to a MySQL database using PHP. I used the package "mysql-noinstall-5.1.44-win32.zip", extracted it to c:\mysql, added c:\mysql\bin to the PATH variable, and used the following command in the command prompt to start MySQL: c:\mysql\bin\mysqld --standalone
Well, MySQL did start, and everything went well accessing MySQL from the command prompt. Under the database "mysql", in the table "user", I created a new user "php" with password = "****" and hostname = "localhost", but then I try to access the database with the following PHP routine:
And when I access it with my browser using localhost/test.php, all I get is "localhost" displayed on screen; "Connected to mysql" never gets displayed, nor do I get any error. I am using Windows 7 (32-bit Ultimate) and MySQL 5.1.44.
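When a user row is added to the mysql.user table by hand, the server doesn't see it until the grant tables are reloaded, and the Password column must hold a hash rather than plain text; either issue fails silently from PHP if errors aren't checked. A sketch of the likely fix at the mysql prompt ('****' stands for the real password):
Code:
FLUSH PRIVILEGES;
SET PASSWORD FOR 'php'@'localhost' = PASSWORD('****');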
We have a disaster recovery solution for our database where a dump file is generated daily at 1:00 AM. We have tested importing the file, along with running another sql file that generates the appropriate database user accounts, into a database on a third-party server.
One question that came up was the following: suppose the database crashes in the middle of the day, and transactions were entered into the database between the time the dump file was generated and the time of the crash. We can restore the dump file to either the main or the backup server. How can the transactions made between the time of the dump file and the crash be restored as well?
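One hedged answer, assuming MySQL with binary logging enabled (log-bin in my.cnf): restore the dump, then replay the binary logs from the dump time up to the crash. The filename and timestamp below are illustrative:
Code:
mysql -u root -p < daily_dump.sql
mysqlbinlog --start-datetime="2011-01-15 01:00:00" \
    /var/lib/mysql/mysql-bin.000123 | mysql -u root -p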
I am wondering whether it is problematic to let several different programs/PHP scripts use the same MySQL database concurrently, provided that they do not use the same tables?
I am running Ubuntu Server for my website. When I log on to my website, I get a "database crashed" message with these errors:
Code:
DB function failed with error number 145
Table './cmsmycompany/l2m_session' is marked as crashed and should be repaired
SQL=SELECT session_id FROM l2m_session WHERE session_id=MD5('7fbe5d447d34103616a0c38c3212183f')
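Error 145 on a MyISAM table usually clears after a repair; back up the table files first. A sketch, assuming the default datadir:
Code:
mysql -u root -p cmsmycompany -e "REPAIR TABLE l2m_session;"
# or, with the server stopped:
myisamchk -r /var/lib/mysql/cmsmycompany/l2m_session.MYI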