I am trying to figure out how to set up EC2 clustering. I am seeing Heartbeat as an option. I need something that will monitor the state of a daemon, not just whether the daemon is running. For example, say I want httpd to be highly available. Does Heartbeat check whether httpd is hung, or does it simply do a ps -e | grep httpd? So my question is: how does Heartbeat work?
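From what I've read so far, classic Heartbeat (the v1 haresources style) only watches the peer node, not the daemon itself; daemon-level checks seem to come from Pacemaker resource agents or an external monitor. To be clear about the kind of check I mean, here is a minimal sketch, assuming mod_status is enabled and with the restart action as a placeholder:

Code:
#!/bin/sh
# httpd-health.sh - fail if Apache does not answer an HTTP request within 5 seconds
URL="http://127.0.0.1/server-status"      # assumes mod_status is enabled locally
if ! curl -sf --max-time 5 "$URL" >/dev/null; then
    logger "httpd health check failed - triggering recovery"
    /etc/init.d/httpd restart             # or hand control over to the cluster manager instead
fi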
I'm kind of a newbie trying to figure out a LAMP setup that will require as little maintenance as possible while still being reasonably secure and maintained. After lots of googling, CentOS 5.x looks nice; the only problem is that I need PHP 5.2.13 or later. If I add it from some third-party repository it looks easy to install, but what happens when security patches are needed, etc.? I don't like the idea of compiling it myself. I would probably get it running, but there are always lots of dos and don'ts that I'm unaware of to get it "right", not only "running".
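In case it helps anyone answering, this is the sort of thing I mean by adding a third-party repository; the repo name, URL and key below are placeholders, not a recommendation:

Code:
# /etc/yum.repos.d/thirdparty-php.repo   (hypothetical repository)
[thirdparty-php]
name=Third-party PHP 5.2 packages for CentOS 5
baseurl=http://repo.example.com/centos/5/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.example.com/RPM-GPG-KEY-example

# then:  yum install php   (and later just "yum update" for their security fixes?)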
My current project environment has a single server, running RHEL 5.2, that constantly receives incoming data (video and text) at a periodic interval, e.g. every 30 minutes. Initial in-house testing projected that the server will generally be busy, so we decided to incorporate a second server for load-balancing purposes. So now, servers A and B will need to be clustered. Once that is done, incoming data will be balanced between the two servers (or at least that is what I would like to achieve; I'm aware that I'll need to do some additional configuration on the switch side, and that part is covered).
I've been reading up on Red Hat Cluster Suite, and Linux Virtual Server (LVS) seems the way to go. However, I noted that the LVS solution requires at least a two-tier setup, which would mean 3 additional servers instead of just 1. So here are my questions. I looked around and probably know the answer, but I'm going to ask anyway: is there a one-tier solution for LVS, i.e. has anyone tried it, and is it even feasible? From my reading it doesn't seem so, but I just want another opinion. Is there any other way for me to do the clustering (for load balancing) without LVS?
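To illustrate the one-tier idea I'm asking about: I believe LVS has a "local node" mode where the director also acts as one of the real servers, so a single extra box could both balance and serve. A rough, untested sketch with ipvsadm, assuming a virtual IP of 192.168.0.100 and two real servers at 192.168.0.11/.12 (all addresses made up):

Code:
# on the director, which is itself 192.168.0.11
ipvsadm -A -t 192.168.0.100:80 -s rr              # create the virtual service, round robin
ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.11    # the director's own address: served locally ("local node")
ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.12 -g # direct routing to the second real server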
Side note: I'm currently looking at Ultra Monkey and will be trying it out in a while. However, the project I'm doing will eventually be rolled out to a live site, and my customer is kind of... particular. I'm just wondering if there's a software/application (even one that needs to be purchased) that comes with support.
I added the "@clustering" and "@kvm" keywords to my ks.cfg file, but during installation an error popped up about not being able to find either of these packages, and the groups weren't installed.
I do see the Cluster and VT directories in my redhat_es5.4 directory along with the Server directory. The rest of the files install just fine.
In doing some net and forum searching, I found a reference to a base.repo file that lists the directories, but I'm not sure whether it's related to creating a yum repository or, if not, whether it should have been created in the redhat_es5.4 directory.
While I've built kickstarts for several years and am comfortable with the file format, this is the first time I'm working with RPMs outside the main Server path.
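My current guess is that the Cluster and VT trees are separate yum repositories with their own repodata, so the kickstart needs extra repo lines before those %packages groups will resolve. Something like the following, though the install-server path is made up and I'm not certain which of the two directories actually carries the kvm group:

Code:
# ks.cfg - point anaconda at the extra repositories on the install tree
repo --name=Cluster --baseurl=http://installserver/redhat_es5.4/Cluster
repo --name=VT      --baseurl=http://installserver/redhat_es5.4/VT

%packages
@base
@clustering
@kvm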
Having issues installing clustering software on a server subscribed to the rhel-x86_64-as-4-cluster channel. The server is up to date with updates, but I am having trouble installing the clustering software and don't know what to do now. I am running the following kernel:
Code:
# uname -r
2.6.9-89.0.25.ELsmp
I want to cluster my web server so that it becomes highly available. It is currently running PHP, Apache and MySQL with no redundancy in place. I want it clustered so that if one node goes down, the second one will automatically take over. I have done that with HA on RHEL, but I want to use the CentOS clustering solution for this. Also, how will the two nodes share the web content and the MySQL database? I have no idea. I have set up an iSCSI target to store the content, but it doesn't work as expected.
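For reference, the CentOS equivalent appears to be the same Red Hat Cluster Suite packages (cman/rgmanager), where a floating IP and the httpd init script are grouped into one failover service. A rough, untested sketch of the kind of /etc/cluster/cluster.conf I mean; node names and the IP are placeholders, and fencing is omitted entirely:

Code:
<?xml version="1.0"?>
<cluster name="webcluster" config_version="1">
  <!-- fencing and two-node settings omitted; this is only a sketch -->
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1"/>
    <clusternode name="node2.example.com" nodeid="2"/>
  </clusternodes>
  <rm>
    <resources>
      <ip address="192.168.0.50" monitor_link="1"/>
      <script name="httpd" file="/etc/init.d/httpd"/>
    </resources>
    <service name="web" autostart="1">
      <ip ref="192.168.0.50"/>
      <script ref="httpd"/>
    </service>
  </rm>
</cluster>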
I'm looking to set up a clustered mail server. I kind of know how I'm going to do it, but wanted to check if there is a better way. We have 3 mail servers, running as EC2 instances on Amazon AWS. We were going to achieve clustering by giving all three a shared EBS storage device to store the mail. The mail would be received by any of the three servers (via Postfix) and could be retrieved from any of the three servers (via Dovecot). For receiving mail (SMTP), the domains would have 3 MX records pointing to each of the servers, but for sending and retrieving mail (SMTP and POP3/IMAP) the three servers would share one DNS A record with 3 IPs associated. (I know that when using this method for web servers the load gets distributed among the IPs under that record, but I'm not sure if this will work for SMTP/POP3/IMAP.)
What we want is to have 3 servers that share the load equally but are completely redundant for all services (POP3, IMAP and SMTP). We also need to be able to scale upwards, so that if we need to add more servers we can do so easily. The servers must also be perfectly synchronized at all times.
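For reference, this is the DNS layout I have in mind; a rough zone-file sketch, with example.com and the addresses standing in for the real domain and elastic IPs:

Code:
; round-robin A record for client access (POP3/IMAP/SMTP submission)
mail        IN  A   203.0.113.11
mail        IN  A   203.0.113.12
mail        IN  A   203.0.113.13

; equal-priority MX records so inbound SMTP is spread across all three
@           IN  MX  10 mx1.example.com.
@           IN  MX  10 mx2.example.com.
@           IN  MX  10 mx3.example.com.
mx1         IN  A   203.0.113.11
mx2         IN  A   203.0.113.12
mx3         IN  A   203.0.113.13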
I'm having issues getting Apache to respond to requests from outside my local LAN. If I go to my server at [URL].. it says "connecting..." but never finishes and never returns anything. I'm using Ubuntu Server 10.10.
a) The DNS is working fine. It's pointed to my cable modem's IP and ping responds fine.
b) The Apache server is set up and working locally. In fact, if I use w3m and go to [URL].. I reach the test page perfectly. I can't figure out where the missing piece is to close this gap. Below are the checks I've run so far from the server itself, and some config files to illustrate my setup:
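The checks so far, with my dyndns name replaced by a placeholder; the external test is the part I'm least sure how to interpret:

Code:
# is Apache actually listening on all interfaces, not just 127.0.0.1?
sudo netstat -tlnp | grep ':80'

# is the local firewall rejecting anything?
sudo iptables -L -n | grep -E '80|REJECT|DROP'

# from a machine OUTSIDE the LAN, does the port answer at all?
# (myhost.example.com stands in for my dyndns name)
curl -v --max-time 10 http://myhost.example.com/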
I'm running several SHOUTcast server instances and a WowzaMediaServer instance on a CentOS machine. I'm experiencing a memory leak problem, but I can't figure out which processes are eating memory.
TOP command reports as follows:
[Code]...
Something mysterious to me (I'm still a Linux newbie) is that top reports a total of 7.5 GB of used RAM but only a very small percentage for each individual process (0-1%). Memory consumption starts at 1 GB/8 GB after a reboot and, over three days of running, gradually increases up to 8 GB. I'm practising with Linux, but I still have a lot to learn about what's happening on my system. For instance, are there Linux kernel logs saved somewhere that I can look at?
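These are the places I've started poking at; if I'm reading free(1) right, the buffers/cache line may explain part of the "used" number, but I'm not sure:

Code:
# overall usage, with the buffers/cache adjustment shown separately
free -m

# per-process resident memory, biggest first
ps aux --sort=-rss | head -n 15

# kernel messages (OOM killer, driver problems) and the persistent syslog
dmesg | tail -n 50
less /var/log/messages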
I think I can eliminate Media Companion as the problem, since all the other Samba servers work with it. I want to troubleshoot this but don't even know where to start. How do I figure out what makes the OpenWrt Samba server different from the other, 100% working, FreeNAS and PC-01 servers?
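What I was planning to compare first, in case that's a sensible starting point (WRT-BOX stands in for the OpenWrt box's hostname):

Code:
# dump the effective Samba configuration on each server so they can be diffed
testparm -s > smb-effective.conf

# from the client side, can the shares even be listed?
smbclient -L //WRT-BOX -U guest

# on the OpenWrt box, check the system log for smbd errors
logread | grep -i smbd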
We are trying to set up a NIS server on a CentOS system. We need to have a NIS server which can provide NIS authentication to a couple of clients. We are practically new to all this stuff.
We just googled for some ideas about installing ypserv, ypbind and the portmapper. We did all that and also started them successfully. But now the clients are not able to join the NIS domain. The error log states "YP_DOMAIN NOT BOUND".
I guess we have not filled in the /etc/yp.conf and /etc/hosts files properly. Please let us know the detailed steps to set up a NIS server (our current guesses at the client-side files are sketched after the ypinit output below).
Also, please let us know what entries should go into the different /etc/<file_names>? What is meant by HOSTNAME in the /etc/hosts file?
Is there any other files which need to be changed? Are we missing any steps?
Also, to add on: while executing the ypinit command we faced the following error:
Code:
At this point, we have to construct a list of the hosts which will run NIS
servers. localhost.localdomain is in the list of NIS server hosts. Please
continue to add the names for the other hosts, one per line. When you are
done with the list, type a <control D>.
        next host to add:  localhost.localdomain
        next host to add:
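And here is our current understanding of the client-side files mentioned above; the domain name and server IP are placeholders, so please correct us if the layout is wrong:

Code:
# /etc/sysconfig/network  (on the client) - sets the NIS domain at boot
NISDOMAIN=example.nis

# /etc/yp.conf - tell ypbind which server answers for that domain
domain example.nis server 192.168.1.10

# /etc/hosts - HOSTNAME here just means the resolvable name of each machine
192.168.1.10   nisserver.example.com    nisserver
192.168.1.21   nisclient1.example.com   nisclient1

# then restart ypbind and test:
#   service ypbind restart
#   ypwhich && ypcat passwd | head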
I'm trying to get an HTPC going, but I don't have any money to purchase a new computer. I have 2 desktops in my possession that I am trying to use: one is a 2.4 GHz machine with a 128 MB Radeon 9800 PRO video card (I think) and 1.5 GB RAM (the system is about 8 years old, but I built it top of the line back then). The second system is a 3.2 GHz dual core with on-board video and 2 GB RAM. The problem is, neither system will run 1080p video without dropping hundreds of frames, producing very choppy audio/video. I was wondering if it is possible to cluster these two systems together to harness both their processing power to run 1080p video? I would just jump in and attempt it, but from what I have been reading, it looks like individual processes aren't actually shared across PCs; the cluster just decides which PC a process will run on based on load, which is an issue since neither machine can cleanly run the 1080p video on its own.
I want to set up an FAI server, for which I was looking for the best method of mirroring Debian Lenny. I want to set up a local mirror using the best method available. If that is ftpsync, please point me at some good ways of doing it; I tried ftpsync mirroring but could not get it working properly. I want this mirror to be accessible to my FAI setup so that I can start installations on multiple machines and have updates and package installation done from the same local mirror.
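If ftpsync keeps fighting me, I was also considering debmirror as an alternative; a sketch of what I had in mind (the mirror host, target path, sections and architectures are just examples):

Code:
# partial mirror of Debian Lenny, binary packages only, no sources
debmirror --host=ftp.de.debian.org --root=debian --method=http \
  --dist=lenny --section=main,contrib,non-free --arch=i386,amd64 \
  --nosource --progress /srv/mirror/debian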
I want to connect two systems using the clustering concept. I am new to clustering configuration. I have installed Ubuntu 9.10 Server Edition on two systems. What do I do next to configure clustering on Ubuntu 9.10 Server Edition?
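From what I've read so far, a simple starting point for two-node failover is Heartbeat; a minimal, untested sketch of the three files it wants (node names, interface and the shared IP are placeholders):

Code:
# sudo apt-get install heartbeat

# /etc/ha.d/ha.cf
bcast eth0
keepalive 2
deadtime 30
auto_failback on
node node1 node2

# /etc/ha.d/authkeys   (must be chmod 600)
auth 1
1 sha1 somesharedsecret

# /etc/ha.d/haresources - node1 normally owns the floating IP and apache2
node1 IPaddr::192.168.1.100/24/eth0 apache2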
I am looking to build an HA/LB Linux cluster with specific software requirements. For hardware, I have a number of dual-Xeon PE 2650s and would like to use them as efficiently as possible. These are 32-bit systems; I anticipate scaling up to 64-bit systems once I have a tested, working solution in place. For the distro, I am familiar with CentOS, Gentoo and Ubuntu but unsure which would be the best foundation, although I'm leaning towards CentOS. For software, I need to provide all the services covered by XAMPP (Apache, MySQL, PHP, Perl, FTP), plus the Red5 flash media server.
My current train of thought is: 6 physical servers; 2 directors running Heartbeat, 2 Apache, 2 Red5; a Gigabit private network connecting the nodes; CentOS 5.5 on all nodes; DRBD across the 2 Apache nodes for Apache, MySQL and PHP; DRBD across the 2 Red5 nodes for Red5.
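To make the DRBD part concrete, this is the sort of resource definition I have in mind for the Apache pair; the device, backing partition and private-network addresses are placeholders and I haven't tested it:

Code:
# /etc/drbd.d/web.res   (DRBD 8.x style)
resource web {
  protocol C;                       # synchronous replication
  on apache1 {
    device    /dev/drbd0;
    disk      /dev/sda3;            # backing partition holding the web/MySQL data
    address   10.0.0.1:7788;        # private gigabit network
    meta-disk internal;
  }
  on apache2 {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}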
I am working on Citrix XenServer. I have installed two virtual machines (CentOS 5.3). Apache is configured and running on the first VM. Can I set up Apache clustering across those VMs? My aim is: if Apache on the first VM goes down, then Apache on the second VM should automatically start. Is there any tutorial on setting up Apache clustering on virtual machines?
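For the "start on the other VM when Apache dies" behaviour, what I keep running into is Heartbeat/Pacemaker with the apache resource agent, which also polls a status URL instead of just checking the process. A sketch of the crm configuration I mean; the IP, netmask and config paths are assumptions on my part:

Code:
crm configure primitive vip ocf:heartbeat:IPaddr2 params ip=192.168.10.50 cidr_netmask=24
crm configure primitive website ocf:heartbeat:apache \
    params configfile=/etc/httpd/conf/httpd.conf statusurl="http://127.0.0.1/server-status" \
    op monitor interval=30s
crm configure group webgroup vip website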
We have the Zabbix network monitoring tool installed on two servers, 172.17.11.6 (master) and 172.17.11.3 (slave), both running RHEL 5.4, with 172.17.11.4 being the virtual IP address. We are trying to implement high availability with Red Hat Cluster Suite. OS-level clustering has been implemented with the following cluster configuration.
But we need to implement application-level clustering for the zabbix_server process, which in turn depends on the httpd and mysqld daemons being up. So I have to check the health of mysqld and httpd with a shell script, along with the health of the zabbix_server process (a rough sketch of the kind of script I mean follows the list below). Any good tutorials for this? Any guidelines that I have to follow? I may need to take care of the following things when any one of the processes goes down:
1. Shifting the virtual IP
2. Shifting /dev/sdb1 (the shared drive) to the slave
3. Stopping the other services on the master
4. Starting all the services on the slave
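The rough health-check sketch mentioned above, written so it could be dropped in as an rgmanager <script> resource (init-script style start/stop/status); the paths and service names are assumptions:

Code:
#!/bin/sh
# /usr/local/bin/zabbix-ha - crude health check for the zabbix stack
check() {
    pgrep -x zabbix_server >/dev/null || return 1
    pgrep -x httpd         >/dev/null || return 1
    mysqladmin ping >/dev/null 2>&1    || return 1
    return 0
}

case "$1" in
    start)  service mysqld start; service httpd start; service zabbix_server start ;;
    stop)   service zabbix_server stop; service httpd stop; service mysqld stop ;;
    status) check ;;   # rgmanager polls this and fails the service over on non-zero
    *)      echo "usage: $0 {start|stop|status}"; exit 2 ;;
esac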
I have a load balancer with 2 web servers behind it. The web servers rsync with cloud storage to update their Apache directories once every hour. Apache is just serving PHP pages that pull/push data to a DB, so they don't need to be updated more often than that. However, I need to figure out how to implement a master/master MySQL setup for the web servers' PHP to point at. I need to implement it without having a single point of failure. The load balancers are useless for failover as they only detect availability via ping, so putting a master/master setup behind a load balancer is out. What is the best way to set up master/master MySQL in an HA setup without using a load balancer provided by the host?
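The replication side of master/master is the part I think I understand; a minimal sketch of the my.cnf settings on each box (the server IDs and offsets follow the usual two-master convention, everything else is placeholder):

Code:
# /etc/my.cnf on db1
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2     # two masters
auto_increment_offset    = 1     # db1 gets odd ids, db2 even, so inserts never collide

# /etc/my.cnf on db2
[mysqld]
server-id                = 2
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2

# then point each at the other with CHANGE MASTER TO ... and START SLAVE;
# the failover the PHP sees still needs something like a floating IP on top.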
I'm trying to set up RAID 1 on a CentOS 5 server for a Zimbra email server. I get a partition schema error. Can I do this? The server is an HP ProLiant ML150 G3 with two 80 GB HDDs.
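In case it clarifies what I'm attempting: this is the layout I've been trying to create in the installer, expressed as the equivalent kickstart lines (sizes and mount points are just what I picked, not a recommendation):

Code:
# two mirrored software-RAID sets: a small one for /boot, the rest for /
part raid.01 --size=200      --ondisk=sda
part raid.02 --size=200      --ondisk=sdb
part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
part swap    --size=2048     --ondisk=sda   # swap left unmirrored here for simplicity

raid /boot --fstype ext3 --level=1 --device=md0 raid.01 raid.02
raid /     --fstype ext3 --level=1 --device=md1 raid.11 raid.12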
Is it possible to somehow set up an SSH server that doesn't require a username, password or certificate to log in? I wish to provide shell access to a console program, which will prompt for its own username and password. Encryption is essential, though, and users must not be able to snoop on each other.
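The closest thing I've come up with so far, in case someone can confirm or shoot it down: a dedicated account whose shell is the console program, with empty-password logins allowed only for that account. The account name and program path are mine, and PAM may also need to permit null passwords:

Code:
# create the account and lock its shell to the console program
useradd -m -s /usr/local/bin/myconsole guest
passwd -d guest                      # empty password

# /etc/ssh/sshd_config  (relevant fragment)
PermitEmptyPasswords yes
Match User guest
    ForceCommand /usr/local/bin/myconsole
    X11Forwarding no
    AllowTcpForwarding no

# users would then just run:  ssh guest@server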
If you hadn't guessed it from my last 3000(ish) Java-related posts, I'm a Java n00b writing a Java applet for a work project. I got to the part where I was about to write the applet code that would send HTTP requests to my CGI scripts. But then I read some paragraphs in a book praising Java servlets as better than CGI because they are easier to use and give much better performance server-side. My server load isn't very big, though, and I was wondering if it would be worth taking the time to learn about Java servlets and how to set up the server-side configuration on my Fedora web server.
I have a small home-office network. On that network I have two linux computers, one is a client the other a server.
On the server I have the NFS server set up, and I mount some of its NFS exports on the client computer.
On the server I have the firewall on and here it becomes a little tricky.
Since both the server and the client connect to the router, the interface (eth1) is theoretically in both the internal and external zones.
The router is commercial grade and therefore has a good firewall on it, which is also set up. So the firewall on the server is really more of a backup than a necessity. That's fine, though: with the server's firewall on, fail2ban is able to work, which I like, so I don't want to just turn the server firewall off even though I have good security from the router.
However, when I turn on the server's firewall, the client computer cannot see the NFS server when scanning for servers (done by clicking "Choose" next to "NFS Server Hostname" when adding an NFS share in the NFS Client module in YaST). Clearly something is being blocked, even though I have both "NFS Client" and "NFS Server Service" allowed in the server firewall. The firewall config files for these are below.
The firewall configuration is pretty much "out of the box". That is, I have the services I need opened up for the external zone; the other zones are left at their defaults, which means the internal zone, although not used (i.e. not attached to any interface), is completely open.
The perfect solution, I guess, would be to connect my client computer through a different NIC (perhaps eth0), make that the internal zone, and therefore allow all traffic through to it while still shielding the server from the external zone. However, I cannot make that physical change to my network for now, so I am looking for an in-between (non-perfect) solution.
In this case I am guessing that means opening up extra NFS ports to the external zone so I have full NFS functionality. I don't mind this because like I said, the router firewall is the main line of defense anyway.
So, given all of the above, could someone tell me what I would additionally need to open up in the server firewall to make NFS server detection work on the client while the firewall is on? Or, if you have a cleverer/better solution that doesn't involve changing my physical network, that would be great (my untested guess at what the extra openings would look like is sketched after the config snippet below).
Hopefully I have written this in enough detail and clearly enough so that all the parameters are clear but if not, feel free to ask me what you like and I'll try to make it clear.
Code:
## Description: Firewall Configuration for NFS kernel server.
#
# Only the variables TCP, UDP, RPC, IP and BROADCAST are allowed.
# More may be supported in the future.
code....
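And the untested guess mentioned above: pin the otherwise floating RPC ports and open them (plus the portmapper and nfs itself) to the external zone. The port numbers are ones I picked, and I'm assuming /etc/sysconfig/nfs honours these variables on openSUSE the way it does on other distros:

Code:
# /etc/sysconfig/nfs - pin the otherwise random RPC ports so they can be opened
MOUNTD_PORT=20048
STATD_PORT=32766
LOCKD_TCPPORT=32768
LOCKD_UDPPORT=32768

# /etc/sysconfig/SuSEfirewall2 - open portmapper, nfs and the pinned ports externally
FW_SERVICES_EXT_TCP="ssh nfs 111 20048 32766 32768"
FW_SERVICES_EXT_UDP="111 2049 20048 32766 32768"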
I want to set up the following services on openSUSE: DHCP, OpenLDAP, and NFS (to allow users to mount their home directories from the server). I started off with the OpenLDAP server. I configured it with dc=localdomain,dc=local as its domain, since the server machine has no internet access. However, when I go to add a .ldif file with the following command:
Code:
ldapadd -x -D 'cn=Administrator,dc=localdomain,dc=local' -f /home/base.ldif -W
it returns this
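In case the problem is the .ldif itself: this is the kind of minimal base .ldif I understand is needed for that suffix, and I'm not sure mine matches it exactly. I'm also assuming the -D bind DN has to match the rootdn configured in slapd.conf; if mine doesn't, maybe that's the whole problem.

Code:
# /home/base.ldif - top-level entry for the dc=localdomain,dc=local suffix
dn: dc=localdomain,dc=local
objectClass: top
objectClass: dcObject
objectClass: organization
dc: localdomain
o: Local Domain

# an ou to hold user accounts later
dn: ou=people,dc=localdomain,dc=local
objectClass: organizationalUnit
ou: people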
I want to set up Moodle and need to give my computer some capability to act as a server. I am following the steps at [URL], although my question is not really related to Moodle. Here is the situation: setting everything up to make my computer accessible from outside has worked so far. I got myself a stable address using a dynamic DNS service and can SSH into my computer from any other computer connected to the web. So,