I was following [URL] for the cluster setup, but things didn't work out. One thing I want to ask: do I need to start the services cman, ccsd, and rgmanager one at a time on both machines? I have been running them through a script.
I am trying to build a GFS2 cluster with 2 or 3 Fedora 14 nodes, but I've encountered problems from the start. First, luci does not work at all on Fedora 14: there is no luci_admin, and even if I manage to start the luci service, I get a blank white screen when I try to open it from the browser. I've googled a bit and found that I might be able to set up GFS if I manage to build cluster.conf manually and start the cluster suite, but I cannot find documentation anywhere on how to create cluster.conf. If anyone knows how to set up GFS2 without the cluster suite, or how to configure cluster.conf, please let me know.
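For what it's worth, a minimal hand-written cluster.conf for a two-node cluster looks roughly like the sketch below (host names are placeholders, and fence_manual is only suitable for testing, never production). It would normally go in /etc/cluster/cluster.conf on every node before starting cman:

```xml
<?xml version="1.0"?>
<cluster name="gfs2test" config_version="1">
  <!-- two_node/expected_votes let a 2-node cluster reach quorum -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="single">
          <device name="manual" nodename="node1.example.com"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.example.com" nodeid="2">
      <fence>
        <method name="single">
          <device name="manual" nodename="node2.example.com"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- fence_manual is a test-only stopgap; use a real fence agent in production -->
    <fencedevice name="manual" agent="fence_manual"/>
  </fencedevices>
</cluster>
```

The node names must resolve (DNS or /etc/hosts) to the interfaces the cluster should use, and config_version has to be bumped on every edit.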
I am working on a project that needs an Apache web cluster. The cluster needs to be both a high-availability (HA) cluster and a load-balancing cluster. Someone mentioned Red Hat Cluster Suite, but honestly I can't figure out how it works, and I haven't been able to configure it correctly. The project currently has two nodes, but we need to support at least three nodes in the cluster.
In luci I can see that my cluster is green, and the two nodes too. I have made an IP resource and associated it with a service: green. I can relocate the service from one node to the other, and the IP appears in the list of IP addresses.
The problem is that I have done the same thing to configure Tomcat and PostgreSQL, and it does not work...
I have just installed a two-server cluster with ricci, luci, and Conga on CentOS 5.6 32-bit. Both servers are VMware guests and have a shared storage disk connected to them both,
with a GFS2 file system on it, plus fencing agents configured to work with VMware vCenter
(this is supported by VMware and works great on 4 other CentOS clusters I have been running for 4 months with no CLVMD).
In this setup I used CLVMD for the first time, as recommended by Red Hat, so I could have the flexibility of LVM under the GFS2 file system. But I have been getting a strange problem with it: sometimes after a developer has done some IO-heavy task, like unzipping a file or a simple tar, the load goes to 10-15, no task can be killed, and trying to reboot the server hangs.
After hard-resetting the server, everything works fine until the next time someone does the same kind of IO work.
I am using the Red Hat Cluster Suite (luci and ricci) on CentOS 5.3, with 2 nodes in a cluster. When I power off the first node, on which a VM service is running, the service switches to node2. So far, so good :) But when I restart node1, the service does not fail back to node1! I have created a failover domain with both nodes, prioritized so that node1 has priority 1 and node2 has priority 2.
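For reference, a failover domain that is supposed to fail back might look roughly like this in cluster.conf (node names here are placeholders). The attributes that matter are ordered="1", which makes the priorities meaningful, and nofailback, which must not be set to "1" (the nofailback attribute exists in later 5.x releases of the cluster stack; older versions simply fail back by default when the domain is ordered):

```xml
<rm>
  <failoverdomains>
    <!-- ordered="1": honor priorities; nofailback="0": return the service
         to the higher-priority node when it rejoins the cluster -->
    <failoverdomain name="prefer_node1" ordered="1" restricted="1" nofailback="0">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
</rm>
```

Also note that the service itself must reference the domain (domain="prefer_node1" on the &lt;service&gt; element) for the priorities to apply.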
This is my first post, and I have a question about the fencing software for VMware ESXi. The fence_vmware agent only works with ESX, and Red Hat (in your GIT repository) has submitted a new agent called fence_vmware_ng that claims to work with ESXi. But the problem is that they do not specify which versions it works with. Has anybody tested the fence_vmware_ng agent with VMware ESXi 4.0? I followed the instructions here: [URL] ...and I can install the software, the API from the VMware site, etc., but when I run the agent nothing happens. The agent connects to the server, as I can see in the logs, but the off/reboot/on operations do not work. Only the status operation works, which returns the state of a virtual machine. I have CentOS 5.3 (fully updated as of today) with RHCS.
I have made a cluster between two servers. In luci I can see that my cluster is green, and the two nodes too. I have made an IP resource and associated it with a service: green. I can relocate the service from one node to the other, and the IP appears in the list of IP addresses. The problem is that I have done the same thing to configure Tomcat and PostgreSQL, and it does not work... Here is my configuration for just the IP and Tomcat:
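As a point of comparison (not the poster's actual configuration), a service tying a floating IP to a Tomcat init script typically looks something like this in cluster.conf; the IP address and script path are made-up placeholders:

```xml
<rm>
  <resources>
    <ip address="192.168.1.100" monitor_link="1"/>
    <script name="tomcat" file="/etc/init.d/tomcat"/>
  </resources>
  <service autostart="1" name="tomcat_svc">
    <ip ref="192.168.1.100"/>
    <script ref="tomcat"/>
  </service>
</rm>
```

One common gotcha with script resources: rgmanager invokes the referenced script with start, stop, and status, so a service that relocates fine but then "does not work" is often an init script whose status action returns the wrong exit code.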
I am using the Red Hat Cluster Suite (luci and ricci) on CentOS 5.4. I have 2 nodes in a cluster and have clustered an Apache server. The service is up and running, and I can stop, start, and switch it on both nodes. The problem is when I try to simulate a fault on one node. For example: the Apache resource is on the first cluster node. If I power off the first cluster node (not halt or init 0, but cutting the electric power), the second cluster node does not take over the resource. With the clustat command, the service still shows as running on the first node, but the service is down and the first node is dead. Only once the first node joins the cluster again does the resource come up on the second node.
I've tried many, many different ways of setting up the cluster.conf file, but every time I start the cman service I get a message telling me "corosync died: could not read cluster configuration". This means nothing to me, nor can I find logs, or anything on the net, about this message. I'm ultimately just trying to run a simple GFS2 config on 2 Ubuntu 9.10 desktop nodes, but I can't even get a basic cluster config going. What am I missing? I've been at this for days.
Which RPMs do I need to install to set up Red Hat Cluster on RHEL 5.0? I want to create two RHEL 5.0 nodes as one cluster, with an Oracle database server installed. Please note that I have created these two nodes on VMware Server for testing purposes. Is it possible to create a cluster of two virtual guests?
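On RHEL/CentOS 5 the cluster packages are usually pulled in via yum package groups rather than listed one RPM at a time. A sketch, assuming your subscription or repositories expose these groups (group names as they appear on CentOS 5; run as root):

```shell
# Pulls in cman, rgmanager, ricci, luci and their dependencies
yum groupinstall "Clustering"

# Only needed if you also want GFS/GFS2 and clustered LVM (CLVM)
yum groupinstall "Cluster Storage"
```

Running the cluster inside VMware guests generally works for testing, with the caveat that you need a fencing method the guests can actually perform (e.g. a VMware fence agent rather than a power switch).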
I want to set up a cluster for scientific computing (mainly statistical work with R). I have a few conceptual questions. First, is there a difference between a Beowulf cluster and a cluster that has a single-system image ("SSI," e.g. using OpenSSI or LinuxPMI)? If so, what's the difference? Second, if there is a difference between Beowulf and SSI, which one is better for scientific computing? Third, does using Eucalyptus make sense for scientific computing, or is it more suitable for IO-oriented workloads such as web services or databases?
I'm planning on setting up a server cluster specifically for 3D video rendering. To maximize speed I want to use OpenGL hardware acceleration, and I'm pretty sure I have to use an NVIDIA video card if I want the whole thing to work reliably. Will I be able to start an X server with GLX on an NVIDIA video card that doesn't have a monitor connected to it? And what will be the maximum "virtual" display resolution that I can use?
Since I want several servers running side by side, I really don't have the room for any monitors. Just to avoid misunderstandings: it is not my intention to show what's being rendered to anyone in real time. I will only create video files that can be downloaded later. I'm already pretty sure this will work, probably using "CustomEDID" or something like that, but I don't have a suitable setup available to test it right now.
I'm having a very strange problem with Red Hat Cluster. After testing it in the lab, I tried to install a new cluster and received the following error: "cman not started: Cannot start, cluster name is too long or other CCS error /usr/sbin/cman_tool: aisexec daemon didn't start". I've searched the internet for that error, but found nothing. I decided to take the example from the cluster.conf man page, which looks like this:
And still I get the same error. I can resolve both my servers in DNS (by FQDN and by short name), and they also appear in /etc/hosts.
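One thing worth ruling out, since the error mentions it explicitly: cman stores the cluster name in a fixed-size field (15 characters is the commonly cited limit), so a deliberately short name in an otherwise bare-bones config is a quick test. The names below are placeholders:

```xml
<?xml version="1.0"?>
<cluster name="testcl" config_version="1">
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1"/>
    <clusternode name="node2.example.com" nodeid="2"/>
  </clusternodes>
</cluster>
```

If that still fails with the same message, the "other CCS error" branch is more likely, and running ccsd/cman by hand while watching /var/log/messages usually narrows it down.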
I need to configure autostart of Oracle Database 11g and Oracle SOA Suite 11g after a successful OS startup. Linux: Red Hat version 5. I have the commands needed for startup, but I am not sure where to keep the file.
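On RHEL 5 the usual place for such commands is a SysV init script in /etc/init.d, registered with chkconfig. Below is a minimal sketch, not a complete solution: the account name and ORACLE_HOME path are assumptions for your install, dbstart/dbshut are Oracle's own startup scripts, and the SOA Suite (WebLogic) start commands are site-specific so they are left as comments:

```shell
#!/bin/sh
# /etc/init.d/oracle -- hypothetical init-script sketch for autostarting
# Oracle Database 11g (and, with your own commands, SOA Suite).
# chkconfig: 345 99 10
# description: Starts and stops the Oracle database and listener

ORACLE_OWNER="oracle"                                   # assumed OS account
ORACLE_HOME="/u01/app/oracle/product/11.2.0/dbhome_1"   # assumed path

case "$1" in
  start)
    # dbstart reads /etc/oratab to decide which instances to start
    su - "$ORACLE_OWNER" -c "$ORACLE_HOME/bin/dbstart $ORACLE_HOME"
    su - "$ORACLE_OWNER" -c "$ORACLE_HOME/bin/lsnrctl start"
    # SOA Suite: start the WebLogic servers here (commands are site-specific)
    ;;
  stop)
    su - "$ORACLE_OWNER" -c "$ORACLE_HOME/bin/lsnrctl stop"
    su - "$ORACLE_OWNER" -c "$ORACLE_HOME/bin/dbshut $ORACLE_HOME"
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac
```

After saving it, make it executable and register it: `chmod +x /etc/init.d/oracle && chkconfig --add oracle && chkconfig oracle on`. Instances also need the autostart flag (`:Y`) in /etc/oratab for dbstart to pick them up.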
I am trying to set up cluster storage with the Rocks Cluster operating system. I have installed Rocks on the main server, and I want to connect the clients via PXE boot. When I start a client, it PXE-boots into compute-node mode, but then it asks where the install tree is stored; I give it the path, and the server says it is not able to find the directory path.
Steps: 1) insert-ethers. 2) The client is started with PXE boot. 3) It detects DHCP. 4) Finally it asks where to install from: CD-ROM, hard disk, NFS, etc.
I chose NFS and gave the LAN IP of the server. The server detects the client, but the client cannot find the file system directory under the export partition; it is not accepting the path.
PATH: /export/rock/install. It cannot find this path, so it is not able to start the OS from the PXE boot. Is there a solution, a manual for Rocks, or any other approach you can point me to?
I have read several articles online trying to get a grasp on a simple question: what are the main differences and functions of cloud server infrastructure versus cluster server infrastructure?
To my understanding, the basic setup is sort of identical. They both have a master server (3 GHz, 2 GB RAM, 50 GB HDD); from there it connects to a switch, and then a number of nodes (each 2 GHz, 4 GB RAM, 100 GB HDD) connect to the switch.
The cluster combines node resources so it looks like one computer (5 nodes = 2 GHz x 5, 4 GB x 5, 100 GB x 5 = 10 GHz, 20 GB RAM, 500 GB HDD). If this is correct, this sounds good for, say, a database or file server; the resources on the cluster would be constant. An example would be a database server: while running, the DB server starts to slow down because of all the data it is trying to store and retrieve. If you need more resources, just add another node.
The cloud uses only the resources it needs to run (same numbers as the cluster). This is good for a database or web server. Example: a web server hosts 10 domains, each with 20 pages. One hour it uses 3 GHz and 8 GB RAM; the next hour it is getting heavy traffic, so it uses 9 GHz and 15 GB RAM.
At least, that is how I am interpreting and understanding the information I have gathered and read. Plain and simple.
A supercomputing cluster running a Linux OS would have, say, 100 nodes. The hub (I think that's the term) would be running GNOME, KDE, or some other desktop GUI. Any vital programs would also be running on the hub. Any other processes, however, would be delegated to the nodes, much like individual cores in a multi-core tower. That way, you could be running 500+ intensive programs, relative to the nodes' power, and they would all run perfectly smoothly.
We have set up a high-availability cluster on two RHEL 5.4 machines with Red Hat Cluster Suite (RHCS), with the following configuration:
1. Both machines run MySQL server, Apache web server, and Zabbix server.
2. The MySQL database and web pages reside on the SAN.
3. The active machine holds the virtual IP and the mounted shared disk.
4. We have also included a script in RHCS which takes care of starting MySQL, Apache, and Zabbix on whichever machine becomes active when the cluster switches over.
The above configuration holds up well if the active machine goes down as a result of hardware failure or a reboot. But what if any one of the services (Apache, MySQL, Zabbix) running on the active machine hangs or becomes unresponsive? How can we handle this scenario? Please advise.
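For context on that question: rgmanager periodically runs the status action of each resource in a service, so if the daemons are defined as script resources whose init scripts implement a real status check, a hung daemon can trigger recovery or relocation on its own, without waiting for the whole node to die. A rough sketch, with service names, the IP, and script paths as placeholders:

```xml
<service name="app_svc" autostart="1" recovery="relocate" domain="prefer_node1">
  <ip address="192.168.1.50" monitor_link="1"/>
  <!-- rgmanager calls each script with start/stop/status; a failing
       status check causes the service to be recovered or relocated -->
  <script name="mysql"  file="/etc/init.d/mysqld"/>
  <script name="httpd"  file="/etc/init.d/httpd"/>
  <script name="zabbix" file="/etc/init.d/zabbix_server"/>
</service>
```

The caveat is the same as with any script resource: the init script's status verb must actually detect a hung (not merely stopped) daemon and return a failure exit code, or rgmanager will keep reporting the service as healthy.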
I tried OpenSSI for Debian. I've also tried Linux From Scratch, but flopped trying to get MOSIX or Kerrighed onto it. Has anyone got a simple way to get a hard-drive-installed single-system-image cluster running on 32-bit machines? I'm not a new Linux user, but I seem to have reached the limit of my skills. I don't like the live CDs because I plan to add software later; the live CDs don't fit my needs.