Can a cluster made up of 5 Dell PowerEdge computers, each with a NIC and an HDD, act as a file server that is highly available? If I go over and break one of the computers with a sledgehammer, will the cluster continue to operate without data loss?
I'm having a very strange problem with Red Hat Cluster. After testing it in the lab, I tried to install a new cluster and received the following error: cman not started: Cannot start, cluster name is too long or other CCS error /usr/sbin/cman_tool: aisexec daemon didn't start. I've searched the internet for that error, but found nothing. I decided to take the example from the cluster.conf man page, which looks like this:
[Code]...
And still I get the same error. I can resolve both servers via DNS (FQDN and short name), and they also appear in /etc/hosts.
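For reference, one common trigger for this exact message is the name= attribute itself: cman limits the cluster name to 16 characters, and the same "CCS error" also fires when ccsd cannot parse or distribute the file. A minimal two-node cluster.conf sketch for a RHEL 5-era stack, with hypothetical hostnames and a deliberately short cluster name:

```xml
<?xml version="1.0"?>
<cluster name="webha" config_version="1">
  <!-- two_node/expected_votes let a 2-node cluster reach quorum -->
  <cman expected_votes="1" two_node="1"/>
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1"/>
    <clusternode name="node2.example.com" nodeid="2"/>
  </clusternodes>
  <fencedevices/>
  <rm/>
</cluster>
```

The node name= values must match what the hosts resolve to (hence the DNS and /etc/hosts checks above).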
I am trying to set up cluster storage with the Rocks Cluster operating system. I have installed Rocks on the main server, and I want to connect the clients via PXE boot. When I start a client, it boots over PXE into compute-node mode, but when it asks where the installation files are stored and I give the path, the server says it is not able to find the directory.
Steps: 1) insert-ethers 2) the client is started with PXE boot. 3) it obtains a DHCP lease. 4) finally it asks where to install from: CD-ROM, hard disk, NFS, etc.
I chose NFS and gave the LAN IP of the server. The server detects the client, but the client cannot find the filesystem directory under the export partition; it does not accept the path.
PATH: /export/rock/install. Since it is not finding this path, it is not able to start the OS from PXE boot. Is there any solution or manual for Rocks, or any other solution you can point me to?
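It may be worth confirming on the frontend that the install tree actually exists and is exported over NFS. Note that stock Rocks releases typically keep the tree at /export/rocks/install (with an "s"), which would explain a "directory not found" if the path was typed as /export/rock/install; treat the exact path as an assumption and check your own frontend. A hedged check:

```shell
# On the Rocks frontend:
ls -d /export/rocks/install    # stock location of the install tree (assumption)
exportfs -v | grep /export     # confirm /export is actually NFS-exported
# From inside the install tree, rebuild the distribution the nodes install from:
#   cd /export/rocks/install && rocks create distro
```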
I am trying to build a GFS2 cluster with 2 or 3 Fedora 14 nodes, but I've encountered some problems from the start. First, luci does not work at all in Fedora 14: there is no luci_admin, and even if I manage to start the luci service, I get a blank white screen when I open it in the browser. I've googled a bit and found that I might be able to set up GFS if I build cluster.conf manually and start the cluster suite, but I cannot find documentation anywhere on how to create cluster.conf. Does anyone know how to set up GFS2 without the cluster suite, or how to write cluster.conf by hand?
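The stack can be driven without luci: write /etc/cluster/cluster.conf by hand (the man page carries a minimal example), start the cluster services, then make and mount the filesystem. A sketch, assuming a hypothetical cluster named "gfscluster", three journals (one per node), and a placeholder device:

```shell
# After /etc/cluster/cluster.conf exists on every node:
service cman start                                 # join the cluster, start dlm
# -p lock_dlm  : clustered locking
# -t cluster:fsname : lock table; "gfscluster" must match the cluster.conf name
# -j 3         : one journal per node that will mount the FS
mkfs.gfs2 -p lock_dlm -t gfscluster:gfs00 -j 3 /dev/sdb1
mount -t gfs2 /dev/sdb1 /mnt/gfs
```

The lock-table name is the piece that ties the filesystem back to the cluster.conf you wrote by hand.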
I am working on a project that needs an Apache web cluster. The cluster needs to be both a high-availability (HA) cluster and a load-balancing cluster. Someone mentioned Red Hat Cluster Suite, but honestly I can't figure out how it works, and I haven't been able to configure it correctly. The project currently has two nodes, but we need to support at least three nodes in the cluster.
Does anyone have any experience with clustering KVM? I know how to do this for Xen, but I would like to know if it can also be accomplished with KVM. I am looking for a solution that is not distro-specific.
I wanted to know how to cluster a Linux server, for example a Squid proxy cluster, an Apache cluster, or any other service; I just want to learn how clustering is done on Linux. I have been searching, but with no luck so far.
I'm trying to build a cluster computing system using Red Hat Enterprise Linux 5.5, and I'm on the step of setting up the NFS mounting (step 3 of part VI of this paper: However, I keep getting "permission denied" when I use "mount -t" to mount the master node's drive on the slave node. Even when I edited /etc/fstab to mount it automatically at boot, I still get the same result. By the way, I'm already logged in as root on both the slave and master nodes.
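"Permission denied" on an NFS mount as root is often an export-options problem rather than a filesystem one. A hedged sketch, with hypothetical hostnames, paths, and subnet; adjust to the paper's layout:

```shell
# /etc/exports on the master (hypothetical path and subnet);
# no_root_squash lets root on the slave act as root on the export:
#   /home  192.168.1.0/24(rw,sync,no_root_squash)
exportfs -ra                      # re-read /etc/exports after editing
showmount -e master               # from the slave: verify the export is visible
mount -t nfs master:/home /home   # on the slave node
```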
I am having an issue with LVM on a 2-node cluster. We are using PowerPath for the external drives. We had a request to grow the /apps filesystem, which is ext3.
On the first node we did: pvcreate /dev/emcpowercx1 and /dev/emcpowercw2. Then: vgextend apps_vg /dev/emcpowercw2 /dev/emcpowercx1, lvresize -L +60G /dev/apps_vg/apps_lv, resize2fs /dev/apps_vg/apps_lv. Everything went well and /apps was grown. But on the second node, when I run pvs,
I get the following error: WARNING: Duplicate VG name apps_vg: RnD1W1-peb1-JWay-MyMa-WJfb-41TE-cLwvzL (created here) takes precedence over ttOYXY-dY4h-l91r-bokz-1q5c-kn3k-MCvzUX. How can I proceed from here?
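For reference, the resize sequence run on the first node, laid out as commands (device and VG/LV names as in the post above):

```shell
pvcreate /dev/emcpowercx1 /dev/emcpowercw2          # initialise the new PowerPath devices
vgextend apps_vg /dev/emcpowercw2 /dev/emcpowercx1  # add them to the volume group
lvresize -L +60G /dev/apps_vg/apps_lv               # grow the logical volume by 60G
resize2fs /dev/apps_vg/apps_lv                      # grow the ext3 filesystem to match
```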
I have set up a MySQL high-availability cluster with two nodes using DRBD and Heartbeat. The installation was successful, but one question remains: Heartbeat monitors network or server failure, etc.
But what happens if the MySQL service itself stops or fails to start? The Heartbeat/DRBD cluster takes no action on that part. I now want to enable service-level monitoring of MySQL as well. I have gone through the MON help on the web but could not find a good step-by-step tutorial.
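As a stopgap next to MON, a cron-driven watchdog can probe MySQL and ask Heartbeat to hand the resources to the peer after repeated failures. A minimal sketch; CHECK_CMD, FAILOVER_CMD, and MAX_FAILS are assumptions to adjust for your site, and /usr/share/heartbeat/hb_standby is the heartbeat 2.x helper that relinquishes local resources:

```shell
# Minimal service watchdog sketch for a Heartbeat/DRBD pair (assumed names).
CHECK_CMD=${CHECK_CMD:-"mysqladmin -uroot ping"}                 # health probe (assumption)
FAILOVER_CMD=${FAILOVER_CMD:-"/usr/share/heartbeat/hb_standby"}  # hand resources to the peer
MAX_FAILS=${MAX_FAILS:-3}
fails=0

check_once() {
    if $CHECK_CMD >/dev/null 2>&1; then
        fails=0                        # service answered, reset the counter
    else
        fails=$((fails + 1))           # count consecutive failures
    fi
    if [ "$fails" -ge "$MAX_FAILS" ]; then
        echo "service dead, requesting failover"
        $FAILOVER_CMD
        fails=0
    fi
}

# main loop (commented out so the functions can be sourced on their own):
# while true; do check_once; sleep 10; done
```

Requiring several consecutive failures avoids failing over on a single slow query or transient hiccup.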
I have an HA cluster, and all nodes share the same network supplied by one ISP, so if that ISP has an outage the whole system fails. I want to set up the HA cluster with links between the nodes over different networks (supplied by different ISPs). How can I do this?
What I did not realize was that DLM uses the external Ethernet interface even when talking to the local machine/node, so iptables was blocking my DLM daemon. With iptables down, or the TCP port for DLM opened, cman starts and mount works. What I have here is a Fibre Channel SAN which will be directly attached to several servers in the near future. Those servers should all be given access to a single shared filesystem on the SAN. I heard that the right filesystem choice for this kind of setup is GFS, because it has a distributed lock manager and one FS journal for each node.
But I am having trouble setting up GFS. I have managed to create a GFS filesystem on a small test volume (a local HDD so far), but I am unable to mount it. It seems that GFS/DLM needs a lot of cluster services running, which I do not all understand or know how to set up correctly. Also: does the lock_dlm machinery need Ethernet communication to handle file locks? And if so, does it fetch the node list from /etc/cluster/cluster.conf to determine whom to talk to?
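If the firewall is the culprit, the relevant holes can be punched instead of dropping iptables entirely. A sketch for a RHEL 5-era stack; port numbers are the usual defaults (DLM on TCP 21064, openais/cman multicast on UDP 5405), so verify them on your build before relying on this:

```shell
# Allow DLM inter-node (and node-to-self) lock traffic:
iptables -I INPUT -p tcp --dport 21064 -j ACCEPT
# Allow the cman/openais membership traffic as well:
iptables -I INPUT -p udp --dport 5405 -j ACCEPT
service iptables save     # persist the rules across reboots (RHEL/CentOS)
```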
I created a cluster with two nodes plus a management machine running luci. If a node reboots, the cluster works, transferring the resource (IP address); but if a node is stopped forcibly (pull the plug), the cluster does not fail over.
I am not able to complete my cluster configuration. I have done the cluster configuration, but when I run the system-config-cluster command the cluster configuration window appears, and after closing the window it gives me the following error: [root@node1 ~]# system-config-cluster. When the window opens, I get the error below. When I searched, everything pointed to two RPMs; I verified the versions of cman and openais, which are:
cman-2.0.98-1.el5
openais-0.80.3-22.el5
error code res was not 0, it was 1; error string was: /usr/sbin/cman_tool: Cannot open connection to cman, is it running? When I try to issue service cman start, I get the error:
I am using CentOS. I have read in places that you can use DRBD + Heartbeat + NFS to make a simple failover NFS server, but I can't find any document that actually works. I've tried 20 or so, including some Debian ones. Does anyone have other ideas on how to do this? Point me in the right direction, please. I want two nodes: one actively serving an NFS share, the other ready for failover. If the first one goes down, the second takes over, meaning the filesystem stays in sync, the IP moves, and NFS comes up.
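With classic heartbeat 1.x-style configuration, the "filesystem in sync, IP moves, NFS comes up" sequence is expressed as one resource line. A sketch with hypothetical names (primary node "nfs-a", virtual IP 192.168.1.50, DRBD resource "r0" backing /dev/drbd0):

```
# /etc/ha.d/haresources (identical on both nodes) -- hypothetical names:
nfs-a IPaddr::192.168.1.50 drbddisk::r0 Filesystem::/dev/drbd0::/srv/nfs::ext3 nfs
```

Resources start left-to-right on takeover (IP, then promote DRBD, then mount, then NFS) and stop in reverse on release. One detail many howtos omit: keep the NFS state directory (/var/lib/nfs) on the DRBD device too, so client lock state moves with the share.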
Which RPMs do I need to install to set up a Red Hat cluster on RHEL 5.0? I want to make two RHEL 5.0 nodes into one cluster running an Oracle database server. Please note that I have created these two nodes on VMware Server for testing purposes. Is it possible to create a cluster from two virtual guests?
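Rather than hunting individual RPMs, the cluster packages on RHEL/CentOS 5 ship as yum groups; something like the following (group names as on CentOS 5, which may differ slightly between channels):

```shell
# Pulls in cman, openais, rgmanager, system-config-cluster, ...
yum groupinstall "Clustering"
# Adds the shared-storage pieces: lvm2-cluster, gfs2-utils, ...
yum groupinstall "Cluster Storage"
```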
I am planning to configure a Red Hat cluster on VMware, but I don't know how. I googled a lot but couldn't find satisfactory docs for configuring it. Can anyone provide step-by-step instructions or good docs to go ahead with?
I have a two-node Red Hat cluster for a MySQL database. The problem is that after updating the packages on both nodes (and previously as well), the service could not be relocated to the second node; even after rebooting the server the problem persists. When I start the service on the second node, it starts on the first one instead. Other services run fine on both nodes. I have checked /etc/hosts, bonding, and many other files, and everything seems good. Find the log below for reference:
<notice> Starting stopped service service:hell
Oct 22 14:35:51 indls0040 kernel: kjournald starting. Commit interval 5 seconds
Oct 22 14:35:51 indls0040 kernel: EXT3-fs warning: maximal mount count reached, running
I have come across an issue. We have a biotechnology application which is very heavy, so we are trying to run it on a cluster. We have four Dell 7500 workstations, each with 32 GB RAM. But I am not finding the exact method for configuring the cluster and nodes. I have tried Conga (luci, ricci). My questions are: 1) Can our application run on a cluster? 2) If so, how should I configure it?
We have a production setup with one SAN storage array and two RHEL machines. We have created a SAN LUN, say for example trylun, and mounted that SAN partition on both RHEL machines at the same mount-point path, say /trylun. After that we installed Red Hat Cluster Suite to create a failover cluster.
We will have one Ingres database service whose data is stored on the SAN LUN mounted on both machines (/trylun in our example). When the service goes down on one machine, the RHCS failover cluster takes over and starts the same service on the other node. Whether Ingres runs from node 1 or node 2 makes no difference, since both use the shared SAN storage (/trylun in our example), so the same data store serves the Ingres service on either server.
Now I have to simulate the same thing in my office test environment. The problem is that in the office test environment I will not have a SAN server, as it is an additional cost, and I will have the Fedora operating system.
So I want to know how to create a shared filesystem like the SAN in Fedora (is NFS a solution?), and, after creating the shared filesystem, how to create a failover cluster in Fedora if we do not have Red Hat Cluster Suite.
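NFS can stand in for the SAN in a test lab, with the caveat that it behaves differently under failure (a shared export, not true shared block storage). A sketch with hypothetical host names, paths, and subnet, mirroring the production mount point:

```shell
# On a third "storage" machine, export a directory in place of the SAN LUN.
# /etc/exports:
#   /trylun  192.168.1.0/24(rw,sync,no_root_squash)
exportfs -ra                         # publish the export

# On both cluster nodes, mount it at the same path as production:
mkdir -p /trylun
mount -t nfs storage:/trylun /trylun
```

The failover logic on top can then be provided by DRBD + Heartbeat or the Fedora cluster packages, as discussed elsewhere in this thread.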
I'm having some trouble configuring clustering in a 2-node cluster with no shared FS. The application is video streaming, so outbound traffic only. The cluster is generally OK: if I kill -9 one of the resource applications, failover works as expected. But it does not fail over when I disconnect the power from the service-owning node (simulating a hardware failure). clustat on the remaining node shows that the powered-down node has status "Offline", so it knows the node is not responding, but the remaining node does not become the owner, nor does it start up the cluster services/resource applications. eth0 on each node is connected via a crossover cable for heartbeat, etc.; each eth1 connects to a switch.
I have a two-node cluster, and a third system which has luci installed.
node1 is nfs0, node2 is nfs1.
Both nodes have identical configurations: a fresh installation of CentOS 5.5 plus yum update. I am unable to join nfs1 to the cluster, as it is giving me the following:
Sep 29 23:28:00 nfs0 ccsd[6009]: Starting ccsd 2.0.115:
Sep 29 23:28:00 nfs0 ccsd[6009]: Built: Aug 11 2010 08:25:53
Sep 29 23:28:00 nfs0 ccsd[6009]: Copyright (C) Red Hat, Inc. 2004 All rights reserved.
I have just installed a two-server cluster with ricci, luci, and Conga on CentOS 5.6 32-bit. Both servers are VMware guests and have a shared storage disk connected to them both,
with a GFS2 filesystem on it, plus fencing agents configured to work with VMware vCenter
(this is supported by VMware and works great on 4 other CentOS clusters I have been running for 4 months with no CLVMD).
In this setup I used CLVMD for the first time, as recommended by Red Hat, so I could have the flexibility of LVM under the GFS2 filesystem. But I have been getting a strange problem with it: sometimes after a developer does some I/O-heavy task, like unzipping a file or a simple tar, the load goes to 10-15 and no task can be killed, and trying to reboot the server hangs.
After hard-resetting the server, everything works OK until the next time someone does the same I/O work as before.
I have configured the RHEL Cluster Suite on RHEL 5.3 64-bit. I formatted a 100 GB LVM volume with a GFS2 filesystem using the lock_dlm protocol and the default of 8 journals, and I added this filesystem to the cluster as a GFS resource. All works fine: the cluster is able to remount this GFS2 LVM on another node at failover time. But when I checked the cluster configuration file (/etc/cluster/cluster.conf), the resource syntax showed fs_type=gfs. So my question is: if it uses fs_type=gfs, will it mount my GFS2-formatted LVM on the cluster node as a GFS filesystem rather than GFS2? Also, how do I check whether the mounted filesystem is GFS or GFS2? Please clear my doubt.
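The second question can be answered directly from the kernel's own mount table: field 3 of /proc/mounts is the filesystem type the kernel actually mounted, so it distinguishes gfs from gfs2 regardless of what cluster.conf says. A small sketch (the optional file argument just makes the function easy to exercise against a sample table):

```shell
# Print "mountpoint type" for every mount whose type is gfs or gfs2.
list_gfs_mounts() {
    awk '$3 == "gfs" || $3 == "gfs2" { print $2 " " $3 }' "${1:-/proc/mounts}"
}

list_gfs_mounts    # on the cluster node this would print e.g. the /trylun line
```

Plain `mount` output shows the same information after "type".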
I am newly trying a cluster configuration setup on a RHEL 5.3 64-bit machine. Basic requirement: I need to configure a GFS filesystem. Herewith I have shared the details:
System: > I have 2 HP ProLiant DL385 servers. > Both systems are connected to the public network (eth0). > I have connected eth1 directly between the systems as a private network (May be I am
I just want to simulate HPC and other kinds of clusters in VMware Workstation 7.0, on my HP 520 laptop, which is dual-core with 3 GB RAM. Can you please help me out with this? I am interested in working with clusters, but I am new to HPC and clusters in general. Can anyone give me a document on cluster installation and configuration? I would be grateful. I am using CentOS.