I have configured the Red Hat Cluster Suite on RHEL 5.3 64-bit. I formatted a 100 GB LVM volume with a GFS2 filesystem using the lock_dlm protocol and 8 journals, and I added this filesystem to the cluster as a GFS resource. Everything works fine: the cluster is able to remount this GFS2 LV on another node during failover. But when I was checking the cluster configuration file (/etc/cluster/cluster.conf), the resource syntax showed fs_type=gfs. My question is: if it uses fs_type=gfs, is it mounting my GFS2-formatted LV on the cluster node as a GFS filesystem rather than GFS2? Also, how do I check whether the mounted filesystem is GFS or GFS2? Please clear up my doubt.
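For what it's worth, this is how I was planning to double-check which driver actually has it mounted - the LV path below is just a placeholder for my volume, and I'm assuming blkid reports TYPE="gfs2" when the on-disk format really is GFS2:

Code:
# the "type" column shows what the kernel actually mounted
cat /proc/mounts | grep gfs

# blkid reads the on-disk filesystem signature of the logical volume
blkid /dev/myvg/gfs2lv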
I am trying to build a GFS2 cluster with 2 or 3 Fedora 14 nodes, but I've encountered problems from the start. First, luci does not work at all in Fedora 14: there is no luci_admin, and even if I manage to start the luci service, I get a blank white screen when I try to open it in the browser. I've googled a bit and found that I might be able to set up GFS2 if I build cluster.conf manually and start the cluster suite, but I cannot find documentation on how to create cluster.conf anywhere. If anyone knows how to set up GFS2 without the cluster suite, or how to configure cluster.conf, please let me know.
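Here is the minimal hand-written cluster.conf I have pieced together from scattered examples - the cluster name and node names are placeholders, fencing is left empty because this is only a lab, so please treat it as a sketch and correct me if it is wrong:

Code:
<?xml version="1.0"?>
<cluster name="gfscluster" config_version="1">
  <!-- two_node="1" lets a 2-node cluster reach quorum with a single vote -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1">
      <fence/>  <!-- real fencing config would go here -->
    </clusternode>
    <clusternode name="node2.example.com" nodeid="2">
      <fence/>
    </clusternode>
  </clusternodes>
  <fencedevices/>
  <rm/>
</cluster>

My understanding is that the same file goes on every node as /etc/cluster/cluster.conf and that "service cman start" should then bring the cluster up - is that right?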
I have a lack of understanding of CentOS in general. I have looked for a remedy on other forums and Google, but haven't been able to find the answer. I have a 3-node cluster that was functioning great until I decided to take it offline for a while. My config is as follows: node 2: vh1, node 3: vh2, node 4: vh6. All nodes connect to a common shared area on an iSCSI device (vguests_root).
Currently vh2 and vh6 connect fine, but since putting the machines back online I can no longer connect with vh1. dmesg on vh1 reveals the following:

GFS2: fsid=: Trying to join cluster "lock_dlm", "Cluster1:vguest_roots"
GFS2: fsid=Cluster1:vguest_roots.2: Joined cluster. Now mounting FS...
GFS2: fsid=Cluster1:vguest_roots.2: can't mount journal #2
GFS2: fsid=Cluster1:vguest_roots.2: there are only 2 journals (0 - 1)
.....
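From the man pages my guess is that the filesystem was created with only 2 journals, so only two nodes can mount it at once. Is adding a journal from a node that already has it mounted the right fix? The mount point below is just where I have it on vh2:

Code:
# run on a node that already has the filesystem mounted (vh2 or vh6)
gfs2_tool journals /mnt/vguests_root   # confirm only journals 0 and 1 exist
gfs2_jadd -j 1 /mnt/vguests_root       # add one more journal so vh1 can join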
Which RPMs do I need to install to set up Red Hat Cluster on RHEL 5.0? I want to create two RHEL 5.0 nodes as one cluster, with Oracle database server installed. Please note that I have created these two nodes on VMware Server for testing purposes. Is it possible to create a cluster of two virtual guests?
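This is the package list I was going to try - I'm assuming the "Clustering" and "Cluster Storage" yum groups exist on the RHEL 5 media/channel I have, so please correct me if the names are different:

Code:
# cluster suite via the yum groups
yum groupinstall "Clustering" "Cluster Storage"

# or the individual core pieces
yum install cman rgmanager lvm2-cluster gfs2-utils ricci luci system-config-cluster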
I am getting frustrated by the fact that I am in charge of developing an enterprise-class solution for my company and I can't find the right answer on the web. I have 3 mail servers that I want to share a Global File System (GFS) volume, mounted via iSCSI from a RAID 1+0 SAN (an MD3000i device). All the documentation RHEL offers covers setting up DRBD or LVS clustering.
My intention is not to set up clustering on these servers, at least for now; I just want them to share access to the same block device with enough read/write throughput. My question is: have you done something like this? Do you know of any good tutorial that you tried and that worked for you?
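For context, this is roughly what I had in mind - log all three servers into the MD3000i and mount the same GFS2 device on each. My understanding is that I would still need cman running for lock_dlm even if I never define services in rgmanager; the portal IP, volume and mount point below are placeholders:

Code:
# discover and log in to the iSCSI target on each mail server
iscsiadm -m discovery -t sendtargets -p 192.168.130.101
iscsiadm -m node --login

# with cman running on all three nodes, each node mounts the shared device
mount -t gfs2 /dev/mapper/mailvg-mailspool /srv/mail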
I'm having a very strange problem with Red Hat Cluster. After testing it in the lab, I tried to install a new cluster and received the following error:

cman not started: Cannot start, cluster name is too long or other CCS error
/usr/sbin/cman_tool: aisexec daemon didn't start

I've searched the internet for that error but found nothing, so I decided to take the example from the cluster.conf man page, which looks like this:
[Code]...
And still I get the same error. I can resolve both servers via DNS (FQDN and short name), and they also appear in /etc/hosts.
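In case my troubleshooting steps are off: I'm checking that the XML actually parses and that the cluster name is short enough (I've read that cman limits the name to 16 characters, though I may be wrong about the exact number):

Code:
# is the file even well-formed XML?
xmllint --noout /etc/cluster/cluster.conf

# the name= attribute should be 16 characters or fewer
grep '<cluster ' /etc/cluster/cluster.conf

# after fixing the config, bump config_version and try again
service cman start
cman_tool status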
I need a cluster-safe filesystem for SAN shared storage in Slackware. Is Red Hat's GFS the best (or only) solution? Given GFS support in a custom kernel, what about the userspace tools (mkfs, mount, ...)?
I have a 2-node RHEL 5 Update 4 64-bit cluster. I have installed and configured the latest Veritas CFS (Cluster File System), which also uses Veritas Cluster Server; the filesystem is VxFS. Storage is on EMC Symmetrix arrays with Veritas mirroring between the arrays. We have noticed that running 'du -hs' on the shared directory/filesystem takes about 3 minutes on one node and 30-45 minutes on the other. I've been running strace on 'du'. 'du' runs an lstat() on each file (66,000+ files). On the slower node, the average time spent per lstat is about 0.001 seconds longer, which accounts for the 30-45 minutes. The standard deviation is also much larger, which tells me the lstat times are all over the place. Another interesting thing is that iozone profiling shows the I/O rates from both nodes are nearly identical, with no anomalies at various buffer and file sizes. iostat looks good too, as does 'vxdmpadm iostat show'.
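In case it helps anyone reproduce the comparison, these are the strace invocations I have been using on both nodes (the directory path is just our shared mount):

Code:
# per-syscall time summary; compare the lstat line between nodes
strace -c du -hs /shared/oradata

# -T appends the time spent inside each syscall so slow lstat() calls stand out
strace -T -e trace=lstat du -hs /shared/oradata 2> du-lstat.log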
I'm trying to recover a GFS2 partition on a SAN that was connected to a server that was recently kickstarted with "clearpart --all --initlabel". Is this possible? The volumes are quite large (20 TB). I'm currently using parted's rescue feature, but so far that has been unsuccessful.
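For reference, this is how I have been driving the rescue (device and range are from my setup; my understanding is that --initlabel only rewrites the partition table, so the GFS2 data itself should still be on disk). I'm also starting to wonder whether libparted even recognises a GFS2 signature, which might explain why rescue comes up empty:

Code:
# scan the device for a lost filesystem and offer to restore the partition entry
parted /dev/sdb rescue 0 20TB

# see what parted believes is on the disk afterwards
parted /dev/sdb print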
We have a set of two production machines running Oracle databases. There are a couple of SAN-attached filesystems, one of which is created as ext3 on one machine (node1) and NFS-exported to the second machine (node2). However, under certain conditions related to RAC, the interconnect between the two nodes loses connection, and because of the loss of communication the servers reboot. The problem is that node2 usually reboots first, and by the time it starts up, node1 is not up and running yet, so the NFS mount is not available on node2. I have thought about some options for getting the servers to resolve this automatically, but I am posting my question here in case someone has a reliable way of managing this. One idea is to create a script on node2 to mount the NFS filesystem, set up SSH key authentication between the nodes, and then put another script on node1 that runs at startup, SSHes to node2 and mounts the filesystem. A rough sketch of the node2 side is below.
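This is only a sketch with placeholder paths; the simpler alternative I'm also considering is just adding the NFS "bg" mount option in fstab so the mount keeps retrying in the background until node1 is up:

Code:
#!/bin/bash
# node2: keep retrying the NFS mount until node1's export is available
MOUNTPOINT=/u02/shared          # placeholder
EXPORT=node1:/u02/shared        # placeholder

for attempt in $(seq 1 120); do
    if mountpoint -q "$MOUNTPOINT"; then
        exit 0                  # already mounted, nothing to do
    fi
    mount -t nfs -o hard,intr "$EXPORT" "$MOUNTPOINT" && exit 0
    sleep 30                    # node1 not up yet, wait and retry
done
exit 1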
When I try to boot into openSUSE I get the following error during boot-up:

unknown filesystem type 'reiserfs'
could not mount root filesystem - exiting to /bin/sh
This only started happening quite recently - before this I could boot to Linux quite happily.
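If it helps, my current theory is that the initrd for the kernel I'm booting simply lacks the reiserfs module, so I was going to rebuild it from rescue media roughly like this (the root partition name is a guess for my disk layout):

Code:
# boot the openSUSE rescue system, then:
mount /dev/sda2 /mnt            # the installed root partition (adjust)
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt
# make sure INITRD_MODULES in /etc/sysconfig/kernel includes "reiserfs", then:
mkinitrd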
I'm trying to mount a remote filesystem onto my own server. I am able to do this, however I can only access it as root, or if I chmod 777 the lot. Obviously I want to be as secure as possible, so I'd like to avoid either of those options. Another option is to mount it directly into my home directory, but when I was trying out Ubuntu previously this caused Samba problems, and I was advised that mounting in my home dir was a workaround rather than a proper fix.
I have root access with sudo on my own server. I've not set a root password, and until I need to I'll avoid it. I have a user account with full control over my own home directory on the remote server. I am mounting using fstab:

sshfs#username@remoteserver:/media/sdk/home/username/ /media/remote/ fuse user,idmap=user 0 0
What I would like to do, without changing the permissions on the remote server, is change the permissions the files get when they are mounted on my own server. I would like them to be in the group sambausers, for example. Instead they are owned by root and in a group with GID 1024 (which I have not set). Additionally, for this to work they would have to be 770 on my home server and 700 on the remote server. Something like the fstab line below is what I have in mind.
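This is what I have pieced together from the sshfs/fuse man pages - the uid/gid numbers are placeholders I would look up with "id" and "getent group sambausers", and my understanding is that allow_other also needs user_allow_other enabled in /etc/fuse.conf:

Code:
# /etc/fstab - present the remote files as my user and the sambausers group locally
sshfs#username@remoteserver:/media/sdk/home/username/  /media/remote  fuse  user,idmap=user,allow_other,uid=1000,gid=1003,umask=007  0  0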
I am trying to set up cluster storage with the Rocks Cluster operating system. I have installed Rocks on the main server (frontend), and I want to connect a client via PXE boot. When I start the client, it boots via PXE into compute-node mode, but it then asks where the install tree is stored. I gave it the path, but the server says it is not able to find the directory path.
Steps: 1) insert-ethers; 2) the client is started with PXE boot; 3) it detects DHCP; 4) finally it asks whether the install source is CD-ROM, hard disk, NFS, etc.
I chose NFS and gave the LAN IP of the server. The server detects the client, but the client cannot find the filesystem directory under the export partition; it does not accept the path.
PATH: /export/rock/install. It cannot find this path, so it is not able to install the OS over PXE boot. Is there any solution or manual for Rocks, or any other approach you can point me to?
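These are the checks I was going to run on the frontend. Also, I'm not sure if it matters, but I've seen the stock Rocks install tree referred to as /export/rocks/install (with an "s"), so maybe my path is simply off by one letter:

Code:
# on the frontend: is the install tree present and exported over NFS?
ls /export/rock/install
cat /etc/exports
showmount -e 10.1.1.1        # frontend LAN IP (example)

# after correcting /etc/exports, re-export
exportfs -ra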
I installed Ubuntu Server 9.10 in a virtual machine, and I'm trying to install the VMware Tools, but I can't mount the installer CD:

$ sudo mount /dev/scd0 /media/cdrom
mount: unknown filesystem type 'iso9660'
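This is what I was going to try next, on the assumption that either the iso9660 module isn't loaded or mount just needs the type spelled out:

Code:
grep iso9660 /proc/filesystems     # is the filesystem known to the kernel?
sudo modprobe iso9660
sudo mount -t iso9660 /dev/scd0 /media/cdrom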
I'm using RHEL 5.0 with ext2/ext3 filesystems. Now I wish to use ext4 without losing this operating system and without switching to another Linux distro.
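From what I have read, RHEL 5.0 itself has no ext4 at all, so I would first have to update to a 5.x release that ships it (5.6, plus the e4fsprogs package, whose tools are named tune4fs/e4fsck if I remember right). After that, the usual upstream in-place conversion looks roughly like this - the device name is a placeholder and I would obviously take a backup first:

Code:
umount /dev/VolGroup00/LogVol01                       # example device
tune2fs -O extents,uninit_bg,dir_index /dev/VolGroup00/LogVol01
e2fsck -fp /dev/VolGroup00/LogVol01                   # required after changing features
mount -t ext4 /dev/VolGroup00/LogVol01 /data          # and update /etc/fstab to ext4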
Summary of issue: an ext4 filesystem won't mount, with the error "mount: unknown filesystem type 'ext4'". Is the issue that there is no ext4 in the kernel? Or is something corrupted? I'm really perplexed by this. I updated CentOS 5.5 to 5.6 to get ext4 (5.6 is supposed to have full support for ext4). I built several arrays and put ext4 filesystems on them. All went well until I tried to mount them. BTW, the array below is set up as a RAID 6 using partition 1 of eight 2 TB drives. Bear with me here; I'm just trying to be complete and not waste your time.
Attempting to mount gives this:

[root]# mount -v /dev/md1 /asc/array1
mount: unknown filesystem type 'ext4'

Note: it does do a "fake" mount with the -f option (which apparently does everything except the actual system call):

[root]# mount -f -v /dev/md1
/dev/md1 on /asc/array1 type ext4 (rw,grpquota,usrquota)

e2fsprogs: package e2fsprogs-1.39-23.el5_5.1.x86_64 is already installed and the latest version (for CentOS 5.6; CentOS 6.x uses 1.41...).
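These are the checks I'm planning next. My suspicion is that either the box is still running the old 5.5 kernel (so the module genuinely isn't there) or the ext4 userspace lives in a separate package on EL5:

Code:
uname -r                          # still booted on the pre-5.6 kernel?
grep ext4 /proc/filesystems       # does the running kernel know ext4?
modprobe ext4 && grep ext4 /proc/filesystems

yum install e4fsprogs             # EL5 ships the ext4 tools separately from e2fsprogs
blkid /dev/md1                    # confirm the array really carries an ext4 signature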
Is it possible to edit the default RHEL CD so that it automatically installs RHEL from a kickstart file stored locally on the CD? My plan is to put a CD in a server and have the OS installed automatically.
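This is the rough procedure I had in mind - copy the media to disk, drop the kickstart into the root of the tree, point the isolinux boot entry at it, and re-master the ISO (paths are placeholders):

Code:
# copy the RHEL media to a working directory, then:
cp ks.cfg /work/rhel-custom/

# in /work/rhel-custom/isolinux/isolinux.cfg change the append line to e.g.:
#   append initrd=initrd.img ks=cdrom:/ks.cfg

mkisofs -o /work/rhel-custom.iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
        -no-emul-boot -boot-load-size 4 -boot-info-table -R -J -T /work/rhel-custom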
We are planning to migrate our Linux server from RHEL 3 to RHEL 5. What are the configuration differences between RHEL 3 and RHEL 5 for web server installations?
I am working on a project that needs an Apache web cluster. The cluster needs to be both a high-availability (HA) cluster and a load-balancing cluster. Someone mentioned the Red Hat Cluster Suite, but, honestly, I can't figure out how it works and I haven't been able to configure it correctly. The project currently has two nodes, but we need to support at least three nodes in the cluster.
I have a database server running RHEL 5.1 32-bit that suffered some catastrophic failures about 6 months ago. We were able to patch it back together and keep it running, but now the manufacturing site it supports is going to shut down for two weeks and I would like to replace the server permanently. Does anyone have any guidance for that sort of thing? I'd like to have the new server up and running beforehand, then basically change the hostname/IP and restore the databases on conversion day. I've done this in the past with HP-UX to Red Hat conversions, but this is my first Red Hat to Red Hat move. Any advice or shortcuts? I forgot to add the other wrinkle: the new server will be running 64-bit Linux.
I'm having some trouble with my Ubuntu 10.04. It had been working normally, but now it no longer starts up normally. The bootloader shows my Windows install and two different revisions of Ubuntu. Upon letting it load Ubuntu, however, it drops to BusyBox v1.13.3, says "built-in shell (ash)", and presents me with a functioning command line.
I'm comfortable with working on the command line, but does anyone know where to start with this?
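For what it's worth, this is where I was going to start - poking around from the BusyBox prompt to see whether the root device is even visible, and running a filesystem check from a live CD (the partition name is a guess for my layout):

Code:
# at the (initramfs) BusyBox prompt
cat /proc/partitions
ls /dev/sd* /dev/mapper 2>/dev/null

# from an Ubuntu live CD, check the root partition and then try booting again
sudo fsck -fy /dev/sda1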
I was able to successfully mount the NFS share from my Ubuntu workstation (10.1.10.204/24), but it fails to mount from the RHEL 6 server, which is 10.1.1.50/24, with the error below:
Code:
[root@cmtools /]# mount 10.1.1.31:/data share
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
I think the RHEL 6 server is missing some RPMs or packages to allow NFS mounts, or something of that nature.
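My plan, unless someone corrects me, is to make sure nfs-utils is installed and the lock services are running (or just test with nolock first):

Code:
yum install nfs-utils            # provides rpc.statd on RHEL 6
service rpcbind start            # statd needs rpcbind running first
service nfslock start            # starts rpc.statd
chkconfig rpcbind on
chkconfig nfslock on

# quick test without remote locking
mount -o nolock 10.1.1.31:/data share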
Does anyone have any experience with clustering KVM? I know how to do this for Xen, but I would like to know if it can also be accomplished with KVM. I am looking for a solution that is not specific to one distro.
Can a cluster made up of 5 Dell PowerEdge computers, each with a NIC and an HDD, be a file server that is highly available? If I go over and break one of the computers with a sledgehammer, will the cluster continue to operate without data loss?