Security :: ID Mapping In An AD Integrated File Server Cluster?
Feb 19, 2010. Posted in the wrong section; moved to the Linux Servers Forum.
I'm having a very strange problem with Red Hat Cluster. After testing it in the lab, I tried to install a new cluster and received the following error: "cman not started: Cannot start, cluster name is too long or other CCS error. /usr/sbin/cman_tool: aisexec daemon didn't start". I've searched the internet for that error, but found nothing. I decided to take the example from the cluster.conf man page, which looks like this:
[Code]...
And still I get the same error. I can resolve both servers via DNS (FQDN and short name), and they also appear in /etc/hosts.
I'd also like to configure a separate NIC for the heartbeat. What do I need to add to cluster.conf to achieve this?
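In Red Hat Cluster Suite the heartbeat traffic follows whatever interface each cluster node name resolves to, so a common approach (a sketch only, with hypothetical host names) is to give the nodes names that resolve, via /etc/hosts, to the IPs on the dedicated NIC:
Code:
<?xml version="1.0"?>
<cluster name="mycluster" config_version="1">
    <clusternodes>
        <!-- node1-hb and node2-hb resolve in /etc/hosts to the
             addresses on the dedicated heartbeat NIC -->
        <clusternode name="node1-hb" nodeid="1"/>
        <clusternode name="node2-hb" nodeid="2"/>
    </clusternodes>
</cluster>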
I have a NIS server and a web server as a client. I have a regular Linux user (without root privileges), "techsupport1", on the NIS server.
On the client web server, I have the root user and my clients' accounts. What I want is to allow my user "techsupport1" to access the web server, but instead of logging in as root, have the tech log in as "techsupport1" while still getting root privileges on the web server (client). The reason is that more than one user needs to manage the web server, and I want to be able to see clearly in .bash_history who has been running which commands. Right now, when I log in as a techsupport user to the web server (client) from my NIS server,
[code]...
I don't have root privileges, and my GID collides with the GID of a customer who has the same 517 on the web server. How can I configure things so that when tech support agent 1 logs in to the web server, NIS grants root privileges but keeps the techsupport username?
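NIS itself cannot grant root while preserving the username; the usual pattern is sudo, which also gives a per-user command log that is harder to evade than .bash_history. A minimal sketch, assuming a hypothetical NIS group named "techsupport":
Code:
# /etc/sudoers on the web server (always edit with visudo)
# members of the NIS group "techsupport" may run any command as root;
# each command is logged with the invoking username
%techsupport    ALL=(ALL)    ALL
Agents then log in as themselves and prefix admin commands with sudo.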
I'm looking for a write-one, read-many file synchronization technique, file system, or similar. In fact I want to imitate zfs_send/zfs_receive-like behaviour on Linux. My requirements are purely local (or, in terms of distributed file systems: rack-aware) file handling with remote synchronization. Some of the features/requirements I want (I use the terms "file" and "file system" interchangeably here, as I don't care whether I have to synchronize file-based or use a full file system supporting that):
- Access to files from a specific host is purely local, i.e. the node has a full copy of the file; no parts are distributed
- Read and write access is local; the remote side gets synchronized with write changes only
- Write changes on the remote side are non-blocking and asynchronous
- No need for concurrent file access, concurrent distributed locking, and so on
- Every file is changed by a single node only; other nodes have read access
- The file "owner", i.e. the node allowed to write the file, must be changeable
Similar techniques exist (some only as technology concepts, not usable for me), among them DRBD, MySQL replication, or even plain rsync.
I don't care whether it's a purely local solution (i.e. between two block devices) or a client/server mode. DRBD won't fit because it's a peer-to-peer solution between two nodes; I need to synchronize a lot of clients with a central storage, whereas DRBD just mirrors from peer to peer and doesn't allow another node to take over the role of the primary (i.e. node A is master and storage B secondary, and I want to switch to a third node C, which becomes master and hence receives a full update from storage B).
Just as a guess, I'd make a local LVM volume on every node where write access happens. At the same time there would be an iSCSI target with multipath to a remote host, shared with every node and providing a clustered LVM. So I'd need to synchronize a local LVM volume with a remote LVM image. The question is just which technique would achieve that (i.e. like a RAID mirror where every read and write goes to one node only and doesn't wait for the other node to succeed). I could achieve this with rsync by a cyclic push to the remote host, but the problem is that rsync isn't block-based, so I'd have to synchronize the whole file every time, not only the changes.
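On the rsync point: when pushing to a remote host, rsync's delta-transfer algorithm already sends only the changed blocks of a large file (for local copies it must be forced with --no-whole-file), and --inplace makes it update the target file in place instead of rewriting it. A sketch with hypothetical host and paths:
Code:
# push only the changed blocks of a big image file to the central store
rsync -av --inplace --no-whole-file /data/volume.img storage:/backup/volume.img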
I run a compute cluster with only a few users. Occasionally a user will accidentally run a job on the master node that runs out of RAM, swaps, and then hangs the machine for a while. In /etc/security/limits.conf I have set memlock to 7.5GB (the master has 8GB RAM), and maybe that is what lets the machine come back rather than hanging completely? Is this the right setting to physically limit a single user from asking for more RAM than the system has and bringing down the system? Should I set this to 2GB or so, or is there something else I can do?
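Note that memlock only caps memory locked with mlock(), not ordinary allocations, so it is probably not what saves the machine. The address-space limit is the usual knob for this; a sketch for limits.conf (the 6 GB figure is just an example):
Code:
# /etc/security/limits.conf
# cap each process's virtual address space at ~6 GB (value in KB);
# allocations beyond this fail instead of driving the box into swap
*    hard    as    6291456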
I am trying to set up cluster storage with the Rocks Cluster operating system. I have installed Rocks on the main server, and I want to connect the client via PXE boot. When I start the client it boots from PXE into compute-node mode, but it then asks where the files are to be stored. I gave the path, but the server says it is not able to find the directory path.
Steps:
1) insert-ethers
2) The client is started with PXE boot.
3) It detects DHCP.
4) Finally it asks where to boot from: cd-rom, hard disk, nfs, etc.
I chose nfs and gave the LAN IP of the server. The server detects the client, but the client cannot find the file system directory under the export partition; it does not accept the path.
PATH:
/export/rock/install
It cannot find this path, so it cannot start the OS via PXE boot. Is there any solution or manual for Rocks, or any other approach?
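A first check (a sketch, assuming the stock Rocks layout where the frontend exports /export over NFS) is whether the install tree is actually being exported and is visible to clients:
Code:
# on the frontend: list active NFS exports and what clients are offered
exportfs -v
showmount -e localhost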
I want to do port mapping on a Linux machine using iptables. I have a service listening on port 2000/udp, and I want to add an iptables rule that maps packets arriving on port 2001 to port 2000, so the service will accept the connections. The idea is that I don't want to change the default port of the service, but rather do an internal redirect from 2001 to 2000, so the default service port is filtered by iptables and the other port is open to the outside. An internet host connects to the Linux machine on port 2001, the Linux machine changes the destination port from 2001 to 2000, and the service (on the same machine) processes the packets and accepts the connection. I tried adding the following to my iptables rules, but it didn't work:
$IPTABLES -A FORWARD -p udp --destination-port 2001 -j ACCEPT
$IPTABLES -t nat -A PREROUTING -i eth0 -p udp --dport 2001 -j REDIRECT --to-port 2000
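One likely culprit: after the PREROUTING REDIRECT the packet is locally delivered, so it traverses the INPUT chain with the rewritten port 2000, never FORWARD. A sketch of the adjusted rules, keeping the original REDIRECT:
Code:
# accept the redirected packets; they arrive on INPUT with dport 2000
$IPTABLES -A INPUT -p udp --dport 2000 -j ACCEPT
# rewrite 2001 -> 2000 before routing (unchanged)
$IPTABLES -t nat -A PREROUTING -i eth0 -p udp --dport 2001 -j REDIRECT --to-port 2000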
When a user that has an RSA public key set in their ~/.ssh/authorized_keys file logs in via ssh, an sshd process is started to handle the session. Periodically we audit the authorized keys and remove them from the system and from the authorized_keys file. This means the next login attempt will fail, which is fine. However, we also need to terminate ssh sessions currently in progress that used the removed key, and I have not been able to determine a way to map sshd processes to authorized_keys entries.
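OpenSSH does not expose the key per session directly, but with LogLevel VERBOSE sshd logs the fingerprint of the key used at each login alongside the session details, which can be matched against fingerprints of the keys being revoked (older ssh-keygen versions may only print the first key of a multi-key file). A sketch, with the log path varying by distro:
Code:
# /etc/ssh/sshd_config
LogLevel VERBOSE          # sshd then logs the key fingerprint per login

# fingerprint the keys you are about to revoke
ssh-keygen -lf /home/someuser/.ssh/authorized_keys

# find which live sessions authenticated with a matching fingerprint
grep 'Accepted publickey' /var/log/auth.log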
I am trying to build a GFS2 cluster with 2 or 3 Fedora 14 nodes, but I've encountered problems from the start. First, luci does not work at all in Fedora 14: there is no luci_admin, and even if I manage to start the luci service I get a blank white screen when I open it in the browser. I've googled a bit and found that I might be able to set up GFS if I build cluster.conf manually and start the cluster suite, but I cannot find documentation anywhere on how to create cluster.conf. Does anyone know how to set up GFS2 without the cluster suite, or how to configure cluster.conf?
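For reference, a minimal two-node cluster.conf sketch (hypothetical node names; fencing is omitted for brevity, though a real deployment needs it):
Code:
<?xml version="1.0"?>
<cluster name="gfs2test" config_version="1">
    <!-- two_node mode lets the cluster be quorate with one vote -->
    <cman two_node="1" expected_votes="1"/>
    <clusternodes>
        <clusternode name="node1.example.com" nodeid="1"/>
        <clusternode name="node2.example.com" nodeid="2"/>
    </clusternodes>
</cluster>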
I am working on a project that needs an Apache web cluster. The cluster needs to be both a high-availability (HA) cluster and a load-balancing cluster. Someone mentioned the Red Hat Cluster Suite but, honestly, I can't figure out how it works and haven't been able to configure it correctly. The project currently has two nodes, but we need to support at least three nodes in the cluster.
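The load-balancing half does not strictly need the cluster suite: a director node running Apache's mod_proxy_balancer in front of the web nodes is one option (HA of the director itself would still need something like Heartbeat or keepalived). A sketch with hypothetical backend names:
Code:
# on the director, in httpd.conf (requires mod_proxy, mod_proxy_http
# and mod_proxy_balancer to be loaded)
<Proxy balancer://webfarm>
    BalancerMember http://node1.example.com
    BalancerMember http://node2.example.com
    BalancerMember http://node3.example.com
</Proxy>
ProxyPass / balancer://webfarm/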
I need to read many files very fast, and reading them from disk gives bad performance. I copied the files into /dev/shm, assuming they would then be in memory, but the performance didn't improve. Then I created a tmpfs file system in /mnt (/mnt/tmpfs), mounted it with "mount -o size=400m tmpfs /mnt/tmpfs -t tmpfs", and copied the files in. But performance still remained almost the same. I suspect I didn't actually copy the files into memory. Did I do the right things? I run FC 11 64-bit on a dual-processor server with 16 GB of memory.
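Both /dev/shm and a tmpfs mount are memory-backed, so the copies almost certainly were in RAM; and once the on-disk files have been read once, the Linux page cache serves them from memory too, which would explain why the numbers barely move. A quick sanity check:
Code:
# confirm the tmpfs copy really is memory-backed
df -h /mnt/tmpfs      # usage of the RAM-backed mount
free -m               # "cached" grows by roughly the copied size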
I have 4 computers/users and we need to put all the files on a central server. The server is running Ubuntu 10.04. What is the best way for these 4 XP users to see the files stored on the server?
Basically, how do I share or map the files from Ubuntu to XP? The users will also be reading, writing, creating, and deleting files on the server.
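Samba is the standard answer for serving Windows XP clients from Ubuntu; each XP user can then map the share to a drive letter. A minimal smb.conf sketch (path and user names are hypothetical):
Code:
# /etc/samba/smb.conf
[shared]
   path = /srv/shared
   valid users = user1 user2 user3 user4
   read only = no
# create each Samba account with: smbpasswd -a user1
On XP, use Tools > Map Network Drive and point it at \\server\shared.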
I found the following in the Red Hat Knowledgebase:
The Completely Fair Queuing (cfq) scheduler in Red Hat Enterprise Linux 5 appears to have worse I/O read performance than in version 4. It appears that the cfq I/O scheduler has a regression and thus exhibits reduced read-side throughput, which can affect performance for both local and NFS-mounted file systems.
One way to mitigate this is to set cfq's slice_idle parameter to zero. To change this value, echo 0 into the slice_idle file under the /sys/block directory appropriate for your situation, as shown below:
echo 0 > /sys/block/hda/queue/iosched/slice_idle
We are using NFS file systems in RHEL 5.3. I would like to know how to find which /dev device is being used by the NFS file systems, so that I could try setting slice_idle to 0 and see if there is any difference in performance. In /etc/fstab I only see the actual NAS volumes for the NFS file systems.
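An NFS mount has no local block device on the client, so there is no /dev entry to find: slice_idle applies to local disks, and for NFS it would have to be tuned on the server that owns the spindles. To see which local devices and schedulers the client does have, a sketch:
Code:
# list each block device's I/O scheduler; the [bracketed] one is active
for f in /sys/block/*/queue/scheduler; do echo "$f: $(cat $f)"; done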
My shell script calculates the count of each shortcode, series-wise. Sample output is as follows:
Code:
56882
9124 1
9172 1
9173 4
[code]....
We have configured svn on our server, but when we try to access our svn folder from a client it says "path not found". This is because Apache is mapped to Tomcat, so when we access svn it looks in some other path by default and displays the path-not-found error. My question is how to stop Apache from forwarding the request to Tomcat, or else how to stop the Tomcat service. I am using CentOS, and I tried /etc/init.d/tomcat5 stop, but it does not stop.
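If the hand-off is done with mod_jk or mod_proxy, commenting out the connector directives that cover the svn location and reloading Apache should stop the forwarding without touching Tomcat at all. A sketch (the exact directive depends on how the connector was configured):
Code:
# in httpd.conf or conf.d/*.conf, comment out whichever line forwards to Tomcat:
#   JkMount /* ajp13
#   ProxyPass / ajp://localhost:8009/
# then reload Apache
service httpd reload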
Does anyone have any experience with clustering KVM? I know how to do this for Xen, but I would like to know if it can also be accomplished with KVM. I am looking for a solution that is not specific to a distro.
Can a cluster made up of 5 Dell PowerEdge computers, each with a NIC and an HDD, be a file server that is highly available? If I go over and break one of the computers with a sledgehammer, will the cluster continue to operate without data loss?
I've tried many, many different ways of setting up the cluster.conf file, but every time I start the cman service I get the message "corosync died: could not read cluster configuration". This means nothing to me, nor can I find logs or anything on the net about this message. I'm ultimately just trying to run a simple GFS2 config on 2 Ubuntu 9.10 desktop nodes, but I can't even get a basic cluster config going. What am I missing? I've been at this for days.
Code:
<cluster name="example" config_version="1">
<clusternodes>
[code]...
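"Could not read cluster configuration" is often a plain XML parse error, so validating the file is a cheap first step. A sketch:
Code:
# sanity-check the XML syntax before starting cman
xmllint --noout /etc/cluster/cluster.conf && echo "XML parses cleanly"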
I put together a 3-PC cluster approximately 6 years ago with Red Hat, back when I was in grad school. Now I am at a small company and we are planning a cluster setup. I have chosen to go with CentOS 5.
Last time around I used NFS, RSH, etc. for file sharing and communication. Would it be OK to go with the same stuff, or has new software/technology come along since?
The CentOS cluster setup is for high-performance computing.
Recently we got a Dell/EMC DX300 storage array. I want to use it on an HP G4 server with CentOS 5.6 and an Emulex LP HBA.
* on the storage side the server is logged in, and registered with a RAID6 LUN (0)
* on the server side, the HBA driver (lpfc) is loaded & working
How can I use the SAN directly (without multipath/powerpath/... as a first step, for testing)? I don't have any associated device:
lsscsi:
...
[5:0:0:0] disk DGC VRAID 0226 -
nor in the /dev/disk/by-path or ../* folders
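The trailing "-" in the lsscsi line means the LUN was discovered but no block device node was bound to it; rescanning the FC host usually fixes that. A sketch (host5 matches the 5:0:0:0 entry above but may differ on your box):
Code:
# force a rescan of the HBA so the kernel attaches a /dev/sd* node
echo "- - -" > /sys/class/scsi_host/host5/scan
lsscsi    # the VRAID LUN should now show a /dev/sd* device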
I'm configuring a postfix server for the company I work for and have a question about limiting access by IP address.
First off, we're not using this for spam. We're a manufacturing/direct-marketing company and will use the email server to contact our salespeople. We do not send UCE. That said, we have had problems in the past with our legitimate email being labeled as spam by a few carriers. This email server is being set up specifically to avoid future problems of that type.
Because of the nature of our business we operate several domains. We want to be able to limit outbound email for a given domain to a single IP address. For example, say we have 3 domains - a.com, b.com and c.com - and 3 IP addresses - 1.2.3.1, 1.2.3.2 and 1.2.3.3. We want to set things up so that a.com can only send out email on 1.2.3.1, b.com can only send out email on 1.2.3.2, and c.com can only send out email on 1.2.3.3.
My first impulse is to set these up as virtual domains on the Postfix server but I'm not sure that's the best method. Are there alternatives? What are your recommendations for doing this?
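One way to pin each domain's outbound mail to one source IP is sender-dependent transports: a lookup table picks a transport per sender domain, and each transport is an smtp clone bound to a single address. A sketch, assuming a reasonably recent Postfix (sender_dependent_default_transport_maps needs 2.7+) and your example domains/IPs:
Code:
# /etc/postfix/main.cf
sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

# /etc/postfix/sender_transport (run postmap on it after editing)
@a.com    smtp-a:
@b.com    smtp-b:
@c.com    smtp-c:

# /etc/postfix/master.cf -- one smtp clone per source address
smtp-a    unix  -  -  n  -  -  smtp -o smtp_bind_address=1.2.3.1
smtp-b    unix  -  -  n  -  -  smtp -o smtp_bind_address=1.2.3.2
smtp-c    unix  -  -  n  -  -  smtp -o smtp_bind_address=1.2.3.3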
So I am trying to set up a Samba server to share out our SAN environment to our Windows clients. This is my first time playing with Samba, so I'm running into quite a few obstacles along the way.
Environment:
SLES 10.3
Samba 3.0x (Original RPMs in distro)
My end goal is to allow anyone in our Active Directory environment to access the shared folders from Samba, and to map the file permissions to 755 and directory permissions to 777. First I tried just using the Kerberos client and winbind and added the box to the AD domain. This worked, but mapped the wrong UIDs (the standard 10000 series), and the permissions were mapped all wrong. Then I had the idea to use Server for NIS on Windows 2008, which makes the PDC run a NIS domain conjoined to AD. This really didn't work at all. I loaded the AD schema with the correct UIDs and all that good stuff, but it didn't seem to take.
So how would any of you approach this? Should I keep trying the NIS config, or use Kerberos and winbind? Can a box be part of a NIS domain and AD at the same time?
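With Kerberos+winbind the two symptoms are separate settings: the idmap backend decides where UIDs come from (the 10000 series is the default allocator, while the "ad" backend reads the RFC2307 UIDs you loaded into the schema), and the create/directory masks set the modes. A sketch for smb.conf, assuming a Samba new enough for idmap_ad (roughly 3.0.25+) and a hypothetical realm/path:
Code:
[global]
   security = ads
   realm = EXAMPLE.COM
   idmap backend = ad            # use the RFC2307 UIDs stored in AD
   winbind nss info = rfc2307

[share]
   path = /san/share
   create mask = 0755            # files -> 755
   directory mask = 0777         # directories -> 777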
When using cssh, I define some cluster tags in my local .csshrc file. Other parameters are picked up by cssh, but the tags are not. I have no /etc/clusters or /etc/csshrc files. I create a fresh, default .csshrc file and append a line like "myhost = frodo". When executing "cssh myhost", I get: Can't call method "name" on an undefined value at /usr/bin/cssh line 988. This is the same message I get if I make up some destination, like "cssh doesnotexist". So it looks like rather than picking up the tag "myhost" from .csshrc and translating the command to "cssh frodo", it does a DNS lookup on myhost. I've tried making an /etc/csshrc file like this (but without the equals sign), but that doesn't work either. This is all on Ubuntu 9.10 with cssh version 273.
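Per the cssh configuration format, a tag defined in .csshrc must also be declared on a "clusters" line before its own definition is honored, which would explain the DNS-lookup behavior. A sketch:
Code:
# ~/.csshrc -- declare the tag on the clusters line, then define it
clusters = myhost
myhost = frodo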
I use openSUSE 11.4 with TightVNC Server version 1.3.9 and French keyboard mapping (YaST configuration). When I connect with PuTTY via ssh, the keyboard mapping is fine. But when I use the TightVNC viewer on Windows the mapping is not correct. I have verified the keyboard configuration many times in YaST (graphical) and in the desktop options (French mapping); I can't map it correctly. There seems to be no issue with a directly attached keyboard. Another strange behavior is the shift key: I want to type "." (which needs Shift plus another key) and this puts a ">". But sometimes if I press Shift+"," first and then Shift+";" it works.
I've spent a long time on Google and found no answer yet. Is there an issue with the TightVNC server? A bug in the TightVNC viewer? (I've tried RealVNC with the same results.)
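A common workaround is to force the X keyboard layout inside the VNC session itself, since the VNC X server does not necessarily inherit the console layout set in YaST. A sketch:
Code:
# run inside the VNC session (e.g. from its xstartup file)
setxkbmap fr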
So I have a few Ubuntu boxes (Hardy, until I can find a replacement for Xen) that I am trying to move from NFSv3 to NFSv4. I set it up according to this guide: URL... However, I ran into trouble: the client sees all users/groups as nobody/nogroup. The current setup is that all the boxes have synced UIDs/GIDs and all users with root access can be trusted. I read some reports saying the only way this could be fixed was by using Kerberos. However, I would really prefer not to move to Kerberos, as I have heard it is very intensive to set up. So what I am looking for is a solution other than sticking with NFSv3 or putting everything on Kerberos. If you think Kerberos is easier to set up than I am giving it credit for, that would be useful to hear as well.
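The all-nobody symptom on NFSv4 is classically an idmapd issue rather than a Kerberos requirement: the NFSv4 domain must be identical on client and server, and rpc.idmapd must be running on both. A sketch (domain name hypothetical; on Hardy idmapd is started by the nfs-common init script):
Code:
# /etc/idmapd.conf -- same Domain line on client and server
[General]
Domain = example.com

# then restart the idmap daemon on both sides
/etc/init.d/nfs-common restart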
I keep getting the error "reverse mapping checking getaddrinfo for fileserver.0.0.10.in-addr.arpa [10.0.0.10] failed - POSSIBLE BREAK-IN ATTEMPT!" in /var/log/auth.log. I have DNS (bind9) set up on my Linux router with the following config:
Code:
router:~# less /etc/bind/named.conf.local
// Local zone definitions here.
zone "0.0.10.in-addr.arpa" {
type master;
file "/etc/bind/db.0.0.10";
[Code]...
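That warning just means the reverse zone has no PTR record mapping 10.0.0.10 back to the name the client presented. A sketch of the missing record for the zone file above (the domain part is hypothetical; remember to bump the zone serial and reload bind):
Code:
; in /etc/bind/db.0.0.10
10    IN    PTR    fileserver.example.lan.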
I wanted to know how to cluster a Linux server. I mean any service, a squid proxy cluster or an apache server cluster or anything else; I just want to know how clustering is done on Linux. I have been searching but with no luck.
I'm trying to build a cluster computer system using Red Hat Enterprise Linux 5.5, and I'm on the step of setting up the NFS mounting (step 3 of part VI of this paper). However, I keep getting "permission denied" when I use the mount command to mount the master node's hard drive on the slave node. Even when I edit /etc/fstab to mount it automatically during booting, I get the same result. By the way, I'm logged in as root on both the slave and master nodes.
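Permission denied on an NFS mount usually means the export list does not cover the client. A sketch of what to check on the master (path and subnet are hypothetical):
Code:
# /etc/exports on the master -- export the shared tree to the compute subnet;
# no_root_squash lets root on the slaves act as root on the mount
/home    10.0.0.0/24(rw,sync,no_root_squash)

# apply and verify
exportfs -ra
showmount -e localhost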
I am having an issue with LVM on a 2-node cluster. We are using PowerPath for the external drives. We had a request to increase the /apps filesystem, which is ext3.
On the first node we did:
pvcreate /dev/emcpowercx1 /dev/emcpowercw2
Then....
vgextend apps_vg /dev/emcpowercw2 /dev/emcpowercx1
lvresize -L +60G /dev/apps_vg/apps_lv
resize2fs /dev/apps_vg/apps_lv
Everything went well and /apps was increased. But on the second node, when I run pvs, I get the following warning:
WARNING: Duplicate VG name apps_vg: RnD1W1-peb1-JWay-MyMa-WJfb-41TE-cLwvzL (created here) takes precedence over ttOYXY-dY4h-l91r-bokz-1q5c-kn3k-MCvzUX
How can I proceed from here?
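The two different UUIDs mean the second node sees two distinct VGs sharing the name apps_vg (for instance a stale VG left behind on one of the devices). One way to untangle it, as a sketch: identify which UUID is the unwanted copy, then rename it out of the way by UUID:
Code:
# show every VG/PV with its UUID to work out which copy is stale
vgs -o +vg_uuid
pvs -o +vg_uuid

# rename the stale VG by UUID so the names no longer collide
# (UUID taken from the warning; verify it is the unwanted one first)
vgrename ttOYXY-dY4h-l91r-bokz-1q5c-kn3k-MCvzUX apps_vg_old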