I found in the following mail thread: [URL] that we can add "netgroup: caching compat" and also a netgroup caching rules stanza in nscd.conf. But that mail thread is for FreeBSD, and unfortunately I can't find any reference for RHEL 5.x, where I need to do exactly the same thing. The following line in my nscd.conf was enough to leave me disheartened: "# Currently supported cache names (services): passwd, group, hosts". The nsswitch.conf on my system works only with the following: "netgroup: nis". Does anyone know whether netgroup caching is supported by nscd?
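As far as I know, nscd only learned to cache netgroups in much later glibc releases (2.15 and newer), so the stanza below is what it looks like there and is shown purely for reference; RHEL 5.x's nscd does not support it, which matches that comment in the shipped nscd.conf. The TTL and size values are just the upstream defaults, not tuned recommendations.
Code:
# netgroup stanza from a newer glibc's nscd.conf -- NOT available on RHEL 5.x
	enable-cache		netgroup	yes
	positive-time-to-live	netgroup	28800
	negative-time-to-live	netgroup	20
	suggested-size		netgroup	211
	check-files		netgroup	yes
	persistent		netgroup	yes
	shared			netgroup	yes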
I will be relocating to a permanent residence sometime in the next year or two. I've recently begun thinking about the best way to implement a home-based network. It occurred to me that the most elegant solution might be to use VM technology to eliminate as much hardware and wiring as possible. My thinking is this: install a multi-core system and configure it to run several VMs, one each for a firewall, a caching proxy server, a mail server, and a web server. Additionally, I would like to run 2-4 VMs as remote (RDP) workstations, using diskless workstations to boot the VMs over powerline ethernet. The latest powerline technology (available later this year) will allow multiple devices on a residential circuit to operate at near gigabit speed, just like legacy wired networks.
In theory, the above would allow me to consolidate everything but the diskless workstations on a single server and eliminate all wired (and wireless) connections except the broadband connection to the Internet and the cabling to the nearest power outlets. It appears technically possible, but I'm not sure about the various virtual connections among VMs. In theory, each VM should be able to communicate with the others over the server's data bus as if they were on the same network, but what about setting up firewall zones? Any internal I/O bandwidth bottlenecks? Any other potential "gotchas", caveats, or issues (other than the obvious requirement of having enough CPU and RAM)? Any thoughts or observations welcome, especially if they come from real-world experience in a VM environment. BTW, in case you're wondering why I'm posting here, it's because I run Debian on all my workstations/servers (running VirtualBox as a VM for Windows XP on one workstation).
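One way to sketch the inter-VM wiring with VirtualBox (since that is already in use here) is to give the firewall VM one bridged NIC for the broadband side and one internal network per zone, with the other VMs attached only to their zone. The VM names and internal network names below are made up for illustration.
Code:
# rough VirtualBox sketch -- VM names and internal network names are examples only
VBoxManage modifyvm fw      --nic1 bridged --bridgeadapter1 eth0   # broadband side
VBoxManage modifyvm fw      --nic2 intnet  --intnet2 dmz           # DMZ zone
VBoxManage modifyvm fw      --nic3 intnet  --intnet3 lan           # internal zone
VBoxManage modifyvm webvm   --nic1 intnet  --intnet1 dmz
VBoxManage modifyvm mailvm  --nic1 intnet  --intnet1 dmz
VBoxManage modifyvm proxyvm --nic1 intnet  --intnet1 lan
This keeps all inter-zone traffic flowing through the firewall VM, so the zones can be enforced there much as with physical segments.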
I am trying to set up a caching nameserver for my lab, but I am unable to locate named.caching-nameserver.conf under the /etc directory. I am trying to use this file as a template. I already checked /usr/share/doc/bind-* for samples but was unable to find it. I am using RHEL 5 with the bind and bind-chroot packages installed.
Can someone tell me where I can find the named.caching-nameserver.conf file? Also, I notice that there isn't an /etc/named symbolic link... do I have to create the symlink to /var/named/chroot/etc/bind.conf myself?
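If I remember correctly, on RHEL 5 that template is shipped by the caching-nameserver package (built from the bind source RPM), not by bind or bind-chroot themselves, so it may simply not be installed. Failing that, a minimal caching-only config is easy to write by hand; the sketch below is only an illustration, with queries restricted to localhost.
Code:
// minimal caching-only named.conf sketch (not the RHEL-shipped template)
options {
        directory       "/var/named";
        listen-on port 53 { 127.0.0.1; };
        allow-query     { localhost; };
        recursion       yes;
};
zone "." IN {
        type hint;
        file "named.ca";
};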
I have configured master and slave DNS servers on Red Hat Enterprise Linux 4. I want to know what a caching nameserver is and in which situations it is used. If there are already a master and a slave DNS server, can a caching nameserver be used as well?
I have a BIND caching-only DNS server set up and working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD-related. So my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leaves me thinking that my caching server is not forwarding properly.
For example, this AD is going to use a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none for the objects in that domain). Here is my named.conf file:
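For reference only (the forwarder address below is a placeholder for the domain controller, not taken from the actual config), a forward-only zone for the AD domain is normally declared like this:
Code:
// sketch: forward all queries for the AD domain to the domain controller
zone "mydomain.local" IN {
        type forward;
        forward only;
        forwarders { 192.168.1.10; };   // placeholder DC address
};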
I am looking into creating a web caching server for myself using Fedora 10. I believe I need Squid for this, but it seems to have a lot of features. Basically, all I want for now is to cache the web pages that I and my network users use the most, speeding up access and lowering the load on my internet connection. Can Squid do this, and can someone point me in the right direction, such as an article on how to configure such a thing?
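Squid can do exactly this. A minimal caching-proxy sketch for squid.conf might look like the following; the LAN range, cache sizes, and port are placeholders to adjust.
Code:
# minimal Squid caching-proxy sketch (network range and sizes are placeholders)
http_port 3128
cache_mem 256 MB
cache_dir ufs /var/spool/squid 10000 16 256    # 10 GB on-disk cache
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
Point the browsers at port 3128 on the server (or look into transparent interception later) and the most frequently requested pages get served from the local cache.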
Basically, I have a clustered filesystem using GlusterFS. This is ultimately going to host a very large number of files.
It is mainly used as a storage destination for backups and historical copies of files.
Remote servers sync using unison every few minutes. A local script will run over the whole filesystem once per hour looking for new files/folders, and files that have been updated based on their timestamp.
99% of filesystem access is browsing the directory structure, listing directory contents and checking the modification times of files. Access to the actual content of a file is minimal. Only a tiny fraction of the filesystem is actually modified from hour to hour.
GlusterFS alone is quite slow when browsing the directory structure (i.e. "ls -Rl /data"). The speed of actually transferring file content is sufficient for my requirements.
What I need is to vastly improve performance when running operations such as "ls -Rl /data". (/data is the mount point)
I believe the best way to do this is to implement caching. The cache options within GlusterFS are simply not sufficient here.
My first thought was to re-export the GlusterFS mount with NFS, then mount the NFS share and set the client cache to a very long expiry (like 86400 = 24 hours). It is my understanding that any change made to a file through the mount point will invalidate the cache entry for that file. (It is only mounted in one place, so no changes are possible at the back end.)
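For what it's worth, that client-side expiry is controlled by the NFS attribute-cache mount options; a sketch of the kind of mount described here (server name, export, and mount point are placeholders):
Code:
# mount the re-exported share with a 24-hour attribute cache
mount -t nfs -o actimeo=86400 storage1:/data /mnt/data-cache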
I did this using the kernel-based NFS server, but ran into major problems with "Stale NFS" errors, which from what I've read are due to a FUSE-related problem that doesn't sound like it's going to be fixed soon. Aside from the stale errors, this did provide a suitable boost in performance.
I tried the beta of GlusterFS that has the integrated NFS server (so presumably, no FUSE) but I could not get this to compile properly on our servers.
Finally, I tried using the Gluster-patched version of unfs3 that uses Gluster's booster library instead of FUSE to talk to Gluster. This works, but for some reason the NFS client cache no longer seems to cache anything.
One last thing I was looking at is the possibility of running a simple cache layer in front of either GlusterFS or NFS. I believe CacheFS is the tool for the job, but I have been unable to get it to work; I believe it is disabled in my kernel or something (the mount command says cachefs is an unknown filesystem type).
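On Linux the equivalent of Solaris-style cachefs is FS-Cache backed by the cachefilesd daemon, which would explain why "cachefs" is an unknown filesystem type; it needs a kernel with FS-Cache/CacheFiles support (so realistically the 10.04 box, not 8.04). A rough sketch, with paths as placeholders; note that FS-Cache caches file contents rather than directory listings, so it would not by itself speed up "ls -Rl".
Code:
# FS-Cache sketch (kernel must have FS-Cache/CacheFiles; paths are placeholders)
apt-get install cachefilesd
# set the backing directory in /etc/cachefilesd.conf, e.g.:  dir /var/cache/fscache
service cachefilesd start
# mount NFS with "fsc" so file data is cached on local disk
mount -t nfs -o fsc,actimeo=3600 storage1:/data /mnt/data-cache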
I am running Ubuntu 8.04 on most servers, but have upgraded one to 10.04 to try to get around kernel limitations. My servers are all 32-bit (I know, not recommended for GlusterFS) and it's very difficult for me to change this (it's a live system).
I quite simply need to add a cache for the directory structure information, and then maybe export this with NFS so that it can be mounted on a *single* server. (The cache can be on the server where it is mounted if required, but given the large size of the cache, it may be better to have a server dedicated to it.)
I am running GlusterFS 3.0.5 in a replicate/distribute manner.
There is a server that runs a lot of websites and uses Varnish for caching to boost performance. I want to exclude certain frequently changing URLs from the cache, but without removing complete domains from caching, only specific URLs on those sites. Is there any way to exclude those pages from caching?
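In Varnish this is normally done in the VCL by passing the matching requests straight to the backend so they are never cached. The URL patterns below are placeholders; on older Varnish 2.0 VCL the keyword is plain "pass;" rather than "return (pass);".
Code:
# VCL sketch: bypass the cache for specific, frequently changing URLs
sub vcl_recv {
    if (req.url ~ "^/news" || req.url ~ "^/live-scores") {
        return (pass);
    }
}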
Is there anything like a persistent caching proxy available in Linux for me to configure, i.e. not public? (Persistent meaning the cache remains on the hard disk between reboots.) Is it possible for it to NEVER look for any update to a page that is already available in the cache?
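Squid fits this fairly well: its cache_dir lives on disk and survives reboots, and a combination of offline_mode and aggressive refresh_pattern rules keeps it from revalidating cached pages. The sketch below is only illustrative; the TTLs simply say "treat everything as fresh for a year", and some ignore-* options only exist in older Squid releases.
Code:
# persistent, never-revalidate proxy sketch for squid.conf (TTLs are placeholders)
cache_dir ufs /var/spool/squid 20000 16 256     # on-disk cache persists across reboots
offline_mode on                                  # never try to validate cached objects
refresh_pattern . 525600 100% 525600 override-expire override-lastmod ignore-reload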
I'm running into a little trouble trying to configure BIND as a caching DNS server on CentOS 5.6. For debugging purposes I've got iptables and SELinux turned off, but I still can't see the DNS service from my local network. On the server itself I can run nmap against it and see that port 53 is open, but if I try from another computer on my network the port is closed.
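With iptables and SELinux ruled out, the usual culprit on CentOS is that named only listens on 127.0.0.1 and only allows localhost queries (the shipped caching-only config defaults to exactly that). The addresses below are placeholders for the server's LAN address and subnet.
Code:
// named.conf options sketch: listen on the LAN interface and allow LAN queries
options {
        listen-on port 53 { 127.0.0.1; 192.168.1.10; };
        allow-query       { localhost; 192.168.1.0/24; };
        recursion yes;
};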
Is it possible to edit the default RHEL CD to have it automatically install RHEL based on a kickstart file that I will store locally on the CD? My plan would be to put a CD in a server and have the OS installed automatically.
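It is possible; the usual approach is to copy the ISO contents, drop ks.cfg into the root of the tree, add a boot entry in isolinux that points at it, and rebuild a bootable ISO. Paths and the label name below are placeholders.
Code:
# remastering sketch (paths are placeholders)
# 1) copy the ISO tree somewhere writable and add ks.cfg to its root
# 2) in isolinux/isolinux.cfg add a boot entry such as:
#      label ks
#        kernel vmlinuz
#        append initrd=initrd.img ks=cdrom:/ks.cfg
# 3) rebuild the bootable ISO
mkisofs -o rhel-ks.iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
        -no-emul-boot -boot-load-size 4 -boot-info-table -R -J -T -v /path/to/iso-tree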
We are planning to migrate our Linux server from RHEL 3 to RHEL 5. What are the configuration differences between RHEL 3 and RHEL 5 for web server installations?
Here's what I need: I've got two servers serving about 4000 users and 300 servers, and the previous admin never set up DNS caching right, so I'm redoing it. My goals:
1) DNS cache
2) Transparent Squid cache only
3) Load balancing - at switch level
Hardware: drives upgraded to SSD (2x 32 GB per server), 4 GB of RAM, 2x Dell PowerEdge 850s - P4 2.8 GHz (single core). Any advice, pointers, experiences, and best ways to do this, given that both servers will handle both DNS caching and Squid? Also, is BIND 9 the best for this? I've seen stuff about dnsmasq; which performs better? (I don't need DHCP.)
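For the transparent Squid part, the common pattern on Squid 2.6/2.7 is an intercepting http_port plus an iptables REDIRECT on the box the client traffic passes through; on Squid 3.1+ the keyword is "intercept" instead of "transparent". Interface and ports below are placeholders.
Code:
# transparent proxy sketch (interface and ports are placeholders)
# squid.conf:
#   http_port 3128 transparent
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128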
I was wondering whether, in RHEL 5.3, you can add netgroups to your kickstart file like when you are jumpstarting Solaris boxes. I'm trying to see whether I can have a group of machines use a certain DHCP or DNS server during my kickstart.
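The DNS/DHCP side can at least be pinned down directly with the kickstart network directive; netgroup membership itself would have to be handled in %post. The addresses and hostname below are placeholders.
Code:
# kickstart network line sketch (all values are placeholders)
network --device=eth0 --bootproto=static --ip=192.168.10.50 --netmask=255.255.255.0 --gateway=192.168.10.1 --nameserver=192.168.10.5 --hostname=box01.example.com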
I've set up a caching nameserver on my laptop running Fedora 11. The problem is that NetworkManager always overwrites the entry that points to the local nameserver. NetworkManager no longer respects /etc/dhclient.conf, or at least its scripts run after dhclient.conf. It also doesn't respect the DNS1 and DNS2 settings in /etc/sysconfig/network-scripts/ifcfg-*. The NetworkManager man page describes scripts in /etc/NetworkManager/dispatcher.d that are run when interfaces are brought up and down, so I've written a script that puts in the entry needed for the local nameserver.
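The script itself wasn't included, but a minimal dispatcher script along those lines might look like the sketch below; the filename is arbitrary (it just has to be executable in /etc/NetworkManager/dispatcher.d), and NetworkManager passes the interface and the action as the two arguments.
Code:
#!/bin/sh
# sketch: /etc/NetworkManager/dispatcher.d/99-local-dns  (name and logic are assumptions)
# called by NetworkManager as:  <script> <interface> <action>
ACTION="$2"
if [ "$ACTION" = "up" ]; then
    # make sure the local caching nameserver is consulted first
    grep -q '^nameserver 127\.0\.0\.1' /etc/resolv.conf || \
        sed -i '1i nameserver 127.0.0.1' /etc/resolv.conf
fi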
I have installed BIND 9.3.6 on a CentOS 5.4 virtual machine. The installation was successful, and I just want to run it as a caching server, but it is not working: when I dig any URL, I get "connection timed out" messages.
I checked /var/log/messages on my machine and there were two kinds of messages:
There are a lot of errors, but only of the above two kinds; the only thing that differs between the errors is the IP address.
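Without the exact log lines it is hard to say more, but for a caching setup that times out on every dig, the first things worth checking are whether named is actually running, what it is listening on, and whether it answers locally:
Code:
# quick checks (no configuration changes)
service named status
netstat -lnpu | grep :53            # which addresses is named bound to?
dig @127.0.0.1 www.example.com      # does it resolve when queried directly?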
I have a database server running RHEL 5.1 32-bit that suffered some catastrophic failures about 6 months ago. We were able to patch it back together and keep it running, but now the manufacturing site it supports is going to shut down for two weeks and I would like to replace it permanently. Does anyone have any guidance for that sort of thing? I'd like to have the new server up and running beforehand, basically changing the hostname/IP and restoring the databases only on conversion day. I've done this in the past with HP-UX to Red Hat conversions, but this is my first Red Hat to Red Hat move. Any advice or shortcuts? I forgot to add the other wrinkle: the new server will be running 64-bit Linux.
I am getting these errors on RHEL 5.3 when I ran "yum update".
---> Package libstdc++-devel.i386 0:4.1.2-46.el5_4.2 set to be updated
---> Package libstdc++-devel.x86_64 0:4.1.2-46.el5_4.2 set to be updated
---> Package libstdc++44-devel.i386 0:4.4.0-6.el5 set to be updated
[code]....
The program package-cleanup is found in the yum-utils package.
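A sketch of how it is typically used for this kind of dependency mess (whether duplicate packages are actually the cause here depends on the full error output):
Code:
yum install yum-utils
package-cleanup --problems      # report dependency problems
package-cleanup --dupes         # list duplicate packages
package-cleanup --cleandupes    # remove the older copy of each duplicate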
I cannot ssh into an RHEL 5.5 server (192.168.20.104) from another RHEL 5.5 server (192.168.20.101) unless server debug is turned on on 192.168.20.104, and even then I have to wait several minutes before the connection is established. scp to and from the .104 server is also not working. Here is the debug output on the .101 server when server debug is not enabled on the .104 server:
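Whatever the debug output shows, multi-minute delays like this are very often reverse-DNS or GSSAPI lookups hanging on the target server; the settings below on 192.168.20.104 are a guess worth testing, not a confirmed diagnosis.
Code:
# possible workaround sketch for slow ssh/scp to 192.168.20.104 (a guess, not a fix)
# in /etc/ssh/sshd_config:
#   UseDNS no
#   GSSAPIAuthentication no
service sshd restart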
Having some issues setting up sendmail on a (basically) blank RHEL 5.5 server setup. My ultimate goal is to be able to automagically send logs / errors / notifications to ourselves from the server.
Our basic setup is a Win 2003 domain with exchange running on mail.domain.com.au.
I've edited '/etc/mail/sendmail.mc' and added the:
Code:
line to it.
I also added the domain (domain.com.au) to the '/etc/mail/local-host-names' file.
I also edited submit.mc and added:
Code:
When I try to send mail from root or a test user to one of the domain accounts, it seems to go fine, i.e. no errors are reported, but it never gets delivered.
From the mail logs:
Code:
So it seems to be sent to the queue with no problems, and when I check the queue:
Code:
Total requests: 0
But nothing ever gets received. Am I missing something? I have read and read and read but don't seem to be getting any further.
So in the end this server doesn't need to do anything except be able to send mail from root to an external mail address.
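For comparison, a typical "relay everything through Exchange" sendmail.mc setup looks like the sketch below; whether it matches the lines actually added above is unknown, and the smart host name is simply taken from the post.
Code:
dnl # sendmail.mc sketch: relay outbound mail through the Exchange server
define(`SMART_HOST', `mail.domain.com.au')dnl
MASQUERADE_AS(`domain.com.au')dnl
FEATURE(`masquerade_envelope')dnl
dnl # then rebuild and restart:  make -C /etc/mail && service sendmail restart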
In order to run an application on an RHEL 3 ES server, I forcibly created a link as root using the following commands: "# cd /lib64/tls/" and "# ln -sf libc-2.3.4.so libc.so.6". Before that, I copied the file libc-2.3.4.so from a workstation running RHEL 4 WS so that the link could be created. Now I am unable to run any command except cd and pwd, and I get the error message below: "ls: relocation error: /lib64/tls/libc.so.6: symbol _rtld_global_ro, version GLIBC_PRIVATE not defined in file ld-linux-x86-64.so.2 with link time reference".
Before running this command, libc.so.6 was pointing to the original libc-2.3.2.so under /lib64. I am now unable even to open a new window on the server. Please send me a solution as soon as possible, because this server holds production data and many users are running applications on it.
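A hedged recovery sketch, assuming the original RHEL 3 libc-2.3.2.so is still on disk: glibc installs /sbin/sln, a statically linked ln that does not depend on the now-broken shared libc, so the symlink can be restored without a rescue boot. The exact path of the original library should be verified first.
Code:
# recovery sketch -- verify the path of the original libc-2.3.2.so before running
/sbin/sln /lib64/tls/libc-2.3.2.so /lib64/tls/libc.so.6
# if the original copy lives in /lib64 rather than /lib64/tls, point sln there instead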
We have been running Red Hat Enterprise Linux ES release 3 (Taroon Update 5), kernel 2.4.21-32.ELsmp, for several years. The server hosts an old ERP system that will be replaced at the end of the year. However, it is necessary that some colleagues are able to write files to that server regularly. Since we are running Windows 7 on several machines, those users are no longer able to write to the Samba share. Getting files from the share works fine.
But the problem does not seem to lie with the Samba service, because transfers over SSH (WinSCP) from any Win7 system to the server don't work either. During testing we noticed that transferring files smaller than 1 KB works fine... any file larger than 1 KB ends in a connection abort. This happens both with Samba and over SSH. All the workarounds that edit registry entries in Win7 to improve interoperability between Vista/Win7 and Samba don't work for us... and also don't seem to be the source of the problem. Is there a generally known incompatibility between our RHEL version/kernel and Windows 7 regarding file transfers?
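One hedged thing to test, given the consistent ~1 KB cutoff: Windows Vista/7 negotiate TCP window scaling by default, which some 2.4-era kernels and intermediate network gear handle badly. Temporarily stopping the RHEL 3 server from advertising window scaling would confirm or rule that out; this is a diagnostic guess, not a known RHEL 3 bug.
Code:
# on the RHEL 3 server, as root (diagnostic only)
sysctl -w net.ipv4.tcp_window_scaling=0
# revert afterwards with:  sysctl -w net.ipv4.tcp_window_scaling=1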