Server :: Deploy CentOS 5.5 Through PXE Agent On Multiple Servers?
Apr 14, 2011
We have to install CentOS 5.5 on approximately 60 servers, and we want a server on which we can create an image of one server and deploy it to all the other servers through PXE. Most of the servers will have RAID 5 or RAID 1 configured, so the utility must have RAID support.
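For the PXE boot side of this, a minimal sketch of a pxelinux menu entry that kickstarts CentOS from the network might look like the following (the paths, IP, and kickstart filename are illustrative assumptions, not a tested setup):
Code:
# /tftpboot/pxelinux.cfg/default - minimal sketch (paths/IP illustrative)
default centos55
label centos55
    kernel centos5.5/vmlinuz
    append initrd=centos5.5/initrd.img ks=http://192.168.0.10/ks/centos55.cfg
The kickstart file would carry the RAID layout and package set, so the same entry can install all 60 machines.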
I wonder if there is any fast way to deploy a kernel to multiple servers/VPSs when a new kernel is available. The kernel from RHEL/CentOS is always old compared to the latest versions from kernel.org.
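Lacking a dedicated tool, one hedged approach is to build the new kernel once as an RPM and push it out over SSH in a loop (the hostnames and package filename below are assumptions):
Code:
# Sketch: push a locally built kernel RPM to each host and install it
for host in web1 web2 web3; do
    scp kernel-2.6.38-custom.x86_64.rpm root@$host:/tmp/
    ssh root@$host "rpm -ivh /tmp/kernel-2.6.38-custom.x86_64.rpm"
done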
We're about to migrate a set of workstations (Ubuntu 10.04 LTS) to a new Kerberos/LDAP setup. Basically, this requires installing some required deb packages and copying some new .conf files over the original ones. We made a deb package with these "features":
- requires the other needed packages as dependencies
- backs up the original conf files
- copies the new conf files to the right places (i.e. /etc/krb5.conf, /etc/ldap.conf)
The problem is: apt-get complains because the deb is "touching" files owned by other packages (kerberos, ldap, etc.). Therefore, the only way to skip this check is either to force apt-get to proceed or to use the "Replaces" directive in the deb control file, specifying the clashing packages. Something like this:
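(A sketch of the relevant debian/control stanza; the package names below are illustrative for this kind of setup, not the real ones, and note that Replaces alone may still interact oddly with conffiles.)
Code:
Package: site-auth-config
Depends: krb5-config, libpam-krb5, libnss-ldap
Replaces: krb5-config, libnss-ldap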
I'm looking at setting up a couple of automated systems. Here are a few examples:
* Internal accounting system to download and process emails
* Public web server to visit
I could put each system on its own separate box -- for example, it's generally good practice to separate anything that external users have access to (such as a webserver) from internal processes such as accounting. Now, rather than dishing out the money for two separate servers, could I get away with running each system in its own VMware virtual machine on the same box?
To give you an idea, these are not large-scale, computationally intensive systems. The accounting one simply downloads and tallies emails, and the latter is just a webserver with maybe 5 hits per day on a good day. I could definitely pick up a new box for, say, $50, but I wanted to know the general practice of using VMware on the same box versus two separate boxes.
We are going to deploy a new network with PCs and servers. Until now, Windows was chosen. However, because of legacy applications, Linux is now chosen. Therefore, I am looking for something to replace Active Directory. I know there is software for each purpose (BIND for DNS, etc.), but I would like to have a central tool with integration between services, as Active Directory can provide.
The services I need are:
- DHCP
- DNS
- Directory for central authentication (workstations are not standalone)
- File shares
- Binary repository for upgrades (like WSUS on Windows)
- NTP
I am looking for a server solution that would allow me to deploy Windows Desktops and software to clients on the network. Ideally I am looking for a Linux server solution but would consider a Windows Open Source alternative.
In the past I have used Symantec Altiris Deployment Server.
I'm curious if anybody can shed some light for me in this department. We're in a large environment with a Windows DHCP Server. We have been tinkering with LTSP on Edubuntu as thin and fat clients. It works great, but right now we just have 1 server handling the lab, which works fine unless we want to expand, which may be very possible.
These are the instructions I received: Log in to your Windows server and load the DHCP configuration screen. Create a DHCP reservation for the MAC address you obtained. Add the configuration options below to enable the machine to boot from the LTSP server:
017 Root Path: /opt/ltsp/i386
066 Boot Server Host Name: <ip address>
067 Bootfile Name: ltsp/arch/pxelinux.0 # Specify CPU architecture in place of 'arch', for instance 'i386'
From: [url]
I'm curious: what if I want to have multiple bootable Ubuntu servers on the network? For example, let's say I have 3 labs, and 3 servers. Server A to Lab A, Server B to Lab B, and Server C to Lab C. I want all of Lab C's computers to boot to C, and B to B, A to A, etc.
1 - How would I add multiple entries on the Windows DHCP Server to allow all 3 (A B C) servers to boot?
2 - How would I be able to isolate the clients so ONLY Lab A clients boot to Server A, etc?
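For what it's worth, one hedged way both questions could be handled is with scope-level DHCP options, assuming each lab sits in its own subnet and therefore its own DHCP scope (the scope addresses and server IPs below are made up, and this is untested):
Code:
netsh dhcp server scope 10.0.1.0 set optionvalue 066 STRING "10.0.1.10"
netsh dhcp server scope 10.0.1.0 set optionvalue 067 STRING "ltsp/i386/pxelinux.0"
netsh dhcp server scope 10.0.2.0 set optionvalue 066 STRING "10.0.2.10"
netsh dhcp server scope 10.0.2.0 set optionvalue 067 STRING "ltsp/i386/pxelinux.0"
Setting options 066/067 (and 017) at the scope level rather than the server level means each lab's clients are handed only their own LTSP server, which also answers the isolation question.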
Does Linux have any patch management software/solution that can distribute patches to Linux and Windows clients? Or is it possible to deploy patches from Linux to Windows machines?
Recently we have been receiving lots of spam email, especially Viagra-related. I want to use SpamAssassin to filter the spam, so I need to configure procmail as the local mail delivery agent. At the moment Postfix handles mail delivery, and our server is CentOS 5. I googled it but couldn't find good instructions.
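For reference, the usual wiring is two small pieces: point Postfix at procmail for local delivery, then have procmail pipe each message through SpamAssassin (a minimal sketch; the paths and spam folder location are illustrative):
Code:
# /etc/postfix/main.cf - hand local delivery to procmail
mailbox_command = /usr/bin/procmail

# ~/.procmailrc - run each message through spamassassin
:0fw
| /usr/bin/spamassassin

# file anything tagged as spam into a separate folder
:0:
* ^X-Spam-Status: Yes
$HOME/mail/spam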
We (like many people) have a QA environment and a production environment consisting of several servers. We want to be able to make sure, before we do any yum updates on the production machines, that we've tested everything in QA. Unfortunately, yum gets the latest software when you run it, so you could run it two hours apart and end up with different releases of some software. We need a solution.
I *think* what I have to do is create my own yum repository. There are a variety of articles on how to do that. But before I go through all that, I wanted to make sure I was on the right track. So the process would probably end up being:
1. Create a fresh repository
2. Upgrade QA from that repository, test, etc.
3. Upgrade production from that repository
Can someone verify this is the correct approach, or is there something easier? It also seems like step 1 is going to take a significant amount of time, plus it will continue to take a significant amount of time every cycle.
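If that is the right track, the snapshot step might look something like this with reposync/createrepo from yum-utils (the directory layout and repo id are illustrative):
Code:
# Mirror the upstream repo into a dated snapshot, then index it
reposync --repoid=updates --download_path=/srv/repos/2011-04-14
createrepo /srv/repos/2011-04-14/updates
Both QA and production would then point a .repo file's baseurl at the same dated snapshot, so the package set cannot drift between the two upgrades.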
I'm quite new to Linux and totally new to CentOS. I need it to install a Lotus Domino server, and for security reasons I'd like to install the Acronis Linux agent to back up this server with my Windows Acronis enterprise server. I have installed a new CentOS 5.4 server on a VMware virtual machine, then installed the Acronis Linux agent following the instructions here [URL]. The agent seems to work, because with the command "/etc/init.d/acronis_agent status" the system responds with "Acronis Agent is running".
But I can't connect to the agent from the server console!
I have disabled the firewall to avoid any connection problems...
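A quick sanity check from the CentOS side would be to confirm the agent is actually listening and reachable. Port 9876 is the one Acronis components conventionally use, but treat that as an assumption and check the product documentation:
Code:
# Is anything listening on the agent port?
netstat -tlnp | grep 9876
# Is the firewall really off?
iptables -L -n
# Then, from the Windows console machine: telnet <centos-ip> 9876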
So I have a pretty big networking nightmare on my hands right now. Stepped into the dog crap with this one; told my employer that I knew how to set up VMware servers, right? It's not hard: install CentOS, install VMware, run the config tool, bridge the network, down the road we all go, right? We have 3 servers running about 10 virtual servers.
Here is what we have all together:
CFU <- This is the internet. We have IP range xx.18.230 - xx.18.241, gateway xx.18.254, subnet xx.255.128.
DELL PowerConnect 3348 Switch <- This is what everything is pretty much jammed into.
VMH1 <- This machine has 2 NICs: eth0 connects to the DELL switch somewhere on the upper 30+ ports; eth1 connects to the DELL switch on port 1.
It uses Firestarter and is the gateway for our internal network on 192.168.11.xx, using IP .11.254. It has 4 VMs on it. One of them is the domain controller, hooked to eth1 using IP xx.11.1. Another is a server for managing remote backups; it has an external IP linked to eth0 of xx.18.234. The other 2 VMs are misc remote login stations that use internal IP addresses linked to eth1. It hasn't had a single problem communicating on either one of the ports.
VMH2 <- This server hosts a web server and some other misc stations. It hosts the web server on xx.18.230 and xx.18.231. It also hosts 2 workstations on a separate network, through another router that is wireless...
Now we have the problem child: VMH3. VMH3 <- This hosts... nothing. It sits and has a ton of storage, but does absolutely nothing, and won't communicate out either one of its network ports. The xx.36.xx and xx.22.xx networks are there because we have multiple businesses in the building that shouldn't see each other.
I am trying to monitor server throughput with a centralized ntop instance running as a NetFlow aggregator and various NetFlow probes (nProbe, fprobe) on the servers. ntop shows the probe as a NIC correctly and receives the data, but it only shows one host under "Hosts", which is the server itself. I expected to see a host list just like the one shown when running ntop locally (i.e. the server ntop runs on and every host it contacted, separately). This happens both when using nProbe and fprobe. Have I misunderstood the concept of NetFlow aggregation, or am I using ntop/nprobe wrong?
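For comparison, the probe invocation in question is essentially this (the collector IP and interface are illustrative; -n is nProbe's export-target flag):
Code:
# Export flows captured on eth0 to the ntop collector listening on port 2055
nprobe -i eth0 -n 192.168.0.5:2055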
What is the best way to do this? I have 5 servers, and I want my clients to be able to access each of them, so that in case one server is down, they can access the remaining servers. It will work like this: user1 chooses the server number and connects to a central database; if the reply is OK, he can connect to the server number he wished.
I have a CentOS 5 server with dual ethernet adapters + Webmin installed as my router/firewall/DHCP server, working successfully with 1 static IP from my ISP. I also have 7 additional static IP addresses from my ISP that need to be mapped to individual servers inside my network. I have configured the additional virtual interfaces, but am lost on how to route data from each additional ISP address to its specific internal network address.
Below is my desired configuration:
98.173.159.xx1 = eth0 physical interface ==> eth1 192.168.1.1
98.173.159.xx2 = eth0:1 virtual interface ==> 192.168.1.10 ==> CentOS Server 2
98.173.159.xx3 = eth0:2 virtual interface ==> 192.168.1.20 ==> CentOS Server 3
98.173.159.xx4 = eth0:3 virtual interface ==> 192.168.1.30 ==> CentOS Server 4
98.173.159.xx5 = eth0:4 virtual interface ==> 192.168.1.40 ==> Mac OS X Server 1
98.173.159.xx6 = eth0:5 virtual interface ==> 192.168.1.50 ==> Mac OS X Server 1
98.173.159.xx7 = eth0:6 virtual interface ==> 192.168.1.60 ==> Network Attached Storage Server 1
98.173.159.xx8 = eth0:7 virtual interface ==> 192.168.1.70 ==> Windows 2008 Server 1
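This is classic 1:1 NAT. A hedged iptables sketch for one public/internal pair, to be repeated per mapping (substitute the real masked addresses; the exact rules depend on the existing firewall setup):
Code:
# Inbound: anything arriving for the public IP goes to the internal host
iptables -t nat -A PREROUTING -d 98.173.159.xx2 -j DNAT --to-destination 192.168.1.10
# Outbound: replies from the internal host leave stamped with its public IP
iptables -t nat -A POSTROUTING -s 192.168.1.10 -o eth0 -j SNAT --to-source 98.173.159.xx2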
How can I deploy a WAR file with a Tomcat that happens to be installed on a server? I divided it into different small questions, but no straight answer so far.
These are the things I've done:
created a myapps folder inside the tomcat directory, and copied the WAR file there.
These are the questions:
1. I did not touch anything inside the Tomcat folder; does anything need to be changed?
2. How do I start Tomcat so that it deploys the WAR file?
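A hedged sketch of the stock workflow, assuming a standard Tomcat layout: by default Tomcat auto-deploys WARs from its webapps/ directory, not from a custom myapps/ folder ($CATALINA_HOME below stands in for the actual Tomcat install path):
Code:
# Drop the WAR where Tomcat's auto-deployer watches, then start Tomcat
cp myapp.war $CATALINA_HOME/webapps/
$CATALINA_HOME/bin/startup.sh
# Watch the log to confirm the app expanded and deployed
tail -f $CATALINA_HOME/logs/catalina.out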
I installed Apache 2.2.15 on my system and it's working properly, but I deployed some PHP files in /usr/local/apache2/htdocs and I was not able to run those PHP files. If I open a PHP file in the browser, it shows the content of the file. For example, I created one test.php file (it contains <?php echo "hello world" ?>) and put it in /usr/local/apache2/htdocs; if I open it in the browser using this [URL], the browser shows <?php echo "hello world" ?>...
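Raw PHP source in the browser usually means Apache is not handing .php files to PHP at all. A hedged httpd.conf sketch, assuming mod_php is built and installed (the module filename varies by PHP version):
Code:
# httpd.conf - load mod_php and map .php files to it
LoadModule php5_module modules/libphp5.so
AddType application/x-httpd-php .php
DirectoryIndex index.php index.html
# then: apachectl restart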
I am running CentOS 5.5 with Apache 2.2.3, MySQL 5, and PHP 5.1.6. I am migrating a Drupal installation to the default html folder for development purposes. I am very new to server management, and a bit lost. I want to install some other web sites on the sandbox server to experiment with before uploading them to a production environment. Is it possible to have multiple html folders? Or to use symlinks to point to the folders where the other web sites will reside?
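Name-based virtual hosts are the usual answer here: each site gets its own folder and its own hostname. A minimal Apache 2.2 sketch (the hostnames and paths are illustrative):
Code:
# httpd.conf - two sandbox sites, each with its own DocumentRoot
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName drupal.sandbox.local
    DocumentRoot /var/www/html/drupal
</VirtualHost>

<VirtualHost *:80>
    ServerName experiment.sandbox.local
    DocumentRoot /var/www/experiment
</VirtualHost>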
I am currently managing multiple Linux servers in remote locations, used particularly for web hosting. I need to back up data to a backup server, but rsync, which I am currently using, doesn't help. Is there any tool to back up every server without modifying it? Because there are hundreds of servers, installing a tool on every server is a time-consuming process.
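One hedged agentless pattern is to pull from the backup server over SSH, since sshd and rsync are already present on virtually every host, so nothing new gets installed on the targets (the host list file and paths are illustrative):
Code:
# Pull-based backups driven entirely from the backup server
while read host; do
    rsync -az -e ssh "root@$host:/var/www/" "/backup/$host/"
done < servers.txt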
Right now I have 10 servers for HPC, power-computing oriented. My users need to launch several processes using qmake. The users are used to working with Ubuntu 9.10, and the software from the repositories is suitable for them. I've deployed Ubuntu 9.10 to all 10 servers (PXE rocks). For now we work with parallel-ssh and cluster-ssh, which allow us to launch the same process on all servers. With these tools the servers remain independent, but with the same software and the same launched command. Now we would like to go to the next step and see all the servers as a single one, with all the resources of the other 9 as if they were its own. The difference would be substantial in time to process and also in time to design the command to launch.
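For context, the current per-server workflow is roughly this (the hostfile contents and build command are illustrative):
Code:
# Same command fanned out to every node; each node stays independent
parallel-ssh -h hosts.txt -t 0 "cd /shared/project && qmake && make -j4"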
I need to deploy Windows Server 2008 R2 onto a server that is currently running Linux. Effectively, I want to restore a Windows disk image onto the Linux system hard disk. I don't have physical access to the machine, so I need to find a way to do everything remotely, using SSH (no KVM). And the Linux machine only has one hard disk - the one containing the OS. However, I might be able to create a partition in free space at the end of the hard disk to store the image (I might need some help with the Linux commands). Or perhaps the image file can be pulled via FTP.
I tried Acronis but was disappointed to find that it doesn't seem to allow me to overwrite the system partition (unlike the Windows version of Acronis, which is capable of doing this with a restart).
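For illustration only, the raw mechanics would be something like the following -- but note that writing over the disk the running Linux lives on will kill the OS mid-write, so in practice this is done from a minimal in-memory (rescue/initramfs) environment. The URL and device names are placeholders:
Code:
# Fetch the image onto the spare partition, then write it over the whole disk
wget ftp://example.com/win2008r2.img -O /mnt/spare/win2008r2.img
dd if=/mnt/spare/win2008r2.img of=/dev/sda bs=1M
sync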
I'm curious how I can use a Windows client with two separate accounts to connect, at the same time, to an SMB server hosting two shares. (Provided permissions and accounts are all in order.)
Scenario: User1 is always logged onto a Windows client mapped to a Public share on a Linux SMB server. I need a way to keep User1 connected to the Public share and then, when needed, allow User1 to provide User2's credentials to connect to a Restricted share. The only way I've been able to do this is to disconnect from the Public share, then reconnect to the Restricted share using User2's credentials. (This is the issue, because I need to keep User1 connected to the Public share.) Is this a limitation of SMB? Or am I missing a configuration? Please point me in the right direction.
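Windows does refuse a second set of credentials to the same server name, but a commonly cited workaround is to address the server by a second name (IP instead of hostname), which Windows treats as a different server. A hedged sketch, with server names, shares, and accounts all illustrative:
Code:
:: Connect the Public share by hostname as User1
net use P: \\smbserver\public /user:WORKGROUP\user1
:: Connect the Restricted share by IP as User2 - a different "server" to Windows
net use R: \\192.168.1.50\restricted /user:WORKGROUP\user2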
Create an Apache web server with multiple user accounts, for work:
- Each user needs to be able to upload his/her files via SSH.
- Each user needs his/her own web directory (preferably in their home directory, for ease with permissions).
- These web directories need to be password protected.
- Only one user account (mine) should be allowed remote SSH control.
- It needs to be easy to add new users to the system.
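A hedged sketch of the Apache half, using mod_userdir so each account automatically serves ~/public_html, with per-directory password protection delegated to .htaccess (Apache 2.2 syntax; the module path is illustrative):
Code:
# httpd.conf - per-user web directories
LoadModule userdir_module modules/mod_userdir.so
UserDir public_html

<Directory /home/*/public_html>
    AllowOverride AuthConfig
    Order allow,deny
    Allow from all
</Directory>
Each user's directory then gets an .htaccess pointing at an htpasswd file, and adding a user is just useradd plus mkdir ~/public_html; limiting shell access while keeping SFTP uploads is a separate sshd_config concern.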
I'm trying to configure lighttpd to send SCGI requests to different ports, depending on which file(s) are accessed. Is this possible? This is what I've tried, and it hasn't worked.
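The shape that should work (hedged; the URL prefixes and ports are illustrative) is one scgi.server entry per URL prefix, each pointing at its own backend port:
Code:
# lighttpd.conf - route /app1 and /app2 to different SCGI backends
server.modules += ( "mod_scgi" )

scgi.server = (
    "/app1" => (( "host" => "127.0.0.1", "port" => 4000, "check-local" => "disable" )),
    "/app2" => (( "host" => "127.0.0.1", "port" => 4001, "check-local" => "disable" ))
)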
I have multiple video streaming servers (Red5) running on machines internally on the LAN, for different subdomains (Ubuntu 10.04). The front end to them is apache2 on a bastion host. To be able to reach the streaming server, I embed JavaScript in the HTML pages as follows:
Code:
<embed ..... var="rtmp://site1.my_domain.com" >
[code]....
How will I make sure this rtmp request is mapped to a port different from 1935, as there are three other streaming servers which also have to respond to their respective requests?
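One hedged observation: RTMP URLs accept an explicit port, so each subdomain's player can target the port its own Red5 instance listens on; plain RTMP on its own port then bypasses the Apache front end entirely, since it isn't HTTP. The ports and app names below are illustrative:
Code:
<embed ..... var="rtmp://site1.my_domain.com:1935/app1" >
<embed ..... var="rtmp://site2.my_domain.com:1936/app2" >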
Will there be any issues installing and then subsequently running a Microsoft Windows Server 2008 R2 installation on a VirtualBox VM on a Linux host (Ubuntu 11.04 64-bit)? I require Windows Server 2008 R2 for a course I'm taking, and I don't have any systems to install/deploy it onto.
Host Machine Specs:
Ubuntu 11.04 64-bit
4GB RAM
350GB Disk Space
Nvidia Quadro system
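If it helps, creating the guest from the command line might look roughly like this under VirtualBox 4.x (the VM name, sizes, and ostype string are assumptions; 2008 R2 is 64-bit only, so VT-x/AMD-V must be enabled in the BIOS):
Code:
VBoxManage createvm --name w2k8r2 --ostype Windows2008_64 --register
VBoxManage modifyvm w2k8r2 --memory 2048 --cpus 2 --ioapic on
VBoxManage createhd --filename ~/w2k8r2.vdi --size 40960
# (storage controller attachment and installer ISO omitted for brevity)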
I am trying to run a few game servers on CentOS 5.5. This is a headless server I am renting; I do have root-level access and am able to install or run anything. For me to run a game server I need to issue the following command: ./r1q2ded-x86_64 +set dedicated 1 +set ip 69.172.231.46 +set port 27911 +set game lithium +exec server.cfg &
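A hedged note on keeping several of these alive after logging out of SSH: one common pattern is a detached screen session per instance (the session names, second port, and config filenames are illustrative):
Code:
# One detached screen session per game server instance
screen -dmS q2-27911 ./r1q2ded-x86_64 +set dedicated 1 +set ip 69.172.231.46 +set port 27911 +set game lithium +exec server.cfg
screen -dmS q2-27912 ./r1q2ded-x86_64 +set dedicated 1 +set ip 69.172.231.46 +set port 27912 +set game lithium +exec server2.cfg
# Reattach later with: screen -r q2-27911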