CentOS 5 :: Maintaining Multiple Servers To Same Releases?
Aug 10, 2010
We (like many people) have a QA environment and a production environment consisting of several servers. We want to make sure, before we do any yum updates on the production machines, that we've tested everything in QA. Unfortunately, yum gets the latest software when you run it, so you could run it two hours apart and end up with different releases of some software. We need a solution.
I *think* what I have to do is create my own yum repository. There are a variety of articles on how to do that. But before I go through all that, I wanted to make sure I was on the right track. So the process would probably end up being:
1. Create a fresh repository
2. Upgrade QA from that repository, test, etc.
3. Upgrade production from that repository
Can someone verify this is the correct approach, or is there something easier? It also seems like step 1 is going to take a significant amount of time, plus it will continue to take a significant amount of time every cycle.
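For scale, the repository build is mostly a one-time cost: reposync only downloads packages it doesn't already have, so later cycles are incremental. A rough sketch of the three steps, with the directory, repo id, and hostname invented for illustration:
Code:
# reposync and createrepo come from the yum-utils and createrepo packages on CentOS 5.
reposync --repoid=updates --download_path=/srv/repo/   # mirror the upstream updates repo
createrepo /srv/repo/updates                           # generate repo metadata for the snapshot
# Point QA (and later production) at this frozen snapshot via a .repo file:
cat > /etc/yum.repos.d/local-qa.repo <<'EOF'
[local-qa]
name=Frozen updates snapshot
baseurl=http://repohost.example.com/repo/updates
gpgcheck=1
EOF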
I've set up an Ubuntu server at home with the intention of sharing files with Windows clients, so I've installed Samba. I have no security issues, so I've allowed public access to the shares, and I can access them fine from all Windows machines. I also need to preserve the DOS attributes for files and folders using 'map hidden', 'map system', and 'map archive', which works great for files but not for folders. I've got a number of folders from my Windows box which I would like to keep hidden (for tidiness more than anything), but when I transfer them to the Samba share they become visible again, and I can't seem to control their visibility at all from Windows or from Ubuntu. Do I take it from this that Samba can only maintain DOS flags on files and not on directories?
This is the relevant part of the samba.conf file Code:
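The share definition itself didn't make it into the post. For reference, a stanza along these lines is what the DOS-attribute options usually look like; the 'map' options translate attributes into execute bits (which only works for files), whereas 'store dos attributes' keeps them in extended attributes and also covers directories. Share name and path below are invented, not the poster's actual config:
Code:
# Illustrative only -- appended via a shell here-doc; adjust the share to taste.
cat >> /etc/samba/smb.conf <<'EOF'
[shared]
   path = /srv/shared
   read only = no
   guest ok = yes
   map hidden = yes
   map system = yes
   map archive = yes
   ; mapping onto execute bits cannot represent hidden directories; storing the
   ; attributes in extended attributes does (filesystem needs user_xattr support):
   store dos attributes = yes
EOF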
My DB vendor only supports CentOS 5.2 so far, and I can't find the old releases of CentOS on this website. Is there an archive with the old versions?
We have to install CentOS 5.5 on approximately 60 servers, and we want a server on which we can create an image of one machine and deploy it to all the other servers through PXE. Most of the servers will have RAID 5 or RAID 1 configured, so the utility needs RAID support.
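One kickstart-based route covers both the PXE part and software RAID (kickstart's part/raid directives build md arrays; hardware RAID controllers simply present one disk to the installer). The TFTP paths, kickstart URL, and server IP below are invented:
Code:
# An entry in the PXE server's pxelinux config that boots the CentOS 5.5 installer
# and feeds it a kickstart describing the partitioning/RAID layout:
cat > /tftpboot/pxelinux.cfg/default <<'EOF'
default centos55-ks
label centos55-ks
    kernel centos5.5/vmlinuz
    append initrd=centos5.5/initrd.img ks=http://192.168.0.10/ks/centos55.cfg ksdevice=eth0
EOF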
So I have a pretty big networking nightmare on my hands right now. Stepped in the dog crap with this one: told my employer that I knew how to set up VMware servers, right? It's not hard: install CentOS, install VMware, run the config tool, bridge the network, and down the road we all go, right? We have 3 servers running about 10 virtual servers.
Here is what we have altogether:
CFU <- This is the internet. We have the IP range xx.18.230 - xx.18.241, gateway xx.18.254, and subnet xx.255.128.
DELL PowerConnect 3348 Switch <- This is what everything is pretty much jammed into.
VMH1 <- This machine has 2 NICs: eth0 connects to the DELL switch somewhere on the upper 30+ ports, and eth1 connects to the DELL switch on port 1.
It uses Firestarter and is the gateway for our internal network on 192.168.11.XX, using IP 11.254. It has 4 VMs on it. One of them is the domain controller, hooked to eth1 using IP xx.11.1. Another is a server for managing remote backups; it has an external IP linked to eth0, xx.18.234. The other 2 VMs are for misc remote login stations that use internal IP addresses linked to eth1. It hasn't had a single problem communicating on either one of the ports.
VMH2 <- This server hosts a web server and some other misc stations. It hosts a web server on xx.18.230 and xx.18.231. It also hosts 2 workstations on a separate network, through another router that is wireless.
Now we have the problem child: VMH3 <- This hosts... nothing. It sits with a ton of storage, but does absolutely nothing, and won't communicate out either one of its network ports. The xx.36.xx and xx.22.xx networks are there because we have multiple businesses in the building that shouldn't see each other.
I wonder if there is any fast way to deploy a new kernel to multiple servers/VPSs when one becomes available? The kernel from RHEL/CentOS is always old compared to new versions from kernel.org.
What is the best way here? I have about 5 servers, and I want my clients to be able to access each of them, so that if one server is down they can access the remaining servers. It would work like this: user1 chooses the server number and connects to a central database; if the reply is OK, he can connect to the server number he chose.
I have a CentOS 5 server with dual Ethernet adapters and Webmin installed as my router/firewall/DHCP server, working successfully with one static IP from my ISP. I also have 7 additional static IP addresses from my ISP that need to be configured to individual servers inside my network. I have configured the additional virtual interfaces, but am lost on how to route data from each additional ISP address to a specific internal network address.
Below is my desired configuration:
98.173.159.xx1 = eth0 physical interface ==> eth1 192.168.1.1
98.173.159.xx2 = eth0:1 virtual interface ==> 192.168.1.10 ==> CentOS Server 2
98.173.159.xx3 = eth0:2 virtual interface ==> 192.168.1.20 ==> CentOS Server 3
98.173.159.xx4 = eth0:3 virtual interface ==> 192.168.1.30 ==> CentOS Server 4
98.173.159.xx5 = eth0:4 virtual interface ==> 192.168.1.40 ==> Mac OS X Server 1
98.173.159.xx6 = eth0:5 virtual interface ==> 192.168.1.50 ==> Mac OS X Server 1
98.173.159.xx7 = eth0:6 virtual interface ==> 192.168.1.60 ==> Network Attached Storage Server 1
98.173.159.xx8 = eth0:7 virtual interface ==> 192.168.1.70 ==> Windows 2008 Server 1
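Once the virtual interfaces hold the extra public addresses, the per-server mapping is typically done with DNAT/SNAT rules on the gateway. A minimal sketch for the second address; the others follow the same pattern, and the public IPs stay as the anonymized placeholders from the post:
Code:
PUB2=98.173.159.xx2      # placeholder from the post -- replace with the real second static IP
# forward everything arriving on that public IP to the internal CentOS Server 2:
iptables -t nat -A PREROUTING  -i eth0 -d "$PUB2" -j DNAT --to-destination 192.168.1.10
# rewrite the source on the way back out so replies leave from the same public IP:
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.10 -j SNAT --to-source "$PUB2"
# repeat the pair for each public IP / internal server mapping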
I'm looking at setting up a couple of automated systems. Here are a few examples:
* Internal accounting system to download and process emails
* Public web server to visit
I could put each system on its own separate box -- for example, it's generally good practice to separate anything that external users have access to (such as a web server) from internal processes such as accounting. Now, rather than dishing out the money for two separate servers, could I get away with just running a separate VMware instance on the same box for each system?
To give you an idea, these are not large-scale, computationally sensitive systems. The accounting one simply downloads and tallies emails, and the web server gets maybe 5 hits per day on a good day. I could definitely pick up a new box for, say, $50, but I wanted to know the general practice of using VMware on the same box versus two separate boxes.
I have a CentOS 5.3 box with three network interfaces in it. Each interface is attached to a separate VLAN and I want traffic to stay on each network segment. What I can't figure out is why I cannot get each interface to have its own gateway; everything gets sent through the default gateway. That basically takes my possible 3Gb total bandwidth and throws it down a single 1Gb pipe. Then, on top of that, if I take down the interface (ifdown) that has the current default gateway, I lose contact with the other two interfaces. When I look at the routes, each one of the interfaces shows the gateway as 0.0.0.0 and defers to the default route. So I delete the route and try to add a new route with:
[root@testsan ~]# ip route add 10.1.15.0/24 via 10.1.15.1 dev eth2
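The usual fix for this situation is policy routing: give each interface its own routing table plus a rule that sends replies back out the interface they arrived on. A sketch for eth2; the table name/number and eth2's own address (10.1.15.10) are assumptions, while the subnet and gateway come from the post:
Code:
echo "200 vlan15" >> /etc/iproute2/rt_tables
ip route add 10.1.15.0/24 dev eth2 src 10.1.15.10 table vlan15
ip route add default via 10.1.15.1 dev eth2 table vlan15
ip rule add from 10.1.15.10 table vlan15
ip route flush cache
# repeat with their own tables for the other two interfaces, keeping the main
# default route on whichever interface should carry traffic for unknown networks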
I have an Ubuntu machine running an NFS4 server and a PlugApps (Arch Linux) machine connecting as the client. The plugbox is running an rsync job to back up the home directory from Ubuntu to a local USB HDD.
All of the files in the destination have owner nobody and group nobody.
How can I maintain the file owners? I have the UIDs and passwords synced between the two machines for both root and the user whose home dir is being backed up.
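With NFSv4, nobody:nobody ownership is usually caused by a mismatched id-mapping domain rather than by the UIDs themselves. A sketch of the check/fix; the domain name is invented and the service names vary by distro:
Code:
# In /etc/idmapd.conf on BOTH the Ubuntu server and the Arch client, set the same domain:
#   [General]
#   Domain = home.lan
# then restart the id mapper / NFS client services on each box, e.g.:
sudo /etc/init.d/nfs-common restart 2>/dev/null || sudo /etc/rc.d/rpc.idmapd restart
# remount the export and check that ls -l now shows the real owner before re-running rsync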
I have set up a home network using a modem/router, which my devices connect to via Ethernet and wireless. I have got it working, but I'm still not happy (stick with me...)!
I have the settings configured to use DHCP, so IP addresses for the different machines are automatically assigned by the modem/router (as I understand it). I then obtained these auto-assigned IPs by running ifconfig on each device. I tested connections between the devices by pinging each other using these IPs (i.e. ping 192.168.2.2).
BUT I want to be able to use hostnames (i.e. ping dandelion) instead, and the only way I can make this work is to add hosts and corresponding IPs to the /etc/hosts file.
I have made it work this way, but doesn't this method defeat the idea of DHCP, since I will now presumably have to manually maintain the /etc/hosts file on each device?
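One low-maintenance alternative to hand-edited hosts files is multicast DNS: Ubuntu desktops normally ship avahi-daemon already, so each box answers as <hostname>.local with no central configuration. A sketch, assuming avahi is present or installable on each machine:
Code:
# hostname taken from the post; the .local suffix is resolved via mDNS, not /etc/hosts
ping dandelion.local
# if a machine doesn't have the mDNS daemon yet (an assumption about this network):
sudo apt-get install avahi-daemon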
I am trying to use ffmpeg to split a number of videos of different types (WMV, MPG, AVI). I do not want to change anything else, just split them into smaller chunks. The video is split, but the quality of the output file is terrible. I would describe it as "blocky" (I think the correct term is "pixelated"). When I make the player (KMPlayer) much smaller the problem naturally goes away.
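The blockiness usually comes from letting ffmpeg re-encode the video at its default (very low) bitrate while splitting. A sketch that copies the streams instead, so the chunks keep the source quality; file names and times are invented:
Code:
# -acodec copy -vcodec copy pass the original streams through untouched
ffmpeg -i input.wmv -ss 00:00:00 -t 00:10:00 -acodec copy -vcodec copy part1.wmv
ffmpeg -i input.wmv -ss 00:10:00 -t 00:10:00 -acodec copy -vcodec copy part2.wmv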
Just joined the group to post this question as I can't find a good answer for it. I have an RPM that has the following in the spec:
%defattr(-,someuser,someuser)
/opt/myapplication
When I go to install the converted package, the ownership of everything in /opt/myapplication ends up as root:root. The RPM itself installs properly on RPM-based distros and maintains the correct permissions.
When I run alien -i --scripts --veryverbose myapplication.rpm as root, I can see it chmod'ing everything to 755. Has anyone else had this problem? I tried --fixperms as well and that did nothing.
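One workaround sketch, in case it helps: let alien generate the Debian build tree, restore the ownership in a maintainer script, then build from that tree. Package and user names are the placeholders from the post, and this is an untested outline rather than a documented alien feature:
Code:
alien -g --scripts myapplication.rpm
cd myapplication-*                      # build tree created by alien -g
# append the fix-up to the postinst (if debian/postinst doesn't already exist,
# start it with a #!/bin/sh line and make it executable):
cat >> debian/postinst <<'EOF'
chown -R someuser:someuser /opt/myapplication
EOF
debian/rules binary                     # produces the corrected .deb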
I'm curious if anybody can shed some light for me in this department. We're in a large environment with a Windows DHCP Server. We have been tinkering with LTSP on Edubuntu as thin and fat clients. It works great, but right now we just have 1 server handling the lab, which works fine unless we want to expand, which may be very possible.
These are the instructions I received:
Login to your Windows server and load the DHCP configuration screen.
Create a DHCP reservation for the MAC address you obtained.
Add the configuration options below to enable the machine to boot from the LTSP server:
017 Root Path: /opt/ltsp/i386
066 Boot Server Host Name: <ip address>
067 Bootfile Name: ltsp/arch/pxelinux.0   # Specify CPU architecture in place of 'arch', for instance 'i386'
From: [url]
I'm curious: what if I want to have multiple bootable Ubuntu servers on the network? For example, let's say I have 3 labs and 3 servers: Server A for Lab A, Server B for Lab B, and Server C for Lab C. I want all of Lab C's computers to boot to C, B to B, A to A, etc.
1 - How would I add multiple entries on the Windows DHCP Server to allow all 3 (A B C) servers to boot?
2 - How would I be able to isolate the clients so ONLY Lab A clients boot to Server A, etc?
I currently have a group of 3 servers connected to a local network. One is a web server, one is a MySQL server, and the other is used for a specific function on my site (calculation of soccer matches!).
Anyway, I have been working on the site a lot lately but it is tedious connecting my USB hard drive to each computer and copying the files. This means I am not backing up as often as I should...
I have a laptop connected to this same network that I use for development, so I can SSH into the computers. Is there any software for Ubuntu that can take backups of files that I choose on multiple computers? I know I could rsync, but is there something with more of a GUI?
Then every 2 days I can just move the most recent backup from my laptop to the USB drive. Then I will have the backup stored in 2 places if things go kaboom somewhere.
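Grsync and Back In Time in the Ubuntu repositories put a GUI on top of rsync; even without one, a small cron-able pull script on the laptop would cover the multi-machine step. A sketch with invented host names and paths:
Code:
#!/bin/sh
# pull a copy of each server's files over SSH into per-host backup folders
for host in webserver dbserver calcserver; do
    rsync -az --delete "$host:/var/www/" "$HOME/backups/$host/"
done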
I'm running Ubuntu Server 10.04 and have a secure (SSL/TLS) FTP server on it. However, I'd like to use this FTP server to update programs I made using Microsoft Visual Studio. Unfortunately, in Microsoft's infinite wisdom, secure FTP servers cannot be used. Rather than use an insecure FTP server, I want to set up my secure FTP server to be able to access whatever I need to on the machine, and then add an insecure FTP server that only has access to the directory where I put my update files. I am currently using vsftpd as my FTP server. Is there any way that I can set up two FTP servers on this single machine?
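vsftpd takes a configuration file path as its argument, so two instances with different configs can coexist on one machine. A sketch; the file name, port, and directory are assumptions:
Code:
cp /etc/vsftpd.conf /etc/vsftpd-plain.conf
# in /etc/vsftpd-plain.conf set at least:
#   listen=YES
#   listen_port=2121
#   ssl_enable=NO
#   anon_root=/srv/updates      # or local_root/chroot options to confine it to the update files
# then start the second daemon against that config, alongside the existing SSL/TLS one:
vsftpd /etc/vsftpd-plain.conf &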
I have just been bothered by a fairly small issue for some time now. I am trying to search (using find -name) for some .jpg files recursively. This is a Redhat environment with bash.
I get this job done, though I need to copy ALL of them and put them in a separate folder, BUT I also need to keep the directory structure intact after copying.
For example, if I get a JPG file under /home/usr/new/1/, then the destination also needs to be /test/old/new/1/.
At the moment, I am simply putting all files under /test/old/, and I can't get the trailing /new/1/ folder path created under /test/old/.
I understand this could well be done using a while or if-else loop, though if someone could just guide me with a hint, I would be really grateful.
I will complete the rest of the steps; I was asking here since I am still not comfortable with shell/bash scripts yet and am planning to get really good at them over the next couple of months.
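As a hint-sized sketch: GNU cp's --parents flag rebuilds the matched relative path under the destination, so running find from the top of the source tree does most of the work (paths as given in the post):
Code:
cd /home/usr
# each match is relative (./new/1/photo.jpg), so --parents recreates new/1/ under /test/old/
find . -name '*.jpg' -exec cp --parents {} /test/old/ \;
# e.g. ./new/1/photo.jpg ends up as /test/old/new/1/photo.jpg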
I am using alien to convert an rpm package to a debian package. I need the deb package to check for installed package dependencies and notify the user if the dependent software is not found. In the rpm .spec file there is a section that you can use to specify dependencies such as:
Requires: java
This dependency is flagged when I install with rpm, but my deb file created by alien doesn't indicate there is a failure.
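For what it's worth, alien generally does not translate RPM Requires into Debian Depends, so one manual route is to unpack the generated .deb, add the dependency to its control file, and rebuild. File and package names below are illustrative; "default-jre" stands in for whatever Java package is wanted:
Code:
dpkg-deb -x myapplication_1.0-2_i386.deb pkgdir          # unpack the file tree
dpkg-deb -e myapplication_1.0-2_i386.deb pkgdir/DEBIAN   # unpack the control files
# add a line such as "Depends: default-jre" to pkgdir/DEBIAN/control, then rebuild:
dpkg-deb -b pkgdir myapplication-fixed.deb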
My MythTV system is running under F12. It is in "appliance mode": all configured and happily doing the PVR thing without a pressing need for upgrades. However, there is a feature in the upcoming 2.6.32 kernel that I'd like to take advantage of: internal support for a certain capture card.
I see 2.6.32 mentioned as part of the F13 release. My question is, will it also be available for F12... maybe sooner than the F13 release? Another way to put this is: how wedded are Fx releases and kernel releases? Is a major kernel goalpost like 2.6.32 the reason why Fx releases are made?
So yeah, why are there so many repositories for the same source (playdeb, for example) that separate Lucid from Maverick or Karmic? Why doesn't the repository work on Lucid if it's a repository for Maverick? A big question is why I can't get access to playdeb's stuff on a Lubuntu installation.
I wondered why software has different packages for different Ubuntu releases. For example: Miro for Karmic, Miro for Jaunty, Miro for Intrepid, ... [URL]
How do I install a specific one? Or better yet, revert back to an earlier release.
I have two systems running:
2.6.31-22-generic #60-Ubuntu aphrodite release
2.6.31-22-generic #63-Ubuntu amaterasu release
I googled #60 and #63 and it listed the release names highlighted above; is this correct?
If it is, how can I install the aphrodite release?
I have one system working great (#60), while the other exhibits problems playing back 1080p video files (#63). The only hardware difference that could affect the dropped frames is the 8200/8300 Nvidia chipsets (with the 8200 working perfectly), and then of course the Karmic releases. I have re-installed the system and files twice and even upgraded to newer releases of the Nvidia drivers (195 > 256) with no success.
My next step is to try and install the same exact release of Karmic if indeed there is a difference between #60 and #63.
My next step might be to use Clonezilla and clone the OS (root) and try that. But then the chipsets for sound are different, as well as the Ethernet chipset. But that shouldn't impair the video.
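If the difference really does come down to the #60 versus #63 build, apt can usually install and then hold a specific package version. A sketch; the "=2.6.31-22.60" version string is a placeholder, so check what apt actually offers on each box first:
Code:
apt-cache policy linux-image-2.6.31-22-generic
sudo apt-get install linux-image-2.6.31-22-generic=2.6.31-22.60
# hold it afterwards so a later build (e.g. #63) doesn't replace it:
echo linux-image-2.6.31-22-generic hold | sudo dpkg --set-selections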
June 12, 2011. It appears that quite a few of the alternative repositories that I've suggested in the posts below are no longer functional. Rather than flog a dead horse, I'm closing this thread and strongly suggest that you use a supported release of Fedora. The Fedora releases here, Fedora Core 1 through Fedora 12, are no longer supported or maintained, so they do not receive bug fixes or security updates. We do not recommend using these releases any more. I've spent the last day or so installing every Fedora release since Fedora Core 1, excluding Fedora 11, on a computer I had lying around. My goal was to figure out how to get yum to work despite the fact that the stock repositories are long gone in most cases.
I was motivated by the fact that the yum questions are never-ending here at Fedora Forum, and the question of how to make yum work for these older versions of Fedora seems to be quite common. The usual response is to install the newest and greatest Fedora. That's fine, but there are cases where this is just not possible. I'll outline separately what I've done for each release. You will only get one round of updates; however, you should be able to install any software that is available through these repositories. You could consider adding other repositories if you need additional software.
I've been using the 64-bit version of Fedora since release 10. I want to know what exactly makes the difference between the 32-bit and the 64-bit releases. I am having some trouble recently regarding some drivers and other issues in my Fedora 12, and I was thinking of moving to the 32-bit one.
http://www.theinquirer.net/inquirer/...leases-firefox
Yawn, didn't they just release FF5 a few days ago? Yes, I know when it was released, just being sarcastic!
If I have a kernel that does not automatically update because I installed it from a deb, will I be missing out on important security updates or the like? I installed the 2.6.34 kernel because I wanted TRIM support, but am very concerned that I will miss something important.
"Broadcom would like to announce the initial release of a fully-open Linux driver for it's latest generation of 11n chipsets. The driver, while still a work in progress, is released as full source and uses the native mac80211 stack. "
I want to track 2.6.33 kernel releases. I can see that kernel.org has a couple of releases like rc1, rc2, rc3, git<>. Kernel 2.6.33 got released in early February this year. So how many releases of kernel 2.6.33 were there? I checked at kernel.org but couldn't find the complete information.
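One way to enumerate them is straight from the stable git tree, where every point release is a tag. A sketch; the repository path is the usual kernel.org one and may have moved since:
Code:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
cd linux-stable
git tag -l 'v2.6.33*'        # lists v2.6.33, v2.6.33.1, v2.6.33.2, ...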
I have been having mysterious problems with my computer recently, and I think it might have to do with my OS not releasing file space. Previously, my OS partition was full; then I deleted/moved some files, but now it says that still no space is available:
The OS is on /dev/sda1 (I am 99% sure; I didn't set it up originally) and is CentOS 4. As you can see, I have only used 7.6GB, and I should have 300MB available. The only thing I can think of is that when it was full, I moved MATLAB from there to the /export drive and added a symbolic link to where it was on the OS drive so it would still work OK. Could this be why the space is not being freed up? We are in the process of installing a 16TB drive so we can free up some space or expand the partition, but somebody else here at work is handling that, so some other option that I could do myself would be best.
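A quick thing worth checking before the new hardware arrives: space from deleted files isn't returned while some process still holds them open, which is a common reason for df and du to disagree. A sketch of the checks; nothing here is specific to this box, and lsof may need to be installed:
Code:
df -h /                # what the kernel thinks is used on the root filesystem
du -sh /* 2>/dev/null  # what the files actually on disk add up to
lsof +L1               # files that were deleted but are still held open by a process;
                       # restarting that process is what finally releases the space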