Server :: Upgrade Server With All Updated Packages And Patches?
Jun 27, 2011
Currently our production server runs Fedora 8. I know it's a very old version; I have only recently joined this company as server admin, and my first task is to upgrade the server with all updated packages and patches without any production downtime, because we have nearly 400 clients accessing our server.
1. Is it possible to do this without any loss of production time?
2. Before the upgrade, what do I need to do?
3. Is there any possibility that functionality which works now will stop working with the newly upgraded packages?
I just did apt-get update, then apt-get upgrade, then apt-get install linux-generic linux-headers-generic linux-image-generic fdutils linux-doc linux-tools. I am now at: Linux 2.6.32-31-generic on an x86_64. But upon console login after rebooting into the new kernel:
[Code]...
Why would Ubuntu insist there are so many updates available when neither apt-get nor Synaptic seems to find anything to upgrade?
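The update count shown at console login is normally produced by update-notifier's apt-check helper, so comparing its output with a simulated upgrade can show where the two numbers disagree. A minimal sketch, assuming a standard Ubuntu install where that helper is present:
Code:
# Count that the login banner reports (path as on a standard Ubuntu install)
/usr/lib/update-notifier/apt-check --human-readable
# Compare with what apt itself would actually upgrade
sudo apt-get update
sudo apt-get --simulate dist-upgrade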
I just did an update on my Debian system and it took a very long time. Now that the upgrades have already been applied, I'd like to know which packages were upgraded and which were not.
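One way to check this after the fact (a sketch, assuming the standard Debian log locations) is to grep the dpkg and apt logs:
Code:
# Packages dpkg actually upgraded, with old and new versions
grep " upgrade " /var/log/dpkg.log
# The same information from apt's own history log, if this apt version keeps one
zgrep -h "Upgrade:" /var/log/apt/history.log*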
I am somewhat new to Linux, and to Fedora especially. I'm currently trying to get a Linux-based Active Directory-style server built at home. I tried using Samba, dhcpd and the bind9 service but couldn't get it to work, so I did some searching and found 389ds on the Fedora Project's pages. Now I'm having issues setting up the directory; here's the log.
Code:
[11/05/29:10:37:47] - [Setup] Info This program will set up the 389 Directory and Administration Servers.
It is recommended that you have "root" privilege to set up the software.
Tips for using this program:
  - Press "Enter" to choose the default and go to the next screen
  - Type "Control-B" then "Enter" to go back to the previous screen
  - Type "Control-C" to cancel the setup program
[11/05/29:10:37:47] - [Setup] Info Would you like to continue with set up?
[11/05/29:10:37:49] - [Setup] Info yes
.....
[11/05/29:10:37:50] - [Setup] Info Your system has been scanned for potential problems, missing patches, etc. The following output is a report of the items found that need to be addressed before running this software in a production environment
.....
What is the right way to keep the time on a server up to date: running the ntpd daemon, or calling ntpdate from a cron job? I have two servers in two different locations. I've used
Quote: # ntpdate ntp1.ien.it
on both servers, and the two clocks ended up about ten minutes apart. How is that possible?
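For reference, a minimal sketch of the cron approach (the schedule and log path are just examples); a continuously running ntpd would instead discipline the clock gradually rather than stepping it:
Code:
# /etc/crontab entry: step the clock once an hour against the same NTP server
0 * * * * root /usr/sbin/ntpdate -u ntp1.ien.it >> /var/log/ntpdate.log 2>&1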
I'm trying to turn an old Acer Aspire One with a tiny 8GB solid-state drive into a lean web server, so I'd like to remove as many packages as possible to free up space. It will be running a standard LAMP install and nothing else. Right now it has Ubuntu Netbook installed, so I need to know everything I can delete and still have it boot and run mysql, apache, etc.
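A rough sketch of how to hunt down and purge space-hungry packages (package names need to be checked case by case before removing anything the system depends on):
Code:
# List installed packages, largest last, to find candidates for removal
dpkg-query -W --showformat='${Installed-Size}\t${Package}\n' | sort -n
# Purge a package together with dependencies nothing else needs
sudo apt-get purge --auto-remove <package>
# Clean up orphaned dependencies and the downloaded .deb cache
sudo apt-get autoremove --purge
sudo apt-get clean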
I don't know why the ClamAV antivirus is not updating, even when I run "freshclam". In the terminal it says it is up to date, but in the ISPConfig 3 interface it is not.
I'm running several web servers on CentOS 5.2. Due to the hosting platform I use, it is recommended that the CentOS 5.3 updates not be installed. I've searched the net a lot but didn't find a proper solution. Is there a way to tell yum to install only patches and updates for version 5.2 and not to upgrade to 5.3?
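One commonly suggested approach (a sketch only, not verified against this particular hosting platform) is to point the repositories at the frozen 5.2 tree on vault.centos.org and exclude the release packages so yum cannot pull the system up to 5.3:
Code:
# /etc/yum.repos.d/CentOS-Base.repo (excerpt) - pin to the archived 5.2 tree
[base]
name=CentOS-5.2 - Base
baseurl=http://vault.centos.org/5.2/os/$basearch/
gpgcheck=1

[updates]
name=CentOS-5.2 - Updates
baseurl=http://vault.centos.org/5.2/updates/$basearch/
gpgcheck=1

# /etc/yum.conf - keep the release packages themselves from moving to 5.3
exclude=centos-release*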
I have integrated AD with Squid and it is working fine. On the Squid server side I used "net ads password" to set a new password for the user, and it was updated successfully. The issue is that the change does not take effect immediately; it takes a long time for the password to update, even after I restarted the smb and winbind services. I want the password to be updated on the AD server immediately. Is this possible?
I have somewhere between 8 and 12 servers, spread among different web hosts, all running CentOS 5.3. Everything on the servers has been compiled from source: nginx, PHP, MySQL, Munin, etc.
I compile from source because I usually run ./configure to customize the build before installing. What would be the easiest way to keep the server software updated? I am considering creating custom packages on a build server which I would keep up to date manually, and then using Capistrano or Puppet to install those packages on all of my servers.
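One possible layout for that custom-package approach (a sketch, assuming an RPM build host running a web server; host names and paths are illustrative):
Code:
# On the build host: publish the custom RPMs as a yum repository
mkdir -p /var/www/html/custom-repo
cp ~/rpmbuild/RPMS/x86_64/*.rpm /var/www/html/custom-repo/
createrepo /var/www/html/custom-repo/

# On each CentOS 5.3 server: /etc/yum.repos.d/custom.repo (hypothetical URL)
[custom]
name=Custom-built packages
baseurl=http://buildhost.example.com/custom-repo/
enabled=1
gpgcheck=0

# Keeping every server current then reduces to:
yum update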
I want to sync two folders, one on a desktop and the other on my server. My objective is to keep the desktop folder always updated with the content of the server folder. If I get this working, I can do the same for the rest of my desktop and laptop users: when online, they can run a script with rsync and update their data. Is it possible to get two-way sync?
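A minimal one-way sketch (host name and paths are placeholders); genuine two-way synchronisation is usually easier with a tool such as unison than with plain rsync:
Code:
#!/bin/sh
# Pull the server folder down to the desktop, preserving attributes and
# deleting local files that no longer exist on the server (server -> desktop only).
rsync -avz --delete user@server.example.com:/srv/shared/ "$HOME/shared/"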
I have configured master and slave BIND servers. Everything works fine, but whenever I add a new zone entry on the master server it is not getting updated on the slave server. In the slave's logs I see this error:
client 192.168.1.1#43428: view external: received notify for zone 'yourdomainname.com': not authoritative
On the master server I do not see any error or warning message. This error clearly indicates that the named.conf file does not have the zone entry in it, or that the domain name is wrong. Checking named.conf, I see that the zone entry has not been added on the slave server. If I add it manually and reload named on the slave, then the zone files (db files) are created without any issue, and any later modifications to the zone's records on the master server are also propagated. My concern is why the zone record is not getting appended to named.conf on the slave server.
Is there anything I am missing in the configuration? I am pasting the steps I followed to configure my master and slave servers:
Configure BIND as master and slave server:
1. Install BIND on your server: yum install bind OR sudo apt-get install bind9
2. Generate an RNDC key using the command rndc-confgen -a -k rndc-key; it will be stored in the /etc/rndc-key file
Master Server IP: 192.168.0.1
Slave Server IP: 192.168.1.1
Master Server Configuration options .....
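For context, a typical pair of zone declarations looks roughly like this (a generic sketch using the IPs above, not the poster's actual configuration); note that the slave's stanza has to be added by hand or by a tool, since BIND does not create it automatically when a zone appears on the master:
Code:
// Master named.conf (192.168.0.1): allow transfers to the slave and notify it
zone "yourdomainname.com" {
    type master;
    file "/var/named/db.yourdomainname.com";
    allow-transfer { 192.168.1.1; };
    also-notify { 192.168.1.1; };
};

// Slave named.conf (192.168.1.1): must list the same zone as type slave
zone "yourdomainname.com" {
    type slave;
    file "slaves/db.yourdomainname.com";
    masters { 192.168.0.1; };
};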
We're about to migrate a set of workstations (Ubuntu 10.04 LTS) to a new Kerberos/LDAP setup. Basically, this requires installing some required deb packages and copying some new .conf files over the original ones. We made a deb package with these "features":
- requires the other needed packages as dependencies
- backs up the original conf files
- copies the new conf files to the right places (i.e. /etc/krb5.conf, /etc/ldap.conf)
The problem is: apt-get complains because the deb is "touching" files owned by other packages (kerberos, ldap, etc.). Therefore, the only way to skip this check is either to force apt-get to proceed or to use the "Replaces" directive in the deb control file, specifying the clashing packages. Something like this:
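A hypothetical control-file stanza of that kind (package and file names are purely illustrative, not the poster's actual package):
Code:
# debian/control (excerpt) - names are illustrative only
Package: site-auth-config
Architecture: all
Depends: krb5-config, libpam-krb5, libnss-ldap
Replaces: krb5-config, libnss-ldap
Description: site-specific Kerberos/LDAP configuration files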
I have installed CentOS 5.3 with Xen on a PowerEdge 2650 machine with 6 GB of RAM. As usual, PAE was already enabled, so I did not have any problem utilising all of the memory. However, after upgrading to the latest release of the kernel (2.6.18-128.7.1.el5xen), the available memory decreased to 4 GB. I then switched back to the old kernel and the 6 GB was there. Finally I switched back to the new kernel again and the 6 GB is there too. So now PAE seems to be enabled, but what concerns me is the inconsistent behaviour. (Also, I am not sure the upgraded kernel was actually the reason.)
I'm building a new backup server, migrating from CentOS to Ubuntu 10.04 LTS and upgrading to Bacula 5 all at the same time. Is there a way to find out why there's a three-month lag? 5.0.2 was released in April, and the currently available packages are 5.0.1. Also, how can I find the policy on future updates? I'd really like to use the core-provided packages but don't want to end up way behind after a year or two.
How do I use apt-get or aptitude to tell me what updated packages are available for my system? I'm moving over from Gentoo, where I had a cron job that would run a command whose output was a list of available updates. I had this and other system-related info emailed to me. I'd like to duplicate that under Ubuntu, but I can't find a way to get the list of available updates.
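A minimal cron-friendly sketch (apticron and cron-apt are packaged tools that do much the same thing with email built in):
Code:
#!/bin/sh
# Refresh the package lists quietly, then print what an upgrade would install
apt-get update -qq
apt-get --simulate upgrade | grep '^Inst'
# aptitude equivalent: list packages with an upgrade available
# aptitude search '~U'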
I am using sbopkg, but for the ~30 third-party packages I have, most of the SlackBuilds are already out of date, and I don't expect they will be updated when security issues or the like arise.
So is the only solution to subscribe to the individual mailing list for each piece of software, or is there a more automatic way?
I did my update back when F13 first came out. A lot of the F12 packages are still on the machine. Should I be concerned? My latest adventure was dealing with the kmod-nvidia packages left behind from F12; someone suggested removing them, and once I did, my updater did its job perfectly.
So the question is: do I need to remove all the F12 packages, or should I wait until there is another conflict? Secondly, if I should remove them, is it a search-and-destroy mission, or can I simply nuke them all in one grouping?
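A quick way to see what is actually left over (a sketch; package-cleanup ships in the yum-utils package):
Code:
# List every installed package still built for Fedora 12
rpm -qa | grep '\.fc12'
# Show installed packages that no longer exist in any enabled repository
package-cleanup --orphans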
Where do all the updated packages get saved on my computer, on which I have installed Ubuntu 9.10? It's really hard to download all those large files on a slow Internet connection, and I don't know where the backups are. Please help me figure out whether I can save those downloaded packages to other devices.
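For reference, apt keeps the downloaded .deb files in /var/cache/apt/archives until they are cleaned; a rough sketch of copying them off to an external drive (the mount point is just an example):
Code:
# The .deb files apt has downloaded (removed by "apt-get clean")
ls /var/cache/apt/archives/*.deb
# Copy them to an external drive for reuse on another machine
mkdir -p /media/usbdisk/apt-backup
cp /var/cache/apt/archives/*.deb /media/usbdisk/apt-backup/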
I have really been enjoying Slackware recently. Using slackpkg and sbopkg it is so easy to keep Slackware running current. Sometimes I feel like a master of Linux, which is absolutely not true. Apart from some objections I have regarding Slackware's philosophy, I have this question. I have about 8 packages installed via sbopkg. My questions:
1) Is there a way to keep these packages updated via a program? (If not, I guess I am obliged to check each of them manually.) 2) Also, sometimes when we build a package via sbopkg it is necessary to build other packages first. Is there an option for sbopkg to install a package and its required dependencies with just one command?
I am running 2 Ubuntu servers, one as a UEC frontend and the other as a UEC node. Last night I started the server upgrade on the frontend. As it promised to take several hours, I left it alone to proceed through the night. This morning it was waiting on a prompt; simply hitting Esc to wake the monitor let it continue upgrading packages. At some point after that it paused on a discrepancy between the eucalyptus.conf file the installer wanted to write and the eucalyptus.conf file that already existed. I reviewed the differences and decided that the current file, as it existed, would suffice. I then received 3 more messages on the console:
Installing new version of config file /etc/init/eucalyptus-network.conf.
Installing new version of config file /etc/init/eucalyptus-conf...
eucalyptus start/running process 15290
Since then there have been no further console messages and no apparent disk or CPU activity. I can shell into the machine and see the following:
Code: root 7077 6784 0 Apr29 tty1 00:00:33 /usr/bin/python /tmp/update-manager-0r8nH5/natty --mode=server --frontend=DistUpgradeViewText
I've just upgraded my dedicated server with OVH from 9.04 to 10.04, and it now refuses to boot with the error "kernel panic: unable to mount rootfs on unknown block (2,0)". With OVH you must use their custom kernel; this has been installed and seems to boot up to that point, but then fails as above. OVH also recommend adding a dev line to fstab, which I've done: [URL] I'm now at a loss as to what to do, since it seems the kernel can't see the hard drives, yet when I boot into rescue mode I can mount all the drives fine.
I had a running server (Mandrake 10.1) that I wanted to migrate to a better version of Linux, so I decided to install the new version on a new hard drive and add the old hard drive, which contained the data files, as a slave. When I finished the installation I tried to find the old data files but couldn't (/dev/hdb); the drive is already mounted, but when I look inside, all the files appear to be hidden.
Before going to sleep I just closed my laptop without shutting down, but the next morning when I unlocked it I had no sound. Does anybody see anything suspicious in the updated packages?