CentOS 5 :: Yum Suddenly Not Marking Packages For Update
Feb 6, 2011
I have checked the yum.log file and the last time an update was performed was Jan 6, 2011. I am the only person who administers this server and I do it remotely via SSH. No one has GUI access to this. At a minimum, the kernel version is older than the latest release. This is the first time since I brought this server online in 2009 that a monthly yum update didn't produce at least a dozen package updates. I've tried disabling the Priorities plugin (as well as rearranging priorities) and I get the same result: no updates available. For the record, I did spend quite a bit of time Googling this problem and doing a forum search here. I found nothing applicable to my particular situation. Here's the output of yum update:
[root@copeland2 log]# yum update
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
* addons: mirror.5ninesolutions.com
* base: mirrors.easynews.com
* extras: mirrors.usc.edu
* rpmforge: ftp-stud.fht-esslingen.de
* updates: centos.mirror.facebook.net
0 packages excluded due to repository priority protections
Setting up Update Process
No Packages marked for Update
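A few generic first checks for a silent "No Packages marked for Update" (a sketch; the paths and plugin names are the stock CentOS 5 ones shown above):
Code:
# Throw away cached metadata and ask the mirrors again
yum clean all
yum check-update

# Make sure nothing is being held back by an exclude= line
grep -i exclude /etc/yum.conf /etc/yum.repos.d/*.repo

# Re-check with the priorities and fastestmirror plugins out of the way
yum --disableplugin=priorities,fastestmirror check-update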
I am trying to update all the packages for 5.4 without going to 5.5. I believe I saw instructions at one point on how to do this, where I could in effect cap the version to stay on 5.4. Doing 'yum update' takes me to 5.5.
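One commonly described way to stay on a point release is to point base and updates at the CentOS Vault and disable the stock repos; a sketch (the vault URLs and repo names below are assumptions, check them against vault.centos.org before relying on this):
Code:
# /etc/yum.repos.d/c54-vault.repo -- frozen 5.4 repos (hypothetical file name)
[c54-base]
name=CentOS-5.4 - Base (vault)
baseurl=http://vault.centos.org/5.4/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5

[c54-updates]
name=CentOS-5.4 - Updates (vault)
baseurl=http://vault.centos.org/5.4/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5

# then set enabled=0 on [base] and [updates] in CentOS-Base.repo
# so 'yum update' only ever sees 5.4 packages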
I have CentOS 5. I enabled the EL repos so I could upgrade PHP to the latest version, and now there are upgrades that yum is not letting me get. I have a Virtualmin VPN plugin that needs Virtualmin to be updated to 5.10, but it seems that since I updated PHP with the EL repos, yum has been reducing my available packages. Here are some outputs, with some things removed for length.
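A quick way to tell whether the priorities plugin is what's hiding the Virtualmin packages (a generic check, not tied to any particular repo layout):
Code:
# Compare what yum offers with and without the priorities plugin
yum check-update
yum --disableplugin=priorities check-update

# See which repo files carry a priority= setting
grep -i priority /etc/yum.repos.d/*.repo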
My Internet connection suddenly stopped working after updating packages with the Synaptic package manager and rebooting. Can anyone tell me why this happened? BTW, mine is Ubuntu 10.10 and I am an absolute newcomer to Linux. I had to configure my network connections again from scratch.
I recently tried a "yum update" and had some errors. By a process of elimination I have isolated the problem to the recent release of the "file" package. Here is the error:
Update Manager used to work fine, but now when I launch it, it appears momentarily in the task bar, then shuts down. I haven't been able to install any updates in a couple weeks. I'm running Ubuntu 10.04.
I just installed Ubuntu 10.10 and I'm trying to update. When I uncheck the packages that I don't want and click the "Install Updates" button in Update Manager, it checks them again and downloads the packages that I don't want.
I was running 10.04 LTS and had decided to stick to the LTS versions, as I'm now running my machine as a server and don't want to be updating regularly. Every time I logged in via SSH I got a message telling me there were packages to update, including a security update. So I did a search to find out how to perform an update on Ubuntu Server from the command line. What I found was to do this:
sudo apt-get update
sudo apt-get dist-upgrade
After doing that I rebooted, but now my machine gives me this message:
init: ureadahead-other main process (794) terminated with status 4
Your disk drives are being checked for errors, this may take some time
Press C to cancel all checks currently in progress
I'm not pressing C yet and am leaving it alone to finish, but I noticed when the machine booted that one of the boot menu options mentioned Ubuntu 10.10, so I'm worried that I've upgraded from 10.04 LTS to 10.10 by accident?
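For what it's worth, apt-get dist-upgrade on its own does not move a machine to a new Ubuntu release (that takes do-release-upgrade or Update Manager), so checking what the install actually reports should settle the worry:
Code:
# Show the release this installation reports
lsb_release -a
cat /etc/lsb-release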
I have an HP a670y desktop computer using the included wireless antenna (looks like a big Sorry game piece). I am running Ubuntu 10.04 x64 dual-booted with Windows Vista x64. I hadn't had internet for a while but just got it today, and decided to do some updating in both Windows and Ubuntu. Ubuntu had detected my router; I copied and pasted the WEP key, then it asked for my password, but my keyboard wasn't connected, so I clicked Cancel on the password window and it connected anyway.
I opened the Synaptic package manager and started downloading some updates while I did some browsing on the internet. I restarted my computer when the updates were done and it asked for my WEP key again. I copied and pasted just as I did before and entered my password, and after it tried to connect it asked for my password again. I thought maybe the internet was down, so I decided to boot up Windows and it connected right away. I wonder if one of my updates messed something up. Has anyone else had this happen?
I have a massive ZFS array on my fileserver. Whenever a disk reports bad sectors to smartmon, I order a replacement, and I shelve the failing one.
And by "shelving the failing one", I mean that I give it a low-level format if applicable, or a destructive badblocks run to possible claim spare sectors to replace the bad ones, then use it to dump my DVDs (and lately BluRays) on, so that I can use it with my HTPC and bring it with me when going to my friends to watch movies. It's just a really easy and portable way to watch movies with XBMC. I have the stuff on pressed discs already, so I'm not dependent on their reliance, and the dying drive just gets a hospice life serving as quick-access media storage. Keeping in mind Google's reports that drives are 39x more likely to die within 60 days after their first SMART error, I'm expanding that period by the fact that these drives mostly remain on their shelves and are only plugged into the SATA bay once or twice every year.
I'm just saying this to make clear that I'm not confused about these drives dying, and I'm not looking to elongate their lives ;)
So. Sometimes these drives, after a badblocks run, simply claim fresh sectors from the spare pool, but sometimes there aren't any left, and I face the fact that there are bad sectors in my FS. That's not a problem if you use one of several Linux filesystems, as mkfs.* often takes a badblocks list as input. But seeing as I sometimes bring a drive or two to my girlfriend's (Mac) or one of my friends (usually Windows), I've decided to use NTFS for these things. Up until now, when a drive had unrelocatable bad sectors, I've just written data to it, re-read it, and files that came back bad were put in a "BAD_SECTOR_FILES" folder on the drive.
Sure, it works, but it would be really nice to be able to just mark those sectors bad instead. It's a lot of hassle the other way around.
So I read some posts, most of which quickly change the subject to the often-accurate advice of "replace your drive!", and some suggest SpinRite, but really, I don't see why I should pay that much money for such a trivial task.
The alternative is to use ext3, but I'd like to hear if someone knows how I can feed badblocks output to mkfs.ntfs so that the bad blocks aren't used, or if there are other tools (I could use Windows in a VM) that do the same. I'm confused about chkdsk; it seems its bad-sector handling is FAT-only?
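For the ext3 fallback at least, the badblocks list can be fed straight to the mkfs; a sketch (the device name is a placeholder, the -w test is destructive, and the block size passed to badblocks should match the filesystem's):
Code:
# Destructive write test, logging bad blocks with the block size the FS will use
badblocks -wsv -b 4096 -o /tmp/sdX1.bad /dev/sdX1

# Build ext3 on the drive, telling it to keep the listed blocks out of use
mkfs.ext3 -b 4096 -l /tmp/sdX1.bad /dev/sdX1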
I recently switched from fedora 14 to 15. Today my computer suddenly shut down during an update, as I thought it overheated I decided to clean the cooling system and reapply thermal paste. However, now the system won't boot anymore ("kernel panic - not syncing : VFS: unable to mount root FS on unknown-block").
I would like to either solve this booting problem, or mount the fedora 15 filesystem and recover some files. Whichever is easier.
I have another drive with fedora 14 (antec below) which boots fine:
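If mounting and recovering files turns out to be the easier route, the Fedora 15 root can usually be reached from the working Fedora 14 install; a sketch assuming the default Fedora LVM layout (the volume group and logical volume names below are guesses, check the lvs output):
Code:
# Scan for and activate the Fedora 15 volume group from the Fedora 14 system
vgscan
vgchange -ay

# List logical volumes to find the right one, then mount it read-only
lvs
mkdir -p /mnt/f15
mount -o ro /dev/mapper/vg_something-lv_root /mnt/f15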
I have a CentOS 5 box here (actually running the latest release of EasyIDS). Everything was working fine for about half an hour, and now I can't access the box through a web browser on its IP address. I can't ping the box either. If I log on to the server, I can ping other boxes on the network and external resources, but nothing seems to be able to see it. I checked to make sure HTTPD is running, and it is. The firewall is not running (or at least that's what it says when I do service --status-all).
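A few console-side checks that may narrow this down (generic diagnostics, not specific to EasyIDS; eth0 below is an assumption):
Code:
# Confirm the interface still holds the address you expect
ip addr show

# Inspect the packet filter directly rather than trusting the service status
iptables -L -n -v

# Watch whether pings from another box even reach the interface
tcpdump -n -i eth0 icmp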
I run Slackware Linux 13.1. I have a hard drive with an ext2 filesystem, and I would like to mark the filesystem as Clean without running fsck on it. I think I can do that with the debugfs command, but in the help there seems to be only a command to mark the filesystem as Dirty. Is there a way to manually mark an ext2/ext3/ext4 filesystem as Clean?
I recently set up a new LAMP server, and after some days I faced a strange problem: suddenly the server load goes up to 80 or 100, but there is no weird process running. The normal load is between 0.5 and 1.5. The server has two HDDs in hardware RAID 0.
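Load that high with no obviously busy process is often I/O wait rather than CPU; a couple of generic things to watch (iostat comes from the sysstat package):
Code:
# 'wa' in top and vmstat shows time spent waiting on disk I/O
top
vmstat 5

# Per-device utilisation and await times for the RAID 0 pair
iostat -x 5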
My issue is with Linux routing tables using iproute2, coupled with the iptables MARK target. When I create an iproute2 rule that looks up a table which routes an address as type unreachable (or blackhole, or prohibit), and a higher-priority rule, which also matches on a fwmark, looks up another table that routes the address as type unicast, a locally generated packet to that address never even goes through iptables packet filtering/mangling to get marked, because the lower-priority rule that doesn't match on a fwmark already says the address is unreachable. For example, I have two rules installed with ip:
Code:
10:     from all fwmark 0x1000 lookup routeit
20:     from all lookup unreach
ip route list table routeit
[code]....
Now, in the packet filter, I have an iptables rule to mark packets to destination 10.0.0.5 with 0x1000 in the mangle table and OUTPUT chain. When I generate a packet locally to 10.0.0.5, all programs get ENETUNREACH (tested with strace). However, if I take out the route entry that 10.0.0.0/8 is unreachable, it all works fine and the routes in the routeit table get applied to marked packets (I know because my default gateway would not be 1.2.3.4, but wireshark shows packets being sent to the MAC address of 1.2.3.4).
The best I can surmise is that when generating a packet locally, the kernel tests the routing tables in priority order but without any mark to see if it is unreachable/blackhole/prohibit, and doesn't even bother generating the packet and traversing iptables rules to see if it would eventually be marked and thus routed somewhere. Then I assume after that step, it traverses iptables rules, then traverses the routing tables again to find a route. So is there any way around this behavior besides adding fake routes to the routing table (e.g. routing 10.0.0.5 to dev lo in the unreach table in this example)?
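For reference, here is the described setup condensed into commands, plus the fake-route workaround already mentioned above, written out as a sketch (the table names assume matching entries in /etc/iproute2/rt_tables):
Code:
# Mark locally generated traffic to 10.0.0.5 in the mangle table, OUTPUT chain
iptables -t mangle -A OUTPUT -d 10.0.0.5 -j MARK --set-mark 0x1000

# Rules as shown: marked packets use routeit, everything else falls through to unreach
ip rule add pref 10 fwmark 0x1000 table routeit
ip rule add pref 20 table unreach

# Workaround: give the pre-mangle lookup *some* route for 10.0.0.5 so the packet
# is generated at all; after the OUTPUT mangle marking, the kernel re-routes it via routeit
ip route add 10.0.0.5/32 dev lo table unreach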
I have a copy of the DVD ISO for CentOS 5.3. I downloaded the updated packages into the CentOS directory and then ran the repomanage Perl script to remove the old files from the directory. I then ran createrepo and built the new ISO image with the script code below.
I am using VMware to test the build, so I have the CD pointing at the ISO image. CentOS starts up fine and dandy, asking the questions for the interactive boot. It gets through the stage of checking dependencies, and then when it starts to copy the image down to the "hard drive" is when the problem occurs.
One of the updated files is file-4.17-15.el5_3.1.i386.rpm (file-4.17-15.el5.i386.rpm was removed using repomanage), but the loader is looking for the removed file. I've looked through the dependencies, but nothing asks specifically for the removed file; all of them ask for /usr/bin/file with no version numbers. I have run an rpm --test on all the RPMs, but I haven't been able to go through the output to see if there is a request for a specific version.
I did try this, but it just moved on to the next file. I did not replace the file version, but then it found another problem just like this one: the updated file is in the repo, but it is requesting the old version. I looked through the filelists and the other metadata to see if maybe that was the problem, but they were updated to the new versions.
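One thing worth ruling out is stale repository metadata: if createrepo wasn't re-run (with the groups file) after pruning, the installer keeps asking for packages that are no longer there. A sketch of the usual respin steps, assuming the standard CentOS 5 DVD layout with RPMs under CentOS/ and metadata under repodata/:
Code:
# From the root of the unpacked DVD tree: drop superseded packages...
repomanage --old CentOS/ | xargs rm -f

# ...then rebuild the metadata, keeping the package groups the installer expects
createrepo -g repodata/comps.xml .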
I've noticed this happen with every CentOS installation I've done in the past and it's confusing me. On the software selection screen, I always select "Server", leave the extras option unchecked, and check "Customise now". The only things I choose are the editors (to get vi), Web server, and server configuration tools (and this time also Java). I didn't select any GUI programs, yet it still installs things like X, GNOME components and also Samba. Why does it do that? There's no way they're needed as dependencies. Is there something I'm missing when selecting the software components? Why does it still install Samba when I didn't select it from under the "Servers" components? Or have I misunderstood what software selection does, and it installs all those components regardless but doesn't automatically turn the services on?
I am trying to update Ubuntu 9.10 UNR (it's the netbook edition, but otherwise the same as any other), but I get an error: it says it can't get some packages from the servers.
When I allow Update Manager to download and install security and other updates, it saves the deb package files in /var/cache/apt/archives. I don't really need or want to keep these files. I seem to recall that in versions years ago the default was to delete them after installation. But the issue at the moment is how to get 10.04 and later to automatically delete the files after installation. Space is not a concern on my desktop PC or my server. It is an issue with various virtual machine installations, as they are limited in disk space and the more they take up, the more there is to back up.
I have tried telling Synaptic to delete the files after installation but this does not change the performance of Update Manager.
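Two low-effort ways to get the cached .deb files cleared automatically; both rely on stock apt facilities on 10.04 (the file names below are arbitrary, and note that autoclean only removes packages that can no longer be downloaded):
Code:
# (run as root)
# Option 1: let the daily apt cron job autoclean the cache once a week
echo 'APT::Periodic::AutocleanInterval "7";' > /etc/apt/apt.conf.d/20autoclean

# Option 2: simply empty the whole cache on a daily schedule
printf '#!/bin/sh\napt-get clean\n' > /etc/cron.daily/aptclean
chmod +x /etc/cron.daily/aptclean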
I want to update my Fedora (yum update), but not all of the packages in the repositories, only packages smaller than 1 MB. How can I do this? Does yum have this ability?
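yum itself has no size filter, but repoquery from yum-utils can list pending updates with a size field that a small pipeline can filter on; a rough sketch (I'm assuming %{size} here reports the package size in bytes):
Code:
# List pending updates as "<bytes> <name>", keep those under 1 MB, feed them to yum
repoquery --pkgnarrow=updates --qf '%{size} %{name}' \
  | awk '$1 < 1048576 {print $2}' \
  | xargs --no-run-if-empty yum -y update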
I'd like to know if there is a way to blacklist certain packages when updates come around. The reason for this is that I have two repositories that contain SMPlayer and MPlayer, but one repository's versions of these aren't VDPAU-enabled (even though that build is newer).
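The usual yum answer is an exclude= line, either globally in /etc/yum.conf or inside the stanza of the repo whose builds you want to skip; a sketch with hypothetical repo and package globs:
Code:
# /etc/yum.repos.d/somerepo.repo -- hypothetical repo that should never supply the players
[somerepo]
name=Some repo (placeholder)
baseurl=http://example.com/somerepo/
enabled=1
exclude=mplayer* smplayer*

# or as a one-off on the command line
yum update --exclude='mplayer*' --exclude='smplayer*'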
1. When we install packages and run Update Manager, are all the packages stored in /var/cache/apt/archives?
2. Let's say I make a copy of all the packages I installed and updated and store them on a backup drive. How do I make a freshly installed Ubuntu copy/use these packages instead of downloading them from the Internet?
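For question 2, the simplest trick is usually to drop the saved .deb files back into apt's cache on the new install, since apt checks the cache before downloading (the /backup path below is just an example):
Code:
# Copy the saved packages into the fresh install's cache
sudo cp /backup/debs/*.deb /var/cache/apt/archives/

# apt and Update Manager will reuse any cached .deb whose version still matches
sudo apt-get update
sudo apt-get upgrade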
How can I update LaTeX packages? Is there an easy way to do it in Linux, like there is in Windows (MiKTeX, for example)? I have installed all the TeX Live packages from the Ubuntu Software Center, but I still seem to have outdated packages.
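Which update path applies depends on how TeX Live was installed; a sketch of both: the Ubuntu-packaged TeX Live is updated only through apt, while a vanilla TeX Live installed from tug.org ships tlmgr, which behaves much like the MiKTeX update manager.
Code:
# TeX Live from the Ubuntu repos: updates arrive only via apt
sudo apt-get update && sudo apt-get upgrade

# Vanilla TeX Live from tug.org: tlmgr manages and updates packages directly
tlmgr update --self
tlmgr update --all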
E: Type 'ain' is not known on line 3 in source list /etc/apt/sources.list.d/gnome3-team-gnome3-natty.list
E: The list of sources could not be read.
I get this error when I try to update my packages. Does anybody know a fix?
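That error usually means line 3 of the named file is malformed; the stray "ain" looks like the tail of "main" left over from a wrapped line. Editing the file so each entry sits on a single line of the usual form should clear it. The URL below is a guess at what the gnome3-team PPA entry is meant to look like:
Code:
# /etc/apt/sources.list.d/gnome3-team-gnome3-natty.list -- one entry per line
deb http://ppa.launchpad.net/gnome3-team/gnome3/ubuntu natty main
deb-src http://ppa.launchpad.net/gnome3-team/gnome3/ubuntu natty main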