Ubuntu Installation :: Kernel Panic On Boot After Upgrade
Jan 25, 2011
I've been trying to get Ubuntu onto my beloved 4-year-old Acer desktop that's been chugging along like a tank. However, after either a fresh install or an upgrade, I would get the following error: "Kernel panic - not syncing: VFS: Unable to mount root FS on unknown-block(0,0)". I've looked here and there, and if I understand correctly, one of the possible issues is the kernel not recognizing my hard drive. One of the suggestions was to upgrade the kernel; however, I have no idea how to do such a thing if I can't get into the OS.
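For what it's worth, one common way around that is to chroot into the installed system from a live CD and reinstall or upgrade the kernel from there. A rough sketch, assuming the Ubuntu root partition is /dev/sda1 (adjust the device to your layout):
Code:
# from an Ubuntu live CD terminal
sudo mount /dev/sda1 /mnt                  # your root partition may differ
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt
apt-get update
apt-get install --reinstall linux-image-generic   # reinstall/upgrade the kernel package
update-initramfs -u
update-grub
exit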
I am running Debian squeeze. A while ago I upgraded my kernel to 2.6.38 from backports. Just now I thought it would be good to upgrade to 2.6.39 from backports. The upgrade went fine, but after rebooting I get a kernel panic right away.
"No filesystem could mount root, tried:" "Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)."
This is the first time one of my Linux installations has halted/panicked on boot, so I don't know what to do now. I tried booting the recovery entry from the GRUB boot menu, but got the same result.
I ran an upgrade to 11.04. It boots fine into KDE, then panics (flashing Caps lock) about a minute later. The Live CD runs fine. Here's the part of my syslog that looks related:
I learned yesterday that doing a massive upgrade on my system while moving was a BAAAD idea. The upgrade process was going along just fine; it had all downloaded and was actually installing the last time I saw it. I unplugged my laptop to move a bookcase out, and completely forgot to plug it back in. After I got back from moving a load, I found my computer was off. I tried to boot up, and I got an error that kind of freaked me out. It reads as follows:
Code: kernel panic not syncing vfs unable to mount root fs on unknown-block(0,0)
I'm able to get to GRUB just fine, and my Windows partition loads up just fine, but all of the Linux kernels fail similarly when I attempt to load them. I'm sure that I can fix this, I just have no idea how. It probably has something to do with a live CD.
I had 9.10 installed on my IBM Lenovo Thinkpad, x301. I was performing updates as normal, and chose the Upgrade button to upgrade me to 10.04. Everything started fine, but upon reboot, no bueno, Kernel Panic!
The exact message was "Kernel panic - not syncing: VFS: Unable to mount root fs".
I thought this was a GRUB issue, since GRUB 2 is now installed... But it was not. I think it ended up being a problem with some of my configuration files.
I have three kernels I tried:
1. 2.6.32-22
2. 2.6.31-21
3. 2.6.31-20
The first threw me into the kernel panic. The second would hang on "init crypto disks" The third would hang on "checking battery state"
I noticed (from reading another thread) that while these are loading you can press Alt+Ctrl+F1 through F6 and get a prompt. (I also had my home directory encrypted and thought that was part of the problem, but it wasn't.)
Once I got past the login, I was able to poke around. I tried to manually start GNOME via "sudo service gdm start", but it failed, saying it was missing a configuration file. Then I tried "sudo dpkg-reconfigure gdm" and it would not work either, saying some configuration files were missing or broken. It also said something about dpkg --configure -a, which I assume configures everything...
So I tried "sudo dpkg --configure -a" and selected 'y' for every option, which basically installs the package maintainer's default settings, and voila! It works.
Just wanted to share that knowledge for the others stuck in upgrade hell.
Normally I would just copy my files off and reinstall, but my drive was encrypted... another headache. I guess that's good in case someone steals my laptop.
I got a notification today that there was an upgrade available in Ubuntu 9.10 64-bit. After the update I restarted my system, and while booting I encountered this error message:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,17)
Does this have something to do with the OS looking at the wrong HD? There's no command prompt to actually do anything, and I tried booting in safe mode with the same problem. Let me know what I can do!
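For reference, unknown-block(8,17) corresponds to /dev/sdb1, so the kernel probably is being pointed at the wrong drive or partition. One hedged thing to try is editing the boot entry for a single boot from the GRUB menu (the device names below are only examples):
Code:
# Highlight the Ubuntu entry in the GRUB menu and press 'e' to edit it.
# On the line beginning with "linux" (or "kernel" in legacy GRUB), change e.g.
#     root=/dev/sdb1   ->   root=/dev/sda1
# or, more robustly, point it at the filesystem UUID:
#     root=UUID=<uuid-of-the-root-partition>
# Then press Ctrl+X (GRUB 2) or 'b' (legacy GRUB) to boot the edited entry.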
I installed mythbuntu from a live CD on an old machine with an IDE hard disk just to play with it. It worked a treat, so I bought an Acer Aspire Revo R3610 and used dd to copy the old HD contents to the new one, and then gparted to resize the ext4 partition.
On boot I got - Kernel panic: VFS: Unable to mount root fs on unknown-block(0,0)
I checked every GRUB 2 command and it all looked fine. I then tried booting the original kernel that the live CD installed (2.6.31-14) and it booted! I tried 2.6.31-17 again, and got the panic.
After much googling, I found a suggestion to run "sudo update-initramfs -k all -c -v", and this worked.
Is this expected behaviour? One of the delights of Linux is that installs move smoothly between machines, but this one didn't. Was this due to the old machine using IDE and the new one AHCI SATA? How do I get an initramfs that's flexible and will boot smoothly on any given hardware?
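For what it's worth, initramfs-tools only packs in the drivers the current machine appears to need when MODULES is set to "dep"; setting it to "most" makes the image far more portable between machines. A minimal sketch using the standard initramfs-tools paths on Ubuntu:
Code:
# make the initramfs generic rather than tailored to the current hardware
sudo sed -i 's/^MODULES=.*/MODULES=most/' /etc/initramfs-tools/initramfs.conf
sudo update-initramfs -u -k all      # regenerate the initramfs for every installed kernel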
I was using a dual-boot system (XP + Ubuntu 10.04) and decided to replace both with Jolicloud OS, and then it started. At about 86% of the Jolicloud install it showed me an error about the HDD partition. I tried different partition types and also the option to install on the entire HDD; none of my tries worked (even after many HDD formatting attempts), so I was only able to use Jolicloud as a live user from USB. I then burned an Ubuntu ISO to disc, but on boot-up it showed a boot error; I checked the BIOS and everything seemed to be OK with the boot order. So after many tries I took my old Gutsy Gibbon 7.10 live CD. I first updated it to the LTS version (8.04 Hardy Heron) and after that I updated it to 10.04, but it seemed not to finish; I saw some error in the terminal, the installer exited 13 minutes before the finish, and then it had a kernel panic. After that I tried to download small Linux distros like Austrumi and Puppy Linux, but they showed a boot error or the CD didn't even open the install dialog before showing a boot error.
I am trying to install Fedora on my computer but I am getting a kernel panic at live CD boot, right after the boot menu. It happens with both F13 and F14 (all x64; F14 x86 seems to boot fine, but I'm trying to host an x64 guest OS on it, so I need the x64 version to work).
My system specs: dual Opteron 265, 4GB RAM, Asus K8N-DL (nVidia nForce Pro 2000, BIOS 1010).
I also tried installing F14 on another computer (which worked flawlessly) and putting that HDD into the computer in question, which gave me the same kernel panic.
I recently upgraded from FC7 to FC10 (and the kernel from 2.6.22.9 to 2.6.27.19). I did the upgrade through yum (first installed the FC10 RPMs, then did yum upgrade). Now the new kernel won't boot. These are the options in GRUB for the new kernel:
Today I upgraded one of my computers with the following command
% yum upgrade
Before the upgrade the computer was running CentOS 5.3 with the versionlock plugin and kernel 2.6.18-128. The update went smoothly (no dependency problems).
If I try to reboot with the new kernel (2.6.18-194), I get the following:
Found volume group VolGroup00 using metadata type lvm2
2 logical volumes in VolGroup00 now active
mount: error mounting /dev/root on /sysroot as ext3: No such device
setuproot: moving /dev failed: No such file or directory
[Code].....
If I reboot with the previous kernel (2.6.18-128), everything is fine.
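"No such device" while mounting /sysroot usually points to the new kernel's initrd missing a storage or filesystem driver. A hedged fix, run from the working 2.6.18-128 boot, is to rebuild the initrd for the new kernel; the exact version string (e.g. the .el5 suffix) used below is an assumption, so check /boot first:
Code:
# as root under the working kernel
ls /boot/initrd-2.6.18-194*              # confirm the exact new kernel version string
mv /boot/initrd-2.6.18-194.el5.img /boot/initrd-2.6.18-194.el5.img.bak
mkinitrd /boot/initrd-2.6.18-194.el5.img 2.6.18-194.el5
# if a specific driver is known to be missing, it can be forced in:
#   mkinitrd --with=ext3 --with=ata_piix /boot/initrd-2.6.18-194.el5.img 2.6.18-194.el5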
I have the following strange thing with a RHEL4 installation. Since last week, when the system did a reboot, something is really messed up. During boot we get the following messages (don't mind the 'strange' typos; my colleague typed them 'blind' from the screen):
Code:
The strange thing is that we never see a 'could not mount blabla' or similar message. First we thought it was a failing kernel update by Plesk, but even after manually updating the kernel with RHN RPMs, we still get the same message. Booting into rescue mode and then chrooting into the system works; after that we can even start things like Plesk and so on.
We double checked things with another RHEL4 install, and at least two things were odd:
1: the working machine has /dev/dm-0 and /dev/dm-1, the broken one doesn't
2: some files on /dev didn't have group root, but 252
We tried to recreate the /dev/dm-X nodes with [vgmknodes -v], output:
Code:
Running fdisk on /dev/sda shows: /dev/sda2 XX XXX XXXXX Linux LVM (I removed the numbers because this line is from another machine, but the rest was identical).
We have a copy of the boot partition, so if you need more info please let me know.
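For anyone hitting the same missing /dev/dm-* nodes, a short sketch of what can be tried from the rescue environment, assuming the usual VolGroup00 naming (substitute your own VG name):
Code:
# from rescue mode, before chrooting
vgscan                          # look for volume groups
vgchange -ay                    # activate them; this creates the /dev/mapper entries
vgmknodes -v                    # recreate /dev/VolGroup00/* and /dev/dm-* nodes
ls -l /dev/mapper /dev/dm-*     # the nodes should now exist with a sane group (root/disk)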
grub.conf:
Code:
last part of init extracted from initrd-2.6.9-78.0.8.ELsmp.img:
Yesterday I was prompted by my update manager to update some packages. I really don't remember which, but I updated.
After the reboot the box now kernel panics. I don't believe I get an Ubuntu splash screen, and at this point I can't figure out how to get to GRUB, in case a kernel was upgraded and it's possible to boot a different kernel.
I noticed that words like 'mantis', 'oops' and 'dvb_core' were part of the text I get on the screen.
Could this be because a dvb* upgrade breaks something important?
If needed, I should be able to boot via live CD, or I could take a picture of the errors.
I have a system that was upgraded from Debian 7 to 8. Unfortunately it is not able to boot from the new 3.16 kernel; only the old 3.2 kernel boots. I transferred a backup, installed it in VirtualBox, redid the upgrade, and I can reproduce the error. The last error before the "panic" is this line:
Code:
[ 59.073579] Freeing unused kernel memory: 216K (ffff8800017ca000 - ffff880001800000)
Loading, please wait...
[ 59.226154] systemd-udevd[53]: starting version 215
[ 59.326564] random: systemd-udevd urandom read with 4 bits of entropy available
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... /init: .: line 210: can't open '/scripts/init-premount/ORDER'
[ 59.552148] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000200
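The failure to open /scripts/init-premount/ORDER suggests the 3.16 initramfs is incomplete or corrupt rather than the kernel itself being broken. Since the old 3.2 kernel still boots, one hedged thing to try from there is regenerating the initramfs for the new kernel; the version string below is the stock Jessie one and may differ on your system:
Code:
# booted into the working 3.2 kernel, as root
dpkg -l 'linux-image-3.16*'              # find the exact installed 3.16 package/version
update-initramfs -u -k 3.16.0-4-amd64    # regenerate its initramfs (substitute your version)
update-grub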
This is what I did: I downloaded the latest stable kernel archive from kernel.org and extracted the archive into my download directory (I don't think that matters, though). Then I downloaded and installed the ncurses archive (needed for menuconfig). Then I opened a terminal, navigated to the directory that was extracted from the archive, and issued the following commands.
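The commands themselves seem to have been cut off from the post; for context, a typical from-source build with this layout usually looks roughly like the sketch below (a generic example, not necessarily what was actually run):
Code:
cd ~/Downloads/linux-2.6.36          # the directory extracted from the kernel.org archive
make menuconfig                      # needs ncurses; select drivers and features
make                                 # build the kernel image and modules
sudo make modules_install            # install modules under /lib/modules/<version>
sudo make install                    # copy the kernel to /boot; on many distros this also updates the bootloader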
I started installing the Maverick beta a few days ago, but the update server was running inordinately slowly, so I canceled it. Several times since, I have attempted to continue the installation, but I've been unable to reach the server. Shortly after this happened, I could no longer boot normally; I get the error:
Code: Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
Choosing the previous kernel fixes the problem.
So, I obviously want to upgrade to the release version now. When I open the update manager, I get asked if I want to do a partial upgrade to complete the install. I'm a bit leery of doing this since I only have one previous kernel to go back to (my list got really long, and I have another operating system entry underneath, so I set my automagic boot manager to keep only two), and if I can no longer boot after the upgrade, I'll have to use Windows until it gets fixed... So, should I finish the upgrade, try to troubleshoot the error, or do something else to jump straight to the latest release after being partway through the upgrade?
Edit: The PC is broken and I'll buy a new one. Don't bother reading this thread.
I have Kubuntu 9.10 installed on my Acer 7520g laptop. It used to work fine.
But recently my laptop has often gone into a kernel panic, sometimes at boot, sometimes after hours of normal usage. Today it has not been able to boot properly from the HD a single time.
When I boot it up in recovery mode, it kernel panics and displays this (normal mode displays just the graphical progress bar, no text):
Code:
00010c0f
[ 9.620019] TSC f0ed3f8e6
[ 9.620019] PROCESSOR 2:60f82 TIME 1270650316 SOCKET 0 APIC 0
[ 9.620019] This is not a software problem!
[Code]....
I'm sure this is a pure software problem, because the Ubuntu 7.10 live CD boots up normally and I got X, GNOME and everything working, so the CPU must be working perfectly.
I read the logs in /var/log but there were no error messages.
I prepared an ISO from Ubuntu 8.04.3 server, which uses kernel 2.6.24-24-server. My installer script in the initrd is:
Code:
iecho Preparing to install
cd /
mount -nt tmpfs tmpfs tmp

iecho partitioning hard disk, creating filesystem
mknod /tmp/sda b 8 0
parted --script /tmp/sda mklabel msdos
parted --script /tmp/sda mkpartfs primary ext2 0 -- -0
parted --script /tmp/sda set 1 boot on

iecho configuring filesystem
mknod /tmp/sda1 b 8 1
tune2fs -j -i 6m -T now /tmp/sda1

iecho mounting filesystem
mount -nt ext3 /tmp/sda1 /mnt

iecho mounting CD-ROM
mknod /tmp/hdc b 22 0
mkdir /tmp/cdrom
mount -nt iso9660 -o ro /tmp/hdc /tmp/cdrom
.
.
.
My menu.lst file is:
I decided to try Linux recently. I downloaded CentOS-5.3-i386-bin-DVD.iso. Installation was successful but the OS reports Kernel Panic while booting. At first I can see messages:
Memory for crash kernel (0x0 to 0x0) not within permissible range
PCI: BIOS Bug: MCFG area at e0000000 is not E820-reserved
PCI: Not using MMCONFIG
The OS continues to boot, but then shows kernel panic.
I have tried to solve this problem for several days. I've installed the OS at least 10 times, changing conditions: I tried the x86-64 version, disabled PAE in the BIOS, and performed manual partitioning without LVM. The result is the same.
My PC has an Intel Core 2 Duo E8400 CPU (no overclocking), 4GB RAM (DDR2-800) and an ATI 3470 GPU. Windows runs smoothly on it; I can encode video for several hours without problems. All fans are working, the system is not overheating, and MemTest86 showed no errors.
I've installed CentOS on my laptop (Core 2 T5600 CPU, 2GB RAM, Intel G945 video) and it works nicely; I like it very much.
For testing I've installed openSUSE 11.1 on the PC; everything is OK, and I'm writing this message from it.
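Given the "MCFG area ... is not E820-reserved" warning on the problem PC, one guess worth testing (no more than a guess) is booting the CentOS kernel with MMCONFIG or ACPI-related options disabled by appending parameters to the kernel line at the GRUB menu; the kernel path and root device in the example are only illustrative:
Code:
# press 'a' (or 'e') on the CentOS entry in GRUB and append to the kernel line, e.g.:
#   kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 pci=nommconf
# other parameters sometimes worth trying one at a time:
#   noapic    acpi=off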
When I was in the office, something funny happened. My laptop power cord seemed to be disconnected and it was running on battery power. I was away from my desk, and by the time I came back the laptop had gone to sleep, maybe due to the battery getting drained. I packed up and came home, plugged in and booted, but Ubuntu gave lots of errors and didn't boot. I went to my Windows partition and burned the ISO I had earlier downloaded and kept on the Windows partition for safety. I booted off the CD and issued a sudo fsck command from the terminal. fsck did its job and gave a clean bill of health for my /dev/sda5 (where Ubuntu is).
We have not actually purchased support on a 2nd seat yet, so I can't go to them for this yet, and purchasing the seat may be silly if the machine can't run the OS.
I have tried several times to install RHEL 6 Workstation onto a server machine. It has a dual-drive RAID filesystem, whose configuration I had nothing to do with. The install proceeds nicely but displays an mdadm error 127 before shutting down for the first real boot.
When booting, a weird progress bar with at least three colors proceeding at different rates displays for about 5 seconds, followed by a very verbose kernel panic which mentions tainted swap and scheduling while atomic. I suspect the error that caused the mess scrolls off screen too quickly to record.
Does anybody have a clue what might be happening? Even if I do a minimal install this happens, so it appears to be a very low-level hardware problem rather than a corrupt package.
I should mention that the machine runs Ubuntu 10.10 just fine, but my lab PI wants to run it as a Red Hat system.
I have an 8.04 x64 server installation that is getting a kernel panic on boot no matter which kernel I choose, even recovery mode.
[ 27.738620] raid1: raid set md0 active with 1 out of 2 mirrors
Done.
Begin: Running /scripts/local-premount ...
kinit: name_to_dev_t(/dev/disk/by-uuid/c224854e-37bd-491a-b895-48e8bf07fe00) = md1(9,1)
kinit: trying to resume from /dev/disk/by-uuid/c224854e-37bd-491a-b895-48e8bf07fe00
[ 31.665733] Attempting manual resume
kinit: No resume image, doing normal boot...
Done.
[ 31.717461] kjournald starting. Commit interval 5 seconds
[ 31.717471] EXT3-fs: mounted filesystem with ordered data mode.
Begin: Running /scripts/local-bottom ... Done.
Done.
Begin: Running /scripts/init-bottom ... Done.
init: Error parsing configuration: No such file or directory
[ 32.064009] Kernel panic - not syncing: Attempted to kill init!
I have built a new server, and now I am trying to restore my MySQL database. I have a backup, but it is outdated; I was hoping there was a way to copy the files over and have them work on the new installation. I can boot the old server with the installation CD using recovery mode, get to the HDD, and mount an smbfs share from the new server. I have tried to cp the files while retaining the permissions and get a permission-denied error; if I copy without retaining the permissions they copy just fine, but they do not work.
I have set up the new server with very nearly the same versions of all the software. I have built a new instance of my service using all of the same users and passwords. I then backed up the new DB, moved it over, and attempted to copy the old DB over. I did not copy it straight to the mysql folder, as I did not share that folder out. Any other ideas for getting this DB recovered? What about getting my old server to run again just long enough to get a MySQL dump?
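Whether straight file copies work depends heavily on the storage engine: MyISAM tables can often be copied directory-for-directory (with ownership restored to mysql:mysql), but InnoDB data generally cannot be moved that way. A hedged sketch of the "run the old server just long enough for a dump" route from the recovery-mode boot; the mount point and service name are assumptions, so adjust to the rescue environment:
Code:
# in the rescue shell, with the old root filesystem mounted at /mnt
chroot /mnt /bin/bash
/etc/init.d/mysql start                        # or: mysqld_safe &
mysqldump -u root -p --all-databases > /tmp/alldb.sql
exit
# /mnt/tmp/alldb.sql can then be copied to the new server and restored with:
#   mysql -u root -p < alldb.sql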
I have had problems booting my PC after kernel updates several times. In the past I just reinstalled Ubuntu, and after several tries with running Update Manager things were working again. This time I applied another recommended set of updates, including a kernel upgrade, and got the usual "Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)". After booting from a live CD and running boot_info_script I rebooted again, and this time I am getting tons of errors which seem to be generated while running GRUB. After a few minutes of 'error: syntax error' and 'error: Incorrect command' scrolling through my screen, GRUB gives me a grub> prompt... Not what I expected! Here is the output of the boot_info_script:
Code:
Boot Info Script 0.55 dated February 15th, 2010
============================= Boot Info Summary: ==============================
=> Grub 2 is installed in the MBR of /dev/sda and looks on the same drive in
   partition #1 for (,msdos1)/boot/grub.
=> Grub 2 is installed in the MBR of /dev/mapper/isw_cagifdjehe_Volume0 and
[code]...
I have two RAIDed disks (mirrored and showing OK during POST). The disk shows above as a6071eb5-6fc6-45b3-babb-c1a2156278d7. Ubuntu 10.10 installs fine from a LiveCD, but gets broken by kernel updates. I can see the original 1 TB disk when I start with the LiveCD and I can access all the files.
I was finally able to install Fedora 11 x64 after choosing to only install packages from the repository on the install DVD. Prior to that, when I had chosen to install from the default online repositories, the install itself failed with a Python exception (see my other post). Now, however, once I boot after the install I eventually receive a kernel panic message and failure. The exact same thing happened with CentOS 5.3 x64 after a flawless install. So unless someone knows what might be going on, I will assume that Fedora, Red Hat, and their offshoots for x64 systems are just not for me. I have been able to successfully install the latest Mandriva and SUSE x64 Linux distros, so whatever Red Hat/Fedora has done just does not work on my system.
It appears that I have really messed up my machine. I was trying to get MATLAB working on FC11 and I ran into libc.so.6 issues, so I put an older file, libc.2.3.1.so, in the /lib/tls/ directory and created a symbolic link libc.so.6 to see if the application would work. Unfortunately, at the same time the system did some updates and hung, so I ended up rebooting, but now it gets stuck at the boot screen (after GRUB) with a "kernel panic - not syncing: attempted to kill init". I just need a way to get to the directory /lib/tls and delete the link and the older .so file I threw in there. How do I get this accomplished? I cannot even get to a shell from the boot screen.
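Since no shell is reachable on the installed system, the cleanup has to happen from outside it, e.g. a live CD. A minimal sketch, assuming the Fedora root partition is /dev/sda1 (with a default LVM install it may instead be a volume under /dev/mapper):
Code:
# from any live CD terminal
sudo mount /dev/sda1 /mnt                 # use your actual root partition / LVM volume
ls -l /mnt/lib/tls                        # confirm the stray files are there
sudo rm /mnt/lib/tls/libc.so.6 /mnt/lib/tls/libc.2.3.1.so
sudo umount /mnt
# then reboot into the installed system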
I got home today to find that my KDE login screen would not let me log in. It said the authentication process failed or something and I needed to terminate the screen lock process manually. So I go over to another virtual terminal and try to log in. As soon as I enter my user name, a bunch of errors come up and I am unable to log in. "This can't be good" I think to myself, and reboot.
I am greeted by this error upon booting: it says it cannot find /sbin/init. I loaded up an Ubuntu live CD and verified that /sbin/init is indeed present, and all my other files still seem to be there. I tried booting into the Arch fallback entry in GRUB, but that didn't work either. Midway through the day I had SSHed into my desktop from my phone and started an upgrade; I was able to log in then.
Because I am using one of the new WD disks, I am trying to align my root partition with the physical sectors, as described here: [url]
So I copied all the files to a temporary location, deleted my partition (/dev/sda3), recreated it a few cylinders later (same name), and copied the files back to the newly created partition. I updated the UUIDs in GRUB's configuration as suggested in this thread: [url]
But now it fails to boot with the following error:
Code:
I checked the filesystem on this partition and it's fine. I tried to recreate the initramfs from Knoppix:
Code:
But it didn't change anything.
How can I either fix it or install a different kernel on this drive, so that I can boot into it and re-install my default kernels?
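One hedged route from Knoppix is to chroot into the moved partition and regenerate the initramfs and GRUB configuration from inside it, so both pick up the new UUID. /dev/sda3 is taken from the post; the update-initramfs/update-grub pair assumes a Debian/Ubuntu-style system (other distros would use mkinitrd or dracut and grub-mkconfig):
Code:
# from Knoppix, as root
mount /dev/sda3 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
blkid /dev/sda3                 # double-check the UUID that grub.cfg and fstab should reference
update-initramfs -u -k all      # rebuild initramfs for all installed kernels
update-grub                     # regenerate the boot entries with the new UUID
exit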
I am trying to install a Linux kernel manually. For this I compiled linux-2.6.36 with minimal drivers and features. Note that ext2, ext3 and JFFS filesystem support and the sd and ata_piix drivers are built into the kernel rather than being modules.
I have two hard disks in my Intel x86 box, sda and sdb. I have a running Linux on sdb, from which I can access sda. sda has one partition, sda1, formatted as ext3.
On sda1 I created the following directories: root, bin, boot, etc, sbin.
After compiling the kernel, I copied the bzImage and System.map files to the boot folder. Then, using 'grub-install', I installed GRUB on sda. After installation I edited grub.conf to set up the kernel image.
grub.conf
Code:
After this I booted from sda by changing the HDD boot priority. And wow, I got the GRUB prompt; the Linux kernel booted, but as soon as it tries to mount the filesystem it dies with this error:
Code:
I accept that I don't have binaries for init and no initialization stuff in /etc, but I think the problem is that I am not passing the correct rootfs to the kernel.
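For reference, with sd, ata_piix and ext3 built into the kernel, a minimal legacy-GRUB grub.conf entry for this layout might look like the sketch below; the file name and root= value are illustrative and must match what was actually copied to sda1:
Code:
# /boot/grub/grub.conf on sda1 - illustrative only
default=0
timeout=5
title Custom linux-2.6.36
    root (hd0,0)                              # first partition of the boot disk (sda1)
    kernel /boot/bzImage root=/dev/sda1 ro    # no initrd needed if the drivers are built in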
I have an IBM server with a "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 07)", i.e. the CONFIG_FUSION kernel module. This is real hardware, not emulated by VMware. With recent kernels (2.6.30-35) the boot results in one of three situations:
1.) Most common: a kernel panic, with a stack dump way too long to see anything useful. I did briefly have a terminal to debug this, and it's simply not able to communicate with the SCSI subsystem; googling the errors found nothing of use.
2.) Sometimes: a hang waiting for the SCSI disks after the controller fails to initialize properly; it does not recover even after a couple of hours.
3.) Very rarely: a long wait for the SCSI controller to initialize but a successful boot afterwards, with the ACPI subsystems taking all the CPU power. This is the state the machine is in now, working somewhat.
I can boot the machine with a Gentoo live install disk (kernel 2.6.29, I think), and under that version the controller initializes instantly and there are no problems whatsoever using the disks, so the hardware seems to be solid!
There were some MPT changes in 2.6.25-rc5, so that's the latest kernel I've tried. The errors during boot become a bit more verbose, but that's about it. Unfortunately I don't have a serial terminal with me anymore so I cannot capture these errors, but it was something about an "unexpected doorbell" with 2.6.24 and nothing memorable with .25. Long story short, though, googling the errors provides no solutions, and the little I do find points to the controller being too slow to initialize somehow.
I'm running out of ideas here, and the server is burning through fans like there's no tomorrow (ACPI taking everything it can), so is my only option to downgrade to 2.6.29 (and downgrade udev, lvm2, and so on and on)? I can't believe I'm the only one for whom this has been broken for several minor versions.
Kernels tried:
- 2.6.29-gentoo (live disk) - works, blazing fast initialization
- 2.6.32-gentoo-r7 - random boot failures and successes
- 2.6.34-gentoo-r1 - same
- 2.6.35-rc5 vanilla - same, better errors, zero useful google results.
I'm not very keen on longshot attempts, since it can take a couple of hours of panic loops to get the machine booted up again and the server's functions cannot be transferred to another machine. Also, I'm a bit hesitant to mention this, but the boot has only ever succeeded when a serial cable with something at the other end is connected. At first it was the testing serial terminal, and now it's just a connection to a UPS.
Attached: boot log of a successful-ish boot with 2.6.34 (I had to cut some out to fit the size limit):
Code:
Linux version 2.6.34-gentoo-r1 (root@livecd) (gcc version 4.3.4 (Gentoo Hardened 4.3.4 p1.1, pie-10.1.5) ) #1 SMP Sun Jul 18 18:54:48 EEST 2010 BIOS-provided physical RAM map: