Ever since my upgrade from 9.10 to 10.04, every time I reboot the system does a full disk check. /var/log/boot.log tells me that fsck thinks the file systems contain errors or were not cleanly unmounted. Yet it doesn't seem to actually find any errors, and a clean reboot starts another check (again with fsck thinking something is dirty). I dual-boot with Windows, and rebooting from there gives the same problem. All of this is new with 10.04 and was not happening with 9.10. Is there a way to find out when/how/why the disks are not being unmounted cleanly?
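For what it's worth, the state that fsck reacts to is recorded in the filesystem superblock and can be read with tune2fs; /dev/sda1 below is only an assumed name for the root filesystem.

sudo tune2fs -l /dev/sda1 | grep -i -e 'state' -e 'mount count' -e 'check'

A "Filesystem state" of "not clean" there, or a "Mount count" that has reached the "Maximum mount count", would explain a check on every boot.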
I have (or had) a PC with several hard drives and a mix of Ubuntu and Windows in a multi-boot setup. The old boot drive died screaming, and I need to start again. (But my data is safe! Yay!)
Is there anything special about which drive can be the main drive to boot from? Or, to put it another way, can I install to any of the other three drives and expect it to work, or do I need to switch them around so that a different drive sits on the connections the dead one used?
These days I quite frequently see a disk check pop up while Ubuntu is booting. It says "press C to cancel", but C (or Shift+C, Ctrl+C, Ctrl+Alt+C) has no effect. Pressing Ctrl+Alt+Delete reboots, but that just ends up in the same loop of disk checking. How can I bypass it? When I urgently need to get to the desktop for some pressing information, waiting 20 to 25 minutes for a disk check is hard to accept.
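For what it's worth, the check schedule lives in the filesystem superblock and can be inspected and, at your own risk, relaxed or disabled with tune2fs; the device name below is only an assumption.

sudo tune2fs -l /dev/sda1 | grep -i -e 'mount count' -e 'check'
sudo tune2fs -c -1 -i 0 /dev/sda1    # disable the mount-count and time-based checks entirely

Disabling the checks trades boot time for the safety net they provide, so raising the interval instead is the more cautious option.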
I've been an Ubuntu 10.04 64-bit user for one year. My Ubuntu installation is on an ext2 file system. Occasionally I experience disk checks during boot without any preceding system freeze or power loss. Sometimes, after such a reboot, the system works fine but Evolution Mail asks me to set up a new account, as if it were being used for the very first time. Does anyone know how to fix this problem?
Disk 0 (500 GB): Windows Vista
Disk 1 (1 TB): Windows 7
Disk 2 (160 GB): Ubuntu
My boot disk is Disk 0. Currently when I turn on the PC, GRUB loads from Disk 0. I can then choose either Ubuntu or Windows Loader. If I choose Windows Loader (also located on Disk 0), I can choose to load Windows Vista or Windows 7. I like this setup, but I would like to move the loaders (exactly as they are) to Disk 1 so that I can format Disk 0.
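For what it's worth, moving GRUB's boot code is the easy half and can be done from the running Ubuntu; /dev/sdb below is only an assumed name for Disk 1. The Windows boot files (bootmgr/BCD) on Disk 0 are a separate matter and would need to be recreated on another disk with Windows tools before Disk 0 can be formatted.

sudo grub-install /dev/sdb   # write GRUB's boot code to the MBR of Disk 1
sudo update-grub             # regenerate the menu (GRUB's files stay on the Ubuntu partition on Disk 2)

The BIOS boot order then has to point at Disk 1 instead of Disk 0.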
On a Sun Ultra 10, 333 MHz, 512 MB RAM, 9 GB HDD, booting Fedora 9: SILO v1.4.14 loading a 64-bit 2.6.27 kernel (vmlinuz-2.6.27.12-78.2.9fc9.sparc64). This is a brand-new installation. Although it's running on SPARC, that makes very little difference as far as the boot process and the way Linux runs are concerned, which is why I've cross-posted this query here. The boot proceeds merrily; in the interactive startup section, just past "Starting udev" and "Setting hostname", at "Checking filesystems" I get:
/: clean, 155284/557056 files, 920932/2225412 blocks
fsck.ext2: Device or resource busy while trying to open /dev/sda
Filesystem mounted or opened exclusively by another program?
[FAILED]
*** An error occurred during the filesystem check
*** Dropping you to a shell; the system will reboot
*** when you leave the shell
Warning -- SELinux is active
I've installed Windows 7 onto one hard drive and then installed Ubuntu 9.10 onto a second hard drive. The installations seemed to go fine, and I can boot into Ubuntu from the GRUB menu. However, when I try to boot into Windows from the GRUB menu, I get a message saying "error: no such device: 446e94786e946488".
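For reference, a minimal sketch of what a manual GRUB 2 chainload entry for Windows can look like (placed in /etc/grub.d/40_custom and followed by sudo update-grub, which normally also lets os-prober rewrite the Windows entry with the correct UUID). The (hd0,1) location is an assumption and would need to match where the Windows bootloader actually lives.

menuentry "Windows 7" {
    insmod ntfs
    set root=(hd0,1)    # assumed disk/partition of the Windows system partition
    chainloader +1
}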
I get the pinkish Ubuntu screen and a message such as "checking disk 1 of 4". I assume that it is doing an fsck. However, the time it takes does not seem to match the time a manual fsck takes (almost instantaneous) or fsck -c (several minutes to half an hour depending on the drive). I also wonder what counts as a "disk". I have in the system:
I recently tried installing Lucid x86 on my system alongside Windows 7 and managed to screw it up.
My disk setup is this:

Disk A = 3 partitions (1st partition = Windows, 2nd & 3rd partitions = Data)
Disk B = 1 partition
Disk C = 3 partitions (1st partition = Data, 2nd & 3rd partitions = Ubuntu & Swap)

Disk A = SATA, internal
Disk B = SATA-to-USB, external
Disk C = SATA-to-USB, external
I want to install Lucid on the 2nd partition of Disk C and dual-boot it with Windows on Disk A.
During the Lucid setup I specified the partition for installation (C2) and asked for GRUB to be installed on Disk A (no partition specified), so that GRUB is always used as the dual-boot manager even if the Lucid disk (Disk C) is ejected. Once installed and rebooted, I was taken to the GRUB rescue prompt because no installation drive could be found (a long string of numbers, which looked like a disk ID, was also shown). Obviously I could not access either OS at that point. I had my Windows 7 DVD handy, so it was just a case of recovering the Windows boot manager and I could use my PC again. But how do I go about installing Lucid with this setup? Should I specify a partition for GRUB to install to? I have a hunch this is where I am going wrong, but I am too scared to try again and potentially mess things up.
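For what it's worth, a sketch of re-pointing GRUB afterwards, run from the installed Lucid system or a live-CD chroot into it; /dev/sda is only an assumed device name for Disk A. grub-install wants the whole disk rather than a partition, and the GRUB code in Disk A's MBR will still read its files from the Ubuntu partition on Disk C, so Disk C has to be attached at boot time.

sudo grub-install /dev/sda   # boot code into the MBR of the internal disk (Disk A)
sudo update-grub             # regenerate the menu so both Windows and Lucid are listed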
Does anyone know of any USB SSD disks that work with Linux and that Linux can boot from? If the disk also has a SATA connector, even better.
I have servers that contain SATA disks and SAS disks. I was testing write speed on these servers and found that the 10,000 rpm SAS disks are much slower than the 7,200 rpm SATA disks. What do you think about this slowness? What could be the reasons for it?

Below are the figures I got from my test comparing the SAS 10,000 rpm and SATA 7,200 rpm disks.
When this command was run on the SAS disk server (10,000 rpm, hardware RAID 1; a new server with 2 CPUs / 8 cores and 8 GB of RAM that I have not put into use yet), I got the following output:

dd if=/dev/zero of=bigfile.txt bs=1024 count=1000000

1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 12.9662 s, 79.0 MB/s
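As an aside, dd with a 1 KiB block size and no sync flag largely measures the page cache and per-call overhead rather than the disks. A sketch of a more comparable test (same file name as above) might be:

dd if=/dev/zero of=bigfile.txt bs=1M count=1000 oflag=direct     # bypass the page cache
dd if=/dev/zero of=bigfile.txt bs=1M count=1000 conv=fdatasync   # or flush to disk before reporting the rate

Controller cache settings (write-back vs write-through) on the RAID 1 array can also dwarf the rpm difference between SAS and SATA.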
After upgrading to 10.04, my disks are randomly named (sda, sdb, sdc) at each boot. My drive labeled "XP" is sometimes named "sdb" and sometimes "sdc", while my other drive "DATA" is "sdc" or "sdb" respectively. This wasn't the case before the upgrade, with Kubuntu 9.10.

Because of this random naming, my auto-mounts in fstab often fail at boot time!

Is there any solution for this? (I haven't found one here myself.)

Is this linked to the GRUB troubles reported many times here?
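For reference, a sketch of fstab entries keyed by label or UUID instead of the unstable /dev/sdX names; the mount points and filesystem type below are assumptions, and sudo blkid lists the real labels and UUIDs.

sudo blkid

LABEL=XP     /media/XP     ntfs    defaults    0    0
LABEL=DATA   /media/DATA   ntfs    defaults    0    0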
I have a fully operational PXE boot server. The client boots up and begins the setup process, but then fails to detect the hard disk. I have tried with Ubuntu 8.10, 9.10, and 10.10, and none of them will see my hard disk. If I boot from the CD, it sees the hard disks with no problem, so apparently the PXE boot server isn't serving up the necessary drivers (or something) to detect my hard disks properly. They are just IDE drives and, like I said, a regular CD install detects them just fine. If anyone has any information that may help shed some light on this issue, I would be grateful.
I have no hard drives in my computer, so I have been trying to boot Ubuntu 11.04 from an 8 GB USB flash drive. Is this possible? So far the best result I have gotten is that it sits on the loading screen for a while and then dumps me out. I was only able to catch the last little bit, which reads:

mount: mounting /dev on /root/dev failed: No such file or directory
mount: mounting /sys on /root/sys failed: No such file or directory
mount: mounting /proc on /root/proc failed: No such file or directory
Target filesystem doesn't have requested /sbin/init.
I installed XP on the main hard disk, which has 3 partitions. Then I installed Kubuntu 10.04 on the slave hard disk. When I boot, it doesn't recognize Kubuntu. When I looked in My Computer in XP, it didn't recognize the slave disk either. I switched the disks (slave to master and vice versa) and that didn't go well either.
I have a Dell 600SC with an Adaptec 39160 dual-channel SCSI controller that has 2 disks connected to it. The machine also has 2 IDE drives. The boot order is set so that the SCSI disks come first (after the CD).

I am trying to maximize performance from the SCSI setup, so I have XP on the first SCSI disk and I set up Ubuntu 9.10 on the second SCSI disk in a dual-boot configuration.

With this setup, the machine goes straight to XP (on the first SCSI disk) when rebooting and does not even see the Ubuntu installation. The installation itself went fine with no complaints. On the same machine, if I just have Ubuntu on the first SCSI disk, it boots fine (albeit after a long pause while looking for the bootloader).

So with XP on the first disk (which I need, in order to have XP), the Ubuntu bootloader does not seem to set the right parameters to be able to boot.

Again, this is with 9.10. I am not trying 10.04, because with 10.04 I don't even get to boot with standalone Ubuntu (no XP): it installs fine but the bootloader is not found, so I will stick with 9.10 for now. I am, however, open to working with 10.04 if there is a solution for dual-booting with XP in my configuration.

So, again: 9.10 installs fine with XP on the 1st SCSI disk and Ubuntu 9.10 on the 2nd SCSI disk, but the bootloader never gets activated and the machine goes straight to XP.
I have got a very strange boot problem. But first: I have openSUSE 11.4 with KDE installed, an AMD64 dual-core CPU, and 2 hard disks. I was able to boot from both of those disks (on the second disk I have openSUSE 11.2, in case something goes wrong with the first disk). Then I decided to install openSUSE 11.4 from DVD onto a USB key (just as I would onto a hard disk), and I succeeded. I did not involve any partition of the hard disks in this install. But now I can no longer boot from either of my hard disks, although the BIOS still finds them as it did before. After the BIOS I get the following message: Loading stage 1.5, error 21.
Error 21 means: "Selected disk does not exist. This error is returned if the device part of a device or full file name refers to a disk or BIOS device that is not present or not recognized by the BIOS in the system." But I am still able to boot from the USB key. I have even modified the menu.lst on the USB key to boot openSUSE 11.4 from the first hard disk, and this works fine. I have also tried to install GRUB again on my first hard disk, both with grub-install.unsupported and with YaST2, but the installation stops with an error message like "hard disk not found by BIOS".
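For reference, a sketch of reinstalling legacy GRUB from the system booted off the USB key, telling the GRUB shell explicitly which device is the first disk so it does not depend on the BIOS mapping; all device and partition names below are assumptions.

sudo grub
grub> device (hd0) /dev/sda    # declare which Linux device is BIOS disk 0
grub> root (hd0,1)             # partition holding /boot of the 11.4 install (grub-legacy counts partitions from 0)
grub> setup (hd0)              # write stage1 to the MBR of that disk
grub> quit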
I'm just curious: why do all the Linux distros I've seen run their periodic disk checks during boot? I understand that a disk should be checked now and then, but why does the system do it during boot, when I'm waiting for it to load, instead of during shutdown, when most probably the user doesn't need the computer anymore?
Upon installing four 2 TB drives, my server will not boot. I have tried booting from a Slackware 11 DVD and passing these boot parameters:
huge26.s root=/dev/sda1 noinitrd ro

in addition to just trying to boot from the DVD with the huge26.s kernel. The kernel starts to load, says "Ready.", and then sits there with a flashing cursor. The problem only exists with the new 2 TB drives installed; I never had any problems when I had 750 GB drives installed. Also, everything works fine if I boot from the DVD using "huge26.s root=/dev/sda1 noinitrd ro" as boot parameters and hot-add the four 2 TB drives after the system starts booting.

I have also tried booting from a BackTrack 3 CD, but I hit the same problem (the boot halts after loading the initrd).
I have a PC with 4 hard disks and one SSD drive. At present the PC boots from the 1st hard disk, and the other hard disks (sometimes 1, 2, or 3 of them, depending on requirements) are used for storage. Now I want to boot the PC from the SSD and use the hard disks for storage only.

My problem is that when the system boots, it takes the 1st hard disk as sda and the SSD as sdb if I am using only one hard disk, and if I use 2 hard disks the SSD becomes sdc. So I am not able to specify a fixed boot device in the menu.lst file if I want to use the root filesystem from the SSD. I am using a 2.6.33.7 kernel and the GRUB bootloader. I have tried using an initrd with udev, but I have not been able to include and start udev properly in the initrd. I am trying to boot by UUID or LABEL, but with no success. Am I missing something in the kernel needed to resolve the UUID or LABEL?
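For reference: the bare kernel cannot resolve root=UUID= or root=LABEL= at all; that translation is done inside the initramfs (by udev or an equivalent helper), so a working initrd line is required. A menu.lst sketch with placeholder device names, file names, and a placeholder UUID taken from blkid:

title  Linux on SSD
root   (hd1,0)
kernel /boot/vmlinuz-2.6.33.7 root=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ro
initrd /boot/initrd-2.6.33.7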
One of the disks in my computer crashed; it was the one containing /boot and some data partitions. The other system partitions and /home were on a second disk, which is fine.

I was wondering: can I create a new /boot partition and keep on using the rest of the system? Can I somehow do it with a chroot from a live/installer disk, run GRUB, and use my system again? I have another disk I can put in the system, but there is even an unused partition on the disk that survived (though it is rather big for /boot).
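For reference, a sketch of the chroot route from a live/installer disk; every device name below is an assumption, and because the old /boot is gone, the kernel packages would have to be reinstalled inside the chroot so there is something for GRUB to boot.

sudo mount /dev/sdb1 /mnt            # root filesystem on the surviving disk
sudo mount /dev/sdc1 /mnt/boot       # the new, empty /boot partition
for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
sudo chroot /mnt
grub-install /dev/sdc                # boot code onto the disk the BIOS will boot
update-grub                          # run after reinstalling the kernel packages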
Is it possible to create a boot CD to boot external volumes on an Apple iMac 7.1? (It has an older firmware version and cannot boot external disks, unlike the MacBook Pro 5.1, which can, at least with grub-legacy, which is all I'll ever use until EFI boot becomes available.) There is some promising stuff on www.pendrivelinux.com and I'll try it, but the instructions are for Windows, and I am not sure how to translate the menu.lst entry to Linux (I suppose it would have to be entered in the "automagic" section). Of course I don't want to create a bootable flash drive, but rather to use my external volumes that already boot on the MacBook Pro, without altering them except for installing the ATI video driver (though I have no problem booting in low-graphics mode).
Until Karmic there was a trick to make the iMac mistake the external volume for an internal one (the root partition had to have the same UUID as the internal root partition), but this does not seem to work for Lucid. Anyway, this UUID trick is dirty and causes problems when you want to edit the internal partition (which is the point of the external boot: you get a customized maintenance environment that boots much faster than the CD).
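For what it's worth, a sketch of what a hand-written grub-legacy stanza for an external volume can look like on such a boot CD; the device, kernel file names, and UUID are all placeholders. With Debian/Ubuntu-style menu.lst files, manual entries normally go outside the automagic block (after the "### END DEBIAN AUTOMAGIC KERNELS LIST" marker) so update-grub does not overwrite them.

title   External volume (maintenance system)
root    (hd1,0)
kernel  /boot/vmlinuz root=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ro
initrd  /boot/initrd.img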
I have two internal hard disks. Hard disk 1 has Ubuntu and Fedora installed, and hard disk 2 has Ubuntu installed. I normally connect one or the other and use it. How can I keep both hard disks connected all the time and, at startup, select which hard disk to boot from? Or is that not possible?
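For reference, besides the firmware's one-off boot menu, a sketch of a GRUB 2 chainload entry (added to /etc/grub.d/40_custom on hard disk 1 and followed by sudo update-grub) that hands control to whatever bootloader sits in the MBR of the second disk; (hd1) is an assumption for that disk.

menuentry "Boot second hard disk" {
    set root=(hd1)
    chainloader +1
}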
Having installed 9.10 onto a laptop, my cherubic daughter switched off the power (no battery), and upon restarting I am faced with "Starting init crypto disks... [OK]" and there it stops! I had hoped that I could go to recovery mode and fix it, but I hit the same stalling point. I see others have this unresolved too.
Nothing too major here, but today I had a few programs open and was doing a bunch of things, and suddenly the system froze.

I am on 10.04 LTS.

Are there checks that I can do to see if everything is OK?

I had to turn the power off and reboot, and everything is fine, or rather seems to be 100% fine, but more out of curiosity I'd like to see if there are some checks that I can do.

I think Ubuntu creates a log of activity, if I am not mistaken?
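For reference, a few places to look after such a freeze; smartctl needs the smartmontools package, and /dev/sda is just an assumed device name.

dmesg | less                  # kernel messages from the current boot
less /var/log/syslog          # general system log (kern.log for kernel-only entries)
sudo smartctl -a /dev/sda     # SMART health status and error counters for the drive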
I was just wondering why Ubuntu is always checking my disks for errors. This happens every few times I turn the computer off and back on, maybe every 2 or 3 times. Is this just Ubuntu checking the disks, or is it something to worry about?
I have my Media Center running Ubuntu 10.04, and it's annoying to get that "Your disk drives are being checked for errors" message every so often. And of course, as Murphy's law dictates, it usually happens when I'm in a hurry to quickly watch something.

Of course, I can press 'C' (cancel) every time. But I guess Ubuntu set up this file check interval for a reason, right? I was wondering if it's safe to change the interval so it runs less often. Or is it easy to configure the check to occur at shutdown? That's when most people don't care what the computer does anymore.

Also, although it's a pretty fresh install, Ubuntu has never ever worked flawlessly on this machine, and neither does this install. More often than not, on shutdown the computer doesn't actually shut down but just sits there with a black screen or with the Ubuntu logo, so I just power it off. Does this scenario make it unwise to tone down the number of file checks?
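For reference, a sketch of relaxing (rather than disabling) the schedule with tune2fs; the device name is an assumption, and the right one can be found with sudo blkid or mount.

sudo tune2fs -l /dev/sda1 | grep -i -e 'mount count' -e 'check'   # current counters and interval
sudo tune2fs -c 100 /dev/sda1                                     # check only every 100 mounts
sudo tune2fs -i 3m /dev/sda1                                      # or at most every 3 months

As far as I know there is no stock shutdown-time fsck hook in Ubuntu 10.04, and with unclean shutdowns in the picture, keeping some periodic check is probably the safer trade.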