I get the pinkish Ubuntu screen and a message such as "checking disk 1 of 4". I assume that it is doing an fsck. However, the time it takes does not seem to relate to the time a manual fsck takes (almost instantaneous) or fsck -c (several minutes to half an hour depending on the drive). I also wonder what counts as a "disk". I have in the system:
I have/had a PC with several hard drives and a mix of Ubuntu and Windows in a multi-boot setup. The old boot drive died screaming, and I need to start again. (But my data is safe! Yay!)
Is there anything special about which drive can be the main drive to start booting from? Or, to put it another way, can I install to any of the other three and expect it to work, or do I need to switch them around so a different drive is on the connections the recently dead one used?
I don't think there is a way of doing this with date or clock commands. But maybe they are writing to some file and I can take a look at the file's modification time. dmesg and /var/log/messages show nothing relevant.
I installed Thunderbird some time back on Ubuntu. I want to know the date and time of installation. How can I get this information? I tried "stat thunderbird", but it did not give the installation time.
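If the package came in through APT/dpkg, the installation is normally recorded in the dpkg log, so something like this may answer it (assuming the logs haven't been cleared; zgrep also searches the rotated, compressed ones):

zgrep " install thunderbird" /var/log/dpkg.log*

Each matching line starts with the date and time of that install or upgrade.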
I want to monitor an application, let's say apache2, to see in real time how much network bandwidth it uses (upload/download per second). How can I do that in Linux (command line, not GUI)? I know it's possible because I can see this in Windows in my NOD32 firewall monitoring.
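One command-line tool that breaks traffic down per process is nethogs; a minimal sketch (eth0 is an assumption, substitute the interface apache2 actually uses):

sudo apt-get install nethogs   # on Debian/Ubuntu
sudo nethogs eth0              # live list of processes with sent/received KB/s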
Disk 0 (500GB): Windows Vista
Disk 1 (1TB): Windows 7
Disk 2 (160GB): Ubuntu
My boot disk is Disk 0. Currently when I turn on the PC, GRUB loads from Disk 0. I can then choose either Ubuntu or Windows Loader. If I choose Windows Loader (also located on Disk 0), I can choose to load Windows Vista or Windows 7. I like this setup, but I would like to move the loaders (exactly as they are) to Disk 1 so that I can format Disk 0.
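In case it helps frame answers: assuming Disk 1 is /dev/sdb (verify with sudo fdisk -l), a common way to put GRUB on Disk 1 from a running Ubuntu is:

sudo grub-install /dev/sdb   # write GRUB's boot code to Disk 1's MBR
sudo update-grub             # regenerate the menu entries

Moving the Windows loader (bootmgr and the BCD store, currently on Disk 0) is a separate step on top of that.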
Sometimes at startup I get this message: "Checking disk 1 of 1". Does that mean it's checking all partitions on the hard disk? After a bad shutdown there is no prompt for fsck to run and the system just boots up. In fstab I have both options (dump and pass) set to "1" for the partition Ubuntu is on, and all others set to "0". Any ideas on both?
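For reference, the sixth fstab field is the fsck pass number (1 for the root filesystem, 2 for other filesystems, 0 to skip) and the fifth is the dump flag; a typical root line looks like the sketch below (/dev/sda1 is an assumption). A check can also be forced on the next boot:

# <device>  <mount>  <type>  <options>          <dump>  <pass>
/dev/sda1   /        ext4    errors=remount-ro  0       1

sudo touch /forcefsck   # flag file: forces a full check on the next boot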
What would prompt this? I shut off the computer and turned it back on, didn't press any buttons, and it started checking my drives, so I pressed C and cancelled it.
Today when I went to boot up my computer, it hung at "Checking battery state...". I have never had any problems before this, and have been using 10.10 for several months. The only change I made that I can think of was installing the Wacom Control Panel. I have no idea what to do, and really, really don't want to reinstall Ubuntu unless there's no other option.
Ever since my upgrade from 9.10 to 10.04, every time I reboot the system it does a full disk check. /var/log/boot.log tells me that fsck thinks the file systems contain errors or were not cleanly unmounted. And yet it doesn't seem to actually find errors, and a clean reboot starts another check (again with it thinking something is dirty). I dual-boot with Windows, and rebooting from there gives the same problem. Again, all of this is new with 10.04 and was not happening with 9.10. Is there a way to find out when/how/why the disks are not being unmounted cleanly?
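One way to see what fsck sees, assuming an ext3/ext4 filesystem on /dev/sda1 (adjust the device to match):

sudo tune2fs -l /dev/sda1 | grep -i 'state\|mount count\|last checked\|check interval'

The "Filesystem state" line shows whether the filesystem was marked clean at the last unmount, and the mount-count/interval lines show whether the checks are simply the periodic ones coming due.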
I have switched recently from Ubuntu to Debian and overall I am enjoying it. However, I was wondering: does Debian, like Ubuntu, check the filesystem at boot periodically or when damaged? It is doing neither in my case. How do I get it to do this?
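For ext2/3/4, the periodic-check schedule is stored in the filesystem itself rather than in the distribution, so it can be enabled with tune2fs; a sketch, assuming the root filesystem is /dev/sda1:

sudo tune2fs -c 30 /dev/sda1   # force a check every 30 mounts
sudo tune2fs -i 60d /dev/sda1  # and at least every 60 days

A check still happens automatically at boot whenever the filesystem is marked not clean, provided its fstab pass field is non-zero.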
When booting Fedora 11, my system hangs for a very long time on starting udev. Sometimes I get an I/O error. However, my hardware is fine. I do eventually get into the system.
I've installed Windows 7 onto one hard drive, and then installed Ubuntu 9.10 onto a second hard drive. The installations seemed to go fine, and I can boot into Ubuntu from the GRUB menu. However, when I try to boot into Windows from the GRUB menu, I get a message saying "error: no such device: 446e94786e946488".
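That long hex string looks like the identifier GRUB recorded for the Windows partition; if it no longer matches, regenerating the menu from inside Ubuntu may help (a sketch, assuming GRUB 2, which 9.10 installs by default):

sudo blkid        # list the partitions' current UUIDs
sudo update-grub  # re-run os-prober and rewrite the boot entries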
I have an Nvidia graphics card and an onboard card. I wanted to use both concurrently. At first I was only getting a signal from the Nvidia one, but I want both. I changed the setting in my BIOS to Onboard, but the signal is now only coming from the onboard one. I then installed the Nvidia drivers from Additional Drivers, and then boot hung on "Checking battery state". I had to remove info from xorg.conf just to boot.
I recently tried installing Lucid x86 on my system beside Windows 7 and managed to screw it up.
My disk setup is this:

Disk A = 3 partitions (1st partition = Windows, 2nd & 3rd partitions = Data)
Disk B = 1 partition
Disk C = 3 partitions (1st partition = Data, 2nd & 3rd partitions = Ubuntu & Swap)

Disk A = SATA, internal
Disk B = SATA-to-USB, external
Disk C = SATA-to-USB, external
I want to install Lucid on the 2nd partition of Disk C and dual-boot it with Windows on Disk A.
During Lucid setup I specified the partition for installation (C2) and asked for GRUB to be installed on Disk A (no partition specified) so GRUB is always used as the dual-boot manager even if the Lucid disk (Disk C) is ejected. Once installed and rebooted, I was taken to the GRUB rescue prompt as no installation drive could be found (a long string of numbers, which looked like a disk ID, was also shown). Obviously, I could not access either OS on my system at this point. I had my Windows 7 DVD handy so it was just a case of recovering the Windows boot manager and I could use my PC. But how do I go about installing Lucid with this setup? Should I specify a partition for GRUB to install to? I have a hunch this is where I am going wrong, but I am too scared to try again and potentially balls things up.
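For reference, a sketch of re-installing GRUB to Disk A from a booted Lucid system, assuming Disk A shows up as /dev/sda (with two USB-attached disks, the ordering should be verified with sudo fdisk -l first):

sudo grub-install /dev/sda   # whole disk, no partition number
sudo update-grub             # regenerate the boot menu

Installing to the whole disk's MBR rather than to a partition is the usual choice when GRUB should stay in charge even with Disk C ejected.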
Does anyone know of any USB SSD disks which work with Linux, and which Linux can boot from? If the disk also has a SATA connector, even better.
I have servers which contain SATA disks and SAS disks. I was testing write speed on these servers and noticed that the 10,000 rpm SAS disks write much more slowly than the 7,200 rpm SATA disks. What could be the reasons for this slowness?

Below are the values I took from my test comparing the SAS 10,000 rpm and SATA 7,200 rpm disks:
When this command was run on the SAS disk server (10,000 rpm; a new server with 2 CPUs, 8 cores and 8 GB RAM, hardware RAID 1, not yet in production use), I got this output:

dd if=/dev/zero of=bigfile.txt bs=1024 count=1000000

1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 12.9662 s, 79.0 MB/s
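One caveat with this kind of test: without a sync, dd may be reporting the speed of writing into the page cache rather than to the disks, especially with 8 GB of RAM. A variant that flushes data to disk before reporting (GNU dd; bs/count here are illustrative):

dd if=/dev/zero of=bigfile.txt bs=1M count=1000 conv=fdatasync

Comparing the two servers with conv=fdatasync (or oflag=direct) should give numbers that reflect the disks and RAID controller rather than the cache.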
After upgrading to 10.04, my disks are randomly named (sda, sdb, sdc) at each boot. My drive labeled "XP" is sometimes named "sdb" and sometimes "sdc", while my other drive "DATA" is respectively "sdc" or "sdb". This wasn't the case before the upgrade, with Kubuntu 9.10.

Because of this random naming, my automounts in fstab often fail at boot time!

Is there a solution for this? (I haven't found one here myself.) Is this linked to the GRUB troubles reported many times here?
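For reference, device names like sdb/sdc are assigned in detection order and are not guaranteed to be stable across boots; mounting by label or UUID in fstab sidesteps this. A sketch using the labels mentioned above (mount points and filesystem type are assumptions):

sudo blkid                          # shows each partition's LABEL and UUID
# then in /etc/fstab, instead of /dev/sdb1 etc.:
LABEL=XP    /media/XP    ntfs  defaults  0  0
LABEL=DATA  /media/DATA  ntfs  defaults  0  0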
I have a fully operational PXE boot server. The client boots up and begins the setup process, but fails to detect the hard disk. I have tried with Ubuntu 8.10, 9.10 and 10.10, and none of them will see my hard disk. When I boot from the CD it sees the hard disks with no problem, so apparently the PXE boot server isn't serving up the necessary drivers (or something) to detect my hard disks properly. They are just IDE drives and, like I said, a regular CD install detects my drives just fine. So if anyone here has any information that may help shed some light on this issue, I would be so grateful.
I have no hard drives in my computer, so I have been trying to boot Ubuntu 11.04 from an 8GB USB flash drive. Is this possible? So far the best result I have gotten is that it sits on the loading screen for a while and then dumps out. I was only able to catch the last little bit, which reads:

mounting /dev on /root/dev failed: no such file or directory
mounting /sys on /root/sys failed: no such file or directory
mounting /proc on /root/proc failed: no such file or directory
target file system doesn't have requested /sbin/init
I have installed XP on the main HDD. It has 3 partitions. Then I installed Kubuntu 10.04 on the slave HDD. When I boot, it doesn't recognize Kubuntu. When I looked in My Computer in XP, it didn't recognize the slave HDD either. I switched the drives (slave to master and vice versa) and that didn't go well either.
Dell 600SC running an Adaptec 39160 dual-channel SCSI controller which has 2 disks connected to it. The machine also has 2 IDE drives connected to it. The SCSI disks are first in the boot order (after the CD).

To maximize performance from the SCSI setup, I have XP on the first SCSI disk and I set up Ubuntu 9.10 on the second SCSI disk in a dual-boot configuration.

In this setup, the machine goes straight to XP when rebooting (on the first SCSI disk) and does not even see the Ubuntu installation. The installation went fine with no complaints. On the same machine, if I just have Ubuntu on the first SCSI disk, the machine boots fine (albeit after a long pause looking for the bootloader).

So with XP on the first disk (which I need, to have XP at all), the Ubuntu bootloader does not seem to set the right parameters to be able to boot.

Again, this is with 9.10. I am not trying 10.04, as with 10.04 I don't even get to boot with standalone Ubuntu (no XP); it installs fine but the bootloader is not found, so I will keep to 9.10 for now. I am, however, open to working with 10.04 if there is a solution for dual-booting with XP in my configuration.

So again: 9.10 installs fine with XP on the 1st SCSI disk and Ubuntu 9.10 on the 2nd SCSI disk, but the bootloader never gets activated and the machine goes straight to XP.
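One thing worth checking in this situation is which disk GRUB was actually written to. A recovery sketch from the Ubuntu 9.10 live CD, assuming (both assumptions, verify with sudo fdisk -l) that the first SCSI disk is /dev/sda and the Ubuntu root partition is /dev/sdb1:

sudo mount /dev/sdb1 /mnt
sudo grub-install --root-directory=/mnt /dev/sda   # point the MBR of the disk the BIOS boots at the Ubuntu GRUB files

That writes GRUB to the disk the BIOS tries first, instead of leaving the XP boot code in its MBR.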
I have got a very strange boot problem. But first: I have openSUSE 11.4 with KDE installed. I have an AMD64 dual-core CPU and 2 hard disks. I was able to boot from both of those disks (on the second disk I have openSUSE 11.2 in case something goes wrong with the first disk). Then I decided to install openSUSE 11.4 from DVD to a USB key (just as I would to a hard disk). I succeeded. I did not involve any partition of the hard disks in this install. But now I cannot boot anymore from either of my hard disks, although the BIOS still finds them as it did before. After the BIOS I get the following message: Loading stage1.5. Error 21.
Error 21 means: "Selected disk does not exist. This error is returned if the device part of a device or full file name refers to a disk or BIOS device that is not present or not recognized by the BIOS in the system." But I am still able to boot from the USB key. I have even modified the menu.lst on the USB key to boot openSUSE 11.4 from the first hard disk; this works fine. I have also tried to install GRUB again on my first hard disk with grub-install.unsupported and with YaST2, but the installation stops with an error message like "hard disk not found by BIOS".
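Since "Loading stage1.5" comes from GRUB Legacy, one recovery sketch is to use the grub shell from the working USB-booted system and install stage1 directly, bypassing the installer scripts that are failing (device names below are assumptions; the find command reports the real location):

sudo grub
grub> find /boot/grub/stage1    # prints the (hdX,Y) that holds the GRUB files
grub> root (hd0,0)              # adjust to what find reported
grub> setup (hd0)               # write stage1 to that disk's MBR
grub> quit

It may also be worth checking that /boot/grub/device.map still matches the BIOS disk order now that the USB key is in the mix.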
I'm just curious: why do all Linux distros (all I've seen) run their periodic disk checks during boot? I mean, I understand that a disk should be checked now and then, but why does the system do it during boot, when I'm waiting for it to load, instead of during shutdown, when (most probably) the user doesn't need the computer anymore?
Upon installing four 2TB drives, my server will not boot. I have tried booting from a Slackware 11 DVD and passing these boot parameters:

huge26.s root=/dev/sda1 noinitrd ro

in addition to just trying to boot from the DVD using the huge26.s kernel. The kernel starts to load and says "Ready.", then sits there with a flashing cursor... The problem only exists with the new 2TB drives installed; I never had any problems when I had 750GB drives installed. Also, everything works fine if I boot from the DVD using "huge26.s root=/dev/sda1 noinitrd ro" as boot parameters and insert the four 2TB drives (hot add) after the system starts booting.

I have also tried booting from a BackTrack 3 CD, but I experience the same problem (boot halts after loading initrd).