Just finished installing F14 with a RAID1 setup across 2 HDDs (SATA). The entire drives are mirrored, including swap. As I had done in the past, I was planning on installing grub on the MBR of the 2nd HDD. In prep for this I did the following to locate the grub setup files:
grub> find /grub/stage1
find /grub/stage1
(hd0,1)
[code]....
I was surprised; I expected to get (hd0,0) & (hd1,0), not (hd0,1) & (hd1,1).
Running "fdisk -l" I get:
Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
[code]....
The MBR is the first 512 bytes of the drive. Each partition has a boot sector. In my case grub stage1 is on the 2nd partition of sda and the 2nd partition of sdb. What I don't understand is how grub stage1 can be on sda2 & sdb2, since I am assuming that sda1 & sdb1 would be the first partitions of the drives and therefore contain the MBRs. Maybe this is because sda1 & sdb1 are swap partitions?
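One way to see what is going on (a sketch; the device names are assumptions): GRUB legacy counts disks and partitions from zero, so (hd0,1) is the second partition of the first disk, i.e. /dev/sda2 here, which is simply the partition holding /boot/grub. The MBR itself sits in sector 0 of the disk, outside every partition.
Code:
df /boot/grub/stage1          # shows the partition that contains /boot/grub
cat /boot/grub/device.map     # shows the (hdX) -> /dev/sdX mapping grub uses
fdisk -l /dev/sda             # the MBR is sector 0 of the disk, not inside sda1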
I have two 80 GB IDE hard disks. I created a RAID1 partition on each drive using the [URL] link. The RAID is working fine, but when I copy some data onto one hard disk (md0), the data is not automatically copied to the second hard disk (md1). I want data written to one hard disk to be automatically written to the second hard disk.
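For what it's worth, a single RAID1 array built from one partition on each disk mirrors every write automatically; two separate arrays (md0 on one disk, md1 on the other) will never sync with each other. A minimal sketch, assuming the two IDE disks are /dev/hda and /dev/hdb:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1
mkfs.ext3 /dev/md0            # put the filesystem on the array, not on the disks
mount /dev/md0 /data          # anything written here lands on both disks
cat /proc/mdstat              # shows the mirror and its resync progress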
Troubleshooting a grub install after moving and resizing partitions and installing WinXP alongside a stable Ubuntu 8.04 system. I found that all the directions I followed do their thing; however, grub keeps creating menu.lst on the Ubuntu ramdisk that is created from the LiveCD. I keep thinking it has found the real /boot/grub directory, but that is never the one that gets updated.
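A sketch of the chroot approach I'd expect to make grub write to the installed system instead of the LiveCD ramdisk (assuming the installed root is on /dev/sda1; adjust to the real partition):
Code:
sudo mount /dev/sda1 /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo chroot /mnt
grub-install /dev/sda     # writes to the real MBR and /boot/grub
update-grub               # regenerates the real /boot/grub/menu.lst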
I have XP/FC8 on an older IDE drive and just installed a new 1 TB SATA drive, planning to put FC10 on it, but in the process I killed my FC8 installation. I told the installer that the other disks were off limits, but it was somewhat confusing at the bootloader page, so I suspect I told it to boot off the FC8 disk. If that is the case, is there a way to restore the FC8 install by somehow rescuing the /boot partition on the FC8 disk?
I have a hypothetical situation in which I installed my operating system using a RAID1 mirror. At some point I decided that this setup was overkill, my machine isn't system critical, I value doubling my storage space more than speedy recovery, I'm doing routine backups, etc...
Short of backing up my system volume and repartitioning, or otherwise starting over, is there a way I can reconfigure my RAID1 array to only expect one disk so that mdadm no longer reports a Degraded state?
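A sketch of how mdadm can be told the array is intentionally a single-disk mirror, so it stops reporting Degraded (the array and member names are assumptions):
Code:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # drop the second mirror half
mdadm --grow /dev/md0 --raid-devices=1 --force       # tell md the array has only one member
mdadm --detail /dev/md0                              # State should now read clean, not degraded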
I recently had an issue where one of my RAID1 configured drives died on a Debian box. By died, I mean that after POST I am presented with a black screen and a flashing cursor, nothing more. I booted into a Knoppix shell and did:
mdadm --assemble /dev/md0 --run -u <UUID of the only working drive> /dev/sda1
I could then 'mount /dev/sda1 /mnt/' and see all the contents as I should be able to. However, I cannot boot from this device by itself for some reason. I have reinstalled grub with 'grub-install --recheck --root-directory=/mnt /dev/sda1' #something like that, I can't remember the exact --root-directory argument offhand. I restarted with the failed drive unplugged and again I'm faced with the black screen and flashing cursor.
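A sketch of the variant I'd try next (device names assumed): install to the disk's MBR rather than to the partition, with --root-directory pointing at the mounted filesystem:
Code:
mdadm --assemble /dev/md0 --run /dev/sda1     # or mount /dev/sda1 directly if md won't assemble
mount /dev/md0 /mnt
grub-install --recheck --root-directory=/mnt /dev/sda   # /dev/sda, not /dev/sda1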
I'm on 10.10 (desktop) and mdadm was v2.8.1 from 2008, very out of date, so I tried 3.2.1 -> no change. mdadm RAID1 read speeds are the same as a single disk. Note I used the tests in the Disk Utility benchmarking tool at first -- these showed RAID5 at least to be much better, but when I tried dd reads, RAID5 dropped off with larger data to almost the same (slow) speed as RAID1. Compare:
[code]....
Using two partitions will be enough to show RAID1 performs at single-disk speed. I don't really want to use a 4-disk RAID0 just to get the read speed I should be able to get with RAID1, as I don't really care about the size loss. I would of course use RAID10, but I have found it suffers from the same problem (achieves the same read speed as a 2-disk stripe). So whilst I'm shocked others aren't reporting this, unless there is some obscure reason why my system would give these results, I think RAID1 is not behaving as it should.
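For comparison, the kind of dd test I mean (a sketch; device names and sizes are assumptions). md RAID1 serves a single sequential reader from one member, so one dd stream tends to match single-disk speed, whereas two concurrent readers can be spread across both mirrors:
Code:
# one sequential reader -- expect roughly single-disk throughput
dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct
# two readers at different offsets -- these can be served from different mirrors
dd if=/dev/md0 of=/dev/null bs=1M count=2048 iflag=direct skip=0 &
dd if=/dev/md0 of=/dev/null bs=1M count=2048 iflag=direct skip=8192 &
wait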
I've read many of the postings on ICH10R and grub but none seem to give me the info I need. Here's the situation: I've got an existing server on which I was running my RAID1 boot/root pair on an LSI-based RAID chip; however, there are system design issues I won't bore you with that mean I need to shift this RAID pair to the fakeraid (which happens to most reliably come up as sda, etc). So far I've been able to configure the fakeraid pair as 'Adaptec' and build the RAID1 mirror with new drives; it shows up just fine in the BIOS where I want it.
Using a pre-prepared 'rescue' disk with lots of space, I dd'd the partitions from the old RAID device; then I rewired things, rebooted, fired up dmraid -ay and got the /dev/mapper/ddf1_SYS device. Using cfdisk, I set up three extended partitions to match the ones on the old RAID; mounted them; loopback-mounted the images of the old partitions; then used rsync -aHAX to dup the system and home to the new RAID1 partitions. I then edited /etc/fstab to change the UUIDs; likewise the grub/menu.lst (this is an older system that does not have the horror that is grub2 installed). I've taken a look at the existing initrd and believe it is all set up to deal with dmraid at boot. So that leaves only the grub install. Paranoid that I am, I tried to deal with this:
dmraid -ay
mount /dev/mapper/ddf1_SYS5 /newsys
cd /newsys
[code]....
and I get messages about 'does not have any corresponding BIOS drive'. I tried editing grub/device.map, tried --recheck and anything else I could think of, to no avail. I have not tried dd'ing an MBR to sector 0 yet, as I am not really sure whether that will kill info set up by the fakeraid in the BIOS. I might also add that the two constituent drives show up as /dev/sda and /dev/sdb, and trying to use either of those directly results in the same error messages from grub. Obviously this sort of thing is in the category of 'kids, don't try this at home', but I have more than once manually put a unix disk together one file at a time, so much of the magic is not new to me.
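A sketch of what I would try next (the drive numbering is an assumption): add the dmraid device to device.map explicitly and run grub legacy's shell against that map, so it installs to the mapper device instead of probing /dev/sda and /dev/sdb:
Code:
echo "(hd0) /dev/mapper/ddf1_SYS" >> /newsys/boot/grub/device.map
grub --device-map=/newsys/boot/grub/device.map
grub> root (hd0,4)      # ddf1_SYS5 is the fifth partition, counted from zero
grub> setup (hd0)
grub> quit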
I have SLES10-SP3 running on an Intel SR1600URHS board with 3 hot-swap SATA disks configured using mdadm as Raid1 with hot spare. If I pull one of the active disks, all file i/o will stop for about 2.5 minutes after which it will start again and the raid array will be rebuilt using the spare disk. Is there any way I can reduce this 2.5 minutes of inactivity? I've tried setting /sys/block/sdX/device/timeout and /sys/block/sdX/device/retries to 1 for all disks, but this hasn't made any difference. The output from messages is:
12:11:56: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
12:11:56: ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 cdb 0x1e data 0
12:11:56: res 40/00:03:00:00:20/00:00:00:00:00/b0 Emask 0x4 (timeout)
Posted this on the CentOS forum too, but I might get better attention here. I just moved my CentOS server to an mdadm RAID1 array. I partially followed this guide: [URL].. What I did was boot up a LiveCD and make three partitions on both of my empty disks: one for /, one for swap and one for /vz (it's an OpenVZ server). I made those partitions into separate RAID1 arrays and then rsync-ed everything from the old disk to the new partitions.
After I had moved everything I did chroot into the new raid array and edited both grub config files and fstab, according to the guide.
[Code]...
I have managed to run the system on the RAID1 disks when using a Super Grub2 Disk off a CD, but it has its own grub and can boot any distro, so I can see that the system is working fine, except for grub. I have tried installing grub both from a LiveCD (Ubuntu 64-bit) and when booted into the RAID1 array, but it gives the same results as stated above.
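A sketch of putting grub legacy (which is what CentOS 5 ships) onto both RAID members from inside the chroot, so either disk can boot on its own (the mount point and device names are assumptions):
Code:
mount --bind /dev /mnt/newroot/dev
mount --bind /proc /mnt/newroot/proc
chroot /mnt/newroot
grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb    # repeat against the second member so it boots if sda dies
grub> root (hd0,0)
grub> setup (hd0)
grub> quit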
We have had a hard disk crash in our RAID1 webhosting server running CentOS 5 and Plesk. We first realized something was wrong when our main site didn't load but showed MySQL errors. We then found out that the system was in a read-only state, something that also happened the day before yesterday, but which we could fix with an fsck. The system then worked well until around 18 hours later, when it crashed with the same symptoms. So we rebooted the server and wanted to do a filesystem check again, but the HDD wouldn't even load. It was gone. Unfortunately, nobody had realized that the second disk in the system had also stopped working some time ago. Fortunately we had our main site backed up externally, though, so we could re-install a fresh box and mount the two drives on the new system. We checked the hard disks. One is practically empty (the older one); the other has almost only files in 'lost+found', but these are all "numbered", with no real filenames or anything.
I have a VIA Epia M 5000 system with 2 Western Digital 1 TB NAS SATA drives connected through a SATA<->IDE adapter. Everything installs and writes as expected except... grub. It never boots; after a 'GRUB loading' message I always get 'error: no such disk'. I've tried numerous times and have been attempting to fix the issue for the past 2 days.
/dev/sda: 0.999 TB partition, type 0xfd (Linux raid autodetect), plus a 1 GB logical partition, type 0xfd (Linux raid autodetect)
/dev/sdb: exactly the same layout - 0.999 TB and 1 GB partitions, both type 0xfd (Linux raid autodetect)
/dev/md0: RAID1 of /dev/sda1 and /dev/sdb1, formatted ext4, mount point /
/dev/md1: RAID1 of /dev/sda5 and /dev/sdb5, used as swap
update-grub2 in rescue mode generates grub.cfg with set root=/mduuid/UUID_OF_SDA1, and after that there's search --no-floppy etc. --set root=/mduuid/UUID_OF_MD0.
I'm writing this from memory, but put simply, the two UUIDs are different. Is this correct? I get those UUIDs to compare from blkid. All partitions are marked as bootable. grub-install /dev/sda and grub-install /dev/sdb produce no errors. grub-install /dev/md0 does not work; it complains about superblocks or something similar.
The grub.cfg file contains insmod raid, mdraid1x and similar lines, so that should be OK. Grub drops to rescue mode with the message error: no such disk. Not device, but disk. Google finds many results for the 'no such device' error, but I am not getting that error. 'ls' produces (hd0) (hd0,msdos1) (hd1) (hd1,msdos1); 'ls ANY_VALID_PATH' produces just an empty line, nothing more.
Setting the prefix manually does not work; it fails with 'error: file not found'. 'ls (hd0,msdos1)/boot/grub' also just produces an empty line. With a rescue CD, auto-assembly of md0 and md1 works, the files are there, everything is okay, except grub.
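A sketch of what I'd try from the rescue CD with the arrays assembled (assuming Debian/Ubuntu-style grub2 tooling): reinstall to both disks with the RAID modules embedded, regenerate grub.cfg, then compare the UUID grub searches for against the md array's UUID:
Code:
mount /dev/md0 /mnt
mount --bind /dev /mnt/dev; mount --bind /proc /mnt/proc; mount --bind /sys /mnt/sys
chroot /mnt
grub-install --modules="raid mdraid1x part_msdos ext2" /dev/sda
grub-install --modules="raid mdraid1x part_msdos ext2" /dev/sdb
update-grub
blkid /dev/md0    # this UUID is the one the search line in grub.cfg should be using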
Followed steps 2-5 and purged/reinstalled grub; now it boots as it should, no idea where it was messed up. [URL].. I had 9.10 running in RAID1 and upgraded my hardware (CPU, motherboard, memory, etc.) and wanted to do a fresh install of 10.04 to get updated. After following the various guides online, such as [URL]..., it begins to load grub and drops to a "grub> " shell, from which I have to do the following to get it to boot:
Code:
set root=(md1)
linux /vmlinuz root=/dev/md1 ro
initrd /initrd.img
boot
Then it boots up normally and I can use it like any other desktop. I've been over my grub.cfg and /etc/default/grub files and cannot find the issue. At this point I'm wondering if the fact that it's a RAID1 setup is keeping grub from finding its files.
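What eventually fixed it for me was purging and reinstalling grub; roughly the equivalent by hand is a sketch like this (make sure grub lands on the MBR of both RAID members and the config is regenerated):
Code:
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
sudo update-grub
# or reconfigure the package and tick both disks when prompted:
sudo dpkg-reconfigure grub-pc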
I burned a Fedora ISO onto a CD and I love it. Now I want to add it as a dual-boot on my hard disk. I divided drive C into two (formatted) partitions, one labeled Linux. During an attempted installation, I was able to find (through "edit") the partition called "Linux." But one of the next questions threw me: it asked for a "mount point." I have no idea what that means (and yes, I have read the guide on this site).
ACTUALLY... what I need to do is simply install into a partition I already set up. I don't find an option for that within the Fedora installation menu. I have 20 years of experience with Windows and try to keep up with everything, but Linux is totally new to me, so I don't understand the terminology. I have also been trying to find out how to create a dual-boot situation from the hard disk, where XP is still my default (for now, at least).
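For what it's worth, the "mount point" is just where a partition appears in the Linux directory tree; the installer needs at least one partition mounted at "/". A sketch of what the edit dialog would be given (the device names and sizes are only examples):
Code:
/dev/sda3   mount point: /      format as: ext4    (the partition labeled "Linux")
/dev/sda4   mount point: swap                      (optional, if a second partition was made)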
I'm a subscriber to a Linux magazine that sends me 2 DVDs of Linux distros each month. I want to try some of those just to pass some time. The issue is that out of the 52 GB partition on which Fedora 11 is installed, 42 GB is free. I want to take around 10 GB from that 42 GB so that I can install CentOS 5.3. How shall I partition my disk?
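A sketch of one way to do it, assuming Fedora sits on a plain (non-LVM) ext filesystem - if it was installed onto LVM, the logical volume would need shrinking with lvreduce instead: shrink the Fedora partition by about 10 GB from a live CD and leave the freed space unallocated for the CentOS 5.3 installer to use.
Code:
# from a live CD, with the Fedora partition (assumed to be /dev/sda2) unmounted:
e2fsck -f /dev/sda2
resize2fs /dev/sda2 40G      # shrink the filesystem first, then shrink the partition
# GParted does the filesystem and partition resize together in one step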
I am on a Windows 7 system trying to install Linux Fedora 15.
I am using the Fedora 15 live image, which I burned onto a DVD and booted. According to instructions I found in a tutorial, I go into the system tools and choose "Install to Hard Drive". I have previously shrunk the Windows system drive to free up approx. 200 GB of unallocated space. I did this through Control Panel >> Administrative Tools >> Computer Management >> Windows Disk Manager.
While I try to install Fedora on the hard drive I run into two problems. 1. I can't install it because it says "no free space available to create partition". It doesn't matter if I choose the auto-partition option or the custom-partition option.
Choosing the custom partitioning option, I don't know what partitions I need to create. Terminology such as LVM and PV is all new to me (see the sketch below).
The second problem is that after some random amount of time (it occurs at different intervals) I am forced to re-login as the live user, which kills the installation program and forces me to restart the installation process.
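For the first problem, a sketch of a minimal custom layout inside the ~200 GB of unallocated space (the sizes are just examples; a PV is simply the partition that LVM builds its volumes on, and LVM itself is optional):
Code:
/boot   500 MB            ext4, standard partition
/       50 GB             ext4 (standard partition, or an LVM logical volume inside a PV)
swap    4 GB
/home   remaining space   ext4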
I am a novice in Linux, but due to my academic requirements I had to install Linux (Fedora 8). I have 2 hard disks (80 GB & 20 GB); on the first HD, which is 80 GB, I have Windows XP, and the other one I partitioned and installed Linux on. Now the first problem is that whenever I start my PC I get an error which says "GRUB hard disk error"; however, when I restart the machine it's fine and gives me the boot options.
Secondly, the HD containing Windows was affected by a virus, so I had to format & reinstall XP. Strangely, after that I am not getting any boot options and it's as if Windows is the only OS running. But from Windows I can see that the partition on which Linux is installed is intact. So I assume something was deleted, maybe the Linux boot loader.
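That is what reinstalling Windows does: it overwrites the MBR where grub's first stage lived. A sketch of putting it back using the Fedora 8 install disc's rescue mode (Fedora 8 uses grub legacy; the partition that find reports is whichever one actually holds /boot/grub):
Code:
chroot /mnt/sysimage
grub
grub> find /grub/stage1    # reports something like (hd1,0)
grub> root (hd1,0)         # use whatever find reported
grub> setup (hd0)          # reinstall grub to the first disk's MBR
grub> quit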
I cloned F14 with Clonezilla from an 80 GB to a 320 GB HDD (both SATA disks), and then resized the partitions with GParted. But I cannot boot into Fedora on the new/bigger disk; it stops and the display shows "Loading stage 1.5", if I remember correctly. I tried to fix it with the live CD but with no effect.
Then I found the Super Grub Disk live CD, and with that I tried to use their fix, which was the same as with the Fedora live CD I tried before, again with no effect. Then I played around with Super Grub and found the option to boot GNU/Linux indirectly, and with that method I got results: it found my menu.lst file, I chose the kernel I wanted, and it boots into the desktop.
But I need a more permanent solution, because now I always have to use the same procedure with the Super Grub Disk CD to boot into my Fedora 14.
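A sketch of making it permanent: once booted into F14 via Super Grub Disk, rewrite grub legacy onto the new disk's MBR so the embedded stage 1.5 points at the resized partitions again (the device name is an assumption):
Code:
grub-install /dev/sda      # rewrites stage1/stage1.5 on the new disk's MBR
cat /boot/grub/device.map  # confirm it maps (hd0) to the new 320 GB disk before rebooting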
I have got hold of an extra HDD along with an HDD enclosure. I have tried looking for information on how to install Linux onto one but haven't been completely successful in my search, so I turn to all of you. I was also wondering if it's possible to set it up so I can use it on multiple computers, so I can use it for computer repair.
Is it possible to install GRUB in the MBR of the only bootable disk in the system, but load the configuration and images from another disk? Basically, I want to install GRUB on /dev/sda, but the menu and images will be under /dev/sdb2. Note: /dev/sdb is not bootable.
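This should work; a sketch assuming grub legacy and that /dev/sdb2 is mounted somewhere like /mnt/sdb2 - stage1 goes into sda's MBR while stage2 and menu.lst stay on sdb2:
Code:
grub-install --root-directory=/mnt/sdb2 /dev/sda
# the grub2 equivalent would be roughly:
#   grub-install --boot-directory=/mnt/sdb2/boot /dev/sda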
I'm just slightly confused here, but... what the? Why does installing grub-doc remove BOTH grub-pc and grub-common? So basically, it seems that by installing grub-doc I have uninstalled grub entirely (yes, it is still there as the bootloader, but I have no way of updating it now!) from my system. What's the conflict between grub-doc and grub-pc, such that grub-pc has to be removed?
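A sketch of how I'd check the declared conflict and get grub2 back (package names as on Debian/Ubuntu; the target device is an assumption):
Code:
apt-cache show grub-doc | grep -i -A1 conflicts   # see what the package actually declares
sudo apt-get install grub-pc grub-common          # this will pull grub-doc back out if they really conflict
sudo grub-install /dev/sda && sudo update-grub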
I have Windows and I installed Fedora 12 on a separate partition.
However, I had a problem with my Windows XP SP3 and had to reinstall Windows, which I did on my C: drive. However, when I reboot I no longer get the GRUB loader displayed, so I cannot boot into Fedora.
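The XP reinstall will have overwritten the MBR where grub was. A sketch of restoring it with the Fedora 12 install media's rescue mode (the target disk name is an assumption):
Code:
# boot the DVD with "linux rescue" and let it mount the installed system, then:
chroot /mnt/sysimage
grub-install /dev/sda     # rewrite grub to the MBR the Windows reinstall clobbered
exit
reboot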
I currently have two hard drives. One of them is a dedicated Windows 7 x64 500 GB drive. The other is a 152 GB drive that I plan to install Fedora on. When I booted from the live CD, I chose the option to use the entire 152 GB disk and to boot from the 152 GB disk. I'm not sure if I did something wrong there, but now GRUB won't detect Windows 7 or Fedora 12.
I would also like to leave the Windows 7 HDD untouched.
I previously had a non-working OpenSUSE installation on the 152 GB drive.
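A sketch of what I'd try (the drive and partition names are assumptions): from the live CD or rescue mode, reinstall grub to the Fedora drive only, leaving the Windows 7 drive's MBR alone, and chainload Windows 7 from grub legacy's config:
Code:
grub-install /dev/sdb          # the 152 GB Fedora drive; the Windows drive stays untouched
# then add an entry like this to /boot/grub/grub.conf:
#   title Windows 7
#     rootnoverify (hd0,0)
#     chainloader +1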
How do I correct my boot loader after installing Win7 on the free space available? I still have my Fedora 13 partition intact, but of course the Fedora option is not available. How do I re-install grub using the rescue DVD? I tried already.
How do I reconfigure grub when adding a disk to a machine where both disks have their own MBRs? I have two volumes: Disk 1 - actually mirrored RAID-1 drives managed by ICH9R on the motherboard; Disk 2 - a single drive managed by ICH9R on the motherboard, but without RAID. Disk 1 is the "old" disk containing WinXP on the first partition. The MBR of Disk 1 was created by Windows. Disk 2 was built on the machine while Disk 1 was unplugged. Disk 2 has Win7 on /dev/sda1 and Fedora 12 on /dev/sda7. Obviously, Disk 2 has grub installed on its own MBR.
When I plug-in both Disk 1 and Disk 2 at the same time, I would like to reconfigure grub so that it gives me the option to switch between WinXP on Disk 1, Fedora on Disk 2 and Win7 on Disk 2. (I may also want to install Ubuntu on another partition of Disk 1, but that's a separate issue.) The problem is that when I plug in Disk 1, Disk 1 becomes /dev/dm-0 and Disk 2 becomes /dev/sdc (instead of /dev/sda as when I installed it). (I don't think I can switch this order because I'm worried that Windows will become confused.) So, how do I keep all partitions the same and get them all to work from grub? On which MBR will I need to install grub? How do I configure it to see all 3-4 of my operating systems? Do I fix grub from the Fedora LiveCD?
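A sketch of what the extra grub.conf entry could look like, assuming grub stays on Disk 2's MBR and the BIOS boots Disk 2, so the fakeraid Disk 1 shows up to grub as (hd1) - the drive ordering is an assumption, and the map lines let WinXP keep believing it sits on the first BIOS drive:
Code:
# appended to Fedora's /boot/grub/grub.conf; the existing Fedora and Win7 entries stay as they are
title Windows XP (Disk 1)
  map (hd0) (hd1)
  map (hd1) (hd0)
  rootnoverify (hd1,0)
  chainloader +1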
Since FC1 I have installed Fedora on a separate HD on my PC without installing GRUB. I always made a boot disk, and since FC4 a boot CD, with mkbootdisk. If I try to make a boot disk via mkbootdisk --iso --device /tmp/bootdisk.$(uname -r).iso $(uname -r) under Fedora 12, I get a kernel panic when I try to start from this disc: Kernel panic: not syncing: VFS: Unable to mount root fs on unknown-block(0,0) PID: 1, comm: swapper Not tainted 2.6.31.5-127.fc12.x86_64 #1 Call Trace: then there are some hex values and the message that I should choose a proper root partition. If I start with the option linux root=correct_bootpartition I get the same kernel panic with the same message. I found out that it must have something to do with the kernel's .img file (the initrd image): in earlier versions of Fedora it was copied into the ISO image; that's not done anymore. ISO size F11: ~8.8 MB; ISO size F12: ~3.4 MB.
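A sketch of building the boot CD by hand when mkbootdisk leaves the initramfs out of the ISO (the paths are the usual Fedora 12 ones and the root= device is an assumption):
Code:
mkdir -p /tmp/bootcd/isolinux
cp /boot/vmlinuz-$(uname -r) /tmp/bootcd/isolinux/vmlinuz
cp /boot/initramfs-$(uname -r).img /tmp/bootcd/isolinux/initrd.img
cp /usr/share/syslinux/isolinux.bin /tmp/bootcd/isolinux/
cat > /tmp/bootcd/isolinux/isolinux.cfg <<EOF
default linux
label linux
  kernel vmlinuz
  append initrd=initrd.img root=/dev/sda2 ro
EOF
mkisofs -o /tmp/bootdisk.$(uname -r).iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table /tmp/bootcd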
After applying the latest round of updates last night, I turned on my laptop this AM (ATI X1300 Mobility graphics running the default drivers), selected the latest kernel in GRUB, and waited to log in... only problem is that instead of seeing the normal loading screen (blue with the Fedora icon filling in), I get a black screen mostly covered in multicolored rectangles. I don't believe there was a graphics driver update, so I'm rather confused as to what went wrong and how to proceed. I can still boot using a previous kernel.