2) Use MKINITRD to create a new INITRD file which loads the DMRAID module.
Neither solution showed any detail of how to accomplish this, which files to edit, or in what order I should tackle either one. With the second solution, I have another SUSE 11.2 installation on another hard drive. Would it be OK to boot into that and create a new INITRD with DMRAID activated, or would it be better to break the RAID-set, boot into one of the drives, create an INITRD for that RAID 1 system, and then recreate the RAID-set? The only issue I can see is that fstab, device mapper and GRUB would need the new pdc_xxxxxxxxxx value, which can be changed from the second installation.
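In case it helps to be concrete, what I've pieced together so far for the openSUSE side is roughly the following; the exact module list is my guess, so please correct me if it's wrong:
# add the device-mapper modules to INITRD_MODULES in /etc/sysconfig/kernel,
# e.g. INITRD_MODULES="... dm-mod dm-mirror"   (exact list is a guess on my part)
mkinitrd          # then rebuild the initrd for the installed kernels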
My system is:
Asus M4A78T-E with fakeraid SB750 controller
AMD Black Edition - AMD Phenom II X4 3.4 GHz Processor
2 x Samsung 1TB drives
Suse 11.2
I have Windows XP on another hard drive purely for overclocking and syncing the 1TB drives, but I can't actually boot into the SUSE system until this issue is sorted out.
I have 2 partitions on dmraid. I am not able to configure them to mount with YaST; the YaST partitioner gives an error stating that it can't mount a file system of unknown type. I am able to start the dmraid devices manually and mount them manually.
See bug:
https://bugzilla.novell.com/show_bug.cgi?id=619796 for more detailed info.
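For reference, what I do by hand looks roughly like this (the pdc_ set name and mount point are just placeholders):
dmraid -ay                                    # activate all detected fakeraid sets
ls /dev/mapper/                               # the partitions show up as e.g. pdc_xxxxxxxxxx1
mount /dev/mapper/pdc_xxxxxxxxxx1 /mnt/data   # mounting by hand works fine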
I've run into problems while installing 11.3 x64. The installer stops at "Searching for Linux partitions..."; to get around it I had to go back to 11.2. Anyway, I have installed 11.3 on another HDD (the 3 HDDs in RAID 5 had to be disconnected). When I go into the partitioner (11.3) and its device graph, I see two of the three RAID HDDs fine, with sda1, sda2 and so on, but the third one shows up without any partitions. If I do the same in 11.2, all three HDDs appear with all their partitions in that graph. Does anyone know a solution other than not installing 11.3?
I'm pretty new to FakeRAID vs. JBOD in a software RAID, and could use some help/advice. I recently installed 11.4 on a brand new server system that I pieced together, using Intel ICH10R RAID on an ASUS P8P67 Evo board with a 2500K Sandy Bridge CPU. I have two RAIDs set up: one RAID-1 mirror for the system drive and /home, and the other consists of four drives in RAID-5 for a /data mount.
Installing 11.4 seemed a bit problematic. I ran into this problem: [URL]... I magically got around it by installing from the live KDE version with all updates downloaded before the install. When prompted, I specified that I would like to use mdadm (it asked me); however, it proceeded to set up a dmraid. I suspect this is because I have fake raid enabled via the BIOS. Am I correct in this? Or should I still be able to use mdadm with BIOS RAID set up?
Anyway, to make a long story short, I now have the server mostly running with dmraid installed instead of mdadm. I have read many stories online that seem to indicate that dmraid is unreliable compared to mdadm, especially when used with newer SATA drives like the ones I happen to be using. Is it worth re-installing the OS with the drives in JBOD and then having mdadm configure a Linux software RAID? Are there massive implications one way or another depending on whether I install mdadm or keep dmraid?
Finally, what could I use to monitor the health and status of a dmraid? mdadm seems to have its own monitoring associated with it, from what I saw glancing over the man pages.
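For what it's worth, the only status checks I've turned up so far are the ones below; I'm not sure they count as real monitoring, so corrections are welcome:
dmraid -s                  # per-set status summary (shows ok/degraded)
dmraid -r                  # lists the member disks and their metadata format
mdadm --detail /dev/md0    # what mdadm would report for a native array, for comparison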
I have a TrueCrypt partition on an nvidia fakeraid RAID 0, so there is no data recognizable to Ubuntu's kernel on startup, and I think it is causing a big slowdown for me every time I boot up, which takes over ten minutes every time. Attached are bootchart results. What I would like to know is how to remove dmraid from the startup process. I can't find any scripts in /etc/rc*.d, and all of the information I can find on these forums is about how to enable dmraid on boot, not disable it.
Manually using dmraid takes nearly no time at all, which is what I did before this kernel upgrade, since Ubuntu didn't put dmraid in the startup process. I can't see why it takes dmraid ten minutes to detect RAID devices now! The only clue I can see is that in the startup process, with the splash screen disabled, it talks about examining inodes on both drives in the array and then goes on to spit out stuff about raid0, raid1, raid2, raid3, raid4 and on and on, so it seems like it is loading RAID arrays that don't exist and never have. If any bootchart gurus have any advice, please feel free to throw me a bone; I am in fear of restarting! Edit: for some reason ubuntuforums shrank my images so that they are unreadable by anybody except those with a large projector.
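The closest thing I have to a plan is below, but I'd appreciate confirmation before trying it, since I'm only guessing that the package's initramfs hook is what triggers the scan:
sudo apt-get remove dmraid    # the package ships the boot-time activation hook
sudo update-initramfs -u      # rebuild the initramfs without it
# some posts also mention a 'nodmraid' kernel boot parameter as a less drastic test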
How do I install a Linux distro that doesn't natively support Intel fakeraid, using dmraid and a live disk? The RAID is already set up; it's just that BackTrack can't find it because it doesn't have the right software.
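What I think I'm supposed to do from the live session is roughly the following, though the package manager command and the isw_ device names are guesses on my part:
apt-get install dmraid                          # or whatever the live disk's package manager is
dmraid -ay                                      # read the BIOS RAID metadata and create the mappings
ls /dev/mapper/                                 # Intel sets usually show up as isw_..._VolumeN
mount /dev/mapper/isw_xxxxxxxx_Volume0p1 /mnt   # partition name is illustrative only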
I have two Ubuntu 8.04 servers currently. I purchased some 500 GB hard drives and a RAID cage just yesterday and plan to change my current servers over to RAID 1 with those drives. Much like everyone else, I have the nVidia RAID built onto the motherboard that I plan to use. There are many how-tos out there on setting up RAID with dmraid during the install process, but what would a person do if they are simply changing over to a RAID system afterwards? I installed dmraid a few days ago on the server. It seems that even though I have RAID shut off in the BIOS, it saw the one and only drive as a mirrored array. When I rebooted today, the server would not start; it had "ERROR: degraded disks.." and so on. Then GRUB tried to start the partition based off the UUID of the drive (which was not changed), and it said "device or resource busy" and would not boot.
This problem was corrected by going into the BIOS and turning RAID on. When I rebooted, the nVidia RAID firmware started and said degraded. I went in there and deleted the so-called mirror, keeping the data intact. I rebooted, disabled the RAID feature in the BIOS, and then the server loaded normally with a message "NO RAID DISKS" - but at least it did boot! So this leads me to believe that trying to turn on a RAID-1 array (just a mirror between two disks) may be a challenge since I already have the system installed. I will be using Partimage to make an image of the current hard drive and then restore it to one of the new drives.
The next question is: is it a requirement that dmraid even be installed? While I understand that the nVidia RAID is fake-raid, if I go into the nVidia RAID controller and set up the mirroring between the two disks before I even restore the files to the new disk, will both drives be mirrored? However, I do think that I'd probably have to have dmraid installed to check the integrity of the array. So, I'm just a bit lost on this subject. I definitely do not want to start this server over from scratch, so I'm curious to know if anyone has any guidance on simply making a mirrored array after the install procedure, what kind of GRUB changes will be needed (because of the previous "resource is busy" message), and whether installing dmraid is even a requirement.
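Related to the "degraded disks" scare above, the only commands I've found for inspecting and cleaning up the nVidia metadata are these two; please tell me if I've misread the man page, because the erase step obviously worries me:
dmraid -r              # show which disks carry BIOS RAID metadata, and of what format
dmraid -rE /dev/sda    # erase stale metadata from a disk (metadata only, but still scary)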
I've googled my problem but I'm not sure I can find an answer in layman's terms. So here's my noob, simple question; please answer it in semi-noob-friendly terms. I've been trying to install Ubuntu for a while on my desktop PC. I gave it another go with 10.10, but I always have the same problem:
I've got two RAID sets connected to an ICH10R chip and they work fine in Windows (2 Samsung 1 TB + 2 Raptor 75 GB). Upon installation, dmraid only sets up the first RAID set (the Samsung array) but not the second one (the clean Raptors intended for Ubuntu). I don't have any other installation option; all my SATA connectors are unavailable. So, is there a manual install solution? Can I force dmraid to mount the second RAID set and not the first one? I think I read somewhere that this was a dmraid bug, but I can't find it anymore.
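From the man page it looks like dmraid can be told to activate a single named set, which is what I was hoping to do; the set names below are made up:
dmraid -s                           # list the discovered sets and their names
dmraid -an isw_xxxxxxxx_Samsung     # deactivate the Samsung set if it was brought up
dmraid -ay isw_xxxxxxxx_Raptors     # activate only the Raptor set for the install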
I'd been using 10.04 for a while and then one day the computer wouldn't boot. It just loaded up to a low res purple screen with the loading dots on it and froze. I managed to get all my files back and everything and re-install and it was working fine until I enabled the graphics card and then the same problem occurred. I've isolated the problem to the graphics card. It's never given me issues before and I've been running Ubuntu for about 2 1/2 years now so I was kind of surprised.
It's an NVIDIA card, by the way. Any suggestions as to what I should do? I need hardware support for graphics because I do some work in 2D and 3D and as such need to be able to do that stuff on my PC. I don't want to have to keep reinstalling to check if the graphics card is working again yet, but it's the only thing I can think of =(
I have a relatively new server (Ubuntu Server 10.04.1) with a "/backup" partition on top of LVM on top of an MD RAID 1. Everything generally works, except that it freezes during the fsck phase of bootup, with no errors. I've given it 20 minutes or so. If I press 's' to skip an unavailable mount (documented here), it reports that /backup could not be mounted. There are no LVM-related messages in /var/log/messages, syslog, or dmesg.
When I try to mount /backup manually, it reports that the device (/dev/vg0/store) does not exist. Apparently the volume group was never activated, though all documentation seems to claim it should happen automatically at boot. When I run "vgchange vg0 -a y", it activates the volume group with no issue, and then I can mount /backup. /etc/lvm/lvm.conf is unchanged from the defaults. I've seen posts mentioning the existence of an /etc/udev/rules.d/85-lvm2.rules, but no such file exists on my server, and I'm not sure how I would go about creating it manually, or forcing udev to create one. There are some open bugs describing similar problems, but surely it doesn't happen to everyone or there'd be many more. [URL]
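To summarise, the manual recovery that works every time (which is what makes the boot failure so odd) is just:
vgscan                 # finds the volume group without complaint
vgchange -a y vg0      # activates it, and /dev/vg0/store appears
mount /backup          # the fstab entry then mounts normally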
I just installed Debian Jessie (8) and find that the system cannot resolve URLs. Checking with the "host" command confirms this. I am connected to a Comcast network in my house. I have looked over some of the documentation and it seems like there are several conflicting ways of setting up the resolver. It is not clear if the install process did this for me. I am assuming that DHCP was installed, as the computer had no problem getting IPs for its wlan0 and eth0 ports and I can telnet in from an in-network PC. I have a few questions:
1) I see /sbin/dhclient running, so I assume this is the DHCP daemon in use? 2) Before I dig too deep, I wonder if an internal firewall is involved. Is there a command to shut it down temporarily? 3) What would I look for to determine whether a resolver was installed, and which one?
I can see an apparent Comcast DNS server from my Windows machine, but I don't believe I can code it into resolv.conf on Debian as the OS now overwrites this (I already tried and failed!).
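In case it helps, these are the read-only checks I was planning to run next; I'm not certain they are the right places to look:
ps aux | grep dhclient    # question 1: confirm dhclient really is the DHCP client in use
cat /etc/resolv.conf      # question 3: see which nameservers the resolver has been handed
iptables -L -n            # question 2: list firewall rules; empty chains with ACCEPT policies
                          # would suggest no internal firewall is filtering anything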
Upon turning on compiz, all the title bars above windows are lost, I am using the drivers on NVIDIA's website for my Geforce MX 4000 and I am using gnome. I tried doing what it says here, but the title bars are still lost.
I've been having problems connecting to my router with my wireless card. Sometimes knetworkmanager attempts to connect to my wireless via its saved profile, but it just stays on "Activating" for about 45 seconds and then just stops. This only started happening a few days ago, so maybe the new kernel update has something to do with it?
I've tried debugging the problem myself and have found that if I reboot my router, knetworkmanager can connect to it immediately. Something else interesting I found was that if I assigned an IP address and DNS manually in the saved profile, it would connect with no problem (no reboot of the router required), so it indicates there is a problem with getting network settings. I've confirmed that the wireless card is not hard or soft blocked through rfkill.
I was using the stock ath5k driver when this problem started happening and even went as far as a complete reinstall, but ironically enough, on the first boot from a fresh install my wireless could not connect, with the problem described above. I've since moved to the compat-wireless drivers but the problem remains.
I checked a couple of logs; one log file of significance was the wpa_supplicant log, which was full of these messages: From all my debugging I can only assume that the kernel update is a possible cause for all of this, as the problem occurring on the first boot of a fresh install sounds like a general bug. I've got all of the requested information about my wireless card below; hope I've got everything:
Is it possible to map the "Present Windows - All Desktops" action, which is invoked by activating the top-left corner of active screens, to a keyboard shortcut? I basically want to ape OS X and have an Exposé-like button; the screen edge works fine for now, but I would love a keyboard shortcut as well. Is it possible? Also, when I set up a custom keyboard shortcut to run YaST, I initially tried using the same command as the KDE menu: /usr/bin/xdg-su -c /sbin/yast2
However, when I use the keyboard shortcut to launch this command, I get problems when trying to use zypper after the initial launch via the keyboard shortcut (from the YaST GUI or CLI). The error basically states that another instance of yast2 already exists. The problem is only with the app/repo management modules, though; for instance, I can launch YaST (using the keyboard shortcut or the start menu) and access hardware info, sudo users etc., I just can't access the software/repo management stuff. Even when I kill the offending PID from the CLI I still get the error and have to restart the box to rectify it properly. So I changed the keyboard shortcut command to: /usr/bin/kdesu /sbin/yast2
and this seems to have resolved the problem. My second question is: what is the difference between the two in this particular instance, and why does one cause problems (xdg-su) while the other does not (kdesu)?
I have a Dell Inspiron 530 with onboard ICH9R RAID support. I have successfully used this with Fedora 9 in a striped configuration. Upon moving to Fedora 13 (fresh/clean install from scratch), I've noticed that it is no longer using dmraid. It now appears to be using mdadm. Additionally, I need to select "load special drivers" (or something close to that) during the install to have it find my array, which I've never had to do before with F9. While the install appears to work OK and then subsequently run, my array is reporting differences, presumably because it is trying to manage what mdadm is also trying to manage. More importantly, I can no longer take a full image and successfully restore it as I could with the dmraid F9 install. Is there any way to force F13 to use the dmraid it successfully used previously?
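The only lead I've found so far is an installer boot option; I'm not sure I've remembered its name correctly, so treat this as a guess rather than a known fix:
# appended to the installer's kernel line at the boot menu (press Tab to edit it):
noiswmd    # supposedly tells anaconda to leave Intel BIOS RAID to dmraid instead of mdraid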
I'm dual-booting my laptop and unable to change the boot records on the drive. Not because I don't know how, but because my primary OS (Win7) will fail to boot.
I have the drive partitioned as follows:
sda1 = Win7 system (default install)
sda2 = Win7 main (default install)
sda3 = swap
sda4 = extended (I think that's what it's called)
sda5 = / (ext4)
What I need is a boot CD, or preferably GRUB installed on a 256 MB thumb drive, with the option to load the installed system from sda5.
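What I have sketched out so far, assuming legacy GRUB, that the stick shows up as /dev/sdb while I prepare it, and that (hd1,4) is what sda5 becomes when booting from USB (the BIOS may well number the disks differently):
mount /dev/sdb1 /mnt/usb
grub-install --root-directory=/mnt/usb /dev/sdb
# then an entry in /mnt/usb/boot/grub/menu.lst along these lines
# (kernel/initrd paths depend on the distro on sda5):
title  Installed system on sda5
root   (hd1,4)
kernel /boot/vmlinuz root=/dev/sda5
initrd /boot/initrd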
I tried to install 11.3 on my Acer Aspire 7530 notebook to dual-boot with XP.
I made 4 partitions: one for XP, and the three for Linux were made automatically. Before installation I got a warning that the partition wasn't entirely below 128 GB; I installed anyway to give it a try.
The installation froze at 92%, and afterwards the laptop wouldn't boot.
Now I've formatted the hard disk and installed Windows on a partition, leaving a free, unformatted partition of 100 GB.
Out of curiosity and stupidity, I converted 2 extended partitions to LVM in GParted. Now I can't boot into X, and there's only a GRUB command line during boot.
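Right now the only way I can think of to get the system up at all is to boot it by hand from that GRUB prompt; assuming legacy GRUB, I believe the sequence is something like this (partition numbers are guesses, and tab-completion should reveal the real kernel file name):
grub> root (hd0,4)                          # the partition that still holds /boot
grub> kernel /boot/vmlinuz root=/dev/sda5
grub> initrd /boot/initrd
grub> boot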
I went back to openSUSE and installed it on my machine (Win7 was installed first), but I can't boot it from Win7. openSUSE doesn't boot from Win7 (like Ubuntu does), and I can't see the NTFS Win7 partition from openSUSE. Why is openSUSE so complicated about dual booting?
I have a Dell laptop with Windows XP installed, and for various reasons (Help: I borked my WindowsXP boot when installing OpenSUSE 11.3) I can not install a GRUB boot loader to the first hard drive (hd0).
I currently have a second hard drive in this laptop with a perfectly working OpenSUSE 11.3 instance, but no way to boot into it. I remember that back in ancient times, a common option with Linux distros was to create a boot floppy to boot into Linux rather than installing GRUB or LILO to the MBR. Since this laptop doesn't have a floppy drive, I'd like to do the same thing with a USB stick. Is there any way to install GRUB (or something similar) to a USB stick? What I am not asking here is whether I can put a full, bootable Linux instance on a USB drive - I only want a boot loader on USB that boots the appropriate partition on (hd1).
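My rough idea, assuming legacy GRUB on the stick and that openSUSE's own boot loader already sits in the boot sector of its root partition (the "Boot from root partition" setup), would be a menu.lst entry on the stick that simply chainloads it; the (hd1,0) numbering is a guess:
title  openSUSE 11.3 on the second drive
rootnoverify (hd1,0)       # the partition holding openSUSE's /boot and its boot sector
chainloader +1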
I'm trying to dual-boot Windows 7 with openSUSE 11.4. I was told that I should install SuSE after Windows 7, as it takes care of the boot loader and automatically detects my Windows installation, and not vice-versa. But that is not true in my case.
So I had 2 hard disks: one had Windows 7 installed and one was empty, so I decided to put openSUSE 11.4 on the empty hard disk and dual-boot it with the Windows 7 I already had installed. I downloaded the DVD, put it on a USB stick and installed SuSE on the other hard disk normally. It detected my Windows installation on my main hard disk, but I didn't touch that; I only formatted my other hard disk to ext4.
After the installation it booted automatically into SuSE, but now, on every fresh restart, the system boots automatically into Windows. Methods I have already tried to resolve this that didn't work:
1. Booted from the DVD and selected "Upgrade" rather than "New Installation" so I could boot into my SuSE installation again, which did work; checked my "Boot Loader" options in YaST and selected the "Boot from MBR" option instead of the "Boot from root partition" option. That did NOT work.
2. Used the same method to boot into SuSE with the "Upgrade" option, opened up the terminal and tried to install GRUB manually again using this link
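As far as I understand it, the manual approach amounts to the classic legacy-GRUB shell sequence, something like the following; the device numbers are whatever the find step reports, not necessarily these:
grub
grub> find /boot/grub/stage1    # reports e.g. (hd1,0) for the SuSE disk
grub> root (hd1,0)
grub> setup (hd0)               # writes GRUB to the MBR of the BIOS boot disk
grub> quit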
things "seem" to work, first time I've really ever used dmraid (usually mdraid), but I'm worried about this error
dmraid -ay
RAID set "jmicron_STORAGE2 " was activated
The dynamic shared library "libdmraid-events-jmicron.so" could not be loaded:
libdmraid-events-jmicron.so: cannot open shared object file: No such file or directory
Two things: first, why, no matter what I name the RAID in the JMicron BIOS, does it put a bazillion spaces after it; and second, I cannot find the missing lib anywhere, even though I've installed all the dmraid* packages.
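For completeness, the only checks I could think of to track the library down (as far as I understand it, it is only a dmeventd monitoring plugin, so the activation itself still works without it):
find / -name 'libdmraid-events*' 2>/dev/null    # see whether any package installed it anywhere
rpm -qa | grep -i dmraid                        # or dpkg -l 'dmraid*' on Debian-based systems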
About a year ago I bought a new compy and decided to get on-motherboard RAID, and by golly I was gonna use it, even if it wasn't worth it.
Well, after one year, two upgrades, a lot of random problems dealing with RAID support, and a lot of articles read, I have come to my senses.
The problem: I have a fakeraid using dmraid, RAID 1, two SATA hard drives. They are mirrors of each other, including separate home and root partitions. I actually found the method I think I had used here: [URL]
The ideal solution: No need to reinstall, no need of another drive, no need to format.
My last resort: Buy a drive, copy my home directory, start from scratch, copy my stuff over.
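The method I keep seeing described, and which I haven't dared to try yet, boils down to erasing the BIOS-RAID metadata and then pointing everything at the plain disks; roughly (from a live CD, with good backups first):
dmraid -an             # deactivate the RAID set
dmraid -r              # confirm which disks carry the metadata
dmraid -rE /dev/sda    # erase the metadata from each member disk in turn
dmraid -rE /dev/sdb
# afterwards fstab, the boot loader and the initrd have to reference /dev/sdaN
# (or UUIDs) instead of the /dev/mapper/... names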
I'm trying to rescue files from an Iomega NAS device that seems to be corrupted. This is the StorCenter rack-mount server: four 1 TB drives, Celeron, 1 GB of RAM, etc. I'm hoping there's a live distro that would allow me to mount the RAID volume in order to determine whether my files are accessible. Ubuntu 10.10 nearly got me there but reported "Not enough components available to start the RAID Array".
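In case it helps anyone suggest the next step, this is roughly what I was going to try from the live session; the device names are guesses and I'd rather keep everything read-only:
sudo mdadm --examine /dev/sd[abcd]1    # inspect the md metadata on each member disk
sudo mdadm --assemble --scan --run     # --run should start the array even if it's degraded
sudo vgscan && sudo vgchange -a y      # only needed if the volume turns out to be LVM on top of md
sudo mount -o ro /dev/md0 /mnt         # then mount read-only (device name is a guess)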
I got a serious problem after updating my openSUSE 11.2. After the update a message appeared saying to restart my machine for the updates to take effect, and after the restart the system doesn't boot into the GUI workspace; it boots into a text-only screen named "Emerald - Kernel 2.6.31.8.0.1 - desktop (tty1)". What can I do to boot my machine into the GUI again?
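What I was planning to try from the text console, assuming 11.2 still uses the classic sysvinit runlevels:
init 5                           # switch to the graphical runlevel right away
grep initdefault /etc/inittab    # check the default; it should read id:5:initdefault: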
I'm new to openSUSE and also fairly new to Linux in general. I installed openSUSE 11.2 on a secondary machine and I really like it. However, during booting it stops somewhere halfway, giving me only a black page but with a functioning cursor arrow. I can't do anything with it, though. I rebooted in recovery mode and managed to boot up as root using the command 'startx'. How can I get back to "my own" login from here, with my own settings etc.? There's obviously something missing during boot-up, but where do I look?