Hardware :: AMD SB750 RAID And F12 Install - Unable To Find Any Suitable Storage Devices
Apr 17, 2010
The intention is to have this system dual-boot. When I first put it together, I decided to set up a RAID5 array spanning 3 SATA drives. I installed Windows 7 first and decided I'd get to Linux later. I left 150 MB or so at the beginning of the array for /boot, and about 200 GB at the end for my Linux install. Now I'm getting to the Linux install. My distro of choice is Fedora 12. I start the setup, and at the point where it's time to partition, the installer tells me that it's unable to find any suitable storage devices.
I Ctrl-Alt-F2 to a console and run fdisk -l. fdisk reports three individual drives which all have partitions already. All have free space. None of it makes sense. So I turned to Google and found some threads explaining that this chip doesn't run a true RAID; rather, it's what's been referred to as fake RAID, meaning it depends on the Windows driver to actually present the array to the OS. The threads suggest that the best way around that on Linux is to break the array and use LVM instead.
That's all well and good, but I lose two things in doing that. First I lose the resiliency of RAID5, and second, what does that do to my Windows install? I've considered moving all of my data from Windows to other machines and just starting from scratch, but I'd much prefer a way of using the chip's fake RAID in Linux. Is there a driver or module I can install to make this happen?
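Before breaking the array, it may be worth checking whether dmraid can see the SB750's metadata; the Fedora install media ships dmraid, which activates many BIOS ("fake") RAID formats, including AMD's. A minimal sketch, run from the Ctrl-Alt-F2 console of the installer (the resulting device names vary by metadata format):

```shell
# Scan the raw disks for BIOS RAID (fakeraid) metadata
dmraid -r

# Activate every RAID set dmraid recognizes; on success the array
# appears as a device-mapper node under /dev/mapper/
dmraid -ay

# Confirm the mapped array device exists before retrying partitioning
ls -l /dev/mapper/
```

If `dmraid -r` reports nothing, the metadata format may simply be unsupported, and breaking the array really is the remaining option.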
I am trying to install some NVIDIA drivers on my machine. I went through the process and got this message:
Code:
WARNING: Unable to find a suitable destination to install 32-bit compatibility libraries. Your system may not be set up for 32-bit compatibility. 32-bit compatibility files will not be installed; if you wish to install them, re-run the installation and set a valid directory with the --compat32-libdir option.
Following a suggestion at URL... I ran this in the terminal to create the 32-bit library tree:
Code:
sudo apt-get install ia32-libs

E: Package 'ia32-libs' has no installation candidate.
I really just want to get these NVIDIA drivers up and running. I already installed and updated the kernel headers just to be able to half-way install the drivers (the second monitor works now).
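For what it's worth, ia32-libs was eventually dropped in favor of multiarch, so on newer Debian/Ubuntu releases the 32-bit library tree is created differently. A hedged sketch, assuming a multiarch-capable apt (the exact set of :i386 libraries the NVIDIA installer wants may differ):

```shell
# Register the 32-bit architecture alongside the 64-bit one
sudo dpkg --add-architecture i386
sudo apt-get update

# Install the core 32-bit C library; further :i386 packages can be
# added the same way if the installer asks for more
sudo apt-get install libc6:i386
```

With the 32-bit tree in place under /usr/lib/i386-linux-gnu, the NVIDIA installer's --compat32-libdir option can point there.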
I'm working on a server and noticed that the RAID5 setup is showing 4 Raid Devices but only 3 Total Devices. It's on a fully updated CentOS 5 system that only has three SATA drives, as it cannot hold any more. I've done some research but am unable to remove the fourth device, which is listed as removed. The full output of `mdadm -D /dev/md2` can be seen below. I've never run into this situation before. Does anyone have pointers on how I can reduce the Raid Devices count from 4 to 3? I have tried
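One thing that sometimes clears a slot listed as "removed" is simply telling mdadm the array should have three members again. A sketch, not verified against this particular metadata version, so check the `mdadm -D` output after each step:

```shell
# Ask mdadm to shrink the member count back to 3
mdadm --grow /dev/md2 --raid-devices=3

# If mdadm refuses because the array's capacity assumes 4 members,
# the array (and the filesystem on it!) must be shrunk first --
# back everything up before attempting this:
# mdadm --grow /dev/md2 --array-size=<smaller-size>
# mdadm --grow /dev/md2 --raid-devices=3 --backup-file=/root/md2-grow.bak
```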
I'm interested in buying new hardware for my company. The old server (now 10 years old) should be replaced with a new one. So far I have been looking at different hardware suppliers, boards, and various other places. I found a Tyan board [URL]. The hardware spec is quite interesting and the board would fulfill our requirements.
How well will both storage devices be supported by Ubuntu or Debian?
I have a problem which slows my boot into Linux. Before the Ubuntu logo loads, I see a quick flash in the console with the words: "Warning: unable to find a suitable fs in /proc/mounts, is it mounted? Use --subdomainfs to override." What should I do to fix that warning?
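That warning comes from AppArmor's init script, which looks for securityfs in /proc/mounts. Two hedged things to check (service and mount names as used on Ubuntu of that era):

```shell
# Is securityfs mounted? If not, mount it and see if the warning goes away
grep securityfs /proc/mounts || \
    sudo mount -t securityfs securityfs /sys/kernel/security

# Alternatively, if you don't use AppArmor at all, disabling its
# init script silences the message entirely
sudo update-rc.d -f apparmor remove
```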
I'm looking for advice on which drives to add to my server for software RAID5. I would like to use 2 TB drives for the array. The server currently boots off a RAID1 array, and I have a couple of other drives mounted until I build a RAID5 array with new drives. I've read horror stories about using the Western Digital WD20EARS and Seagate ST32000542AS. So I'm wondering which large drives are best to use in software RAID?
It's been a real battle, but I am getting close. I won't go into all the details of the fight I've had, but I've almost made it to the finish line. Here is the setup: an ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB RAM. I recently added an OCZ Agility SSD and defined a RAID1 virtual disk on the 1 TB WD HDD drives, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into "Try Ubuntu" mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the RAID components and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, grub2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell grub to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but it shows the two component disks of the array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL]. To recap: my problem is that after grub2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses, and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do but have not found anything. It may be some timing issue, but I am so tired of beating my head against this wall.
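In case it helps to compare notes, the local-top technique described above usually boils down to a small script along these lines (a sketch reconstructed from that approach, not the exact file from the blog):

```shell
#!/bin/sh
# /etc/initramfs-tools/scripts/local-top/dmraid
# Activate BIOS RAID sets inside the initramfs, before the
# root filesystem is mounted.

PREREQ=""
prereqs() { echo "$PREREQ"; }

case "$1" in
    prereqs)
        prereqs
        exit 0
        ;;
esac

# Activate all recognized fakeraid sets
/sbin/dmraid -ay
```

After creating it, mark it executable (chmod +x) and rebuild the initramfs with `sudo update-initramfs -u`, then check whether the pause and the "no block devices found" message persist.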
My friend bought a new PC, and he's dual-booting Ubuntu 10.04 LTS and Windows 7 Ultimate. At first I tried partitioning the hard drive using fdisk in Ubuntu, but it couldn't find hda. Looking in /dev/, there was no hda, nor any sd* devices. I looked in gparted and then tried the installer, and both came up with nil. I installed Windows, updated the BIOS using the utility that came with the mobo, and tried to see whether Ubuntu could detect the hard drive; it couldn't. I even turned ACPI off when booting, and that didn't work either.
The motherboard is a new Gigabyte with 3 PCIe slots and USB 3.0.
I've been running F10 with a four-disk RAID5 setup that has been working fine. However, when I try to do a fresh install of F11, I can't get past the "Select Country" and "Select Keyboard" screens of the GUI: a message says "Detecting Storage Devices" and then throws up an error. I can't give you the full error, because when I click Save the "Detecting Storage Devices" box appears and I cannot select anything under it or enter my Bugzilla account. However, when I plug in just the hard drive that has XP on it, the installer continues swimmingly. I've backed up everything, so if I have to I could zero out the drives, but they're all rather large, so it would take an inordinate amount of time.
I'm installing F12 on a new HP ML-350 G6. The machine has 4 GB of memory and a Smart Array P410i SAS controller. There are two 72 GB disks set up as a RAID1. I'm using a full install DVD. There are no storage devices other than the RAID, a USB tape drive, and the DVD drive. When I try to install F12, it hangs after setting the keyboard type with "Finding Storage Devices". I've let it sit for an hour or more and it doesn't get beyond that.
F11 installs on the machine with no problem. I've used the F12 DVD on other machines with no problem (but not on any other ML-350s). If I switch to a console session, dmesg shows that the driver for the controller (HP CISS) has loaded and the RAID drive has been recognized. The appropriate devices (/dev/cciss/c0d0, /dev/cciss/c0d0p0, etc.) are all present in /dev.
I'm using Fedora 15 64-bit. The problem is that when I put in a USB stick (directly into a USB port, front or back) or an SD memory card via a card reader, it takes a long time to auto-mount: about 30 seconds. I've tried a few different USB sticks and memory cards. Once mounted, they work fine. This is a new install that has been running for a few weeks, but the problem only seems to have started in the last few days. Also, not sure if it's related, but Shotwell now takes about 30 seconds to start: the screen comes up, but the interface is unresponsive for around 30 seconds. The USB and Shotwell problems seem to have started at the same time.
I'm still trying to find out if my Coby MP3 player will actually play MTV video files, as advertised.
ffmpeg -formats does list mtv, but the only command I have really ever used was one to convert a video to an MP3, so I tried:
Code:
ffmpeg -i test.mp4 -acodec copy output.mtv
It returns:
Code:
Unable to find a suitable output format for 'output.mtv'
I can't find any MTV files online, for purchase or free for that matter, so I know this is all pretty obscure, but shouldn't there be a way to convert to them, since ffmpeg lists the mtv format?
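The -formats listing distinguishes read and write support, which may be the whole story here: a format flagged only "D" can be demuxed (read) but not muxed (written). A quick check:

```shell
# 'D' = ffmpeg can read (demux) the format, 'E' = it can write (mux) it.
# If the mtv line shows only 'D', ffmpeg cannot create .mtv files,
# which would explain the "Unable to find a suitable output format" error.
ffmpeg -formats 2>/dev/null | grep -i mtv
```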
I'm getting very frustrated trying to install F12 on my work PC. It's a Dell Precision 670 with a 3ware 9550SXU RAID controller running RAID5. It consistently crashes at the point where it says "Finding Storage Devices", with a message about mismatched sizes. What could I try? I had no problem on this machine with Fedora 11, so what's changed? Just for info, by the way, there are two RAID volumes on this system: a 750 GB volume that contains my Windows Vista installation in a 500 GB partition (NTFS), plus the free space I'm using for Fedora, and a 2 TB data volume with a single data partition (NTFS).
Running various applications, including OpenOffice, I need to open files from my external hard drive from within the application itself. But the file menus of all my applications list only the files of my primary hard drive, and I can't look at other drives. Surely there must be some way to do this.
I am running Lenny. USB storage devices are painfully slow, and if the data to be copied is above 4 GB, it works on the transfer for more than half an hour and then comes up with an error dialog (saying something like "file size is too big"). The problem exists in both reading and writing.
I did google a bit, and here is the output of lsmod | grep hci:
ehci_hcd    28428  0
uhci_hcd    18672  0
usbcore    118192  4 usb_storage,ehci_hcd,uhci_hcd
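Separately from the speed issue, an error right at the 4 GB mark is characteristic of FAT32, which caps individual files just under 4 GiB regardless of transfer speed. Checking the drive's filesystem would confirm it (the mount point and device name below are examples):

```shell
# Show the filesystem type of the mounted USB drive
df -T /media/usbdisk

# Or query the raw partition directly
sudo blkid /dev/sdb1
```

If it reports vfat, files over 4 GiB cannot be stored there no matter how long the copy runs; reformatting the drive as ext3 (or splitting the files) avoids the limit.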
I'm trying to get a complete overview of booting so I can multiboot. An explanation of the hardware that stores data and the hardware that runs it with the paths the data takes would be awesome!
Here are some quotes that are not comprehensive.
Quote from [url] "When the processor first starts up, it is suffering from amnesia; there is nothing at all in the memory to execute. Of course processor makers know this will happen, so they pre-program the processor to always look at the same place in the system BIOS ROM for the start of the BIOS boot program. This is normally location FFFF0h, right at the end of the system memory. They put it there so that the size of the ROM can be changed without creating compatibility problems. Since there are only 16 bytes left from there to the end of conventional memory, this location just contains a "jump" instruction telling the processor where to go to find the real BIOS startup program."
System memory is your RAM, is it not? Why are they being so specific about the address location in the firmware that the BIOS uses? An external EEPROM on the board is totally different from RAM, is it not? Does the BIOS data travel to a specific RAM location?
Is there a small processor connected to the BIOS, or is everything run by the main CPU?
What exactly is the "chipset" that is referred to in booting?
I am running Debian on a G3 Mac. When I set the screen resolution to 1024x768 I cannot see everything; for instance, the scroll bar in Iceweasel is hidden. So I switched the resolution to 800x600, but then I load up Evolution and find that the forward button isn't visible. Is there a way to get a custom resolution that works with everything?
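If the X server supports RANDR, a custom modeline can sometimes provide an in-between resolution. A sketch; the output name (VGA-0) and the modeline values are examples, so copy the exact line that cvt prints and the output name that xrandr reports:

```shell
# Generate a modeline for the desired resolution
cvt 1024 768

# Register the new mode, attach it to the output, and switch to it
xrandr --newmode "1024x768_60.00" 63.50 1024 1072 1176 1328 768 771 775 798 -hsync +vsync
xrandr --addmode VGA-0 "1024x768_60.00"
xrandr --output VGA-0 --mode "1024x768_60.00"
```

On an older G3 with a legacy video driver RANDR may not be available, in which case adding a Modeline to xorg.conf is the fallback.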
I've been poking around for the last week trying to find suitable small-business accounting software that will work for us. To make a long story short, I stumbled upon SQL-Ledger, and it looks like a very good application, just what we're looking for... and lo and behold, it's in the repositories! Anyway, I installed the SQL-Ledger package, but unfortunately there's no way to load or open the app. Synaptic shows it's installed, but there's no GUI or any reference in the menu. Searching using the filter in the menu turns up nothing. So I'm assuming SQL-Ledger just needs a GUI associated with it to open it, right?
I am currently running Lucid, and on every single attempt to play any MP3, it claims that it cannot find a suitable plugin, which is weird because I am nearly 100 percent certain that I installed all of the restricted extras. The funny part is that it is trying to install a Sound Blaster VOC plugin?
I have the latest Ubuntu (10.x) and am trying to set up a RAID5 to use as storage. I have three 1 TB drives along with the 160 GB OS drive. Is what I want to do possible, and is there a GUI to perform this, or clear instructions on how to accomplish it? I am a novice when it comes to Linux but am trying to wean myself off of Microsoft.
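Yes, this is possible with Linux software RAID (mdadm), and on 10.04 the Disk Utility GUI (Palimpsest) can create mdraid arrays graphically. The command-line route is short, though. A sketch; the device names are examples (check with `lsblk` or `fdisk -l`), and the --create step destroys whatever is on those disks:

```shell
# Create a 3-disk RAID5 array from the three 1 TB drives
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Format and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/storage
sudo mount /dev/md0 /mnt/storage

# Record the array so it assembles on boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```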
I am in a situation where I am stuck with an LVM cleanup process. I know a lot about AIX LVM, but this is the first time I am working with Linux LVM2. The problem is that I created two RAID arrays on the storage, which appeared as mpath0 and mpath1 devices (multipath) on RHEL. I created logical volumes and volume groups, and everything was fine until I decided to clean the storage arrays and ran the following script:
#!/bin/sh
cat /scripts/numbers | while read numbers
do
    lvremove -f /dev/vg$numbers/lv_vg$numbers
    vgremove -f vg$numbers
    # braces are needed here: $numbersp1 would expand an (unset)
    # variable named "numbersp1" rather than appending "p1"
    pvremove -f /dev/mapper/mpath${numbers}p1
done
Please note that "numbers" was a file in the same directory, containing the numbers 1 and 2 on separate lines. The script worked well and I was able to delete the definitions properly (however, I now think I missed a parted command to remove the partition definition from the mpath device). When I created three new arrays, I got devices mpath2 through mpath5 on Linux, and then I created vg0 to vg2. By mistake, I ran the above script again for cleanup purposes, and now I get the following error message:
Can't remove physical volume /dev/mapper/mpath2p1 of volume group vg0 without -ff
After racking my brain, I now realize that I have messed up (particularly because the mpath devices did not map in sequence to the VG devices; the mapping was mpath2 --- to --- vg0 and onwards). How can I clean up the LVM definitions? Should I go for pvremove with the -ff flag, or investigate further? I am not concerned about the data; I just want to clean up these PV/VG/LV/mpath definitions so that LVM is cleaned up properly and I can start over with new RAID arrays from the storage.
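Since the data is expendable, force-clearing the stale metadata is a reasonable path. A hedged sketch of the cleanup order (device names follow the mpath2/vg0 example above; verify each one against the pvs/vgs/lvs output first):

```shell
# See what LVM still believes exists
pvs; vgs; lvs

# Force-remove the stale PV label; -ff overrides the VG-membership
# check, which is only acceptable because the data is expendable
pvremove -ff -y /dev/mapper/mpath2p1

# Clear any leftover device-mapper entries for removed volumes
dmsetup ls
# dmsetup remove <stale-mapping-name>

# Finally, drop the partition itself so the next pvcreate starts clean
# parted /dev/mapper/mpath2 rm 1
```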