I was trying to update the driver for my Adaptec RAID controller. Unfortunately, Adaptec only provides RPM packages, so I converted the package using alien. After installing it with dpkg, I tried using dkms to build the module:
Code: root@atulsatom# dkms add -m aacraid -v <version>
Adding the driver was successful, but I got some errors during the build:
Code: root@atulsatom# dkms build -m aacraid -v <version>
Kernel preparation unnecessary for this kernel. Skipping.
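For reference, the usual alien + dkms round trip looks roughly like this (a sketch; the RPM file name is hypothetical and <version> stands for whatever driver version the package registers under /usr/src):
Code:
# convert the vendor RPM and install the result (file name is a placeholder)
alien --to-deb --scripts aacraid-dkms.noarch.rpm
dpkg -i aacraid-dkms_*.deb
# register, build, and install the module for the running kernel
dkms add -m aacraid -v <version>
dkms build -m aacraid -v <version>
dkms install -m aacraid -v <version>
dkms status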
I am installing Fedora 10 on an old but good server that used to be a Windows box. It has two SATA disks in RAID level 1 on an Adaptec 2410SA card. The disks are clean, no info; I even did a low-level format and recreated the RAID... twice. The DVD boots, installs the OS on the RAID array, and reports success. The new volume appears under the Computer icon, but it cannot be mounted. After a reboot attempt it reports:
Reading physical volumes. This may take a while... (it doesn't)
Volume group "VolGroup00" not found
Unable to access resume device (/dev/VolGroup00/LogVol01)
mount: error mounting /dev/root on /sysroot as ext3: No such file or directory
input: PS/2 Generic Mouse as /devices/platform/i8042/serio1/input/input4
and then a blinking cursor with keyboard echo, but it's just copying. Rebooting with the install DVD does not show the previous (unmountable) disk image, and a repeat of the above process increases the operator's need for alcohol. I have "disable write cache for drives" set, as I found that in a post, but only after the Fedora install. I do not know what grub is, for instance, but once Linux is booted (on other computers) I can muddle through OK in terminal mode.
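For anyone stuck at the same point: booting the DVD with "linux rescue" lets you check whether the volume group is visible at all before blaming the controller. A sketch; VolGroup00/LogVol are the Fedora defaults quoted in the error above:
Code:
# inside the rescue shell:
lvm pvscan                      # are the physical volumes on the array seen at all?
lvm vgscan
lvm vgchange -ay VolGroup00     # try to activate the volume group
ls /dev/VolGroup00/             # LogVol00 / LogVol01 should appear if it worked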
My setup:
- P4 3.4GHz, 2GB RAM, Gigabit Ethernet
Drive configuration:
- 1 x 750GB SATA connected to my RAID controller in IDE mode
- 1 x 120GB IDE HDD
- 1 x 250GB IDE HDD
My problem: I am trying to install F12, and the only drive it sees is the 750GB SATA; it is not seeing the other two IDE drives. The RAID controller is an ITE 8212 in IDE mode. The BIOS sees the drives; F12 just doesn't.
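One thing worth trying from a console during the install (Ctrl+Alt+F2) is loading the ITE 8212 driver by hand. A sketch, assuming one of the two stock drivers binds the card:
Code:
lspci | grep -i ite       # confirm the 8212 shows up on the PCI bus
modprobe pata_it821x      # libata driver for the ITE 8212
modprobe it821x           # older IDE-subsystem driver, as a fallback
dmesg | tail              # see whether the two IDE drives were probed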
I'm trying to install Debian on my server. Some hardware details:
- Intel Xeon processor
- 03:01.0 RAID bus controller: Adaptec (formerly DPT) SmartRAID V Controller (rev 01)
A hardware-configured RAID 5 sits on four 73GB SCSI disks. I tried both the Lenny and Squeeze versions, and both presented the same error when the installer tries to install grub:
main-menu: INFO: Falling back to the package description for auto-install
main-menu: INFO: Falling back to the package description for ai-choosers
main-menu: INFO: Menu item 'grub-installer' selected
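When grub-installer dies like this you can sometimes finish by hand from the installer shell (Ctrl+Alt+F2). A sketch, assuming the array surfaced as /dev/sda (check with ls /dev/sd* first):
Code:
# chroot /target /bin/bash
# grub-install /dev/sda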
I have been trying for a few days to install CentOS 5.4 on an IBM x306 and I cannot get it to properly handle the Adaptec Embedded SATA HostRAID Controller. I have been working with Linux for a few years, but this is new territory for me. I typically use Debian-based distros, but I did some research on the IBM site and found out that RHEL is a supported OS for this machine. So, I decided to give Cent a try. I have some experience with Fedora, so it's not totally foreign to me.
Anyway, I'm a bit confused. Using the IBM RAID utility, I set up a mirrored pair of 1GB SATA HDDs. When I run the Cent installer, it sees the pair as a single array. I am able to partition the array and complete the install, but when I boot into the OS, it sees the drives as two separate devices, sda and sdb. I can pull either one of the drives and boot with a single disk, but it doesn't seem to behave as a mirrored pair: if I make changes on sda, they are not replicated to sdb. Also, I can't use the CLI or GParted to format the existing space on the array; I get an error either way. I believe this is because Cent doesn't have a driver for the RAID controller, but I don't see why it would work in the installer and not in the installed OS.
My next approach was to start over and attempt to run "linux dd" at the start of the installation. I tried to find the driver for the controller on the IBM site so I could load it when prompted, but couldn't find a newer version than RHEL 4 Update 3 (I'm assuming this corresponds to CentOS 4.3). I tried it anyway, but when I select the floppy during setup, it tells me it's not for this version of CentOS. I read several times that there are .img files that might help me in the 'images' directory of disc one, but I only see diskboot.img, minstg2.img, and stage2.img. I don't think any of these are what I'm looking for; I thought there was supposed to be a drvblock.img or driverdisk.img.
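A guess at the installer-vs-installed-OS puzzle: the Adaptec HostRAID is firmware-assisted ("fake") RAID, and the installer may be activating it via dmraid while the installed system isn't. A sketch to check from the installed OS, assuming the dmraid package is present:
Code:
dmraid -r               # list disks carrying BIOS RAID metadata
dmraid -ay              # activate the array sets
ls /dev/mapper/         # the mirrored pair should appear here as one device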
I need to add the LSI drivers for the 9750 RAID controller during the install. These drivers are not included in 10.04 (or 10.04.1) and I need to install onto the RAID device I've created. LSI provides the drivers and instructions here - [URL]
Here are my steps, with the drivers on a USB drive:
Code:
Boot from the installation CD and select Install Ubuntu Server.
Press CTRL+ALT+F2 to switch to console 2 while Ubuntu detects the network.
# mkdir /mnt2 /3ware
# mount /dev/sda1 /mnt2     (NOTE: the LSI drivers are at /dev/sda1, via USB)
# cp /mnt2/9750-server.tgz /3ware
# cd /3ware ; tar zxvf 9750-server.tgz
# umount /mnt2
* Remove the USB flash drive before the insmod command *
# insmod /3ware/2.6.32-21-generic/3w-sas.ko
Press CTRL+ALT+F1 to return to the installer. Continue the installation as usual. Do not reboot when the installation is complete. Press CTRL+ALT+F2 to switch to console 2 again.
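The remaining work at console 2 boils down to copying the module into the installed system and rebuilding its initramfs so the array is there at boot. A sketch of that part (not LSI's verbatim text), assuming the installer's usual /target mount and the 2.6.32-21-generic kernel from above:
Code:
# mkdir -p /target/lib/modules/2.6.32-21-generic/kernel/drivers/scsi
# cp /3ware/2.6.32-21-generic/3w-sas.ko /target/lib/modules/2.6.32-21-generic/kernel/drivers/scsi/
# chroot /target depmod -a
# chroot /target update-initramfs -u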
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. MegaCLI, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. That succeeded and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising: one logical RAID0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
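In case it's just the kernel not noticing logical drives that appeared after boot, forcing a SCSI rescan is cheap to try (a sketch; needs nothing beyond what CentOS 5 ships):
Code:
for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done
dmesg | tail               # newly discovered sd devices get logged here
cat /proc/scsi/scsi        # should now list the DX60's logical drives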
I want to install RHEL 4.7 64-bit on one of my servers (a Supermicro Super Server) that has two RAID controllers: 1. Intel 2. Adaptec. We are using the Adaptec, with RAID 1 on 2 x 320GB hard disks. POINT: if we install RHEL 5.3 it recognizes the RAID controller and shows a single logical volume of 298GB, meaning it works fine; but when we try to install RHEL 4.7 it shows two hard disks of 298GB and 298GB, meaning it is unable to recognize the RAID controller. So the issue is its driver: the CD we got from Supermicro has drivers for RHEL 4 through RHEL 4 Update 6. We are building our DR site and it is necessary for us to install RHEL 4.7 to keep it identical. I have searched a lot, spent more than three days on it continuously, and am still unable to find a solution.
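If a usable aacraid driver disk for 4.7 does turn up, loading it is the standard routine (a sketch; the image name here is hypothetical):
Code:
dd if=aacraid-rhel4.img of=/dev/fd0    # write the driver-disk image to a floppy
# then at the RHEL 4.7 boot prompt:
linux dd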
I installed, on an Intel SR2612UR server with an integrated LSI expander and an Adaptec 5805Z controller, a RAID 6 array of 12 x 2TB drives (two ext4 partitions: 16TB and 2TB), but I get very poor read speed: depending on the blocksize I set, it varies between 130 MB/s and at most 170 MB/s, which is really poor for a RAID 6 array of 12 drives; on a similar Dell server I get above 650 MB/s. The Adaptec controller has the latest firmware installed.
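Before blaming the controller outright, it may help to separate raw device speed from filesystem and read-ahead effects. A quick sketch, assuming the array is /dev/sda:
Code:
dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct   # raw sequential read, page cache bypassed
blockdev --getra /dev/sda        # current read-ahead, in 512-byte sectors
blockdev --setra 65536 /dev/sda  # try a much larger read-ahead, then re-test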
I have a DL120 ProLiant server with a P212 RAID card. If I install Lenny it works fine; however, I need squeeze. If I upgrade, or install a fresh squeeze, the RAID controller is no longer visible. I have done some snooping and it seems the ciss drivers have been replaced by the hpsa drivers, but I still can't seem to get the RAID card recognised. Anybody got any tips?
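One hedged thing to try, if hpsa simply isn't claiming the board: the driver has an hpsa_allow_any parameter that tells it to bind any Smart Array it finds, settable at the kernel command line or at modprobe time:
Code:
# on the kernel command line (installer or grub):
hpsa.hpsa_allow_any=1
# or on a running system:
modprobe hpsa hpsa_allow_any=1
lspci -k | grep -A3 -i hewlett    # check which driver, if any, is bound to the P212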
I want to create a file server with Ubuntu and two additional hard drives in a RAID 1 setup. Current hardware: I purchased a RAID controller from [URL]... (Rosewill RC-201). I took an old machine with a 750GB hard drive (installed Ubuntu on this drive), installed the Rosewill RAID card in a PCI slot, connected two 1TB hard drives to it, then went into the RAID BIOS and configured it for RAID 1.
My problem: when I boot into Ubuntu and go to the disk utility (I think that's what it's called), I see the RAID controller present with the two hard drives configured separately. I formatted and tried various partition combinations, and at the end of the day I still see two separate hard drives. Just for giggles, I also tried RAID 0 to see if it would combine the drives.
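Cards in this class are BIOS-assisted ("fake") RAID, so Ubuntu shows two plain disks unless the metadata is activated. A sketch of the two usual routes (the /dev/sdb and /dev/sdc names are assumptions; the mdadm line destroys whatever is on those disks):
Code:
sudo dmraid -r                  # is the card's RAID metadata recognised at all?
sudo dmraid -ay && ls /dev/mapper/
# alternative: ignore the card's RAID and mirror in software instead
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc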
I have been battling with FC10 and software RAID for a while now, and hopefully I will have a fully working system soon. Basically, I tried using the F10 live CD to set up software RAID 1 between two hard drives for redundancy (I know it's not hardware RAID, but the budget is tight) with the following table;
I set these up by using the RAID button in the partitioning section of the install, except swap, which I set up using the New Partition button, creating one swap partition on each hard drive that didn't take part in RAID. Almost every time I tried this install, it halted with an error on one of the RAID partitions and exited the installer. I actually managed it once, in about... ooo, 10-15 tries, but I broke it. After getting very frustrated I decided to build it using just 3 partitions.
I left the rest untouched. This worked fine: after completing the install and setting up grub, I rebooted into the installed system. I then installed gparted and cut the drive up further to finish my table on both hard drives. I then used mdadm --create... etc. to create my RAID partitions. So I now have
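For anyone following along, the mdadm step was along these lines (a sketch; the partition names are assumptions):
Code:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
mdadm --detail --scan >> /etc/mdadm.conf   # persist the array definition
cat /proc/mdstat                           # watch the initial resync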
I had a RAID1 'device' built on two physical partitions on two drives. One of the disk controllers died and software RAID did its job: I am now working on the degraded array.
Now I want to put the old disk (sdb) back, and I am not sure what will happen. Both disks have 'raid auto' partitions, and sdb has the file structure from before the failure. The RAID code will find an inconsistency between the two partitions. What will it decide? Will it start copying all the data from the currently running system (sda) to the old one (sdb) at boot time, as I wish?
I don't want it to write from the old one to the new one, as some months have passed and lots of changes have happened to the data.
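As far as I know, md always rebuilds onto the member being (re-)added, never from it, so sda's current data wins; still, wiping sdb's stale superblock first removes any doubt. A sketch (the partition names are assumptions):
Code:
mdadm --zero-superblock /dev/sdb1   # forget the stale metadata on the old disk
mdadm /dev/md0 --add /dev/sdb1      # resync copies FROM the running array TO sdb1
cat /proc/mdstat                    # monitor the rebuild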
My problem is that I'm trying to install CentOS 5.4 x86_64 (DVD ISO) on a Supermicro X7SBI server with an Adaptec RAID 3405 controller installed.
I created a RAID 5 array and it is working fine (the Adaptec status says Optimal), but I can't install CentOS onto that array (1.5TB in size).
Whenever I try to install with: linux dd
I'm asked for a driver, which I downloaded from the Adaptec site and extracted onto a USB drive (found during installation as /dev/sda1), which now has a lot of IMG and some ISO files on it.
I try to load (I've simplified the names) RHEL5.img, CENTOS.img, etc., with x64 names (one exact name: aacraid driverdisk-CentOS-x86_64.img), and I always get the error message: "No devices of the appropriate type were found on this driver disk"
This has been going on for a week now, and I can't find the right driver or figure out what I'm doing wrong to get the install done.
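One way to see what a given image actually targets before feeding it to the installer is to loop-mount it (a sketch; RHEL5-era driver disks typically carry an rhdd description file plus modules.cgz/modinfo):
Code:
mount -o loop "aacraid driverdisk-CentOS-x86_64.img" /mnt
cat /mnt/rhdd               # human-readable string saying which release/arch it is for
ls /mnt                     # modules.cgz, modinfo, etc.
umount /mnt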
I'm having a problem with the installation of Ubuntu Server 10.04 64-bit on my IBM xSeries 346 Type 8840. During the installation the system won't recognize any disk, so it asks which driver it should use for the RAID controller. There's a list of options, but nothing seems to work. I've been searching the IBM website for an appropriate driver, but there is no Ubuntu version (there are Red Hat, SUSE, etc.). I was thinking about downloading the correct driver onto a floppy disk to finalize the installation, but apparently there is no 'general' Linux driver to solve the problem here.
I have been using lspci, dmidecode, and mpt-status to get hardware information on my Dell 1950 running Ubuntu 8.10. I'm pretty sure my server is using an embedded SCSI RAID controller from info I got from Dell's site:
PCI-Express is an actual...well, PCI card, right? But dmidecode shows that I have two x8 PCI Express slots that are both available. Sooo...I'm missing something. How am I running a PCI Express SCSI controller without using a PCI Express slot? In the event of not having the kind of info that I did (i.e. the service tag) how would I be able to tell at a glance whether a component like my RAID controller was embedded or not?
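A couple of quick checks that usually distinguish embedded devices from slotted cards (hedged: not every BIOS populates the slot fields):
Code:
lspci -v | grep -i -A4 raid    # slotted cards typically show a "Physical Slot:" line
sudo dmidecode -t slot         # lists each slot as "In Use" or "Available"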
I have a VT6421-based RAID controller. lspci shows this: 00:0a.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE RAID Controller (rev 50). The drivers that come with it appear to have been compiled against an old kernel (I'm guessing). When I try to load them I get "invalid module format"; dmesg shows this: viamraid: version magic '2.6.11-1.1369_FC4 686 REGPARM 4KSTACKS gcc-4.0' should be '<running-kernel>-smp SMP mod_unload ...'. Does anyone know of a way to get this to work? I found the source for this, but it appears to only support Fedora, Mandrake, and Red Hat. I can't get it to compile or make a driver disk.
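The mismatch itself is easy to confirm by comparing the module's recorded version magic with the running kernel (a sketch; the path to the .ko is an assumption):
Code:
modinfo -F vermagic ./viamraid.ko   # what the module was built against
uname -r                            # what you are actually running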
I'm working on a new server that has an Nvidia SATA array controller with 2 x 250GB SATA drives configured as a hardware array. When the first screen comes up I enter the option linux dd for it to prompt me for the drivers, but nothing ever happens: the screen says it's loading a SATA driver for about 15 minutes, then clears to a plus-sign cursor on a black screen. What am I doing wrong? The only drivers that came with the HP server are for RedHat 4 and 5 and SUSE; will any of those actually work?
I shall start off by saying that I have just jumped from Windows 7 to Ubuntu and am not regretting the decision one bit. I am, however, stuck with a problem. I have spent a few hours googling this and have read some interesting articles (probably way beyond what the problem actually is) but still don't think I have found the answer. I have installed:
I am running the install on an old Dell 8400. My PC has an Intel RAID controller built into the MB. I have one HDD (without RAID), which houses my OS install, and then two 1TB drives (these are just NTFS-formatted drives with binary files on them, nothing more) in a RAID 1 (mirroring) array. The Intel RAID controller on boot recognizes the array as it always has (irrespective of which OS is installed); however, unlike Windows 7 (where I was able to install the Intel RAID controller driver), Ubuntu does not see the array as a single device. Does anyone know of a resolution (which doesn't involve formatting and/or use of some other software RAID solution) to get this working, which my searches have not taken me to?
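That controller is Intel's Matrix ("fake") RAID, and Linux can usually assemble it without a Windows-style driver. A sketch (device names are assumptions; newer mdadm releases understand the Intel IMSM metadata directly):
Code:
sudo mdadm --examine /dev/sdb /dev/sdc   # look for Intel/IMSM metadata on the members
sudo mdadm --assemble --scan             # assemble anything it recognises
# or via device-mapper instead:
sudo dmraid -ay && ls /dev/mapper/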
Why could there not be a 3-way or even 4-way RAID level 1 (mirror)? It seems every hardware RAID controller (and at least the software I tested a few years ago) only supports a 2-way mirror. I recently tried to configure a 3ware 9650SE RAID controller and selected all 3 drives; RAID 1 was then not presented as an option, only RAID 0 (striping, no redundancy) and RAID 5 (one level of redundancy, low performance). Is there some engineer who thinks "triple redundancy is a waste, so I'm not going to let them do that"? Or is it a manager?
Mirror RAID should be simple, even when more than 2 drives are used. The data is simply written in parallel to all the drives in the mirror set, and read from one of the drives (with load balancing over parallel and/or read-ahead operations to improve performance, though some of this is in question, too).
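Linux software RAID, for what it's worth, has no such limit: an n-way mirror is just a flag. A sketch with assumed device names:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
cat /proc/mdstat    # all three members show as active mirrors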
I have an IBM x3550 M3 machine with a ServeRAID-M1015 on board. I set up RAID 1 on the machine and have a question: is there some kind of software to manage hardware RAID from within the operating system? By "manage hardware RAID from OS level" I mean I would like to see the status of the disks in the array, be able to initiate a rebuild process, etc. This question relates only to hardware RAID, not software RAID.
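The M1015 is LSI MegaRAID-based, so LSI's MegaCLI (or the newer StorCLI) should give exactly that from within the OS. A sketch of the common status and rebuild calls:
Code:
MegaCli64 -LDInfo -Lall -aALL       # logical drive status
MegaCli64 -PDList -aALL             # per-disk state (Online, Rebuild, Failed, ...)
MegaCli64 -PDRbld -Start -PhysDrv [E:S] -aALL   # start a rebuild on enclosure E, slot S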
I have been thinking about upgrading my RAID setup from a pair of Intel X25s running the software based RAID included with ubuntu to four Intel X25s running off a PCIe based controller. The controller is an HP P400 which I know works great on ubuntu (I have other machines running it with SAS drives). My desktop has an Intel S5000XVNSATAR mainboard and I have an open PCIe v1.0 8x (physical) slot that is wired 4x. The controller is 8x and will fit fine. How much of a performance hit do you think I will see running this 8x controller in the slot wired 4x with four Intel x25s in a RAID 10? Will I have enough bandwidth with the 4x for those SSD's?
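Rough arithmetic: a PCIe 1.0 lane carries about 250 MB/s per direction, so the x4 link tops out near 1 GB/s. Four X25s at roughly 250 MB/s sequential read apiece add up to about that same 1 GB/s, so only large sequential reads should ever brush the link ceiling; random-IOPS workloads won't come close, and the P400's own processor may well be the tighter bottleneck in practice.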
I bought a used server and it's in great working condition, but I've got a 2-part problem with the onboard RAID controller. I don't have or use RAID and want to figure out how to stop or work around the onboard SATA RAID controller. First some motherboard specs: Arima HDAMA 40-CMO120-A800 [URL]... The integrated 4-port Silicon Image Sil3114 Serial ATA RAID controller is the problem.
Problem 1: When I plug in my SATA server hard drive loaded with Slackware 12.2 and its stock 2.6.27 kernel, the onboard RAID controller recognizes the one drive and allows the OS to boot. Slackware gets stuck looking for a RAID array and stops at this point -........
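For a SiI 3114 you generally want the plain sata_sil driver rather than anything RAID-flavoured; and if the board's BIOS offers a non-RAID/IDE mode for those ports, switching to it sidesteps the RAID option ROM entirely. A sketch of the checks:
Code:
lspci | grep -i silicon    # confirm the 3114 is visible on the bus
modprobe sata_sil          # plain SATA driver for the SiI 3114
dmesg | tail               # the disk should attach as an ordinary sd device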
I'm trying to resize an NTFS partition on an IBM MT7977 server. It has an Adaptec AIC-9580W RAID controller. I was thinking about doing it with a GParted LiveCD/LiveUSB, but then I realised that they won't have drivers for the RAID controller. A quick google for "9580W Linux" doesn't return anything promising.
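It may be cheaper to just boot the live CD and look than to hunt for drivers first. A sketch from the GParted live environment:
Code:
lspci -nn | grep -i raid          # is the controller on the bus at all?
lspci -k | grep -A3 -i adaptec    # "Kernel driver in use:" shows what, if anything, bound it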