Debian Hardware :: Which RAID Controllers Work With Debian 5?
Jun 2, 2010: Which RAID controllers work with Debian 5? Is there a list somewhere?
I have downloaded and compiled xboxdrv from Grumbel's GitHub. Running 9.10 minimal with XBMC 9.11. I can run the driver manually, and can even run multiple instances of it for multiple controllers. I cannot, however, get more than one instance of the driver to start at boot. I have the freshly compiled driver and daemon that came with the source files in /usr/local/bin.
I can run the driver via 'sudo /usr/local/bin/xboxdrv --wid0', and I can even get a second wireless controller working with 'sudo /usr/local/bin/xboxdrv --wid1'.
When I add the driver to /etc/rc.local it runs fine on startup. The problem is that I want more than one instance running. When I add the exact same line right under it, except with wid0 changed to wid1, it will not automatically run the second driver. I also get zero activity from the daemon. I have tried adding it to rc.local and to /etc/init.d/local (I think someone mentioned that somewhere) with no luck.
I am not quite sure what I am doing wrong. My ultimate goal is to have between 2 and 4 controllers working from startup. I want to add emulators to an XBMC HTPC and just don't want to have to worry about manually starting drivers. I have forgone using xpad due to its somewhat limited support for wireless Xbox 360 controllers.
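One likely cause, sketched below: rc.local runs its commands sequentially, and if the first xboxdrv instance stays in the foreground, the line for the second instance is never reached. Backgrounding each instance with & is a minimal fix (paths and the --wid flags are copied from the post; --silent is an assumption to keep xboxdrv from writing event output to the console):

```shell
#!/bin/sh -e
# /etc/rc.local (sketch): background each xboxdrv instance so the
# next line runs immediately instead of waiting for the first
# driver process to exit.
/usr/local/bin/xboxdrv --wid0 --silent &
/usr/local/bin/xboxdrv --wid1 --silent &
exit 0
```

If this works manually but not at boot, logging each instance's output (`>> /var/log/xboxdrv.log 2>&1`) usually shows why.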
In Windows, the only controllers that really work with TRIM are the onboard Intel ones; JMicron, Marvell, Nvidia and AMD controllers don't pass the command to the drives if you use their own drivers. (And from what I've read, the Micron SATA 6Gb/s controllers don't work with TRIM even with the default Microsoft drivers.) In Linux, do drives on more than just Intel controllers work with TRIM? I have an Nvidia 790i Ultra motherboard; the SATA controller is an Nvidia one which has 2 settings, ATA and RAID (if the drives aren't added to an array it runs them in AHCI), and there is an additional JMicron SATA port.
If I enabled TRIM in the OS, would it work on either of those controllers? Also, if anyone knows this: in Windows, if you set an Intel controller to RAID, you won't get TRIM on SSDs that are on the controller (but not in an array) with the default Microsoft driver like you would if it were set to AHCI; you only get it with the Intel RST drivers. Would an SSD on an Intel controller set to RAID, but not in an array, get the TRIM command passed to it in Ubuntu?
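For what it's worth, under Linux TRIM is issued by the filesystem through libata, so it mainly depends on which kernel driver handles the port rather than on vendor software. A quick way to check a given setup (the device node is an assumption; substitute your SSD):

```shell
# Does the drive itself advertise TRIM support?
sudo hdparm -I /dev/sda | grep -i trim
# Trim a mounted filesystem once, verbosely; if the controller/driver
# path doesn't pass the command through, this reports an error
# instead of a trimmed-bytes count:
sudo fstrim -v /
```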
I have just installed Debian 6.0 and it does not seem to recognize my Promise Tx4650 RAID controller.
I have created a dual boot with WinXP 64. Windows sees my two RAID 1 arrays correctly (as two logical drives).
In Debian I see four physical drives in the file browser.
I contacted Promise tech support and they said that the only Linux they support is SUSE, but they did send me some source code to compile.
This problem is complicated by my newness to Linux and Debian.
Here are some specific questions:
1] Is there any debian package that is known to support the Promise Tx4650?
2] Where can I see if my Tx4650 is recognized?
3] If I have to compile it into the kernel, where are the directions for how to do that?
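On question 2], some generic checks that only assume a standard Debian install (nothing specific to the Promise driver):

```shell
# Is the card visible on the PCI bus at all?
lspci | grep -i promise
# Which kernel driver, if any, has claimed it?
lspci -k | grep -i -A 3 promise
# What did the kernel log during SATA/RAID probing?
dmesg | grep -i -e promise -e sata
```

The fact that Debian shows four physical drives suggests the ports themselves are driven fine and the mirroring lives in the Windows driver ("fakeraid"); in that case dmraid may be able to activate the existing arrays without compiling anything.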
How can I set the order in which Debian initializes controllers? One of my problems is that it initializes my PCI-e controllers first, so my OS drive ends up named /dev/sde. Also, is it possible to list which port of which controller an HDD is plugged into?
Code:
:~# lspci
00:00.0 Host bridge: ATI Technologies Inc RX780/RX790 Chipset Host Bridge
00:04.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (PCI express gpp port A)
00:06.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (PCI express gpp port C)
00:0a.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (PCI express gpp port F)
00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode]
[Code].....
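Rather than fighting probe order, the usual approach is to stop depending on /dev/sdX names at all; the persistent symlinks below also answer the which-port question:

```shell
# Which controller and port each disk hangs off:
ls -l /dev/disk/by-path/
# Stable names based on model and serial number:
ls -l /dev/disk/by-id/
# Filesystem UUIDs, for referring to the OS drive in /etc/fstab
# no matter which letter it gets:
blkid
```

If you really need the onboard controller probed first, pinning module order in the initramfs (e.g. listing its driver early in /etc/initramfs-tools/modules) can work, but by-id/UUID naming is the more robust fix.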
I would like to buy a PCI to SATA II controller for my machine. I'd like to use it on Debian Lenny stable (2.6.26-2 kernel). I'm choosing between a noname (LogiLink?) card with the Sil3124 chip and a HighPoint 1720 card. Functionality is basically the same for both cards, or at least enough for my needs; I will run mdadm anyway. So the question is: should I really pay 3 times more for the HighPoint? Is there any reason for that, or will the Sil3124 do the same? What about compatibility? As far as I know, both cards are supported on that kernel.
How do I configure Samba such that AD authentication still works when a DC is down? Do I need multiple kdc, admin_server, and kpasswd_server entries in krb5.conf?
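Multiple kdc entries are indeed the standard failover mechanism: libkrb5 tries each listed KDC in turn if one is unreachable. A sketch (realm and hostnames are placeholders):

```ini
# /etc/krb5.conf fragment
[realms]
    EXAMPLE.COM = {
        kdc = dc1.example.com
        kdc = dc2.example.com
        admin_server = dc1.example.com
        kpasswd_server = dc1.example.com
    }
```

Alternatively, with `dns_lookup_kdc = true` in `[libdefaults]`, the KDC list comes from AD's SRV records and failover across DCs is automatic.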
I am trying to configure grub to be able to boot from any of the four hard drives that I have, three of which are plugged into an nVidia RAID controller; the other is plugged into a JMicron controller. Grub seems to only see the lone one and not the other three. Is there a way to get Grub to see the disks attached to another controller?
I am looking for a way to automatically partition 2 disks which are both connected to 2 different SCSI controllers.
I want one disk on one controller to be partitioned with boot and lvm and the other one with a different partition layout.
Is there a way in kickstart to do this, like for instance specifying the drive module?
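In kickstart you can pin each partition to a disk with --ondisk, which covers the two-controller case as long as the device names are stable (sda/sdb and the sizes below are assumptions; adjust to your setup):

```ini
# kickstart fragment (sketch)
ignoredisk --only-use=sda,sdb
clearpart --all --initlabel --drives=sda,sdb
# Disk on controller 1: /boot plus an LVM PV
part /boot --fstype=ext3 --size=512 --ondisk=sda
part pv.01 --size=1 --grow --ondisk=sda
volgroup vg0 pv.01
logvol / --vgname=vg0 --name=root --size=1 --grow
# Disk on controller 2: its own layout
part /data --fstype=ext3 --size=1 --grow --ondisk=sdb
```

If the sdX names are not stable across boots, a %pre script can resolve the right disks and write the part lines via %include.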
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
I was able to examine the disks though:
Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
[Code].....
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
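Two hedged suggestions before anything destructive (device names taken from the post). First, save every superblock: a re-create with --assume-clean only yields readable data if the drive order and layout exactly match the original, and --examine records that order. Second, a forced assemble is far safer than re-creating, since it merely clears the faulty/spare markers:

```shell
# Record all four superblocks; the "RaidDevice" order in this
# output is what any re-create would have to reproduce exactly.
for d in /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2; do
    mdadm --examine "$d"
done > /root/md1-examine.txt

# Much safer than --create: force-assemble the existing array,
# ignoring the faulty/spare flags.
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
```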
Can I control parameters in the Calf plugin synthesizers (Monosynth and Organ) via MIDI controllers? And if so, how?
Also, if not, please tell me about software synths that I can fully control (I know about ZynAddSubFX already).
I've searched the web and all, but nothing turned up.
I was running Ubuntu 9.10 with 3 x 1.5TB drives in a software RAID 5 array. The OS is installed on a separate drive. I decided to format the OS drive and install Ubuntu 10.10, but having done that I can no longer get my RAID array to start. When I am in the Disk Utility and hit the "Start RAID Array" button, I get a message saying that there are not enough components to start the RAID array, even though the 3 RAID drives show up within the Disk Utility.
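A common cause after a reinstall is that the fresh OS has no record of the array in /etc/mdadm/mdadm.conf. A sketch of the command-line recovery (the member names are assumptions; check yours with `sudo fdisk -l`):

```shell
# Let mdadm find arrays by scanning member superblocks:
sudo mdadm --assemble --scan
# If that fails, name the members explicitly:
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
# Once assembled, persist the array definition and rebuild the
# initramfs so it auto-assembles at boot:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```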
I'm trying to put two identical disks in RAID 1 for a home server. I have Ubuntu 11.04 Server 32-bit on a USB stick. I installed up to the partitioning menu. Now I get the question of how I want to partition my disks. I choose manual, and twice "entire disk", one for each. It looks like this:
(iPhone quality)
I go to configure software Raid, and select Raid 1, 2 active disks, 0 spare disks, and get this:
I can't choose the other 2TB partition. I think I have to configure RAID three times, once for each partition. No?
I click the missing 2TB partition, and get this:
But I think there's no RAID device at all; I see this again when I go to Configure Software RAID and try to delete a RAID device:
How do I get Disk 1 and Disk 2 together in Raid 1? [URL]
I have two hard disks, sda and sdb. Is it possible to install the Debian root into software RAID partitions sda2 and sdb1, leaving all other partitions 'normal' (non-RAID)? Do partitions sda2 and sdb1 need to be exactly the same size and position?
I started out with a RAID 0 of 3 x 500GB drives, partitioned into a 200GB Windows 7 install (plus the 100MB partition), a 50GB partition I was going to use for Ubuntu 9.10, and an NTFS storage partition of around 1081GB. It wouldn't work; I tried a bunch of things, EasyBCD, etc., rebuilding grub, and everything else... nothing worked for me. So I gave up, backed up everything and then rebuilt my RAID into two "separate" drives so that I could just change the booting drive depending on the flavour of frustration I was looking for at the time. Installed W7 back, installed Ubuntu 9.10 back. Windows loads through its drive fine. Ubuntu will not freaking load. So I ran a script I found lying around in one of the many... MANY posts I read, and will attach the outcome.
[Code]...
I've got a SATA HDD which I use for storage, connected to a (now quite old) RAID PCI card (HighPoint RocketRAID 1520). I've added another (blank) HDD, same brand and size, to the PCI card. Ubuntu (10.04) can see both hard drives and access data. I'd like to mirror (RAID 1?) these HDDs. Looking around this forum I've noticed quite a few people mentioning FakeRAID. It turns out that's not quite the same as software RAID. Given how cheap the 1520 was, I suspect it's FakeRAID rather than hardware. Perhaps someone can confirm? Given Ubuntu can see both HDDs, would this mean it'll have the correct drivers to work with a hardware/fake RAID? A common recommendation is to use mdadm to create a software RAID. How does this work for partitions accessed from multiple operating systems, including Windows XP?
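A quick, non-destructive way to answer the fakeraid question (assuming the dmraid package is installed):

```shell
# List any vendor RAID metadata on attached disks; HighPoint
# fakeraid typically shows up as hpt37x/hpt45x format blocks:
sudo dmraid -r
```

If metadata appears, the card is fakeraid: the RAID logic lives in the driver, not the card. On the Windows XP question, note that an mdadm array is a Linux-only format; a mirror that XP must also read would have to use the card's own fakeraid (activated with dmraid on the Linux side) rather than mdadm.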
I bought a used server and it's in great working condition, but I've got a 2-part problem with the onboard RAID controller. I don't have or use RAID and want to figure out how to stop or work around the onboard SATA RAID controller. First, some motherboard specs: Arima HDAMA 40-CMO120-A800 [URL]... The problem is the 4-port integrated Serial ATA Silicon Image Sil3114 RAID controller.
Problem 1: When I plug in my SATA server hard drive loaded with Slackware 12.2 and Linux kernel 2.6.30.4, the onboard RAID controller recognizes the one drive and allows the OS to boot. Slackware gets stuck looking for a RAID array and stops at this point...
I am looking to build a server with 3 drives in RAID 5. I have been told that GRUB can't boot if /boot is contained on a RAID array. Is that correct? I am talking about a fakeraid scenario. Is there anything I need to do to make it work, or do I need a separate /boot partition which isn't on the array?
We've started using Debian-based servers more and more at work and are getting the hang of it more every day. Right now I'm an ace at setting up partitions, software RAID, LVM volumes etc. through the installer, but if I ever need to do the same thing once the system's up and running, I come unstuck.
Is there any way I can get to partman post-install, or any similar tools that do the same thing? Or, failing that, are there any simple guides to doing these things through the various command-line tools?
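Partman itself is an installer component, but everything it does maps onto ordinary command-line tools. A rough post-install equivalent of a RAID-1 + LVM setup (all device names are placeholders for spare, unmounted disks):

```shell
# Partition the new disks:
parted /dev/sdb -- mklabel gpt mkpart primary 1MiB 100%
parted /dev/sdc -- mklabel gpt mkpart primary 1MiB 100%
# Software RAID (what "Configure software RAID" does):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# LVM on top, then a filesystem:
pvcreate /dev/md0
vgcreate vg1 /dev/md0
lvcreate -n data -L 100G vg1
mkfs.ext4 /dev/vg1/data
```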
I am currently running Debian Squeeze on a headless system mainly used for backup. It has a RAID-1 made up of two 1TB SATA disks which can transfer about 100 MB/s reading and writing. Yesterday I noticed that one of the disks was missing from the RAID configuration. After re-adding the drive and doing a rebuild (which ran at 80-100 MB/s)
I installed mdadm fine and all, and proceeded to run:
Code:
mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sda /dev/sdb
with sda being my primary hard drive and sdb being the secondary. I get this error message upon running the command:
Code:
mdadm: chunk size defaults to 64K
mdadm: Cannot open /dev/sda: Device or resource busy
mdadm: create aborted
I don't know what's wrong!
mdstat says:
"Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
unused devices: <none>"
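"Device or resource busy" on /dev/sda almost always means the disk is in use, and since sda is described as the primary drive, it very likely holds the mounted root filesystem; mdadm cannot take over a disk the OS is running from. Two quick checks:

```shell
# See what is mounted from sda:
mount | grep sda
# Show the partition/holder tree for both disks:
lsblk /dev/sda /dev/sdb
```

A RAID-0 can only be built from devices not otherwise in use, e.g. a spare partition on each disk; the running OS would have to be reinstalled onto (or migrated to) the array, not striped in place.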
I have a RAID5 of 10 x 750GB disks, and it has worked fine with GRUB for a long time under Ubuntu 10.04 LTS. A couple of days ago I added a disk to the RAID, grew it and then resized it. BUT: I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer!
So the resize process was cancelled in the middle and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it up, finished the resize process, and the RAID seems to be working fine, so I tried to boot normally again. Same problem. Rescue CD, updated grub, got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc and grub-common, removed /boot/grub and installed grub again. Same problem.
I have tried to erase the MBR (# dd if=/dev/zero of=/dev/sdX bs=446 count=1) on sda (IDE disk, system) and sdb (SATA, new RAID disk). Same problem. Removed and reinstalled Ubuntu 11.04 and am now getting error: no such device: (hdd id). Again tried to reinstall grub on both sda and sdb, no luck. update-grub still generates the error about RAID version 0.91, and normal boot is back to a blinking cursor. When you're resizing a RAID, mdadm changes the metadata version from 0.90 to 0.91 to flag the reshape in progress. But since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch at [URL], but I can't compile it: various errors about dpkg. So my problem is: I can't get grub to work. It just gives me a blinking cursor and unsupported RAID version: 0.91.
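It may be worth verifying what version each superblock actually reports now, since GRUB's error implies that somewhere it still reads 0.91, possibly from a stale superblock on a disk that is no longer an array member. A sketch (the device list is an assumption; adjust to your members):

```shell
# Print the metadata version of every candidate RAID member:
for d in /dev/sd[b-l]1; do
    echo "== $d"
    mdadm --examine "$d" | grep -i version
done
# A stale superblock on a non-member disk can confuse grub-probe;
# it can be wiped with (ONLY on a disk that is not in the array):
#   mdadm --zero-superblock /dev/sdXN
```

If every member reports 0.90, re-running grub-install and update-grub from a chroot of the real system (rather than the rescue environment) is the next thing to try.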
Is it possible to migrate an installed Ubuntu system from a software RAID to a hardware RAID on the same machine? How would you go about doing so?
How long does a hardware RAID card (RAID 1, 2 drives) take to mirror a 1 TB drive (500GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
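A rough rule of thumb: a hardware RAID-1 rebuild copies the whole member, used space or not, so the time is roughly capacity divided by the slower disk's sustained write speed. The 100 MB/s figure below is an illustrative assumption (drives of that era sustain roughly 80-120 MB/s):

```python
def rebuild_hours(capacity_gb: float, mb_per_s: float) -> float:
    """Estimated mirror rebuild time in hours for a full-disk copy."""
    return capacity_gb * 1000 / mb_per_s / 3600

# A 1 TB member at a sustained 100 MB/s:
print(f"{rebuild_hours(1000, 100):.1f} hours")  # about 2.8 hours
```

So low single-digit hours is the right ballpark for 1 TB; controllers that throttle rebuild I/O in favour of foreground traffic can stretch this considerably.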
I had a RAID problem on an HP ProLiant server due to a failing disk. When I changed the disk, things got complicated and the RAID seemed to be broken. I put back the old disk, repaired the RAID, then put in the new disk again and all returned to normal, except the system doesn't boot. I am stuck at the grub stage (the grub rescue prompt). I grabbed a netinst CD and tried to rescue it; at some point the wizard correctly sees my two partitions sda1 and sda2 and asks whether I want to chroot to sda1 or sda2. I had red-screen errors on both.
The error message said to check syslog; syslog says it can't mount the EXT4 file system because of a bad superblock. I switched to TTY2 (Alt-F2) and tried fsck.ext4 on sda1 (I think sda2 is the swap, because when I ran fsck on it, it said something like "this partition is too small" and suggested that it could be swap); it says bad superblock and bad magic number. I tried e2fsck -b 8193 as suggested by the error message, but that didn't work either (I think -b 8193 is for trying the backup superblock).
The RAID is as follows : One RAID array of 4 physical disks that are grouped into one Logical Volume /dev/sda, so the operating system only sees one device instead of four (4 disks).
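One hedged pointer: -b 8193 is the backup-superblock location only for filesystems with 1KiB blocks; an ext4 filesystem on a volume this size normally uses 4KiB blocks, where the first backup sits at 32768. The actual locations can be listed without writing anything to disk:

```shell
# Dry run (-n): prints where the superblocks would be for this
# device's geometry, changing nothing on disk:
mke2fs -n /dev/sda1
# Then retry fsck with one of the printed backup locations, e.g.:
e2fsck -b 32768 /dev/sda1
```

If even the backups look bad, the underlying problem may be the RAID logical volume itself (wrong reassembly after the disk swap), in which case no superblock will be valid; checking the controller's array status first is cheaper than fsck experiments.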
After months of using Lenny & Lucid Lynx without issues, I come back to the good existential questions.
I'd like a completely encrypted disk (/ and swap) in addition to the XP partitions (not that safe, but I'll switch completely to Linux once I have solved everything).
1. I create an ext4 partition for /boot
2. One other (/dev/sda7) that I set for encryption,
3. On top of that, I create a PV for lvm2,
4. I add to a VG,
5. I create / & swap in the VG.
However, if I add a hard drive, I will have to encrypt the main partition, add it to the VG & then expand /. So I'll need 2 passwords at boot time to decrypt.
So I'd like to:
-Encrypt the VG directly; it would solve everything, but no device file appears for the VG, only the PV and the LV.
-After hours of searching, I couldn't find a solution for a single password...
Maybe there is hope in a filesystem like btrfs providing encryption in the future, but I'll still have to create a swap partition outside of it (or use a swap file, but then no hibernation is possible).
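A common single-passphrase pattern, sketched under assumptions (device paths, mapper names and key locations are placeholders): keep one LUKS volume unlocked by passphrase, and unlock any additional encrypted PV with a keyfile stored inside the first volume, so only one prompt appears at boot.

```shell
# Encrypt the added drive and enrol a keyfile alongside the
# passphrase slot:
cryptsetup luksFormat /dev/sdb1
mkdir -p /etc/keys
dd if=/dev/urandom of=/etc/keys/sdb1.key bs=512 count=4
chmod 0400 /etc/keys/sdb1.key
cryptsetup luksAddKey /dev/sdb1 /etc/keys/sdb1.key

# /etc/crypttab: the first volume asks for the passphrase; the
# second opens itself with the keyfile once root is available.
#   crypt_root  /dev/sda7  none                luks
#   crypt_data  /dev/sdb1  /etc/keys/sdb1.key  luks
```

Caveat: if the second PV must belong to the root VG itself, the key has to be reachable from the initramfs before root mounts; Debian's decrypt_derived keyscript handles that case, but it is more involved.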
My system includes two 120GB disks in a fake RAID-0 setup, with Windows Vista installed on them. For Debian I bought a new 1 TB disk. My mission was to test Debian, and I installed it to the new disk. The idea was to remove the disk afterwards and use Windows as it was before. Everything went fine and Debian worked perfectly, but when I removed the 1 TB disk from the system, grub shows up at boot in grub recovery mode.
Is my RAID setup now corrupted? Grub seems to be installed on the other raid disk? Did grub overwrite some raid metadata? Is there any way to recover the raid setup?
dmraid -ay:
/dev/sdc: "pdc" and "nvidia" formats discovered (using nvidia)!
ERROR: nvidia: wrong # of devices in RAID set "nvidia_ccbdchaf" [1/2] on /dev/sdc
ERROR: pdc: wrong # of devices in RAID set "pdc_caahedefdd" [1/2] on /dev/sda
ERROR: removing inconsistent RAID set "pdc_caahedefdd"
RAID set "nvidia_ccbdchaf" already active
ERROR: adding /dev/mapper/nvidia_ccbdchaf to RAID set
RAID set "nvidia_ccbdchaf1" already active
I have a Lian Li EX-503 external RAID system with 4 x 2TB drives, using RAID mode 10 for good performance. [Just for those who are interested: http://www.lian-li.com/v2/en/product/pr ... ex=115&g=f]
[Code]...
But using eSATA my transfer rates are very low (from an internal drive to the external EX-503): around 60-70 MB/s.
But hdparm tells me:
[Code]...
I have created a system using four 2TB HDDs. Three are members of a mirrored soft-RAID (RAID1) with a hot spare, and the fourth HDD is an LVM hard drive separate from the RAID setup. All HDDs are GPT-partitioned.
The RAID is set up with /dev/md0 as mirrored /boot partitions (non-LVM), and /dev/md1 as LVM with various logical volumes within for swap space, root, home, etc.
When grub installs, it says it installed to /dev/sda but it will not reboot and complains that "No boot loader . . ."
I have used the supergrubdisk image to get the machine started and it finds the kernel but "grub-install /dev/sda" reports success and yet, computer will not start with "No boot loader . . ." (Currently, because it is running, I cannot restart to get the complete complaint phrase as md1 is syncing. Thought I'd let it finish the sync operation while I search for answers.)
I have installed and re-installed several times, trying various settings. My question has become: when setting up GPT and reserving the first gigabyte for grub, users cannot set the boot flag for the partition. I have tried gparted as well as the normal Debian partitioner, and both will NOT let you set the "boot flag" on that partition. So, as a novice (to Debian), I am assuming that the "boot flag" does not matter.
Other readings indicate that yes, you do not need a "boot flag" partition; the "boot flag" is only for a Windows partition. This is a Debian-only server, no Windows OS.
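That conclusion is right as far as the DOS-style boot flag goes, but on a GPT disk booted through BIOS, GRUB does need a small partition flagged bios_grub to embed its core image; without one, the install can appear to succeed while the machine still reports no boot loader. A sketch (the partition number is an assumption; adjust to your layout):

```shell
# Create a tiny BIOS boot partition and flag it; GRUB embeds its
# core.img there on GPT/BIOS systems:
parted /dev/sda mkpart biosgrub 1MiB 2MiB
parted /dev/sda set 1 bios_grub on
grub-install /dev/sda
```

This is the GPT equivalent of the "reserved space for grub" idea, and it is the flag gparted offers as "bios_grub" rather than "boot".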
I have a new Debian install on an ASUS H170M-PLUS (I was going to use Ubuntu, but it didn't support the hardware/software combo I needed).
The install went fine, but during the install it didn't see my 1TB RAID1 drive.
After reboot, Debian boots great, and I can mount the RAID drive in the file manager.
I can see it and in mtab it shows up :
"/dev/md126 /media/user/50666249-947c-4e8f-8f56-556b713a6b6a ext4 rw,nosuid,nodev,relatime,data=ordered 0 0"
How can I permanently add this mount point so it is mounted at boot as /data?
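Since the mtab line already shows the filesystem's UUID (udisks appears to have named the mount point after it), a UUID-based fstab entry is the usual fix. A sketch, reusing the UUID from the mtab line above:

```shell
sudo mkdir -p /data
# Mount by filesystem UUID, not /dev/md126: md numbering can change
# between boots, the UUID cannot.
echo 'UUID=50666249-947c-4e8f-8f56-556b713a6b6a /data ext4 defaults 0 2' | sudo tee -a /etc/fstab
# Make sure the array itself assembles at boot, then test the entry
# without rebooting:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
sudo mount -a
```

Verify the UUID first with `sudo blkid /dev/md126` before committing it to fstab.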