Slackware :: 13.1 And Dell T610 With H700 SAS Raid?
Jun 21, 2010
I'm trying to install Slackware 13.1 x64 on a Dell T610 with an H700 SAS RAID controller, and the following error appears: "Failed initialization of WD-7000 SCSI Card / megasas: FW now in Ready state." It's not recognizing my disks!
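For what it's worth, the WD-7000 line is just the installer's wd7000 probe failing harmlessly; the real question is whether megaraid_sas (the driver the H700 needs, and the source of the "megasas:" messages) actually sees the disks. A quick check from the installer shell, assuming nothing about the layout:

Code:
# does the H700's driver load, and does it report any disks?
lsmod | grep megaraid_sas
dmesg | grep -i megasas
fdisk -l

If megaraid_sas loads but no disks appear, the 13.1 kernel's copy of the driver may simply predate the H700.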
If I have Windows installed on RAID-0, then install VirtualBox and install all my Linux OSes in VirtualBox, will they be RAID-0 installs without needing to install RAID drivers?
I want to install openSUSE on a PowerEdge 830 server with a CERC 1.5/6 hardware RAID configuration. Since I have no previous experience with Linux, I'm having a hard time grasping the way to do things, but I would love it if someone could direct me to a good tutorial or document where I can learn more. At the moment I'm stuck (though I decided to install using the default options YaST gave me, just to test) at partitioning and RAID.
So this is what I would like to know.
Conditions:
* The server currently has Windows 2003 installed, but I don't need it, so that is going to go away.
* This is a small network, fewer than 30 users.
* The server is going to be used to authenticate users through OpenLDAP; it is also going to serve files (since we do printing, many of them will be 100 MB+), and depending on performance it may be used as a print server too (I know that's a lot of tasks, but besides printing the other functions won't be used that much).
* The server has 2 physical HDs that I want to use with RAID 5 or 6; there are already drivers for that on Dell's web page.
* I'm installing openSUSE 11.4.
Questions
1. Does Linux really need a dedicated swap partition?
2. If so, how does RAID work with that? Is the swap going to be mirrored too? (I don't think so; see the sketch after this list.)
3. Is it better to install Linux on 1 HD only with no swap partition, leave the other blank, and set up RAID after Linux is up?
4. What's up with the ext units? It now looks like there are too many units, not just the simple 'OS, data' split I'm used to.
Objective: I just want to have a Linux installation with a RAID 5 configuration using 2 HDs.
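For reference, a minimal sketch of how swap usually looks on software RAID. Note that md RAID5/6 needs at least three devices, so with two disks RAID1 is the realistic option; a swap partition placed on the mirror is then protected like everything else. Device names below are assumptions:

Code:
# mirror a small partition from each disk and use the pair as swap
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkswap /dev/md1
swapon /dev/md1
# /etc/fstab entry:
# /dev/md1   none   swap   sw   0 0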
I have had no problem installing Fedora on any of my Dell servers prior to this. Anyway, I wouldn't call this a problem, but recently we bought another Dell server with a quad core, 4GB, etc., and this model has 2 hot-swappable SAS hard drives.
I wouldn't call myself an expert, but then again I am not a newbie either. However, I have never set up any RAID before, and now I am forced to set up RAID1 on this server. So, in a way, I am a newbie at setting up RAID.
Would someone please point me in the right direction, as I have no idea what I am supposed to do to set up the RAID? FYI, I will be installing Fedora 10 64-bit on this server. I would appreciate it if you could start from the very beginning, i.e. partitioning, formatting the hard drives during OS installation, etc.
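Two broad routes exist, and which applies depends on the controller. If the server has a PERC, the RAID1 virtual disk is created in the controller BIOS (usually Ctrl+R during POST), and the Fedora installer then sees a single ordinary disk, so partitioning is completely normal. With a plain (non-RAID) controller, software RAID is built instead; a rough sketch of the manual version, with device names assumed:

Code:
# copy sda's partition table onto sdb, then pair the partitions
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0

The Fedora installer can do the same thing graphically: create a "software RAID" partition on each disk and join them as a RAID1 device.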
I have a Dell Studio XPS running openSUSE 11.2 with dual mirrored disks (using Dell's SATA controller). Does anyone know how I can set up automatic monitoring of the disks so that I will be informed if either fails? I think smartd might be what I need here. Is that correct? I added the following to smartd.conf:

Code:
/dev/sda -a -d sat -m <my email>
/dev/sdb -a -d sat -m <my email>
smartd is running, but how do I know that it will report what I need? I also have a client with a Dell PowerEdge SC440 with a SAS 5/iR, also running openSUSE 11.2. They also require automatic monitoring, but there doesn't seem to be a SAS directive for smartd. I noticed that the newer release says it supports SAS disks, so I upgraded to 5.39. On restart (with DEVICESCAN as the directive) I get the following in /var/log/messages for my SAS RAID disk:
Sep 18 10:47:26 harmony-server smartd[25234]: Device: /dev/sdb, Bad IEC (SMART) mode page, err=4, skip device

I ran smartctl -a /dev/sdb and got this result:

Code:
smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (openSUSE RPM)
Copyright (C) 2002-10 by Bruce Allen, smartmontools

Device: Dell VIRTUAL DISK  Version: 1028
Device type: disk
Local Time is: Sat Sep 18 11:32:08 2010 JST
Device does not support SMART
Error Counter logging not supported
Device does not support Self Test logging
Is there some other tool/package that does support DELL virtual disks?
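One possibility, assuming the controller is MegaRAID-based (the PERC family is; whether the SAS 5/iR answers to this is an open question): smartctl can address the physical disks hiding behind the virtual disk by slot number:

Code:
# query the physical drives behind the controller, slots 0 and 1
smartctl -a -d megaraid,0 /dev/sdb
smartctl -a -d megaraid,1 /dev/sdb
# the matching smartd.conf directive would be:
# /dev/sdb -a -d megaraid,0 -m <my email>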
I just got my new server today (a Dell PowerEdge 2850), and I'm trying to install the latest version of Ubuntu on it; however, it fails to detect my hard drives. They are in a RAID array (RAID 1, I believe). I've never worked with a RAID array before, and I'm wondering: is there anything special I have to do to install Ubuntu on one?
I've got a Dell PowerConnect SC1425 running 2 250GB SATA drives configured as RAID 1 on a CERC onboard RAID controller. I boot into the 10.04 x64 server install, and all goes well until, after network discovery, it goes to detect hard drives. A message appears: "One or more drives containing serial ATA RAID configurations have been found. Do you wish to activate these RAID devices?" (yes/no). Whether I hit yes or no, the next screen doesn't have any drives listed, just "configure iSCSI", "undo changes to partitions", and "finish/write changes to disk". So where are these drives? Do I have to load drivers like in winblows setup?
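That "activate these RAID devices?" prompt comes from dmraid, which treats the CERC metadata as fake-RAID. Before fighting the partitioner, it may help to see what dmraid actually found; a quick check from the installer shell (Alt+F2), assuming only that dmraid is present there:

Code:
# list the RAID sets dmraid can see, and what got activated
dmraid -r
ls /dev/mapper/

If a set shows up under /dev/mapper, the partitioner should be pointed at that device rather than at sda/sdb.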
I am trying to install CentOS onto a PowerEdge 2650 with a Dell PowerEdge Expandable RAID Controller. I have 5 SCSI disks installed, and have created and initialised 2 logical volumes via the SCSI controller setup utility (Ctrl+M after boot). After a reboot the system reports two logical volumes present.
The CentOS installer cannot find any disks when it gets to the disk partitioning/setup step; it reports no disks present. Do I need a specific driver for this controller?
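Quite possibly, though both candidate drivers ship with CentOS. A quick way to tell from the installer shell (Ctrl+Alt+F2) which driver the controller wants, without assuming the PERC model (the 2650's embedded PERC 3/Di uses aacraid, while add-in PERC 3/DC cards use the megaraid driver):

Code:
# identify the controller and try the likely drivers
lspci | grep -i raid
modprobe aacraid 2>/dev/null
modprobe megaraid_mbox 2>/dev/null
dmesg | tail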
Fedora is having trouble identifying a RAID partition; it sees the members as separate drives. I got drivers from Dell, but during a Fedora DVD install it mentions nothing about a place to install extra drivers.
When it says it must "initialize" the drives, Fedora then breaks the Dell BIOS RAID. How can I either install the Dell drivers or make Fedora see the RAID partition as one?
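A hedged sketch of the usual approach: BIOS fake-RAID doesn't need vendor drivers on Linux at all; the set is assembled by dmraid (or, on newer Fedora, mdadm), and checking what the metadata looks like tells you which tool should claim it:

Code:
# see whether the BIOS RAID set is recognised before partitioning
dmraid -r
mdadm --examine --scan

If neither tool reports the set, initializing the drives in the installer will indeed wipe the BIOS metadata, which matches what you're seeing.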
When I installed Slackware 13.37 for the first time, there was no problem. Everything worked very well - as expected from this great distro.
But LILO wouldn't detect the Windows installation correctly, so I tried to edit it using liloconfig, where I wrongly entered the entry for Linux. So, without further messing around with the system, I went for reinstalling the whole system.
Now when I choose 'Linux' in the LILO menu, some initial startup scripts run correctly, like detecting and mounting the root and non-root file systems.
But then the screen suddenly goes black. It happens so fast, again and again, that I can't get a look at what's wrong.
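For reference, a minimal lilo.conf Linux stanza, with the root device an assumption. "vga = normal" is worth trying here, since framebuffer console modes are a classic cause of the screen going black right after the early boot messages, and /sbin/lilo must be re-run after every edit:

Code:
vga = normal
image = /boot/vmlinuz
  root = /dev/sda1
  label = Linux
  read-only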
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

As I don't want to ruin the maybe-small chance I have left to rescue my data, I would like to hear this wise community's input.
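Before any --create, it is worth capturing what the superblocks still remember, because a re-create only preserves data if the device order, chunk size, and metadata version match the original exactly. A less destructive first attempt is a forced assemble:

Code:
# record the existing superblocks (save this output!)
mdadm --examine /dev/sd[abcd]2
# try to force the array together before resorting to --create
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2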
I look after a small office where the computers run Ubuntu. Sometimes they phone me for help. For that reason, I decided to install Ubuntu alongside my Slack. I seem to have problems with the LILO configuration. Ubuntu is installed on software RAID:
/boot = md0 (RAID 1 of sda1+sdb1)
/     = md1 (RAID 0 of sda2+sdb2)
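A sketch of the lilo.conf stanza that could chain-load that Ubuntu from Slackware's LILO, assuming md0 is mounted at /mnt/ubuntu when lilo runs (the mount point and file names are illustrative; Ubuntu's own initrd is what assembles md1 for the RAID0 root):

Code:
image = /mnt/ubuntu/vmlinuz
  initrd = /mnt/ubuntu/initrd.img
  root = /dev/md1
  label = Ubuntu
  read-only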
I have a Gigabyte P55-UD3 motherboard, and I created a RAID 0 array in the BIOS with the integrated Gigabyte RAID controller. Can I install Slackware64 on it? Can I make it bootable? Could I have multiple operating systems (Windows too) without each of them corrupting the partition table? If yes, then how? (I would prefer not to use an extra boot disk.)
I use Slackware 13.1 and I want to create a RAID level 5 array with 3 disks. Should I use the entire device or a partition? What are the advantages and disadvantages of each? If I use the entire device, should I create any partition on it or leave all the space free?
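The common recommendation is partitions rather than bare devices: a partition table marks the disk as in-use (so other tools and BIOSes don't treat it as blank), and leaving a little slack at the end makes it easier to substitute a replacement disk that is a few sectors smaller. A sketch, with device names assumed:

Code:
# on each of the three disks, create one partition of type fd
# (Linux raid autodetect), then:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1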
I have a VT6421-based RAID controller. lspci shows this: 00:0a.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE RAID Controller (rev 50). The drivers that come with it appear to have been compiled against an old kernel (I'm guessing); when I try to load them I get "invalid module format", and dmesg shows this: viamraid: version magic '2.6.11-1.1369_FC4 686 REGPARM 4KSTACKS gcc-4.0' should be '2.6.27.7-smp SMP mod_unload. Does anyone know of a way to get this to work? I found the source for this, but it appears to only support Fedora, Mandrake, and Red Hat, and I can't get it to compile or make a driver disk.
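It may be simpler to skip the vendor module entirely: the stock kernel's sata_via driver claims the VT6421, and the BIOS RAID set (fake-RAID) can then be assembled by dmraid instead. A sketch, assuming dmraid is installed:

Code:
modprobe sata_via     # in-kernel driver for the VT6421
dmraid -ay            # activate any recognised fake-RAID sets
ls /dev/mapper/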
Basic Problem: I have been trying to install 13.1 (64-bit) and have not been able to get lilo to install.
Procedure:
1) Partitioned drive /dev/sdc: 1GB (Linux RAID) and 499GB (Linux RAID)
2) Copied the partitioning scheme to /dev/sdd
3) Set up RAID-1 arrays md0 (sdc1+sdd1) and md1 (sdc2+sdd2)
4) Wrote random data to the partitions
5) Set up LUKS on md1 (swluks)
6) Set up LVM on swluks (80GB /, 375GB /home, 20GB swap)
7) Ran setup, chose partitions, installed software
8) Set up lilo (MBR, selected /)
9) TRIED to install lilo
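For this layout the lilo.conf details matter: /boot has to live on the plain RAID1 (md0) so LILO can read the kernel without going through LUKS or LVM, and the root must come from an initrd. A sketch of the relevant pieces, with the volume group and initrd names being assumptions:

Code:
boot = /dev/md0
raid-extra-boot = mbr-only     # also write boot records to md0's members
image = /boot/vmlinuz
  initrd = /boot/initrd.gz     # built with RAID, LUKS and LVM support
  root = /dev/cryptvg/root     # hypothetical VG/LV name
  label = Linux
  read-only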
I have a RAID level 5 array with metadata 1.2, made with mdadm. I put it in /etc/fstab to mount at boot, but it doesn't work because the RAID is not detected at boot. I have an /etc/mdadm.conf like this:
But I changed the metadata version manually, because 1.02 gave me an error. I don't know if that is a bug or what! Besides this, I have to put a line in /etc/rc.d/rc.local to assemble the array.
After that, I can mount it. Why is the array not detected at boot? Is it because the metadata version is newer than the old 0.90 format? Can I put the line I have in /etc/rc.d/rc.local to assemble the array in another file, one that will be executed before /etc/fstab is processed?
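The underlying rule: the kernel's RAID autodetect only assembles arrays with the old 0.90 superblock; anything with 1.x metadata must be assembled from userspace, which is what your rc.local line does. A cleaner way is a correct ARRAY line in /etc/mdadm.conf, so that "mdadm -A -s" (which recent Slackware rc.S versions run early at boot, an assumption worth checking against your rc.S) finds the array before filesystems are mounted. A sketch:

Code:
# append a generated ARRAY line rather than writing one by hand
mdadm --detail --scan >> /etc/mdadm.conf
# it will look something like this (UUID illustrative):
# ARRAY /dev/md0 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx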
I'm somewhat stuck. I'm fiddling with an Intel SS4200 NAS, where I managed to install Slack 13.1 on a spare IDE HDD that was lying around (instead of that crappy 256 MB DOM). Anyway, everything works except one thing...
The setup is: 1x IDE HDD with Slackware on it, and 4x 1TB drives in RAID 5, which when mounted make 3 different logical drives. Everything works: /dev/md0 is created, with an LVM PV on top of it carrying the 3 different logical volumes. Well, at least until I reboot. After a reboot /dev/md0 is not automatically assembled, and because of that LVM stays inactive. Of course, I can write a script and put it in rc.local that will activate and mount what I want, but I'm sure there is a more elegant solution.
At the moment I need to issue mdadm --assemble /dev/md0, vgchange -ay and mount -a at startup, and vgchange -an at shutdown. I checked the parts concerning LVM in rc.S, but I'm clueless. The kernel on the system is 2.6.36, mdadm is v2.6.9, and the LVM version is 2.02.64.
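If rc.S on 13.1 behaves the way I think it does, md assembly and vgchange -ay both happen there early at boot, but only when the array can be found from a config file; so the likely missing piece is a persistent /etc/mdadm.conf (an assumption worth verifying against your rc.S):

Code:
# write a config so the array is assembled before LVM activation
mdadm --examine --scan > /etc/mdadm.conf

After that the rc.local workaround (and the manual vgchange/mount) should no longer be needed.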
I'm building a NAS, based on the Intel SS4200. There are 4 drive bays in the machine for use with SATA disks, two of which I plan on filling now, the other two which I plan on filling later. The box also includes an IDE connector to which I will connect an 8GB Disk on Module onto which I will install Slackware. I wish to have all drives in the box show up as one contiguous volume. What partitioning/LVM/RAID configuration can I use which will allow me to:
1. Add a disk and transparently grow the available space of the volume?
2. Replace a disk with a larger disk and transparently grow the available space of the volume?
3. Lose a disk to hardware failure and replace it with a new one with no data loss?
If I use RAID 5, I'm pretty certain I can get numbers 1 and 3 above, but I'm not sure about number 2. The downside is that I'd have to start with 3 disks in the machine, and I'm unsure if adding a 4th disk whose size is larger than each of the 3 starting disks would lead to wasted space. For instance, if I start with three 1TB drives in RAID 5, and then add a 2TB 4th drive, would my available size go from 2TB to 3TB? Or from 2TB to 3.xTB?
Is it important in a RAID 5 setup to have all disks the same size? With LVM, I can certainly get number 1 above, but what about 2 and 3? I know you can use LVM to present many disks or partitions as one contiguous volume, but if I have two 1TB drives in one volume, and only have 300GB of data, then would the second drive remain empty until I broke the 1TB barrier? In this case, it's wasted space from the get go. I suppose another option would be to start with RAID 1 until I can afford a third disk.
When adding the new disk, could I switch to RAID 5 without data loss? I'm planning on maintaining a full mirror of the NAS on some USB disks as a backup, so if configuration changes to the NAS require wiping the disks and restoring from backup, it's not a total loss. However, it certainly makes me nervous to be in a state where only one copy of the data exists, so I'd rather find a solution where I can add and upgrade disks in the NAS without relying on the backup copies.
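On the specific growth questions, md itself gives fairly definite answers: each RAID5 member contributes only min(member sizes), so three 1TB disks plus one 2TB disk yields 3TB usable (not 3.x), and both the add-a-disk grow and the RAID1-to-RAID5 conversion are supported online with a reasonably recent kernel and mdadm. A sketch of the grow path, with the VG/LV names being hypothetical:

Code:
# add the new disk and reshape from 3 to 4 members
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4
# then let LVM and the filesystem see the new space
pvresize /dev/md0
lvextend -l +100%FREE /dev/vg0/data
resize2fs /dev/vg0/data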
I am trying to get Slackware 12.2 running on a system with two identical hard discs using RAID-1, LVM and LUKS.
Here is what I get:
Code:
The system is still the same; however, the results of upgrading to or installing 12.2 are different: the system refuses to boot. The screen messages during boot seem to suggest that the RAID system is "seen" by the system, but the encrypted filesystem is not.
I can boot with the installation DVD, however, and
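A hedged guess at the usual cause of this symptom: the initrd was built without RAID/LUKS/LVM support, so the kernel assembles the RAID but never gets as far as the passphrase prompt. From the DVD, after chrooting into the installed system, something like this rebuilds it (kernel version, flags and device names here are assumptions; check mkinitrd -h on 12.2 for the exact options):

Code:
mkinitrd -c -k 2.6.27.7-smp -m ext3 -f ext3 \
  -r /dev/vg0/root -R -C /dev/md1 -L
# -R: assemble RAID, -C: luksOpen this device, -L: activate LVM
# then point lilo.conf's initrd= at /boot/initrd.gz and re-run lilo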
I have installed several flavors of Linux on this box (Ubuntu, Mint, and a few others), so I figured I would try the most challenging distro, Slackware 13.37... I had no trouble getting through the install, but when the system boots it just shuts down at some point during the boot process.
I have a Dell Latitude C840 with an nVidia GeForce4 440 Go card, and the screen is reported to go to 1600x1200, 32 bpp on Wikipedia. But I am unable to choose that resolution in Slackware. Last time I changed the resolution to a higher one I needed to edit xorg.conf and change the HorizSync setting to match the monitor, but I'm unable to find the specifications for the C840's screen. Is there another way of doing this? Maybe an nVidia driver or something?
Code:
# Monitor section
# Any number of monitor sections may be present
Section "Monitor"
    Identifier "My Monitor"
    # HorizSync is in kHz unless units are specified.
    # HorizSync may be a comma separated list of discrete values,
    # or a comma separated list of ranges of values.
    # NOTE: THE VALUES HERE ARE EXAMPLES ONLY. REFER TO YOUR
    # MONITOR'S USER MANUAL FOR THE CORRECT NUMBERS.
    HorizSync 31.5 - 50.0
    # HorizSync 30-64          # multisync
    # HorizSync 31.5, 35.2     # multiple fixed sync frequencies
    # HorizSync 15-25, 30-50   # multiple ranges of sync frequencies
    # VertRefresh is in Hz unless units are specified.
    # VertRefresh may be a comma separated list of discrete values,
    # or a comma separated list of ranges of values.
    # NOTE: THE VALUES HERE ARE EXAMPLES ONLY. REFER TO YOUR
    # MONITOR'S USER MANUAL FOR THE CORRECT NUMBERS.
    VertRefresh 40-90
EndSection
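Since the panel specs are hard to find, one approach is to use ranges wide enough to admit the UXGA mode; the numbers below are common values for a 1600x1200@60 panel, not taken from Dell documentation, so treat them as a starting point (the binary nVidia driver can usually read the panel's EDID and skip this entirely):

Code:
Section "Monitor"
    Identifier  "My Monitor"
    HorizSync   31.5 - 90.0    # 1600x1200@60 needs roughly 75 kHz
    VertRefresh 50 - 75
EndSection

Then make sure "1600x1200" is listed first in the Modes line of the Screen section's Display subsection.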
I have a Dell Latitude D620. I suspended to RAM and resumed, and when I move the mouse the display goes crazy. It looks like an old TV with the horizontal hold messed up. IIRC this runs an Intel video card; I'll double-check when I get home.