Hardware :: Two More Drives (1TB Too) In Order To Build RAID5
Dec 11, 2010
I have a 1TB SATA2 disk in my home Debian server. I'm planning to buy two more 1TB drives in order to build a RAID5. Does it make any difference if I mix the SATA2 drive with two SATA3 drives? I can get the SATA3 drives with more cache (64MB) at the same price as the old one, and I'm not sure whether the SATA version or the cache size can affect the normal behavior of a RAID5 array.
I'm troubleshooting a server with three 73GB HDDs in a RAID5 of about 140GB. When I check the array, the logical volume shows one HDD not online (0 HDD1, 1 HDD2, 2 HDD3; drive 0 is not showing online). When we set it to online and save, it goes offline again after restarting.
Software RAID5 on four 2TB Caviar Green disks. Then 12 partitions of about 500GB each, and finally LVM using the partitions as PVs, creating some VGs for my needs.
My disks are /dev/sda to /dev/sdd; I boot from /dev/sde, which is an SD card.
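A minimal sketch of that layout, assuming the four data disks really are /dev/sda through /dev/sdd (device names from the post; partition count trimmed to two for brevity, and these commands destroy existing data):

```shell
# Build the RAID5 array across the four whole disks:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Partition /dev/md0 into ~500GB pieces with fdisk/parted, then make each a PV:
pvcreate /dev/md0p1 /dev/md0p2
vgcreate data_vg /dev/md0p1 /dev/md0p2

# Carve out logical volumes as needed:
lvcreate -L 200G -n media data_vg
mkfs.ext4 /dev/data_vg/media
```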
I have a CentOS 5 based Linux system with a 3Ware 9550SU RAID card and four 500GB drives set up in a RAID5 array (3 in the array and 1 hot spare).
I want to 'replace' these drives with four 2TB drives without data loss. My server case has a total of 8 drive bays, all hot-swap and all attached to the RAID card, which means I have four empty drive bays on the RAID card.
One thought is to put the four new 2TB drives in the empty drive bays and configure them as a new RAID5 array. The question then is: how do I "mirror" the original RAID5 array over to the new one?
This is just a thought, though; I am not sure it will work. In short, my question to this forum is: how do I accomplish this upgrade?
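One hedged way to do the "mirror" step at the filesystem level, assuming the old array is mounted at /mnt/old and the new array's block device appears as /dev/sdb (both names are illustrative, not from the post):

```shell
# After creating the new array on the four 2TB drives via the 3ware BIOS
# or tw_cli, put a filesystem on it and copy everything across:
mkfs.ext3 /dev/sdb1
mkdir -p /mnt/new
mount /dev/sdb1 /mnt/new

# -aHAX preserves permissions, hard links, ACLs, and xattrs:
rsync -aHAX --numeric-ids /mnt/old/ /mnt/new/
```

Once verified, the old array can be deleted and its bays reused; if the OS itself lives on the old array, the bootloader would also need reinstalling on the new one.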
I am getting really frustrated trying to get my RAID5 working again. I had a RAID5 array built with four of the Western Digital 1.5TB "Advanced Format" drives (WD15EARS). However, when copying 1.5GB DVD-encoded files to the array, I was getting speeds of ~2MB/s. While researching how to make this faster, I came across all the posts about Advanced Format drives and how they were causing issues for a lot of people. The solution looked simple enough: partition starting at sector 64 or 2048 or whatever, then recreate the RAID. However, this is not working for me.
Here are my computer specs:
Motherboard: Gigabyte GA-EP43-DS3L LGA 775 Intel P43 ATX
CPU: Intel Core 2 Duo E8400 Wolfdale 3.0GHz, 6MB L2 Cache, LGA 775, 65W
RAM: 4GB DDR2 1066 (PC2 8500)
Video card: ASUS GeForce 9600GT 512MB 256-bit
Linux: 10.04
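For reference, the usual alignment fix for WD15EARS drives is to start the first partition on a sector divisible by 8 (2048 is standard); a sketch, assuming the members are /dev/sdb through /dev/sde (names are placeholders):

```shell
# Repartition each member with a 4KiB-aligned start (destroys that disk's data):
parted -s /dev/sdb mklabel gpt mkpart primary 2048s 100%
# ...repeat for /dev/sdc, /dev/sdd, /dev/sde...

# Rebuild the array on the aligned partitions:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1
```

If speeds stay at ~2MB/s after this, the chunk size and filesystem stride/stripe-width settings are the next things worth checking.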
I have an old Athlon XP 3000 machine that I keep around as a file server. It currently has three 1TB drives which I had set up as an mdadm RAID5 on FC10. The machine's original drive held the superblock for the RAID array, and it just had a massive heart attack. I've searched, my biggest source being URL... I can't tell whether I can recover the superblock info lost with the original hard drive, or whether I've lost it all.
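It may help to know that mdadm writes its superblocks onto the member disks themselves, not the OS drive, so they usually survive an OS-disk failure; a sketch for checking (member names are assumptions):

```shell
# Inspect the RAID metadata on each surviving member:
mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1

# If the metadata is intact, let mdadm reassemble the array from it:
mdadm --assemble --scan
```

What the dead OS drive more likely held is /etc/mdadm.conf, which `mdadm --detail --scan` can regenerate once the array is assembled.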
I've searched about various recommended build orders for audio and video components. Below is what I'm currently using. My original intention for the list was to provide good audio and video support via MPlayer, but recently I've also been playing around with adding voice and video support to Pidgin.
I am running Ubuntu 10.04 LTS as a dual boot with WinXP, but I am having problems with drive recognition in Ubuntu. I have four internal hard drives: 1 and 2 are standard master and slave, and 3 and 4 are on a RAID card. Windows recognises them all okay every time, but I have found through a lot of trial and error that Ubuntu will sometimes mount the drives in the order 1,2,3,4 and sometimes in the order 3,4,1,2. No matter how I arrange the fstab file, it only works some of the time. If I load my music library into Banshee when the drives are mounted 1,2,3,4, then the next time I boot, if they are mounted 3,4,1,2, it can no longer find my music library, etc. Is there a way to make Ubuntu recognise the drives in the same order every time?
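Mounting by filesystem UUID instead of device name sidesteps the ordering problem entirely, since the UUID follows the drive no matter which sdX letter it gets; a sketch (the UUID and mount point below are placeholders):

```shell
# List each filesystem's UUID:
blkid

# Then reference the UUID in /etc/fstab instead of /dev/sdXn, e.g.:
# UUID=0a1b2c3d-1111-2222-3333-444455556666  /media/music  ext4  defaults  0  2
```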
I asked this on an Ubuntu forum about a year or so ago but got no response. I have an Asus M2NE-SLI motherboard. All my HDDs are SATA. I have two burners, both IDE. Windows sees my hard drives in the order I have them set up according to their cable placement on the motherboard (SATA ports 1-3; SATA 4 I have set for external SATA). However, within Linux (and I mean any Linux distro I've tried: Gentoo, Sabayon, Ubuntu, Mint, Fedora, even GParted) my SATA drives appear completely out of order.
For example:
SATA Port 1 - Primary HDD - 500GB
SATA Port 2 - Secondary - 500GB
SATA Port 3 - Secondary - 250GB
Windows sees them as drives C:, D:, E: respectively. Linux sees:
SATA Port 1: sdb
SATA Port 2: sdc
SATA Port 3: sda
Or something like that. It's really messed up. Does anyone know what could be causing this? It causes all sorts of problems setting up a dual boot environment, because Windows will place its boot loader on the primary boot drive (SATA Port 1). GRUB will install anywhere I tell it to, but because the drive order is messed up it breaks, unless I completely mess around with the drive order in the BIOS. I've resorted to putting Windows on one HDD and Linux on an old external SATA drive for the time being, and I use the BIOS's drive boot manager to select which drive to boot. This is not the ideal way to run it.
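The sdX letters are assigned in driver probe order, not port order, so they are not guaranteed to be stable; the kernel's persistent symlinks show which name currently maps to which physical drive and port:

```shell
# Stable names keyed to each drive's model and serial number:
ls -l /dev/disk/by-id/

# Stable names keyed to the controller port the drive is plugged into:
ls -l /dev/disk/by-path/
```

Pointing fstab and the GRUB root at UUIDs or by-id names, rather than sdX, makes the boot setup immune to the reordering.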
Is it possible to change the boot order of hard drives? I've got two 250GB SATA hard drives in my PC and I can't figure out how to change the boot order without physically switching the data cables inside the case. I've been into the BIOS and it won't let me switch the order there. On one hard drive I've installed Ubuntu 8.04, and the other has Ubuntu 10.04. I am assuming I need to change the grub/menu.lst file, but I am not sure of the exact syntax.
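With GRUB legacy (the bootloader in Ubuntu 8.04), which drive gets booted is controlled per menu entry rather than in the BIOS; a sketch of an entry in /boot/grub/menu.lst on the first drive that boots the second drive, assuming the 10.04 install is on the second disk's first partition (the kernel version below is a placeholder, so check /boot on that drive for the real filenames):

```
title  Ubuntu 10.04 (second drive)
root   (hd1,0)
kernel /boot/vmlinuz-2.6.32-21-generic root=/dev/sdb1 ro
initrd /boot/initrd.img-2.6.32-21-generic
```

Whichever entry is listed under `default` boots automatically, so the boot order changes without touching any cables.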
I just went out and bought stuff to build a new computer, and among the parts was a Gigabyte GA-890FXA-UD5 motherboard ([URL]). The board has 3 (well, 4, but we'll stick to the 3) main SATA interfaces, with 2 slots per interface, allowing 6 SATA drives. In slot_0 I put my Blu-ray drive, and in slot_1 I put the drive that will host the OS and its partitions; that is the SATA connector pair on the left. In the middle SATA connector pair (slot_2 and slot_3) I have two 2TB drives, and in the SATA connector pair on the right (slot_4 and slot_5) I have two 1.5TB drives.
I have no drive failures, but I need to recreate a RAID5 set as the next free md disk number. Originally I built a temporary Debian OS on a single drive and had 4x2TB drives in a software RAID5 array (md0). This worked fine and allowed me to move all our data to it and remove our old fileserver. I have now pulled out the four 2TB RAID5 drives and created a new OS on two new 80GB drives, partitioned as follows:
md0 is now 250MB RAID1 as /boot
md1 is 4GB RAID1 as swap
md2 is 76GB RAID1 as /
If I power off and push the 4x2TB drives back in, I cannot see an md3. I presume I would need to create an md3 from these four drives, but I don't want to mess things up as it's live data. So I'm here asking for help, or a bit of hand-holding to get it done right.
PS - It's a Debian Lenny 5.0.3 RAID1 fresh install replacing a Debian Lenny 5.0.3 on a single disk.
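Since the old superblocks are still on those four drives, the array should be assembled from its existing metadata rather than created again (running `mdadm --create` on live data is the thing to avoid); a sketch, with member names as assumptions:

```shell
# See what arrays the drives' metadata describes:
mdadm --examine --scan

# Assemble the old RAID5 under the new node without touching the data:
mdadm --assemble /dev/md3 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# Record it so it comes back at every boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```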
I'm in love with my Opensuse 11.2. Love my KDE 4.4. The only thing I miss from my Ubuntu installation, is the ability to use Boxee. I would be more than willing to compile Boxee from source. I only have 2 problems with that:
1) I don't know where I can find all the build-deps or what they are for that matter to build Boxee.
2) I'm running on a Netbook. Yes, my measly Intel Atom is no fun for compiling and building.
What are my options/what can I do to get Boxee up and running on 11.2? I've tried searching on build service for an RPM, but I think due to legal restrictions, Boxee can't be on there.
checking for LIBEVENT... configure: error: Package requirements (libevent >= 2.0.10) were not met. In order to build Transmission 2.21, I need a new version of libevent: libevent-dev >= 2.0.10 must be built and installed first. But I can't find any information about building the development files for libevent.
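Building libevent from a release tarball installs both the library and its development headers, which is what Transmission's configure script is looking for; a sketch, assuming the 2.0.10-stable tarball has already been downloaded:

```shell
tar xzf libevent-2.0.10-stable.tar.gz
cd libevent-2.0.10-stable
./configure
make
sudo make install      # installs to /usr/local by default
sudo ldconfig          # refresh the linker cache

# If Transmission's configure still can't find it, point pkg-config at it:
cd ../transmission-2.21
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure
```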
As I'm advancing in building some nice RPMs, I finally wanted to install one of my gems. The build was successful, but the actual install fails with missing dependencies.
Code:
$ rpm --root /home/sascha/rpmbuild/ -i ./RPMS/x86_64/memcached-1.4.1-2.x86_64.rpm
error: Failed dependencies:
    libc.so.6()(64bit) is needed by memcached-1.4.1-2.x86_64
    libc.so.6(GLIBC_2.2.5)(64bit) is needed by memcached-1.4.1-2.x86_64
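With `--root`, rpm resolves dependencies against the RPM database inside /home/sascha/rpmbuild/, and that scratch tree has no glibc registered in it, which is why perfectly ordinary libc dependencies "fail". A hedged pair of workarounds:

```shell
# For a throwaway test tree only: skip dependency resolution entirely.
rpm --root /home/sascha/rpmbuild/ --nodeps -i ./RPMS/x86_64/memcached-1.4.1-2.x86_64.rpm

# Or install normally, so the system RPM database (which knows glibc) is used:
sudo rpm -i ./RPMS/x86_64/memcached-1.4.1-2.x86_64.rpm
```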
I have an init script running as a special build user which performs an automated build that fails with "Too many open files". I updated /etc/security/limits to allow the special user more open files, but that didn't work: the init script still isn't allowed more open files. Here's a demonstration of the problem:
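This is expected: limits.conf is applied by PAM at login, and init scripts don't go through PAM, so the limit has to be raised in the script itself before dropping to the build user; a sketch (the user and build command are placeholders):

```shell
#!/bin/sh
# Raise the open-file limit for this shell and everything it spawns.
# Must run as root, before switching to the unprivileged user:
ulimit -n 8192

# Now run the build as the special user; it inherits the new limit:
su - builduser -c '/opt/build/run-build.sh'
```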
Trying to install SW 13.1 (on DVD) on the following system:
M/B: Intel DX38BT
Processor: Intel Core 2 Quad Q6700, 2.66GHz, 8MB Cache, 1066MHz FSB, Socket 775
Memory: Corsair Dual Channel 8192MB PC10600 DDR3 1333MHz (4x2048MB)
Graphics: Diamond Radeon HD 3850 Viper, 512MB GDDR3, PCI Express 2.0
P/S: Ultra 1000W
My goal is to install the i386 build on one partition and the 64-bit build on another. I have been away from Linux for a while and am sick to death of Win7, want to come home. :-}
Booted the i386 side of the DVD; the system freezes after a couple of lines that start with ATA2. It does not respond to the three-finger salute, Ctrl-C, nothing; I have to press reset. I have tried both the huge.s and hugesmp.s kernels.
Booted the 64-bit side, and it comes up fine. I performed the install and selected automatic LILO install. The LILO install hung, but I was able to reboot. I booted off the 64-bit side again and entered the following: huge.s root=/dev/sde3 rdinit= ro. It booted fully to the login prompt, but the keyboard does not work, no input.
Upgraded from Karmic through Update Manager, and none of my external drives, CD drive, or flash drives are picked up. Had to go back to Karmic and will remain there for a while.
I'm breaking into the OS drive side with RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1) and have been testing the fail-over and rebuild process. Works great physically failing out either drive. Great! My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: Fail one of the disks out of the RAID-1, then image it to a file, saved on an external disk, using the dd command (if memory serves, it would be something like "sudo dd if=/dev/sda of=backupfilename.img") Then, re-add the failed disk back into the array. In the event I needed to roll back to one of those snapshots, I would just use the "dd" command to dump the image back on to an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on-hand, and using something like a tape drive is also not an option.
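A sketch of the fail/image/re-add cycle described above, assuming md0 is the mirror and /dev/sdb is the member being imaged (note that the `dd` in the post is also missing the output path, which needs to point at the external disk):

```shell
# 1. Fail and remove one mirror member:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# 2. Image the whole disk to a file on the external drive:
dd if=/dev/sdb of=/mnt/external/os-backup.img bs=1M

# 3. Re-add the member and let the mirror resync:
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat    # watch the rebuild progress
```

One caveat with this approach: the array is running degraded, with no redundancy, for the whole duration of the dd.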
So I set up a RAID10 system, and I was wondering what the difference between the active and spare drives is. If I have four active drives, then the two stripes are mirrored, right?
I am building a home server that will host a multitude of files, from MP3s to ebooks to FEA software and files. I don't know if RAID is the right thing for me. This server will have all the files that I have accumulated over the years, and if the drive fails then I will be S.O.L. I have seen discussions where someone has RAID1 set up but doesn't keep the drives internal to the case; they bought two separate external hard drives with eSATA to minimize the chance of an electrical failure taking out both drives. (I guess this is a good idea.) I have also read about having one drive and using a second to rsync the data every week. I planned on purchasing two enterprise hard drives of 500GB to 1TB, but I don't have any experience with how I should handle my data.
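If the weekly rsync route wins out over RAID, it can be a one-line cron job; a sketch with placeholder paths (`--delete` makes the backup an exact mirror, so deletions propagate too):

```shell
#!/bin/sh
# /etc/cron.weekly/backup-files: mirror the data drive to the backup drive.
# -a preserves permissions, ownership, and timestamps.
rsync -a --delete /srv/files/ /mnt/backup/files/
```

Unlike RAID1, this protects against accidental deletion for up to a week, but can lose up to a week of new data if the main drive dies.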
I am having trouble configuring xorg.conf. I am running SUSE 11.3 desktop on my PC. I have onboard nvidia 6150SE graphics, and I have put an nvidia 8400GS 512MB card in the 16x PCIe slot for the additional seat...
So kindly tell me what I should do now, or what things are missing. For any further info about my PC, please tell me what to post (specify the commands for the same).
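For a two-GPU setup, each card usually needs its own Device section in xorg.conf bound by BusID; a hedged skeleton (the BusIDs below are placeholders; the real ones come from `lspci | grep -i vga`, converted from hex to decimal):

```
Section "Device"
    Identifier "Onboard6150SE"
    Driver     "nvidia"
    BusID      "PCI:0:13:0"
EndSection

Section "Device"
    Identifier "GF8400GS"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
EndSection
```

Each Device is then referenced from its own Screen section, and the seats are tied together (or kept separate) in the ServerLayout.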
I recently had issues with the latest Linux kernels. I got that fixed, but ever since then none of my drives will mount, and they aren't even recognized.
I suspect this is not new, but I just can't find where it was treated. Maybe someone can give me a good lead. I just want to prevent certain users from accessing the CD/DVD drives and all external drives. They should be able to mount their home directories and move around within the OS, but they shouldn't be able to move data off the PC. Any clues?
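One hedged approach: on most distros, access to removable media is granted through group membership, so removing the restricted users from those groups blocks both raw device access and desktop automounting (the username below is a placeholder, and the group names vary by distro):

```shell
# See which groups currently grant the user device access:
groups restricteduser

# Remove the user from the media-related groups (commonly cdrom, plugdev, floppy):
gpasswd -d restricteduser cdrom
gpasswd -d restricteduser plugdev
```

For a stricter setup, PolicyKit/HAL policies can deny the mount actions outright rather than relying on group membership.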
I have recently set up and installed Ubuntu 9.04 on a virtual drive using VMware 6.04, and installed the desktop GUI as well. I need to add other drives for data and logging, which I did on the VMware side. I can see the two drives in Ubuntu but cannot access them; I get an "unable to mount location" error when I try. How can I resolve this, as I need these two virtual drives to be used as data drives?
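A freshly added virtual disk has no partition table or filesystem, which is the usual cause of that mount error; a sketch, assuming the new disks appeared as /dev/sdb and /dev/sdc (check with `fdisk -l`):

```shell
# Partition the disk (create one partition spanning it):
sudo fdisk /dev/sdb

# Put a filesystem on the new partition; the guest can't mount raw space:
sudo mkfs.ext3 /dev/sdb1

# Mount it, and add a matching line to /etc/fstab for boot-time mounting:
sudo mkdir -p /data
sudo mount /dev/sdb1 /data
# /etc/fstab entry:  /dev/sdb1  /data  ext3  defaults  0  2
```

The same steps repeat for /dev/sdc with its own mount point.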
I've used Ubuntu once before, but got fed up with the boot menu asking me every time I turned my laptop on, because I wasn't using it enough. I have Windows 7 on drive C and want to keep it there. I have several 1.5TB+ drives, and one of them is not being used. I want to dedicate it to Ubuntu and be able to dual boot with my Windows 7 install. Is this possible? And if it is, what about when this drive is not connected to my laptop? Will that mess up the boot process?