Server :: Convert RAID 1 To Non-RAID System In CentOS 5?

Jan 6, 2010

Here is my system: a Dell PowerEdge 1950 with a PERC 6 controller and two 300 GB disks in a RAID 1 mirror, giving 300 GB of usable space. A few applications and their data have grown to around 280 GB. As you know, the PowerEdge 1950 can only hold two disks.

They are not mission critical, so I want to remove the RAID and run the machine as a non-RAID system; that would let the applications and data grow up to 600 GB. I do not want to lose the data and the setup, and I am not very clear on RAID or how to convert it.
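
Whichever way you go, the data has to be copied off the array first, because deleting or reconfiguring the PERC virtual disk destroys its contents. A minimal sketch for checking the current controller layout and staging that copy, assuming the MegaCli utility is installed (it may be named MegaCli64) and using placeholder paths:

Code:
# show the current virtual disk and physical disk layout on the PERC (LSI MegaRAID based)
MegaCli -LDInfo -Lall -aALL
MegaCli -PDList -aALL
# copy everything somewhere off the array before breaking the mirror (paths are placeholders)
rsync -aAXH /data/ /mnt/external/data-backup/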

View 6 Replies



Server :: Turn A Non-RAID System Into RAID?

Mar 24, 2011

I have a box that doesn't currently have a RAID controller or software RAID running. I would like to make it RAID 1. Since IDE RAID controllers are hardly available any more, I have another hard drive that is the exact same model as the drive currently in the box running CentOS. Can I somehow add the second drive and get the box to mirror from here on out? The box gets really hot and I want to be ready for a drive failure.
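
One common route, sketched here with assumed device names (sda is the current disk, sdb the new one): build a one-disk "degraded" RAID 1 on the new drive, copy the running system onto it, boot from the array, then add the original disk as the second half of the mirror.

Code:
# partition sdb to match sda, then create the mirror with one member missing
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/newroot
rsync -aAXH --exclude=/proc --exclude=/sys --exclude=/mnt / /mnt/newroot/
# after updating fstab, the initrd and grub to boot from /dev/md0, add the old disk:
mdadm --add /dev/md0 /dev/sda1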

View 3 Replies View Related

Fedora Servers :: Convert A Running System To RAID Level 1?

Sep 30, 2009

I was recently asked to try to convert a running system to RAID level 1. I did not succeed, but I am still interested in accomplishing this in a test environment. What I did was create a RAID device with one device missing, e.g.

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2

Once the RAID was up and running I created the /etc/mdadm.conf file so that the RAID device would be activated on boot. Once the device was created, I copied all the data from the running root filesystem to /dev/md0; the only directory I didn't copy was /proc. When I rebooted the machine I got a kernel panic and the system couldn't find /dev/root. I would greatly appreciate any information or tips regarding the problem.
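
For what it's worth, that panic usually means the initrd and bootloader still point at the old root device and never assemble the array. A sketch of the pieces that typically need updating (device names and paths are assumptions):

Code:
# in the copied root's /etc/fstab, mount / from the array:
#   /dev/md0   /   ext3   defaults   1 1
# rebuild the initrd so it includes raid1 support and picks up /etc/mdadm.conf
mkinitrd --with=raid1 /boot/initrd-$(uname -r).md.img $(uname -r)
# and in grub.conf boot the kernel with root=/dev/md0 and the new initrd image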

View 1 Replies View Related

General :: Migrate An Installed Ubuntu System From A Software Raid To A Hardware Raid?

Jun 29, 2011

Is it possible to migrate an installed Ubuntu system from a software RAID to a hardware RAID on the same machine? How would you go about doing so?
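
There is no in-place conversion; the usual route is back up, build the hardware array in the controller BIOS, restore, and reinstall the bootloader. A rough sketch, with all devices and paths as assumptions:

Code:
# from a live environment, back up the system that currently lives on the software RAID
rsync -aAXH /mnt/oldroot/ /mnt/backup/rootfs/
# the hardware RAID volume then appears to Linux as one ordinary disk
mkfs.ext4 /dev/sda1
mount /dev/sda1 /mnt/newroot
rsync -aAXH /mnt/backup/rootfs/ /mnt/newroot/
# finally chroot in, fix /etc/fstab (new UUIDs) and re-run grub-install /dev/sda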

View 1 Replies View Related

Server :: Convert Raid 1 To Raid 10?

Jul 21, 2011

I am looking to convert a RAID 1 server I have to RAID 10. It is using software RAID; currently I have 3 drives in the RAID 1. Is it possible to boot into the CentOS rescue environment, stop the RAID 1 array, and then create the RAID 10 with 4 drives, 3 of which still carry the RAID 1 metadata? Will mdadm be able to figure it out and resync properly, keeping my data, or is there a better way to do it?
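
mdadm will not reuse the old RAID 1 contents just because its metadata is still on the disks; creating a RAID 10 over them writes new metadata and stripes over the data. One sketch of a route that keeps the data, with device names as assumptions (md0 is the 3-disk mirror on sda1/sdb1/sdc1, sdd1 is the new disk):

Code:
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1     # drop one copy from the mirror
mdadm --zero-superblock /dev/sdc1
# build a degraded 4-disk RAID 10 from the freed disk and the new one
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc1 missing /dev/sdd1 missing
mkfs.ext4 /dev/md1 && mount /dev/md1 /mnt/new
rsync -aAXH /mnt/old/ /mnt/new/        # /mnt/old is the still-intact RAID 1
# once the copy is verified, retire the mirror and complete the RAID 10
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1
mdadm /dev/md1 --add /dev/sda1 /dev/sdb1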

View 2 Replies View Related

Ubuntu :: Converting RAID 1 To RAID 5 System

Jun 12, 2010

I have two 1 TB hard drives in a RAID 1 (mirroring) array. I would like to add a third 1 TB drive and create a RAID 5 with the 3 drives for 2 TB of usable space. I have Ubuntu installed on a separate drive. Is it possible to convert my RAID 1 array to RAID 5 without losing the data? Is there a better solution?
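
Reasonably recent mdadm (roughly 3.1 or newer with a 2.6.32+ kernel) can reshape a two-disk RAID 1 into RAID 5 in place; a sketch, with the new disk's name as an assumption and a full backup strongly advised first:

Code:
mdadm --grow /dev/md0 --level=5                  # the 2-disk mirror becomes a 2-disk RAID 5
mdadm --add /dev/md0 /dev/sdd1                   # add the new third disk
mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-grow.backup
# the reshape runs for hours; afterwards grow the filesystem, e.g. resize2fs /dev/md0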

View 1 Replies View Related

Server :: Error Formatting RAID On Install With CentOS 5.5 And Ubuntu Server 10.10?

Dec 7, 2010

I have a Dell server. It's a couple of years old and has a PERC 4 RAID controller in it. When I do an install, the installer shows the drives configured in their RAID 5 configuration. I select the disk and do an auto partition. It goes for about 10 minutes, then errors out with something like "error formatting drive VolGroup00, the error is serious, press enter to reboot"; I can't remember the exact wording, and it doesn't give any information about the error.

I get the same error on CentOS 5.5 and Ubuntu Server 10.10.

There is a firmware update for the RAID controller that I could apply, but Dell's site was not letting me download it yesterday. Other than that I am not sure what to do. I could try the older CentOS 5.4, but I figure someone might be able to tell me how to fix this so I can run the newest version.
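
Before the firmware route, it may be worth checking whether the installer is choking on stale metadata left on the RAID 5 virtual disk; a sketch (the device name is an assumption, and the dd wipes whatever is on the volume):

Code:
# from the installer's shell (Alt+F2) or a rescue boot, see what the kernel reports
dmesg | tail -50
# clear any leftover partition table / old LVM metadata from the virtual disk
dd if=/dev/zero of=/dev/sda bs=1M count=10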

View 2 Replies View Related

Server :: Ubuntu 10.04 LTS - RAID 1 Base System Installation

Dec 30, 2010

I'm trying to install a box with Ubuntu 10.04 LTS Server and a typical LAMP stack to replace the old Ubuntu 8.04 server I have at my school, both to have a "backup" system and to replace NIS authentication with LDAP. Well, I'm getting stuck on the first step: installation of the base system. I want to build a RAID 1 system with the two 320 GB SATA hard drives the machine has; I have a little experience installing RAID because I installed my old 8.04 server with RAID 1 as well. I boot the box from a USB stick with Ubuntu 10.04 Server 64-bit; the system boots fine, asks me about language, keyboard and so on, finds the two NICs, and the DHCP configuration of one of the cards completes.

Then it starts the partitioner. One of the drives already contains three partitions with an installation of a regional flavour of Ubuntu; the other drive only contains a partition for backups. I don't want to preserve any of this, so the first thing I do is replace the partition tables of both drives with new ones. This is done without problems. Then I go to the first disk, for example, and create a new partition. The partitioner asks me for the size, I enter 0.5 GB (or 500 MB), then I select that it has to be a primary partition at the beginning of the disk; all goes OK.

Once it's created I go to the "Use as:" line, press Enter and select the option "Partition for RAID volume". When I hit Enter the error appears: the screen flickers black for a second or two, then it shows a progress bar "Starting the partitioner..." that always gets stuck at 47%! Sometimes the partitioner lets me progress a little further (for example it lets me activate the boot flag of the partition, or lets me make another partition; once the error didn't appear until the first partition of the second drive!), but it always ends up stuck with the same progress bar at 47%.

I've tried a lot of things: downloaded the ISO again and rebuilt the USB, same result. Downloaded the ISO and rebuilt the USB from another computer, same result. Unplugged all the SATA and IDE drives except the two hard drives, same result. Burned a CD-ROM instead of using a USB stick, same result. Downloaded the 10.10 Server ISO (not an LTS), and that USB stick can't even boot; it's a different error, but I only tried it for comparison.

When the error appears, I hit Ctrl+Alt+F2 to get a root prompt and kill two processes: /bin/partman and /lib/...(don't remember)/init.d/35... When I return to the first console, the progress bar is gone and the installer asks which step I want to return to. I choose "Partition disks", and the progress bar reappears and gets stuck immediately at 47%! Is the Ubuntu 10.04 LTS Server installer broken?
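
One more thing worth grabbing before filing it as an installer bug: the partitioner writes its complaints to the installer's syslog, which can be read while it hangs (a sketch):

Code:
# Alt+F4 shows the installer's live log; from the Alt+F2 root shell the same log is at:
tail -f /var/log/syslog
# whatever partman prints when it stalls at 47% should appear there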

View 5 Replies View Related

CentOS 5 Server :: Possible To Create Software RAID On Working System

Oct 5, 2009

I work at a small company, and two weeks ago one of our servers had a hard disk failure (yes, it was a Seagate 11). After two days without sleep trying to recover everything (and we did it!), we decided to use software RAID on some of our servers, so that if one drive fails we can keep the system running without losing anything. Yes, I know you're supposed to take all these precautions beforehand so this never happens; but as long as it hasn't happened to you, you always think you're lucky and that it never will, until one day you discover reality.

So now this server is running CentOS with the default partitioning: one /boot partition and the LVM. I'm reading everything I can find about software RAID and LVM, but I can't find out whether it's possible to create a software RAID now, with the system running, without having to reinstall everything. Is it possible? If not, what are my options for making a full system backup before reinstalling?
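
It can be done without a reinstall precisely because the data already sits in LVM: build a degraded RAID 1 on the new disk, move the volume group onto it with pvmove while the system runs, then add the old disk as the second mirror half. A sketch only; device and volume group names are assumptions, and /boot needs the same treatment separately:

Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2
pvcreate /dev/md0
vgextend VolGroup00 /dev/md0
pvmove /dev/sda2 /dev/md0          # migrates the LVM extents with the system online
vgreduce VolGroup00 /dev/sda2
mdadm --add /dev/md0 /dev/sda2     # the old disk becomes the second mirror half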

View 1 Replies View Related

General :: Recover Raid - LVM On Non System Drives After System Failure

Jan 19, 2011

I have (had) Debian Testing running on a 250GB IDE hard drive, partitioned normally.

I also have 4x 1TB drives in a raid 5 using mdadm, and 2x 500GB drives in a raid 1 also with mdadm.

I put the two arrays into a single LVM volume group (storage).

I then used "lvcreate" to make storage/backup 300GB, and the rest went to storage/media (approx. 2TB usable). I put an xfs filesystem on both and mounted them.

All was working fine until the system drive shorted out and died on me this morning. As far as I can tell, all my other drives and everything else is fine. I do a daily rsnapshot of the filesystem, which of course is residing on storage/backup (stupid, I know). So I have full backups of everything, but I'll have to put a new hard drive in and reinstall Debian before I can restore everything.

I've reinstalled before and simply reassembled the mdadm arrays and remounted them with no problems, but this is the first time I've used LVM, so I'm not sure what I have to do to restore everything. Is it as simple as reinstalling the system and then reassembling the arrays and reactivating the volume group?
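
For what it's worth, the usual sequence after the reinstall really is about that short; a sketch, using the names from the post:

Code:
mdadm --assemble --scan            # reassemble the RAID 5 and RAID 1 arrays
pvscan && vgscan
vgchange -ay storage               # activate the volume group
mount /dev/storage/backup /mnt/backup
mount /dev/storage/media /mnt/media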

View 4 Replies View Related

Server :: Unable To Mount Additional Harddrive From RAID Volume System

Apr 8, 2010

I believe the server section is the best place for RAID questions...

I have the following situation: we have a Dell T3400 with embedded fake RAID on it. I don't know exactly how the system was set up (I wasn't here at the time), but RAID was enabled in the BIOS and, while booting, the two hard drives would show up as members of Intel RAID Volume0 (RAID 1 mirror). I am not sure whether the software RAID was actually properly configured in Linux (Fedora 9) and whether the OS was maintaining the whole mirror, or whether it was just the BIOS part mirroring /boot or only parts of it. Frankly, I find these hybrid RAIDs very confusing. Some bad disk manipulation on my part caused the server to crash, but I was able to recover and boot with just one hard drive after using fsck.

I decided to get rid of the RAID, as it's not the right solution for the application we need it for, and to go with a traditional single-drive system, using Ghost for Linux to clone to a spare disk when backups are needed. So I installed the latest Fedora 12 distribution onto another hard drive and disabled RAID in the BIOS (changed from RAID ON to autodetect, which is the only other option).

Here is what I have now:
/dev/sda has the newly installed fedora 12
/dev/sdb is an empty harddrive that I would use as an intermediate
/dev/sdc is the old harddrive member of intel raid volume0

sdb was partitioned into sdb1, sdb2 and sdb3, and I created an ext3 filesystem on sdb2. The hard drive belonging to RAID Volume0 (sdc) has a lot of work on it, and I would like to recover its files onto the new disk (sda). I cannot mount that old hard drive from Fedora 12, as it sees an unknown RAID-member filesystem on it, presumably tagged by the Intel RAID metadata.

So I decided to do it from the other side: boot from RAID Volume0 and, from there, mount a third, intermediate hard drive (sdb), copy the documents onto it, then mount the same drive from the newly installed Fedora 12 and copy the documents across. I can mount /dev/sdb2 from Fedora 12 fine and copy stuff to and from it, but not when I boot from the RAID Volume0 drive (sdc) with Fedora 9 on it; it keeps saying that the partition in question (/dev/sdb2) is an invalid block device. I am stuck here, as my knowledge of this sort of thing is very limited. If somebody can show me how to recover the files from that old RAID drive onto the new Fedora 12 drive, I would appreciate it a lot.
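
One approach that often works is to activate the old Intel fake-RAID set directly from the Fedora 12 side with dmraid and mount it read-only, skipping the intermediate disk entirely; a sketch, where the /dev/mapper name is a placeholder that will differ on the actual system:

Code:
dmraid -ay                                   # assemble the isw RAID set (a single member is fine)
ls /dev/mapper/                              # look for something like isw_xxxx_Volume0p1
mount -o ro /dev/mapper/isw_xxxx_Volume0p1 /mnt/oldraid
rsync -a /mnt/oldraid/home/ /home/recovered/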

View 3 Replies View Related

Ubuntu Servers :: RAID 5 Recovery (Spare/Active) - Degraded And Can't Create RAID, Auto Stop RAID [md1]?

Feb 1, 2011

Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.

Now the array is (logically) no longer able to start:

mdadm: Not enough devices to start the array. Degraded and can't create RAID, auto stop RAID [md1]

I was able to examine the disks though:

Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
code....

Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin whatever small chance I have left of rescuing my data, I would like to hear the input of this wise community.
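
Before any --create, the far less destructive thing to try is a forced assemble with the original members; --create --assume-clean is a last resort and has to list the devices in their original slot order and with the original chunk size. A sketch, with the device order assumed:

Code:
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
# if that refuses, record the "RaidDevice" order and chunk size from
# mdadm --examine on every member before considering a --create --assume-clean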

View 4 Replies View Related

Server :: Convert Single Drive Ubuntu Server To RAID 0

May 28, 2010

I am running a single-drive Ubuntu Server 9.10 with a lot of software installed. Now I want to add one more disk (same size and type) and convert this to RAID 0 without needing to reinstall. Is it possible, and if so, how? I haven't found anything on this for RAID 0. It sounds simple, but probably isn't.
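
There is no in-place conversion to RAID 0, since both disks become stripe members; the data has to go somewhere else and come back, and a single drive failure then loses the whole array. A rough sketch from a live/rescue environment, with devices and mount points as assumptions:

Code:
rsync -aAXH /mnt/oldroot/ /mnt/external/backup/        # back up the existing install first
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0 && mount /dev/md0 /mnt/newroot
rsync -aAXH /mnt/external/backup/ /mnt/newroot/
# then chroot in, fix /etc/fstab, rebuild the initramfs and reinstall grub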

View 4 Replies View Related

Server :: How Long Does Hardware Raid Card (raid 1) Take To Mirror 1 TB Drive (500gb Used)

Mar 22, 2011

How long does a hardware RAID card (RAID 1, 2 drives) take to mirror a 1 TB drive (500 GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
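
A rough rule of thumb: hardware RAID 1 rebuilds copy the whole disk block by block, so used space doesn't matter, and the time is roughly capacity divided by the sustained rebuild rate (which many controllers throttle when the array is busy). A worked estimate, with the rates as assumptions:

Code:
# 1 TB is roughly 1,000,000 MB
# idle array at ~80 MB/s:           1000000 / 80 = 12500 s, about 3.5 hours
# throttled to ~30 MB/s under load: 1000000 / 30 = 33333 s, about 9 hours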

View 1 Replies View Related

CentOS 5 Hardware :: Connect A RAID Box To The Server Via LSI 8880EM2 RAID Controller

Aug 3, 2010

I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.

The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box, but CentOS does not (no mapping is created in /dev).

I have also tried using MegaCLI from CentOS to create a logical RAID 0 drive on a single physical disk (as seen by MegaCLI). That succeeded and the new logical drive could be used, but when the server is restarted, the controller's BIOS fails with an error (not surprising: one logical RAID 0 on one physical disk) and the configuration is erased.

Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS? What were the results?
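
A few checks from the CentOS side that may narrow it down, sketched here (the SCSI host number is an assumption; check /sys/class/scsi_host/ for the controller's actual entry):

Code:
lsmod | grep megaraid_sas                 # confirm the driver really loaded
dmesg | grep -i megaraid                  # look for the controller and any attach errors
# after creating or importing a logical drive, force a rescan of the controller's SCSI host
echo "- - -" > /sys/class/scsi_host/host0/scan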

View 3 Replies View Related

Ubuntu :: System Automatically RAID Itself?

Sep 6, 2010

When I installed this system (Xubuntu 9.04 x64) several months ago, I had two identical SATA hard drives, but I didn't do a RAID1 mirror then because I didn't want to wipe out my old OS (FreeBSD) on the second drive in case I needed to retrieve something from it. So I installed Xubuntu on the first drive (sda) and for all that time, the second drive (sdb) has been running but unused, and fdisk showed that the FreeBSD partition was still there.

A couple weeks ago, my separate backup server failed, so as a short-term backup I did a 'dd if=/dev/sda of=/dev/sdb' to make an exact copy of my Xubuntu install on drive A upon drive B (the former FreeBSD drive), then mounted /dev/sdb1 on /mnt to make sure it worked successfully. So far so good, but I still didn't set up any RAID stuff.

Several days later, I needed to reboot because of a security upgrade, and when I logged out, the GUI froze up. Thinking not too much of it, I restarted the system, and it came up fine. But the next day, I discovered that I was missing several days worth of email and recent files. In fact, everything had been reverted back to a state from several days earlier --- I think from the day I copied the first drive to the second one. Files I created in the interim were gone, and files I deleted in the interim were back. It was as if the first drive was 'restored' from the second one without my knowledge.

Doing some testing now, I find that if I create a new file on /dev/sda1 and then mount /dev/sdb1, the file also exists on sdb1. It's as if they're acting as a RAID1 mirror, without my telling it to do so. Could it just decide to do RAID1 because it sees there are two identically partitioned drives? That seems dangerous. And if they were really in a RAID mirror, why would it let me mount them separately? It's very strange.

I don't mind if it's suddenly decided to do RAID, but I want to make sure it's not going to 'restore' a more current filesystem from an older one again, if that's indeed what happened.
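
For what it's worth, a dd clone copies the filesystem UUID along with everything else, so anything that mounts or boots by UUID can silently pick up either disk; that looks much more likely than spontaneous RAID. Two quick checks, plus one way to remove the ambiguity (a sketch):

Code:
cat /proc/mdstat                  # empty output = no software RAID is actually running
sudo blkid /dev/sda1 /dev/sdb1    # after dd, both will report the identical UUID
sudo tune2fs -U random /dev/sdb1  # give the clone its own UUID so they can't be confused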

View 3 Replies View Related

Debian :: System Rescue With Hardware RAID

Mar 1, 2016

I had a RAID problem on an HP ProLiant server due to a failing disk. When I changed the disk, things got complicated and the RAID seemed to be broken. I put the old disk back, repaired the RAID, then swapped in the new disk again, and everything returned to normal except that the system doesn't boot. I am stuck at the GRUB stage (the grub rescue prompt). I grabbed a netinst CD and tried to rescue it; at some point the wizard correctly sees my two partitions, sda1 and sda2, and asks whether I want to chroot to sda1 or sda2. I got red-screen errors on both.

The error message said to check syslog, and syslog says it can't mount the ext4 filesystem because of a bad superblock. I switched to TTY2 (Alt-F2) and tried fsck.ext4 on sda1 (I think sda2 is the swap, because when I ran fsck on it it said something like "this partition is too small" and suggested it could be swap); it reports a bad superblock and a bad magic number. I tried e2fsck -b 8193 as suggested by the error message, but that didn't work either (I think -b 8193 is for trying a backup superblock).

The RAID is as follows: one RAID array of 4 physical disks grouped into a single logical drive, /dev/sda, so the operating system sees one device instead of four disks.
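
Rather than guessing block 8193 (which assumes a 1 KB block size), it is worth asking the filesystem where its backup superblocks actually are; a sketch, with the partition name as an assumption:

Code:
dumpe2fs /dev/sda1 | grep -i superblock   # list primary and backup superblock locations
mke2fs -n /dev/sda1                       # -n is a dry run: prints the layout, writes nothing
e2fsck -b 32768 /dev/sda1                 # retry fsck with one of the reported backups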

View 0 Replies View Related

Fedora :: LVM And RAID - Adding 2 More Disks To System?

Sep 25, 2010

I have implemented LVM to expand the /home partition. I would like to add 2 more disks to the system and use RAID 5 across those two disks plus the disk currently used for /home. Is this possible? If so, do I use partition type fd for the two new disks and type 8e for the existing LVM /home disk? Or do I use type fd for all of the RAID disks?
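
A sketch of the usual layout, with device and volume group names as assumptions: the RAID member partitions get type fd, and the md array itself (not its members) becomes the LVM physical volume, so type 8e only matters for partitions used directly as PVs. The existing /home disk also has to be emptied with pvmove before it can join the array.

Code:
# new disks carved into type-fd partitions, array created with one slot left open
mdadm --create /dev/md0 --level=5 --raid-devices=3 missing /dev/sdc1 /dev/sdd1
pvcreate /dev/md0
vgextend vg_home /dev/md0          # VG name is a placeholder
pvmove /dev/sdb1 /dev/md0          # drain the old /home disk, then repartition it as type fd
vgreduce vg_home /dev/sdb1
mdadm --add /dev/md0 /dev/sdb1     # the old disk fills the missing RAID 5 slot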

View 1 Replies View Related

Ubuntu :: RAID Data Drive For System?

Apr 7, 2010

I am trying to create a RAID data drive for my system, but I am having trouble setting it up since I am a total Linux noob.

The system has 3 physical HDD-s:
1. 320 GB (has functional Ubuntu 9.10 installation) attached to a PCI SATA card
2. 2TB on motherboard
3. 2TB on PCI SATA card

I want to create a software RAID1 of disks 2 and 3. So far I have used the Palimpsest Disk Utility:
- Created a GUID Partition table on both disks (2, 3)
- Used File -> New -> Software Array, made sure both my drives were included
- Once Palimpsest listed the RAID Drive as a Software RAID Array, I told it to create Ext3 filesystem on it

Well... at least that's what I thought I did. At this point I have been able to mount the RAID drive and put files on it. However, when I look at its information in Palimpsest, I am told that the drive is not partitioned. Both RAID components /dev/sda1 and /dev/sdc1 are reported to be in sync, but the RAID drive's own state is 'Running, Resyncing @ 45%' (and slowly growing).

My questions are: Is this a normal setup, or did I do something incorrectly? Why does the drive report having no partition? And how come I can use it if it does not have a partition? I have found the command-line configurations a tad too confusing to follow, so I have tried to stick to graphical tools; is this a hopeless cause in Ubuntu, or is it possible to achieve what I want without the command line? I will list some info on my disks below; perhaps it offers more insight to those of you more familiar with Linux.


Code:
mindgamer@mind-server:~$ sudo lshw -C disk
[sudo] password for mindgamer:
*-disk:0
description: ATA Disk
product: WDC WD3200BEVT-0
vendor: Western Digital

[Code]...
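
For what it's worth, the 'Resyncing' state is normal for a freshly created mirror: md copies one member over the other in the background while the array stays usable, and a filesystem written straight onto /dev/md0 without a partition table is a valid setup. The progress is easy to watch from a terminal:

Code:
cat /proc/mdstat              # shows the resync percentage and an estimated finish time
watch -n 60 cat /proc/mdstat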

View 7 Replies View Related

Ubuntu :: Hardware RAID System Very Slow

May 2, 2010

As the title says, my RAID system is very, very slow (this is RAID 5 across six 1 TB Samsung HD103UJ drives):
Code:
leo@server:~$ sudo hdparm -tT /dev/sdb
/dev/sdb:
Timing cached reads: 1094 MB in 2.00 seconds = 546.79 MB/sec
Timing buffered disk reads: 8 MB in 3.16 seconds = 2.53 MB/sec
That's impossibly slow, and I've tried every configuration (RAID 5, RAID 0, pass-through) and get almost exactly the same speed every time (with restarts in between, so I'm sure I'm really talking to the volumes I've defined).

One can compare it with the system drive:
Code:
leo@server:~$ sudo hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 1756 MB in 2.00 seconds = 878.12 MB/sec
Timing buffered disk reads: 226 MB in 3.01 seconds = 75.14 MB/sec

Some hardware/software infos :
Code:
Raid card
Controller Name ARC-1230
Firmware Version V1.48 2009-12-31

Code:
Motherboard
Manufacturer: ASUSTeK Computer INC.
Product Name: P5LD2-VM DH

Code:
leo@server:~$ uname -a
Linux server 2.6.31-20-server #58-Ubuntu SMP Fri Mar 12 05:40:05 UTC 2010 x86_64 GNU/Linux

Code:
IDE Channels .....
I'm a bit lost now. I could change the motherboard or some BIOS settings.
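
Before swapping hardware, a couple of cheap checks are worth running; 2.5 MB/s usually points at the controller's cache being disabled or the card negotiating a crippled PCIe link rather than at the motherboard. A sketch:

Code:
dd if=/dev/sdb of=/dev/null bs=1M count=2048 iflag=direct   # sequential read independent of hdparm
blockdev --getra /dev/sdb                                   # kernel read-ahead for the volume
lspci -vv | grep -iA20 areca | grep -i lnksta               # how the ARC-1230 is linked on the bus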

View 7 Replies View Related

Ubuntu Installation :: Can't Install 10.04 On Raid 1 System?

Jun 22, 2010

I tried to install the new Ubuntu on an Intel RAID 1 system, but it said:

Quote:

The ext4 file system creation in partition #1 of Serial ATA RAID isw_chibcceegh_Volume0 (mirror) failed.

My config is:
P5Q Pro
2x500 GB Seagate HDD
Intel Raid 1

I boot Ubuntu from a USB drive (I wonder whether this causes the problem?)
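
From the live USB session it is worth checking the state of the fake-RAID set before the installer touches it; a sketch:

Code:
sudo dmraid -s          # status of the Intel (isw) RAID set
sudo dmraid -ay         # activate it; the mirror should appear under /dev/mapper
ls /dev/mapper/
# a half-built or inconsistent set often has to be deleted and recreated in the option ROM first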

View 9 Replies View Related

Ubuntu :: Upgrade System While Using Software RAID?

Jun 25, 2010

Is it possible to upgrade an Ubuntu install which is using software RAID?

I ask because I'm currently using Fedora, and one of my huge frustrations with the distro is that if you're using software RAID, the installer explicitly notes you can't upgrade to future versions. Instead, it requires that you wipe your entire drive and do a full reinstall to acquire the next release. I presume Ubuntu probably uses many of the same libraries to support software RAID as Fedora, so does Ubuntu also have this limitation?

View 3 Replies View Related

Ubuntu :: Encrypt System Drive - RAID 1

Feb 22, 2011

1 - I want to encrypt my system drive
2 - I want to make my system drive a RAID 1
3 - I have two 1 TB hard drives, but I would like to be able to add two more 1 TB hard drives, for a total of four 1 TB drives, in the future. How do I do that?

I am using Ubuntu 10, Desktop Edition. I have not installed the OS yet; I am building the machine this weekend.
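
One stacking that covers all three points, sketched with placeholder names: RAID 1 at the bottom, LUKS on the array, LVM on top, so a future second encrypted mirror pair can simply be added to the volume group.

Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 cryptroot
pvcreate /dev/mapper/cryptroot
vgcreate vg0 /dev/mapper/cryptroot
# later: md1 from the two new drives -> luksFormat/luksOpen -> vgextend vg0 /dev/mapper/crypt2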

View 2 Replies View Related

Hardware :: System Device Name For A RAID Of SAS Drives?

Jun 29, 2010

For SATA drives in AHCI mode the device names are /dev/sdX; what about a RAID 0 or RAID 1 array of SAS drives? Are they named the same way?

View 8 Replies View Related

Software :: Backup On RAID Configured System?

Jul 26, 2010

I have used GParted for backups of my system, but now I have RAID configured and GParted doesn't recognize my drives: "Disk /dev/md0 doesn't contain a valid partition table".

I have two 500 GB HDDs configured in RAID, with primary and logical partitions set up by LVM (on Debian Lenny 5.0.3).

Code:
fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000cb4b9

[Code]...

What program can I use to make a backup of my installation onto /dev/dm-1 (this device is mounted as /home)?
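
GParted works at the partition-table level, which a bare md device doesn't have, so backing up at the LVM or filesystem level is easier. A sketch using an LVM snapshot, with the volume group and LV names as placeholders:

Code:
lvcreate -s -n root_snap -L 5G /dev/vg0/root        # snapshot of the root LV
mount -o ro /dev/vg0/root_snap /mnt/snap
tar czf /home/backup-root.tar.gz -C /mnt/snap .     # /home sits on /dev/dm-1 per the post
umount /mnt/snap
lvremove /dev/vg0/root_snap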

View 1 Replies View Related

Fedora Installation :: F12 Won't Boot On LVM/RAID System After Upgrade

Dec 9, 2009

I upgraded from F10 to F12 using preupgrade. The upgrade itself completed with no errors, but I'm unable to boot afterward.

Symptoms: GRUB starts, the initramfs loads, and the system begins to boot. After a few seconds I get buffer I/O errors on blocks 0-3 of certain dm devices (usually dm0 and dm2, but I can't get a shell to figure out what those are). An error appears from device-mapper saying it couldn't read an LVM snapshot's metadata. I get the message to press "I" for interactive startup. udev loads and the system tries to mount all filesystems. Errors appear stating that it couldn't mount various LVM partitions. Startup fails due to the mount failure, the system reboots, and the steps repeat.

Troubleshooting done: I have tried to run preupgrade again (the entry is still in my grub.conf file). The upgrade environment boots, but it fails to find the LVM devices and asks me to name my machine just as for a fresh install. I also tried booting from the full install DVD, but I get the same effect. Suspecting that the XFS drivers weren't being included, I ran dracut to create a new initramfs, making sure the XFS module was included. I have loaded the preupgrade environment and stopped at the initial GUI splash screen to get to a shell prompt. From there I can successfully assemble the RAID arrays, activate the volume group, and mount all volumes; all my data is still intact (yay!). I've run lvdisplay to check the LVM volumes, and most (all?) appear to have different UUIDs than what was in /etc/fstab before the upgrade; I'm not sure whether preupgrade or a new LVM package somehow changed the UUIDs. I have modified my root partition's /etc/fstab to call the LVM volumes by name instead of UUID, but the problem persists (I also made sure to update the initramfs as well). From the device-mapper and I/O errors above, I suspect that either RAID or LVM isn't starting up properly, especially since prior OS upgrades had problems recognizing RAID/LVM combinations (it happened so regularly that I wrote a script so I could do a mkinitrd with the proper options running under SystemRescueCD with each upgrade).

I have tried booting with combinations of the rootfstype, rdinfo, rdshell, and rdinitdebug parameters, but the error happens so early in the startup process that the messages quickly scroll by and I just end up rebooting.

System details: four 1 TB drives set up as two RAID 1 pairs. A FAT32 /boot partition RAIDed on the first drive pair. Two LVM partitions -- one RAIDed on the second drive pair and one on the remainder of the first drive pair. Root and other filesystems are in LVM; most (including /) are formatted as XFS.

I've made some progress in diagnosing the issue. The failure is happening because the third RAID array (md2) isn't being assembled at startup. That array contains the second physical volume in the LVM volume group, so if it doesn't start then several mount points can't be found.

The RAID array is listed in my /etc/mdadm.conf file and identified by its UUID but the Fedora 12 installer won't detect it by default. Booting the DVD in rescue mode does allow the filesystems to be detected and mounted, but the RAID device is set to be /dev/md127 instead of /dev/md2.

The arrays are on an MSI P35 motherboard (Intel ICH9R SATA chipset), but I'm using Linux software RAID. The motherboard is configured for AHCI only. This all worked correctly in Fedora 10.
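
The md127 numbering is the classic sign of the array not matching what the initramfs copy of mdadm.conf expects. A sketch of checking and regenerating both from the rescue environment, chrooted into the installed system:

Code:
mdadm --examine --scan        # compare against the ARRAY lines in /etc/mdadm.conf and fix any mismatch
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)   # rebuild the initramfs with the corrected file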

View 3 Replies View Related

Fedora :: Heavy Software Raid I/o Causes System Lockup

May 9, 2011

Here is the scenario: I load hundreds of h264/ac3 video clips from my HD video camera onto a four disk software raid stripe under Fedora 14 running the 2.6.38.3-15.rc1.fc15.x86_64 kernel. I then run a python script I've written to do batch process transcoding on the clips so that I can then edit them in cinelerra. FWIW, the transcode command is: nice ffmpeg -i infile.MTS -y -deinterlace -s hd720 -r 29.97 -acodec pcm_s16be -ar 48000 -ac 2 -vcodec mjpeg -qscale 2 out.mov. The python script batches these commands until they are all completed.

The system is capable of 16 concurrent threads and I set up 8 concurrent run queues to do the processing. Occasionally (about one in every 7 runs), the system will hard-lock during the batch. The X screens are still visible, but the mouse and keyboard are dead and the on-screen clocks freeze. The network I/O LED still flashes but the hard disk I/O LED is on solid at that point, leading me to suspect a kernel bug in the software RAID handling. I am not able to log in remotely, and the system must then be hard-reset with the reset button. After the reset there is no useful information in the system logs. This is a pretty uber SMP machine: two Westmere quad-core CPUs, 24GB of RAM and a four-disk WD software stripe array running under the Intel ICH10 controller. I also have a RocketRaid controller installed, but no disks are currently attached to it and I disabled its BIOS in the setup. Additional disks under the ICH10 are an SSD system disk and a SATA insertion caddy that I usually have occupied with a disk I do backups to. So all 6 channels on the ICH are usually occupied. General output from hdparm -i is:

[code]....

View 6 Replies View Related

Ubuntu Installation :: Possible To Have A Dual Boot System And Have It In RAID 0 ?

Mar 21, 2010

Is it humanly possible to have a dual-boot system and have it in RAID 0?

I have three hard drives. The three drives that I have are:
250GB IDE
1TB SATA
1TB SATA

And I edit and produce a lot of music/videos.

What I am looking to do is have my 250GB IDE drive hold my operating systems (partitioned in two or so: one partition for Windows, another for Ubuntu).

And I would like either operating system (Ubuntu / Windows 7) to see my computer as having a single 2TB data drive (the two 1TB HDDs set up in RAID 0).

So in layman's terms: 250GB for the OSes and 2TB (2 x 1TB HDD) as the data drive.

View 1 Replies View Related

Ubuntu Servers :: Moving Raid System To A New Host?

Aug 3, 2011

A while back I successfully set up a software raid (mdadm) install of ubuntu server on a cheap Compaq machine. I'm pretty sure it's running an nForce 430 chipset. Anyway, I put 3 2TB drives in it and set up two arrays: a small raid1 array for /boot and a raid5 array occupying the rest of the drives.

One point of note here is that the raid1 array never seemed to "take hold": the partition is empty, and the system has been booting entirely from the raid5 array. I'm not sure if this is relevant or not: just throwing it in there in case it is.

Anyway, I recently upgraded my desktop PC and planned to hand down its Q6600 and DG33TL motherboard to the server. I did the hardware upgrade, but now the system will not boot. My question is: why?

Boot fails with the following error message:

"No bootable device -- insert boot disk and press any key"

This to me suggests that it's not even getting to the point of running grub2: the mobo itself just isn't finding anything it considers to be bootable.

I've tried all combinations I can think of for configuring the SATA support in the BIOS (modes include IDE, AHCI and RAID, all in combination with UEFI boot enabled and disabled). The system does detect the three drives: it lists them in the BIOS config screen.

I've booted from an ubuntu rescue USB drive and I'm able to see the drives and even assemble the array. I can mount the LVM partitions therein and do what I'd normally be able to do with them.

Everything *seems* fine: it just can't boot.

Grub2 is something of a mystery to me, so I'm not exactly sure what I need to do to get it booting again, if indeed it is a problem with grub.
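
If the rescue session can assemble the array and mount everything, then reinstalling grub2 onto each drive's MBR from a chroot is usually all that's missing; a sketch, with the mount point and device names as assumptions:

Code:
# root filesystem from the array mounted at /mnt, /boot included
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt
grub-install /dev/sda && grub-install /dev/sdb && grub-install /dev/sdc
update-grub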

View 9 Replies View Related

Debian :: Raid - No Booting / Reboot The System Does Not Boot?

Nov 5, 2010

There seems to be a problem with RAID on Debian. I got a new Fujitsu Primergy TS 100 S1 server with hardware RAID (and 2 disks), installed everything nicely over the net including GRUB, but when it comes to rebooting, the system does not boot.

Is there anybody here who knows about the problem?

View 1 Replies View Related






