Ubuntu :: Use Raid Volume From Windows And Ubuntu
Apr 25, 2010. I currently have a RAID setup of two 1TB HDDs that I can use from my Windows install, but in Ubuntu I cannot find it. Is it possible to reach a RAID volume from both OSes?
View 7 Replies View Related
I have a system with a 2TB RAID 1 installed (2 x 2TB drives, configured as RAID 1 through the BIOS). I installed CentOS 5.5 and it runs fine. I have now added another 2 x 2TB drives and configured them as RAID 1 through the BIOS.
How do I add this new RAID volume to the existing logical volume?
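A minimal sketch of extending an existing LVM setup with the new volume, assuming the new mirror shows up as a block device (with BIOS RAID this is often /dev/md126 or a /dev/mapper node; the device, volume group and logical volume names below are just the CentOS defaults and are assumptions):
Code:
pvcreate /dev/md126                              # initialise the new RAID volume as an LVM physical volume
vgextend VolGroup00 /dev/md126                   # add it to the existing volume group
lvextend -L +1.8T /dev/VolGroup00/LogVol00       # grow the logical volume into the new space (size is an example)
resize2fs /dev/VolGroup00/LogVol00               # grow the ext3 filesystem (online resize works on CentOS 5)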
I have v10.10 running the following (and I still have no idea how I got this all working): Myth, a Squeeze server, an X10 control system for the house, VirtualBox (where I run some Windows stuff) and a general file and print service. It has 5 disk drives attached:
1 - system disk
5 - external USB drive to back up data to
2, 3, 4 - 2TB drives in a RAID config
I just restarted the server and the RAID volume has not mounted. I looked in the Webmin interface and the device is there, but it doesn't seem to have the partitions attached; the partitions do still appear on each of the drives. I used CrashPlan to back up the data and music, but not the video, so I would prefer not to rebuild the lot if I don't have to, especially given my lack of experience with Linux. Is there an easy(ish) way of putting the RAID back together to see if it has just dropped its config and the data is intact (or can be rebuilt from two of the three drives)?
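A hedged first step, assuming the array was built with mdadm (the post does not say) and that the members are the three 2TB drives; the device names below are guesses and should be checked with fdisk -l first:
Code:
cat /proc/mdstat                                     # see what the kernel currently knows about
sudo mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1   # read the RAID superblocks on the suspected members
sudo mdadm --assemble --scan                         # try to reassemble arrays from those superblocks
sudo mount /dev/md0 /mnt/data                        # md device and mount point are hypothetical
If --examine shows valid superblocks on at least two of the three drives, the array can usually be started degraded without touching the data.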
I am unable to hibernate my computer while using Ubuntu, and I figured out the reason: Ubuntu is not using my swap partition. I would follow the existing tutorials on setting up a swap partition after installing Ubuntu, but since the volume uses hardware RAID 0, the swap partition is not assigned a /dev entry (like /dev/sdXX) and I am not sure how I can mount it.
Here is what I have:
Code:
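The code output above is missing, but on a BIOS ("fake") RAID 0 handled by dmraid the partitions normally appear under /dev/mapper rather than as /dev/sdXN. A sketch with hypothetical device names, assuming the swap partition really is unused:
Code:
sudo dmraid -ay                                # activate the BIOS RAID set
ls /dev/mapper                                 # look for something like nvidia_xxxxxxxx5
sudo mkswap /dev/mapper/nvidia_xxxxxxxx5       # only if this really is the swap partition!
sudo swapon /dev/mapper/nvidia_xxxxxxxx5
For it to survive reboots, add the partition to /etc/fstab by UUID; for hibernation the resume device (initramfs/uswsusp configuration) must point at it as well.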
I have a RAID 5 array formatted with NTFS. My Ubuntu OS is not able to recognize the RAID 5 array, but my Windows 7 OS can. I had this working before, when I installed Ubuntu via Wubi, but now that I have installed it as a dual-boot OS I am having issues trying to mount this RAID 5 volume. So far I have tried reinstalling dmraid, the NTFS configuration tool and Storage Device Manager, but nothing seems to help me recognize my RAID 5 array.
P5N32-E SLI Plus motherboard: RAID 5 array, NTFS, using the NVIDIA RAID chipset built into the motherboard.
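A minimal sketch of activating and mounting an NVIDIA fakeraid set by hand; the /dev/mapper name is an assumption and will differ on each system:
Code:
sudo apt-get install dmraid ntfs-3g
sudo dmraid -ay                       # activate the BIOS RAID set(s)
ls /dev/mapper                        # look for something like nvidia_xxxxxxxx1
sudo mkdir -p /media/raid5
sudo mount -t ntfs-3g /dev/mapper/nvidia_xxxxxxxx1 /media/raid5
If dmraid -ay reports no sets at all, the controller's RAID 5 metadata may simply not be supported, in which case the array will not be usable from Linux in this form.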
I have a software RAID array (in this test case a mirrored set of two 500GB volumes) and I want to move it to another OS installation on the same hardware. (This is testing in preparation for a physical move of two arrays onto a single server.) I had the array up and working (surviving reboots) and wrote a test backup onto it in a folder.
I shut the machine down, re-installed Ubuntu, got it up and running, then installed mdadm, rebooted with the array powered up and ran mdadm --detail --scan, expecting mdadm to at least find the parts of the array. Instead, I get nothing. I even added -vv to get more verbose output; still nothing.
[Code]...
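One detail worth noting: mdadm --detail --scan only reports arrays that are already assembled and running, so on a fresh install it prints nothing even when the member disks are fine. A sketch of the usual sequence after a reinstall (paths assume Ubuntu's /etc/mdadm/mdadm.conf):
Code:
sudo mdadm --examine --scan                                        # reads superblocks straight from the member disks
sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf    # record the array definition
sudo mdadm --assemble --scan                                       # assemble everything described by the superblocks
cat /proc/mdstat                                                   # confirm the mirror came up
If --examine --scan also prints nothing, the superblocks really were not found, and the member device names should be double-checked.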
I've got a file server with two RAID volumes. The one in question is six 1TB SATA drives in a RAID 6 configuration. I recently thought one drive was becoming faulty, so I removed it from the set. After running some stress tests, I determined that my underlying problem hadn't cleared up. I added the disk back, which started the resync. Later on, the underlying problem caused another lock-up. After a hard reboot, the array will not start. I'm out of ideas on how to kick this over.
Code:
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
[code]....
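The mdstat output is truncated above, but the usual non-destructive approach for an array that refuses to start after an interrupted resync is to compare event counters and then force an assemble. A sketch, assuming the array is /dev/md0 and the members are /dev/sd[b-g]1 (both are guesses):
Code:
sudo mdadm --examine /dev/sd[b-g]1 | grep -E 'Events|State'   # members with close event counts can usually be reunited
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force /dev/md0 /dev/sd[b-g]1
--force tells mdadm to accept small event-count differences; it does not rewrite data, so it is far safer than recreating the array.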
I have expanded an HP array by 500GB but am struggling to get CentOS (a XenServer installation) to see it.
From the array controller, the size of the disk is 1.4TB:
Logical Drive: 2
Size: 1.4 TB
Fault Tolerance: RAID 5
Heads: 255
[Code].....
How do I get LVM to see the whole disk that is now available (without destroying data)?
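A hedged outline, assuming the logical drive appears as /dev/cciss/c0d1 (typical HP Smart Array naming; adjust to whatever device Logical Drive 2 maps to on this system): first make sure the kernel reports the new 1.4TB size (a reboot is the simplest way), then grow the LVM physical volume:
Code:
fdisk -l /dev/cciss/c0d1          # confirm the kernel now sees 1.4TB
pvresize /dev/cciss/c0d1          # if the PV covers the whole device, this is all that is needed
vgdisplay                         # the extra space shows up as Free PE, ready for lvextend
If the PV lives inside a partition rather than on the whole device, the partition must be enlarged first (with parted or similar) before pvresize will see the new space; that step is more delicate and worth doing against a backup.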
After months of using Lenny & Lucid Lynx without issues, I come back to the good existential questions.
I'd like a completely encrypted disk (/ and swap) in addition to the XP partitions (not that safe, but I'll switch completely to Linux once I have solved everything).
1. I create an ext4 partition for /boot
2. One other (/dev/sda7) that I set for encryption,
3. On top of that, I create a PV for lvm2,
4. I add it to a VG,
5. I create / & swap in the VG.
However, if I add a hard drive, I will have to encrypt its main partition, add it to the VG and then expand /. So I'll need two passphrases at boot time to decrypt.
So I'd like to:
- Encrypt the VG directly; it would solve everything, but no device file appears for the VG, only for the PV and the LV.
- After hours of searching, I couldn't find a solution that needs only a single password...
Maybe there is hope with a filesystem like btrfs providing encryption in the future, but I'll still have to create a swap partition outside of it (or use a swap file, but then no hibernation is possible).
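One common way to get a single passphrase with several LUKS physical volumes is to keep the passphrase on the first volume only and unlock the others with a key file stored inside the encrypted root, referenced from /etc/crypttab. A sketch with hypothetical device and file names:
Code:
# one-time setup for the second encrypted PV
dd if=/dev/urandom of=/etc/keys/pv2.key bs=512 count=4
chmod 0400 /etc/keys/pv2.key
cryptsetup luksAddKey /dev/sdb1 /etc/keys/pv2.key

# /etc/crypttab
cryptroot  /dev/sda7  none               luks    # asks for the passphrase at boot
cryptpv2   /dev/sdb1  /etc/keys/pv2.key  luks    # unlocked automatically once / is open
This only works if the extra PV is not needed before the root filesystem is mounted, so it suits a VG that is being extended for data rather than one whose root LV already spans the new disk.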
I've been using Fedora and Ubuntu on a standard PC/laptop (one HDD) for some time, but just as a user. Recently I built a PC with an Asus P5K WS motherboard with an on-board RAID adapter. I've got five 750GB SATA drives; however, a hardware restriction only allows me to create a logical volume of up to 2TB. So I've created a RAID 5 on 3 drives as SysVol and a RAID 0 on 2 drives for data.
I ran the live CD and started to install the OS. However, I get to the point where the system scans for storage devices: it detects the smaller volume, but not the SysVol I want to install the system on, and it offers to install the system on the smaller volume. I've deleted the smaller volume and left one RAID 5 volume and 2 drives; again the same problem. The system detects those 2 drives, but not the volume.
Is there any restriction in Fedora 12 allowing installation only on certain logical volumes? Or is it more a hardware problem?
I'm experimenting on a new 5.7TB raid we got for one of our servers before it goes into production. I'm carving the space up into Volume Groups and Logical Volumes. Below is some sample output:
[root@server newhome]# vgdisplay
--- Volume group ---
VG Name extraid_sdd1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.82 TB
PE Size 4.00 MB
Total PE 476804
Alloc PE / Size 476804 / 1.82 TB
Free PE / Size 0 / 0
VG UUID LJPJVE-fekS-crS8-uugk-l13z-0NG0-FWv3M3
--- Volume group ---
VG Name extraid_sdb1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 3.64 TB
PE Size 4.00 MB
Total PE 953608
Alloc PE / Size 953608 / 3.64 TB
Free PE / Size 0 / 0
VG UUID kzlLN4-PyrX-LYUS-h1Tc-1S9F-jVV0-XU5tcK
Because I created this, I know that the second 3.64TB volume group, extraid_sdb1, is composed of two physical volumes, /dev/sdb1 and /dev/sdc1, each 1.82TB in size. My question is: if I hadn't made this and had to work backward, how could I discover that info? I can see from the "Cur PV" line that the second VG is composed of 2 PVs, but if I didn't know that they are my /dev/sdb1 and /dev/sdc1, how could I break that out, as well as their sizes? If it matters, this system is running FC6.
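The PV-to-VG mapping is available from the LVM tools themselves; for example (read-only commands, nothing is changed):
Code:
vgdisplay -v extraid_sdb1           # verbose output ends with a physical-volume section listing each PV
pvs -o pv_name,vg_name,pv_size      # one line per PV with its owning VG and size
pvdisplay /dev/sdb1 /dev/sdc1       # full detail for individual PVs
Any of these would have shown that extraid_sdb1 sits on /dev/sdb1 and /dev/sdc1 at 1.82TB each.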
I'm asking for advice about the setup of a large volume: I have 2 disks of 1TB each and I want to merge them into a single volume/partition. I am in doubt about setting up LVM, a RAID 0 device, or both. I know that RAID 0 has no redundancy, but I will keep a backup on other media, so I can take advantage of striping for I/O performance. On the other hand, LVM lets me easily manage and expand the volume in the near future. Am I correct? Anyway, I don't know whether I can even set up both, and in which order: first LVM, then RAID, I suppose.
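Both can indeed be combined, but the usual order is the opposite of that guess: the RAID device is created first and LVM is layered on top of it. A minimal sketch with mdadm, assuming the two disks are /dev/sdb and /dev/sdc (hypothetical names) and each carries a single partition:
Code:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1   # striped array
pvcreate /dev/md0                        # the md device becomes the LVM physical volume
vgcreate vg_data /dev/md0
lvcreate -l 100%FREE -n lv_data vg_data
mkfs.ext4 /dev/vg_data/lv_data
Note that LVM can also stripe on its own (lvcreate -i 2), so the extra RAID 0 layer is only worthwhile if mdadm is wanted for other reasons.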
View 8 Replies View Related
Hi, I am experiencing problems creating a physical volume on a RAID 1 system.
Here is what I did.
#/usr/sbin/pvcreate /dev/md0
the message reads:
"Device /dev/md0 not found (or ignored by filtering)."
The /dev/md0 has a partition type 8e for linux LVM
Can anyone enlighten me?
Thanks
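That error almost always points at LVM's device filter (or at the md device not being active) rather than at pvcreate itself. A short checklist, assuming /dev/md0 is the mdadm mirror:
Code:
cat /proc/mdstat                            # is md0 actually assembled and active?
grep -E '^[^#]*filter' /etc/lvm/lvm.conf    # a restrictive filter line can hide /dev/md0 from LVM
pvcreate -vvv /dev/md0                      # the verbose output states explicitly why the device was rejected
If a filter is the cause, either relax it or add an accept rule for the md devices and re-run pvcreate.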
Right now I have an openSUSE 11.1 server running on a single hard drive. I want to install the HighPoint RocketRAID 1740 card and utilize RAID 10. I wanted to know if the following process would work OK:
1. Image the current hard drive using clonezilla and remove the drive.
2. Install the RAID card with 4 hard drives of the same make and model as the current drive
3. Create the logical volume
4. Restore the image to that volume
Since I am restoring the image to a RAID volume, is that completely transparent to the OS? Or do I need to do a clean install on that volume and reconfigure everything?
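Restoring the image is not entirely transparent: the restored system still has to find its root device on the new controller. In practice that usually means checking /etc/fstab and the bootloader configuration against the new device names (or switching them to UUIDs/labels) and rebuilding the initrd so the RAID card's driver is included. A hedged sketch from a rescue/live CD, with the restored root mounted at /mnt:
Code:
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt mkinitrd          # openSUSE's mkinitrd picks up the driver for the controller the root now lives on
If the card needs an out-of-tree driver (as some RocketRAID models do), it must be installed in the restored system before running mkinitrd; otherwise a clean install is the simpler route.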
I believe server section is the best when speaking of RAID stuff...
I have the following situation: we have a Dell T3400 with embedded fake RAID on it. I don't know exactly how the system was set up (I wasn't here at that time), but the RAID was enabled in the BIOS and, while booting, the two hard drives would be seen as members of Intel RAID Volume0 (a RAID 1 mirror). I am not sure whether the software RAID was actually properly configured in Linux (Fedora 9) and whether the OS was reconstructing the whole RAID, or it was just the BIOS part that was mirroring /boot or some parts of it. Frankly I find these hybrid RAIDs very confusing. Some bad disk manipulation on my part caused the server to crash, but I was able to recover and boot with just one hard drive after using fsck.
I decided to get rid of the RAID, as it's not the right solution for the application we need it for, and to go for a traditional single-hard-drive system, using Ghost for Linux to clone to a spare disk when backups are needed. So I installed the latest Fedora 12 distribution onto another hard drive and disabled RAID in the BIOS (changed from RAID ON to autodetect, which is the only other option).
Here is what I have now:
/dev/sda has the newly installed fedora 12
/dev/sdb is an empty harddrive that I would use as an intermediate
/dev/sdc is the old harddrive member of intel raid volume0
sdb was partitioned into sdb1, sdb2 and sdb3, and I created an ext3 filesystem on sdb2. The hard drive belonging to RAID Volume0 (sdc) has a lot of work done on it, and I would like to be able to recover the files to the new disk (sda). I cannot mount that old hard drive from Fedora 12, as it sees some unknown RAID member filesystem on it, probably assigned by the Intel RAID chip.
So I decided to do it from the other side: boot from RAID Volume0, and from there mount a third intermediate hard drive (sdb) onto which I would copy the documents, then mount the same hard drive from the newly installed Fedora 12 and copy those documents from it. I can mount /dev/sdb2 from Fedora 12 fine and copy stuff to and from it, but not when I boot from the RAID Volume0 hard drive (sdc) with Fedora 9 on it; it keeps saying that the partition in question (/dev/sdb2) is an invalid block device. I am stuck here, as my knowledge of this sort of thing is very limited. If somebody can show me how to recover files from that old RAID hard drive onto the new Fedora 12 drive, I would appreciate it a lot.
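Rather than booting the old Fedora 9 RAID member, it may be simpler to read it from Fedora 12 by letting dmraid expose the old Intel set read-only. A sketch; the set and partition names under /dev/mapper are assumptions and will differ:
Code:
dmraid -r                                        # confirm sdc still carries Intel (isw) metadata
dmraid -ay                                       # activate the set; nodes appear under /dev/mapper as isw_*
ls /dev/mapper
mkdir -p /mnt/oldraid
mount -o ro /dev/mapper/isw_xxxxxxxx_Volume0p2 /mnt/oldraid   # mount read-only, then copy files over to sda
With only one member present, a RAID 1 set should still activate (degraded), since each disk holds a full copy of the data.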
If I have Windows installed on RAID 0, and I then install VirtualBox and install all my Linux OSes into VirtualBox, will they be a RAID 0 install without needing to install RAID drivers?
View 1 Replies View Related
Adding a kernel parameter to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nodmraid" (not 100% sure it should go there; GRUB 2 is new to me, but that is another story), I noticed the following error when running update-grub:
ERROR: ddf1: wrong # of devices in RAID set "ddf1_Series1" [1/2] on /dev/sdb No volume groups found
Now I have not a clue where it is getting ddf1_Series1 from. sdf1 is part of a RAID 1 group that is mdadm RAID 1 > LUKS > LVM:
md6 : active raid1 sdi1[1] sdf1[0]
1465135936 blocks [2/2] [UU]
bitmap: 0/175 pages [0KB], 4096KB chunk
Errors bug me. As I am new to GRUB 2, I was wondering if anyone has an insight into the error / where to investigate; I'm still reading the grub2 / GRUB manuals.
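That particular message comes from dmraid (which update-grub/os-prober invokes), not from GRUB itself: dmraid has found leftover DDF BIOS-RAID metadata on /dev/sdb describing a two-disk set of which only one disk is present. A quick, read-only way to see what it thinks is there:
Code:
sudo dmraid -r          # list the raw BIOS-RAID metadata dmraid found, per disk
sudo dmraid -s          # show the RAID sets it would build from that metadata
If the disk really is only an mdadm member, that DDF metadata is stale (perhaps from a previous life of the drive). dmraid can erase it, but that is destructive to the metadata and worth double-checking first; booting with nodmraid only hides the symptom.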
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array. Degraded and can't create RAID, auto stop RAID [md1]
I was able to examine the disks though:
Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
code....
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
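Two cautious notes before any --create --assume-clean attempt (which rewrites the superblocks): a forced assemble is much less risky and is often enough after drives were wrongly failed, and if --create is ever used, the devices must be listed in their original slot order, which can still be read from the old superblocks. A sketch using the device names from the post:
Code:
mdadm --stop /dev/md1
mdadm --examine /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 | grep -E 'Events|this|RaidDevice'   # record roles and event counts
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
Only if the forced assemble fails should recreation be considered, and then with the recorded device order and the same chunk size and metadata version as the original array.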
This is my third Ubuntu installation, and even with identical hardware, the sound volume is dramatically lower than under Windows. I turn every slider up all the way, and it is still less than half as loud as in Windows.
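The desktop volume control often drives only the Master channel, so a low PCM, Front or Speaker channel will cap the output no matter how far that slider goes. A quick check (channel names vary by sound card, so these are examples):
Code:
alsamixer                        # raise PCM/Front/Speaker as well as Master; the M key toggles mute
amixer set PCM 100%              # the same from the command line
amixer set Master 100% unmute
If that helps, running sudo alsactl store keeps the levels across reboots.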
View 2 Replies View Related
Does anyone have any good articles on mounting an Ubuntu volume as an iSCSI share on a Windows box? Originally I was just going to use a Samba share, but it turns out Samba has issues with my LAN security. So I thought that, since all I really want to do is create the share on my backup server, an iSCSI device would do. I have been using the following article with limited success... [URL]
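For reference, a minimal target definition with the iscsitarget (IET) package looks roughly like the sketch below; the IQN and the backing device are made-up examples. On the Windows side, the built-in Microsoft iSCSI Initiator then connects to the Ubuntu box and formats the LUN as NTFS:
Code:
sudo apt-get install iscsitarget
# set ISCSITARGET_ENABLE=true in /etc/default/iscsitarget, then edit /etc/ietd.conf:
Target iqn.2010-01.local.backupserver:storage.lun0
        Lun 0 Path=/dev/vg0/backup_lv,Type=blockio
sudo /etc/init.d/iscsitarget restart
Note that an iSCSI LUN is a raw block device for one initiator at a time, not a shared filesystem, which is exactly why it sidesteps the Samba issues.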
View 2 Replies View Related
I have a system with RAID 0 and Windows 7. I want to load Ubuntu on a stand-alone disk. I have two disks for my RAID 0, two storage disks and an extra disk for Ubuntu. I have tried to install Ubuntu, but upon start-up I get an error on the disk, and I then had to use GRUB to repair my MBR. My question is: how do I make this work so that I can have a dual-boot system? I have been running Ubuntu on my laptop.
View 2 Replies View Related
I have recently bought a new computer and would like to take my existing RAID 1 from the old computer to the new computer, as the disks hold many digital photos. The old RAID 1 was in a Dell XPS 400 (based on the Intel 82801GR chip, I guess), and the new computer has an Intel ICH10R. I understand that both chips require a Windows driver if RAID 1 is needed.
The old disks still reside in the "dead" computer. How can I proceed? I hope the solution is that fakeRAID can be enabled, but I would like to ask the clever and experienced community before I do something that cannot be undone (and maybe lose all my digital photos!). Or maybe you have a better solution?
If you need more information to help me then please tell me to post information; I do not really know much about setting up RAID-1.
I've just installed Ubuntu 10.10 (64-bit) on my 1TB drive, whereas I have my Windows drives on a separate 2 x 1TB RAID array, and I am not exactly sure how to mount it.
I am not finding it under Computer or under the NTFS configuration tool, and when I click mount under Storage Device Manager it does nothing, although it does detect the RAID array as /dev/sda2.
Firstly, I am using CrunchBang Linux (built on Ubuntu 9.04).
I have Win7 installed on the first HD in the system (500GB), which is /dev/sda in Linux.
I have two 1TB HDs which are in a RAID 1 configuration, built using Windows Logical Disk Manager; in Linux they are /dev/sdb & /dev/sdc.
I have my fourth drive (again 500GB) with Crunchbang installed at /dev/sdd
Code:
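A mirror built with Windows Logical Disk Manager is a dynamic-disk (LDM) volume, not a BIOS/dmraid or mdadm set, so those tools will not assemble it. On newer systems the ldmtool utility (from libldm, and probably not packaged for a 9.04-era release, so this is only a sketch of the approach) can map it:
Code:
sudo ldmtool scan                   # list disks carrying LDM metadata
sudo ldmtool create all             # create /dev/mapper/ldm_vol_* nodes for the dynamic volumes
sudo mount -t ntfs-3g -o ro /dev/mapper/ldm_vol_VOLUMENAME /mnt/winmirror   # volume name is hypothetical
Failing that, because each half of a RAID 1 mirror holds a complete copy of the data, a recovery tool that understands LDM partitions can read a single member.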
I'm uncertain if this is the correct forum to ask this question; please redirect me to the correct forum if necessary. I am attempting to install MS Windows 7 as a guest in VirtualBox inside Ubuntu Karmic Koala 9.10 on a ZaReason Strata 3660 with 4GB of RAM and adequate memory. The machine is correctly partitioned for this. I am using the Ubuntu walk-through as a guide: [URL]. All was going fine until VirtualBox asked for the .iso image from the MS CD 'UDF Volume'. There appears to be no .iso file in the UDF Volume.
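For what it is worth, the 'UDF Volume' is the Windows DVD itself; the installation media does not contain a ready-made .iso file. Either point the virtual machine's CD/DVD drive at the host's physical drive in the VM's storage settings, or make an image of the disc first, for example:
Code:
dd if=/dev/cdrom of=~/win7.iso bs=2048    # device node and output path may differ on your system
and then attach win7.iso in VirtualBox's storage settings.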
View 1 Replies View Related
Currently I have Windows 7 x64 installed on a pair of RAIDed SSDs using an X58 motherboard (ICH10R), and I want to dual-boot Ubuntu x64 (either 9.10 or 10.04) on another hard drive (not part of the RAIDed SSDs). Does anyone know if this can be done? I haven't found anything out there about this. I have tried to install Ubuntu from the CD: it gets to the install screen booting from the CD, but it doesn't let me install or try Ubuntu. I hit Enter and nothing happens. I can look through all of the options but can't install or boot into Ubuntu.
View 1 Replies View Related
I'm making the move to Ubuntu entirely, but I want to get my 5TB RAID functioning.
I can't seem to get it recognized.
fdisk -l:
Code:
It works under Windows, by the way (it's GPT, I think).
Otherwise I can re-partition and format it (it's all backed up), but I want to be able to read the RAID under Windows as well...
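One thing worth checking: the classic fdisk does not understand GPT, so on a disk over 2TB it only shows a single protective partition (or nothing useful). GPT-aware tools give the real picture:
Code:
sudo parted -l            # lists all disks and understands GPT partition tables
sudo gdisk -l /dev/sdX    # device name hypothetical; gdisk may need to be installed first
If the RAID itself is a BIOS/fakeraid set, it additionally needs dmraid to be activated before any partitions show up under /dev/mapper.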
I did some Linux installations in the past but never got along with it. Now I've decided to give it another shot, because I need it for the information security class I'm taking.
I have a Gigabyte GA-P35-DS3 with two WD 320GB HDs in a RAID 0 array (motherboard RAID). I have Windows 7 x64 already installed on the RAID and would like to install Ubuntu 9.10 x64 as a dual boot with it.
The problem is that the installer does not recognize the Win7 installation on the disk; it does recognize the RAID as one 640GB disk, but with nothing on it, so I can't proceed with the installation without losing the Win7 installation and everything on it.
Another weird thing is that after I quit the Ubuntu installation and restart the computer, the RAID 0 fails to load; after I shut down the computer and start it again, the RAID is up again and Win7 loads OK.
I have to install it as a dual boot with Win7 without reinstalling everything from scratch.
I have two 1.5TB hard drives (neither one is my OS drive) that don't show up under "Places" but are detected under "Disk Utility". I tried to reformat them, but Ubuntu tells me that they are in use (even though they are not mounted). GParted also detects them and shows them as having an NTFS partition. I have deleted and repartitioned them to NTFS as well; this works until I restart my computer. The funny thing is that Windows 7 sees them just fine. More detail as to how this happened below, after the system specs.
System:
AMD Phenom II x 4 955
ASUS M4A79XT EVO motherboard
8GB DDR3 1333
1 500GB WD SATA drive (dual boot Windows 7 & Ubuntu 10.10)
2 1TB WD SATA drives (extra storage)
2 1.5TB Seagate SATA drives (extra storage, these are the problem children)
Here's how I got here:
Installed a dual boot with Windows 7 and Ubuntu 10.10. Everything was fine and ALL drives showed up and were mountable in Ubuntu. I decided to set up a RAID 1 with my two 1.5TB drives. I restarted, changed my SATA mode to RAID instead of IDE, and created my RAID 1. I then realized that through this motherboard's RAID setup, I couldn't have it copy files from one disk to the other to set up the mirror. So, after I rebooted and the RAID started building, I cut the power and unplugged my drives in a desperate attempt to keep my data. I then went back into the BIOS and set my SATA back to IDE instead of RAID. I was able to back up my data, but this is when the problem started.
Again, Windows 7 sees and uses the drives just fine. I copied the data I wanted from my 1.5TB drives to my 1TB drives and restarted into Ubuntu, but no 1.5TB drives appeared under Places. I started Disk Utility and confirmed that Ubuntu does actually see the drives. However, it still lists them as being part of a RAID array (I did delete my RAID array properly through the BIOS after backing up my data). I'm not sure why it thinks that, and I believe that's my problem. Also, Disk Utility lists a THIRD 1.5TB drive under "USB and Peripherals". Could that be my motherboard telling Ubuntu that a RAID is still set up, even though I deleted the RAID pair?
What I have tried:
- Reformatted the drives via Windows 7 as NTFS. This completed but didn't solve my problem.
- Repartitioned the drives with GParted as NTFS. This works until I restart my computer.
- Attempted to reformat under Ubuntu, but it gives me an error saying the drives are busy.
- Reinstalled Ubuntu (but didn't reformat). Didn't work.
What I'm thinking:
- Flash my BIOS so my motherboard starts out fresh and hopefully no longer tells Ubuntu I have a RAID.
- Reinstall Ubuntu again (this time reformatting my OS drive).
Anyone have ideas as to what's going on? FYI I'm new to Ubuntu.
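The symptoms (drives reported busy, still shown as RAID members, a phantom array device) usually mean the BIOS RAID metadata written near the end of each disk is still there, and dmraid claims the disks at every boot. A hedged sketch of checking and, only if you are certain you no longer want that array, removing the metadata (the device names are examples):
Code:
sudo dmraid -r                   # shows which disks still carry fakeraid metadata and of what type
sudo dmraid -an                  # deactivate any sets so the disks stop being busy for this session
sudo dmraid -r -E /dev/sdd       # permanently erase the metadata on that disk; repeat for the other member
sudo dmraid -r -E /dev/sde
Booting with the nodmraid kernel option is a non-destructive way to confirm the diagnosis first; reformatting from Windows does not help because it leaves the metadata block untouched.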
The system's volume is way too low compared to Windows Vista set up on the same computer. The master volume (at the top right corner) and the player's volume are set to full, and the speakers' volume is almost full, yet the sound is just OK, not loud. Why can such a thing happen on Fedora 10?
View 10 Replies View Related