Ubuntu Servers :: Mounting Large (12TB) SCSI-Attached RAID Array (Formatted With NTFS)?
Feb 16, 2010
I have a large 12TB RAID array attached to one of my Ubuntu server machines. The RAID volume is formatted with NTFS. The problem is that I cannot mount this volume in Ubuntu; I can read it normally if I attach it to a Windows machine. This is the output from "sudo fdisk -l":
sudo fdisk -l
Disk /dev/sda: 164.7 GB, 164696555520 bytes
255 heads, 63 sectors/track, 20023 cylinders
[code]........
View 2 Replies
Jun 6, 2011
I have an ubuntu 10.04 machine that I use primarily as a file server. I have a RAID5 array built with mdadm from 3 component disks that worked properly until a recent upgrade (I'm not sure exactly what broke it though). The array is /dev/md0 and is set to mount at /var/media on bootup. *Now*, when the system cold boots it hangs partway through the bootup sequence and throws the following error:
The disk drive for /var/media is not ready yet Press S to skip ... Once I "S"kip this manually, I can see that LOWER in the boot sequence mdadm gets called and assembles the drive, and once fully booted into the system I can then simply do a "mount -a" and the array mounts properly. SO... my gut feeling is that some portion of one of the upgrades changed the order in which things are called, and now the "mdadm assemble" is not triggered until AFTER the system tries to mount the drives. My problem is that I don't know the stuff that controls the boot sequence well enough to dig in the right place.
As a workaround I can remove that entry from /etc/fstab (a less drastic stopgap is sketched after the questions below), but then (of course) the system won't auto-mount the array. It's better than the boot process completely hanging, because at least THIS I can fix remotely, but I'd really like to know:
1) why this broke in an upgrade and is it a known problem?
2) how to get it back to where it auto-assembles and then auto-mounts the array on bootup.
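As that less drastic stopgap: mountall on 10.04 understands a nobootwait option in /etc/fstab, which lets boot continue even when the device isn't ready yet (a sketch; the device name and ext4 type are assumptions, match them to your existing entry):
Code:
# hypothetical /etc/fstab line - boot no longer blocks on this mount
/dev/md0 /var/media ext4 defaults,nobootwait 0 0
This doesn't fix the ordering problem, but it keeps a remote box bootable while you dig into it.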
View 9 Replies
View Related
Aug 1, 2011
I'm running 10.04 x86 server with a really simple installation on a single 250GB boot disk. I then have a RAID5 array as /dev/md0 (set up using mdadm with x4 2TB disks). All is working well. My mdadm.conf file looks like this
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
[code]....
If I were to lose the boot disk and needed to remount the RAID array on a fresh installation, what steps would I need to go through? My assumption is that the superblocks on the RAID disks will be used and I don't need to keep any additional information - is this right?
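That assumption is essentially right - the superblocks carry everything mdadm needs to reassemble the array. On a fresh install the recovery usually amounts to something like this (a sketch; /dev/sd[bcde] are placeholder names for the four 2TB members):
Code:
sudo mdadm --examine /dev/sd[bcde]    # inspect the superblocks on the members
sudo mdadm --assemble --scan          # assemble every array the superblocks describe
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # persist for future boots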
View 6 Replies
View Related
Sep 27, 2010
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8TB, as per the attached screenshot. Now with the following drives, I expected something more like:
160 + 250 + 250 + 750 + 250 + 200 + 200 + 250 + 320 + 250 + 320 = 3.2TB
Am I missing something or making a false assumption somewhere?
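For what it's worth, the reported sizes line up exactly with striped (RAID-0) behaviour, where capacity is the smallest member multiplied by the member count rather than the sum: 2 x 250GB = 500GB, adding the 160GB drive gives 3 x 160GB = 480GB, and all eleven disks give 11 x 160GB = 1760GB, roughly the 1.8TB in the screenshot. The 3.2TB sum above would only be expected from a linear/JBOD span.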
View 4 Replies
View Related
May 11, 2011
Round two: I am trying to install a RAID 1 array on my system. I already have another RAID 1 array in there. I am using the BIOS RAID option to set up the array. Here's what dmraid -r tells me:
# dmraid -r
/dev/sda: pdc, "pdc_bajfedfacg", mirror, ok, 3906249984 sectors, data@ 0
/dev/sdb: pdc, "pdc_bajfedfacg", mirror, ok, 3906249984 sectors, data@ 0
[code]....
View 9 Replies
View Related
Jun 23, 2009
I need to mount my RAID array on a CentOS 5.2 Samba server.
Here are my hardware specs:
Motherboard: Tyan S2510 LE dual PIII
CPU's: Intel PIII 850MHz Socket 370
Memory: 4GB Crucial 133 ECC SDRAM
OS: 2 x IBM Travelstar 6.4GB 2.5" hard drives (low heat/noise)
Storage: 4 x Seagate 500GB IDE 7200rpm
RAID controller: 3Ware 7500-12 controller, RAID 5 (66MHz PCI bus)
NIC: 3COM 3C996B-T gigabit NIC (66MHz PCI bus)
I have the 2 IBMs set as RAID 1 (mirror) and the 4 Seagates as RAID 5 (1.5TB). I have installed the OS with minor problems (the motherboard doesn't like the 2.6.18-128.1.14.el5 kernel, so I removed it from my grub.conf).
My problem is mounting the RAID array. I have done the following:
Partitioned with fdisk:
fdisk /dev/sdb
Then formatted with the following command:
mkfs.ext3 -m 0 /dev/sdb
The hard drive was formatted with the ext3 file system, but I have mounted it as an ext2 file system as I don't want 'journaling' to occur. I then edited my /etc/fstab like this: .....
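(For illustration only - an fstab entry for a setup like this might resemble the line below; the /home mount point is an assumption based on the directory listing that follows.)
Code:
# hypothetical /etc/fstab entry: ext3 volume mounted as ext2 to skip journaling
/dev/sdb /home ext2 defaults 1 2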
Then: mount -a
When I go into my "home" directory and type ls, I get the following:
[root@hydra home]# ls -l
total 24
drwx------ 2 zog zog 4096 Jun 23 15:50 zog
lrwxrwxrwx 1 root root 6 Jun 23 15:46 home -> /home/
drwxrwxrwx 2 root root 16384 Jun 23 15:34 lost+found
drwxr-xr-x 2 root root 4096 Jun 23 17:18 tmp
Why is my home directory showing under home?
View 5 Replies
View Related
Dec 27, 2010
let's say this system has 3 hard drives. Drive #1 and #2 are RAID 0 and Windows7 lives there. It is a hardware RAID, not software.
On Drive #3 Ubuntu has been installed using WUBI - it boots up and works okay - but it does not see the RAID array.
Do I just need a linux driver to be able to see & mount my "Windows" RAID0 array? Or is this even possible? Can anyone point me in the right direction?
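If the array is motherboard (BIOS/fake) RAID rather than a true hardware controller - which is common on desktop boards - the usual suggestion is the dmraid package, which exposes such sets as /dev/mapper devices that ntfs-3g can then mount (a sketch, not verified against this particular board):
Code:
sudo apt-get install dmraid
sudo dmraid -ay      # activate all detected RAID sets
ls /dev/mapper/      # the RAID0 volume should appear here if the format is supported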
View 1 Replies
View Related
Sep 1, 2011
I've been using Ubuntu on my fileserver for quite a while now, and I've always really had this problem, but I want to finally address it and get it fixed. At seemingly random points (when my fileserver is under stress - typically while I'm writing lots of data to it), my fileserver will crash. It generally completely crashes, not responding to any further file requests or any of my SSH commands, and must be reset hard (typically by flipping the power switch). After such an occasion, I end up with some corrupted files. It seems to corrupt a large array of files (it's not an isolated issue - for example, it corrupts files that were not being accessed anywhere near the time it crashed, including files that had never been accessed during that period of uptime). The files don't get completely smashed, but they're definitely corrupted (artifacts in images, skips in audio and video files, often complete failure of binary files such as virtual hard drives or disc images).
I'm using Ubuntu Server 11.04, but similar issues to this happened for me in 10.04 LTS (in fact, I upgraded to try to solve them). I'm using mdadm to create an 8-drive raid6 array. The drives are 1.5 TB each, mostly Samsung HD154UI, but with a WD drive in there too (sorry, I can't find the model number at the moment). The hard drives themselves appear to be working fine - SMART reports no issues with any of them, mdadm says they're all up, and I have no reason to believe that the drives are at fault here (although I can conduct further tests if necessary). I've posted about this problem before here and here. In these cases, the issues seemed to be with XFS - in fact, I switched from XFS to ext4 on my RAID array because I simply believed XFS to be unstable. Unfortunately, this issue occurs with ext4 as well, so I'm fairly certain it's an mdadm issue. Here is the output of "cat /proc/mdstat", for those interested:
[Code]....
View 9 Replies
View Related
Mar 26, 2011
I have an Areca hardware RAID array that I'm trying to format & partition on a fresh Ubuntu 10.04 LTS installation. The OS drive is not on the RAID card, it's entirely separate. The RAID is a 6TB volume so I realize I have to use parted to format it, not fdisk (which I've always relied on).
My problem is that I can't figure out how to get parted to like my settings. It seems like everything I try gives me the warning "Warning: The resulting partition is not properly aligned for best performance." Here's what I'm doing:
Code:
(parted) p
Model: Areca ARC-1280-VOL#00 (scsi)[code].....
What start/end settings should I use to get a properly aligned partition? How do I know? I have tried a mix and match of 0, 0s, 1, 1s, -0, -0s, -1, -1s, 100% for my start/end with no success.
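One approach that generally silences the alignment warning is to work in MiB units so the partition starts on a 1MiB boundary (a sketch, assuming the array is /dev/sdb):
Code:
sudo parted -a optimal /dev/sdb
(parted) mklabel gpt
(parted) unit MiB
(parted) mkpart primary ext4 1 100%
(parted) align-check optimal 1
Starting at 1MiB (sector 2048) is a multiple of essentially any stripe or physical sector size, which is why it usually satisfies parted's check.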
View 8 Replies
View Related
Jul 22, 2011
I was fooling around with ATI graphics settings, and after I rebooted my desktop it kept coming up with odd symbols instead of a login screen. Then I started to try to fix things with a LiveCD and screwed things up even more, so now any time I attempt to boot, even into recovery, it stops at the line
[2.573600] sd 6:0:0:3: [sde] Attached SCSI removable disk
I have tried multiple fixes to no avail, and I am not very savvy with Linux as I have been using it for under a year, so it would be awesome if anyone could help. This is 10.10 Maverick, the 32-bit version.
View 2 Replies
View Related
Jun 4, 2010
I just restarted my server (Ubuntu 9.04 server, running on ESXi 4.0), and while copying files onto the server using Samba I got strange problems and the connection was lost. When I rebooted the total system, ESXi as well as Ubuntu Server, I found problems on my RAID disk.
In the directory where the new files were added there are a lot of files, but many of them do not have any info except their name:
1304 -rw-rw-rw- 1 spoorhobby spoorhobby 1327274 2010-05-15 22:10 DSCF1895.JPG
? -????????? ? ? ? ? ? DSCF1896.JPG
? -????????? ? ? ? ? ? DSCF1897.JPG
? -????????? ? ? ? ? ? DSCF1898.JPG
[Code].....
Both mirror disks are still functioning, and I can still add/delete files from the server, from other Linux systems, and from other Windows systems via Samba.
I did make a full backup on a different server.
View 9 Replies
View Related
Jun 11, 2010
So my server has 7 HDs in RAID 5. All was working well until one of them died. The HD that died sort of works: it can read about half a file, and it also freezes on the benchmark test in Disk Utility. Unfortunately, when I take it out, on boot it says: "The drive for /media_kbt is not ready or present; press S to skip or M for manual recovery." I hit S and then go to Disk Utility, but I can't start or add disks to the array.
Here is me trying to do random stuff
Code:
administrator@3dslice-host:~$ sudo mdadm --stop /dev/md0
[sudo] password for administrator:
mdadm: metadata format 00.90 unknown, ignored.
mdadm: stopped /dev/md0
administrator@3dslice-host:~$ sudo mdadm --add /dev/md0 /dev/sda1
mdadm: metadata format 00.90 unknown, ignored.
[Code]...
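Incidentally, the "metadata format 00.90 unknown, ignored" warning above usually comes from an old-style metadata=00.90 value in /etc/mdadm/mdadm.conf; newer mdadm versions want the leading zero dropped (a sketch of the edit, assuming that is the cause here - the UUID and device details are placeholders):
Code:
# in /etc/mdadm/mdadm.conf, change
ARRAY /dev/md0 level=raid5 num-devices=7 metadata=00.90 UUID=...
# to
ARRAY /dev/md0 level=raid5 num-devices=7 metadata=0.90 UUID=...
It's only a warning, but clearing it removes one variable while debugging the array itself.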
View 2 Replies
View Related
Sep 7, 2010
I'm currently experiencing some serious issues with WRITE performance on a RAID-1 array. I'm running Ubuntu 10.04 64-bit server with the latest updates. To evaluate the performance I ran the following test: [URL]... (great article btw!) Using dd to measure, write performance is only 8.7 MB/s. Read is great though, at 74.5 MB/s. The tests were run straight after rebooting and I have not (YET!) done any kernel tuning or customization; I'm running the default server package of the Ubuntu kernel. Here's the motherboard in the server: [URL]... with a beta BIOS to support drives over 300GB.
[code]...
As you can see from the bo column, there is definitely something stalling. As per top output, the %wa (waiting for I/O) is always around 75%; so as per the above, writes are stalling. The CPU is basically idle all the time. The hard drives are quite new and smartctl (smartmontools) does not detect any faults.
View 4 Replies
View Related
Jun 6, 2011
I have a 10.04 server with a LinkStation RAID 5 attached via USB. What is the best way to monitor the drives for a failure? It's at a remote site.
View 2 Replies
View Related
Jun 15, 2011
I am trying to use 3 3TB Western Digital drives in a RAID 5 software array. The trouble seems to be that the array is created with only 1.5TB of capacity, rather than the expected 6TB.
Here are the commands and output:
$ sudo dmraid -f isw -C BackupFull6 --type 5 --disk /dev/sde,/dev/sdf,/dev/sdg --size=5589G
Create a RAID set with ISW metadata format
RAID name: BackupFull6
RAID type: RAID5
RAID size: 5589G (11720982528 blocks)
RAID strip: 64k (128 blocks)
DISKS: /dev/sde, /dev/sdf, /dev/sdg
About to create a RAID set with the above settings. Continue ? [y/n] :y
$ sudo dmraid -s
*** Group superset isw_cdjhcaegij
--> Subset
name: isw_cdjhcaegij_BackupFull6
size : 3131048448
stride : 128
type : raid5_la
status : ok
subsets: 0
devs : 3
spares : 0
So I cannot understand why the size of the created array is only 3131048448 sectors, or about 1.5TB. The first command seemed to imply it was going to create an array of 5589GB.
System is:
Description: Ubuntu 10.04.2 LTS
Release: 10.04
Codename: lucid
View 8 Replies
View Related
Jun 26, 2011
Ubuntu Server 11.04 i386. I've used Linux on and off for years but only in small doses, so I'm really just at newbie level. I was running an Openfiler NAS but decided to give Ubuntu+Webmin a try, and up 'til now I've been happy with progress. I have set up a RAID-6 array using 5 x 1TB SATA drives. I've ensured that the array is in a "clean" state, and now I want to do some failure testing. The problem occurs when I remove one of the drives in the array: I shut down, remove a drive, then boot up. The array won't start at all, and comes up with this error during boot:
Quote:
the disk drive for /mnt/raidvol1 is not ready yet or not present
Continue to wait; or Press S to skip mounting or M for manual recovery
If I wait, nothing happens. Obviously the RAID array should start in degraded mode, but it fails to mount at all. When I press "M" to go into manual recovery and type "mount -a" I get the response:
Quote:
mount: special device /dev/RAIDVG1/RAIDLV1 does not exist
I have set BOOT_DEGRADED=true in /etc/initramfs-tools/conf.d/mdadm without success. If I reconnect the disconnected drive, the array works fine, and is in a clean state.
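One detail worth noting: /dev/RAIDVG1/RAIDLV1 is an LVM logical volume sitting on top of the md array, so even once the degraded array has been forced up, the volume group still has to be activated before mount -a can find the device. A manual-recovery sketch (the md0 name is an assumption):
Code:
sudo mdadm --run /dev/md0    # force the degraded array to start
sudo vgchange -ay RAIDVG1    # activate the volume group on top of it
sudo mount -a
Also note that changes under /etc/initramfs-tools/conf.d only take effect at boot after the initramfs has been rebuilt.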
View 9 Replies
View Related
Feb 20, 2011
I've got a couple of new hard disks that I have partitioned (3 partitions per disk) and set up in a mirrored software raid array using mdadm. They've synced, I've put file systems on them (1 x ext4, 2 x luks + ext4) and I can mount them. I've checked the partitions using fdisk. I've checked the filesystems using fsck. So far so good. Next step is that I'd like mdadm to automatically assemble them on boot. (Not bothered about mounting and crypttabing yet.)
I've used sudo /usr/share/mdadm/mkconf to generate a new mdadm.conf with the appropriate UUIDs for the new partitions. I've checked that this matches the output of sudo mdadm --detail --scan
The new lines in this file are:
ARRAY /dev/md9 level=raid1 num-devices=2 UUID=470fb8a6:45561fe0:ebda4a02:9ba7a1ed
ARRAY /dev/md10 level=raid1 num-devices=2 UUID=f351fbba:c704a4b2:ebda4a02:9ba7a1ed
ARRAY /dev/md8 level=raid1 num-devices=2 UUID=c6ccec17:2274588e:ebda4a02:9ba7a1ed
To check that the mdadm.conf is fine I have stopped the new arrays:
[Code].....
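One step that is easy to miss on Ubuntu: boot-time assembly is done from the copy of mdadm.conf embedded in the initramfs, so edits to the file do not take effect at boot until it is regenerated (a sketch):
Code:
sudo update-initramfs -u    # rebuild the initramfs with the new mdadm.conf
cat /proc/mdstat            # after a reboot, verify the arrays assembled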
View 7 Replies
View Related
Mar 7, 2011
Short story: I have a problem with one of my services (mediatomb) - it requires an md RAID array to be mounted in order to start, because it uses files from it. $remote_fs is added by default to the "Required-Start" line of the init script, so I thought that this should be enough. However, the mediatomb service fails to start on boot, but starts just fine when I execute "service mediatomb start" later. The array is entered in /etc/fstab and is automatically mounted on boot.
Long story...
This is my file server (Ubuntu Server 10.10), which has a raid array created with mdadm (mounted on /z), and the root filesystem is located on an USB thumb drive. I've installed mediatomb, but I wanted to put its database files on the raid array instead of the root fs, so I've symlinked /var/lib/mediatomb (the default path) to /z/mediatomb on the array. This is because the mediatomb DB is supposed to be updated fairly often, so I didn't want it to stay on the flash drive.
Problem is, the mediatomb service can't start on boot - in /var/log/mediatomb.log, it says "2011-03-07 19:22:47 ERROR: /var/lib/mediatomb : 20 x No such file or directory". As I said, it works fine when manually started later...
This is the fstab entry for the raid array code...
View 1 Replies
View Related
Mar 12, 2010
I've recently started having an issue with an mdadm RAID 6 array that has been operational for about 2500 hours.
Intermittently during write operations the array stalls, dropping to almost 0 write speed for 10-30 seconds. When this occurs, one or both of the 2 drives attached to a 2-port Silicon Image si3132 SATA-II controller "locks up" with its activity light locked on. This just started occurring within the last week and didn't seem to coincide with any update that I noticed. The array has just recently passed 12.5% full. The size of the write does not seem to make any difference and it seems completely random. Sometimes copying a 5GB dataset results in no slowdown; other times a torrent downloading to the array at 50kb/sec does cause a slowdown, and vice versa.
The array consists of 8 WD 1.5TB drives, 6 attached to the ICH9R south bridge, and 2 attached to a si3132 based PCI express card. The array is formatted as a single ext4 partition.
Checking SMART data for all drives shows no errors. Testing read speed with hdparm reports what I would expect (100MB/sec for each drive, ~425MB/sec for the array).
The only thing I did notice is that udma6 is enabled for all the ICH9R drives while only udma5 is enabled for the si3132 drives. Write cache is enabled for all the disks. Attempting to set the si3132 drives to udma6 results in an I/O error from hdparm.
The si3132 drives use the sata_sil24 driver. Nothing of interest appears in the kernel log or syslog. During this time top shows very high wait time.
The si3132 controller appears to have the original firmware from 2006 loaded. There are some firmware updates available on the Silicon Image website for this controller that now appear to offer a separate firmware for RAID operation (some sort of hybrid controller/software thing the controller supports) and a separate firmware for standard IDE use.
Has anyone had similar issues with this controller? Is a firmware update a reasonable course of action? If so, which firmware is best supported by the Linux driver?
I know I'm not using its RAID features, but I've dealt with controllers that needed to be in RAID mode for AHCI to be active and for Linux to work well with them. I'm a bit iffy at the idea of just trying it and finding out, as it could knock 2 disks of my array out of action.
View 2 Replies
View Related
Feb 26, 2011
Using a fresh copy of server 10.04, I'm trying to simulate a failed RAID array on a pair of 2TB disks. Here is the procedure I have been following so far:
- Remove the dead disk partitions from each of the raid 1 arrays (substitute the correct md devices and partitions)
- mdadm /dev/md0 -r /dev/sdb2
- mdadm /dev/md1 -r /dev/sdb3
[code]....
I get an error here that sfdisk does not support GPT (GUID partition table). I thought sfdisk did support GPT? It says to use parted, but I can't find a command in the parted documentation that copies a partition table over from another disk. Any suggestions? I suppose I could make the partitions manually, but I'm writing a procedure for people who aren't that technical and I need it to be simple enough to be run in my absence; manually building the partitions would be too hard for them.
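For GPT disks, the sgdisk tool (from the gdisk package) can do the table copy that sfdisk can't, and it scripts cleanly (a sketch; assumes /dev/sda is the surviving disk and /dev/sdb the replacement - note the target disk comes first after -R):
Code:
sudo apt-get install gdisk
sudo sgdisk -R /dev/sdb /dev/sda    # replicate sda's partition table onto sdb
sudo sgdisk -G /dev/sdb             # randomize GUIDs so the two disks don't clash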
View 2 Replies
View Related
Oct 14, 2009
We just set up an HP DL380 with CentOS 5 and ran all of the latest updates. I am trying to attach a Compaq array (no model number) that is SCSI attached. I can see the array from the BIOS and created a RAID group on it from there. However, from LVM and lvscan, I don't see it at all. I checked dmesg and there are no errors. Also, interestingly, /proc/scsi/scsi is empty.
View 1 Replies
View Related
Mar 25, 2011
I have been trying to use fstab, and writing a script in /etc/init.d, to mount my external NTFS USB drive. I have had absolutely no luck, and I have tried just about every solution I could find on the web except for writing a udev rule, which I have never done, so I am not exactly sure how.
My solution for the interim is to put the mount command in the rc.local file. That works, but I don't understand why I can't use fstab to mount it. Putting it in the fstab gives me errors like "unknown file system" or just "An error occurred during mounting of drive", and then the booting stops. I tried using both ntfs and ntfs-3g.
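One common culprit with external drives in fstab is that the /dev/sdX name moves around between boots, so the entry points at the wrong device. Referencing the filesystem by UUID, and telling mountall not to block boot on it, is the usual suggestion (a sketch; the UUID is a placeholder - get the real one from sudo blkid):
Code:
# hypothetical /etc/fstab entry for the external NTFS drive
UUID=0123456789ABCDEF /mnt/external ntfs-3g defaults,nobootwait 0 0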
View 5 Replies
View Related
Mar 5, 2010
I have a dual-boot setup with WinXP and openSUSE 11.2. I have both XP and SUSE partitions on a 160GB HDD, and then a hardware RAID 1 array of 2 320GB HDDs. The RAID array contains all my media/data files on an NTFS partition. For some reason SUSE shows both individual 320GB disks mounted in the file system, but not the RAID array. If I attempt to browse either of the disks, I get an error and can't view them. How do I mount the RAID NTFS partition?
View 2 Replies
View Related
May 31, 2011
I've been having some problems with my RAID 5 array, and after extensive investigation, I'm fairly sure that my last resort is rebuilding the array. I'd tried --assemble, because it's a previously created array, but it didn't seem to like that. So I checked into --create, and it will re-create the array without destroying the data, if the superblocks are persistent, which they seem to be. However, here's what I get:
[Code]....
My question is: why do /dev/sdb1 and /dev/sdi1 show as both ext2fs and also as part of a RAID array?
View 3 Replies
View Related
May 29, 2010
Trying to boot the Ubuntu 10.04 live CD and getting some problems. I get to the section where I have to pick a language and then choose the option to run from the CD. It starts to load and then hangs up at "[sdb] attached scsi removable disk". I unplugged all of my external HDs. I burned the 32-bit version because it was recommended on the site.
Asus
Pentium Dual core CPU E5400 @ 2.70 GHz
5 Gigs of RAM
Current clock speed 2700MHz
Running Windows 7
View 5 Replies
View Related
May 11, 2010
I'm installing my first new Dell SCSI controller card (PERC 6) and storage array (MD1000) into my first RHEL server. I've done this plenty of times in a Windows world, but never in Linux. I know the BIOS will pick up the SCSI card and I can go into that during the boot screen to set up how I want the array (RAID 5), but once Linux comes up, will it automatically find and mount the drives, or do I have to do something?
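For what it's worth, RHEL ships the LSI megaraid_sas driver that the PERC 6 uses, so the virtual disk should be detected automatically - but nothing gets mounted by itself. The new virtual disk appears as a bare block device that still needs a partition, a filesystem, and an fstab entry (a sketch, assuming it shows up as /dev/sdb and a /data mount point):
Code:
dmesg | grep -i sdb      # confirm the virtual disk was detected
fdisk /dev/sdb           # create a partition (use parted for volumes over 2TB)
mkfs.ext3 /dev/sdb1
mkdir -p /data
echo "/dev/sdb1 /data ext3 defaults 1 2" >> /etc/fstab
mount -a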
View 9 Replies
View Related
Oct 12, 2010
I have a StorEdge 3511 array attached to a CentOS 4 system and need to upgrade to Red Hat 5. 1) How can I find out how the array was attached to the system? and/or 2) What do I need to do during the install for the array to be recognized? (For the first question, see the sketch after the fdisk output below.)
fdisk -l output:
WARNING: GPT (GUID Partition Table) detected on '/dev/sde'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sde: 4998.3 GB, 4998352076800 bytes
255 heads, 63 sectors/track, 607681 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 267350 2147483647+ ee EFI GPT
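As for finding out how the kernel sees the array, these are the usual places to look (a sketch; lsscsi may need installing first):
Code:
dmesg | grep -i scsi        # driver/HBA messages from boot
cat /proc/scsi/scsi         # attached SCSI devices
lsscsi                      # yum install lsscsi if missing
ls /sys/class/scsi_host/    # one entry per host adapter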
View 3 Replies
View Related
Jan 6, 2010
I am using CentOS 5.2 with the latest 3ware driver installed. The 3ware BIOS shows all the external disks (it's a 9690SA-8E); they are set up as JBOD, but when I log in to CentOS I do not see my attached drives, only my local ones. When using fdisk -l as root I see my directly attached disks but not my external array:
[root@sf ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md1 7.6G 749M 6.5G 11% /
/dev/md2 190M 24M 157M 14% /boot
/dev/mapper/vg01-scratch
[Code]...
View 3 Replies
View Related
Jun 30, 2010
So I currently have OSX and Windows 7 installed on my hard drive. I would like to add 10.04 to the mix; however, it will not let me resize my Windows partition because it does not recognize it as NTFS. It will not let me mount it via CLI or GUI, and GParted will only offer to remove the partition, not resize it.
View 1 Replies
View Related
Jul 18, 2010
Just installed 11.3 on my computer; however, when I connect an external NTFS hard disk I receive an error message. When I open Dolphin to connect to an internal NTFS partition I receive the message:
org.freedesktop.Hal.Device.PermissionDeniedByPolicy: org. freedesktop.hal.storage.mount-fixed auth_admin_keep_always <--
Does anyone have an idea how I can fix this?
View 9 Replies
View Related