General :: How To Know What RAID My Servers Are Running
Oct 24, 2010
I want to create a script which can help me list out all my servers and what RAID level each one is running, but I am quite new at this, so I do not have much knowledge about it. I am using Veritas Volume Manager on the servers.
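A possible starting point, offered only as a sketch: in Veritas Volume Manager, vxprint reports the configuration of every volume, and the volume/plex lines carry the layout (concat, stripe, mirror, raid5). Looping over the servers with ssh, assuming key-based access and the VxVM tools in the PATH (host names hypothetical):
Code:
#!/bin/sh
# print the VxVM volume layout for each server
for host in server1 server2 server3; do
    echo "== $host =="
    ssh "$host" "vxprint -ht" | egrep '^(v|pl) '   # volume and plex lines show the layout
done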
I was recently asked to try to convert a running system to RAID level 1. I did not succeed in this task. However, I am still interested in accomplishing it in a test environment. What I did was create a RAID device with a device missing, e.g.
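(The original command was not included in the post; a minimal sketch of creating a degraded RAID1 this way, device names assumed:)
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
The keyword missing tells mdadm to build the mirror with one slot empty, to be filled in later with mdadm --add.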
Once the RAID was up and running, I created the /etc/mdadm.conf file so that it would activate the RAID device on boot. Once the device was created, I copied all the data from the running root filesystem to the /dev/md0 device; the only directory I didn't copy was /proc. When I rebooted the machine I got a kernel panic and the system couldn't find /dev/root. I would greatly appreciate it if someone could provide some information/tips regarding the problem.
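The usual suspects for this panic are an initrd that lacks the md/raid1 modules and a boot configuration still pointing at the old root device. A sketch of the fix, assuming RHEL-style tooling (mkinitrd and GRUB legacy); adjust for your distro:
Code:
# rebuild the initrd so the raid1 module and mdadm.conf are included
mkinitrd -f --with=raid1 /boot/initrd-$(uname -r).img $(uname -r)
# in grub.conf, point the kernel at the new root:
#   kernel /vmlinuz-...  ro root=/dev/md0
# and make sure /etc/fstab on the copied root mounts /dev/md0 as /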
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
Code:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
I am considering recreating the array:
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
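Two hedged suggestions before any --create is run. First, record what the superblocks currently say (read-only, safe to run); device order and event counts matter enormously if the array is ever recreated. Second, a forced assemble is usually tried first, since it tolerates modest event-count mismatches:
Code:
mdadm --examine /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2   # note each device's role and event count
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2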
I have 2 identical disks, /dev/sda and /dev/sdb, with a RAID-1 configuration (/dev/md0) on the /dev/sda3 and /dev/sdb3 partitions. On top of /dev/md0 I am running LVM. There are also partitions sda4 and sdb4 following sda3 and sdb3 respectively, but the data there is not important. What I want to do is delete the sda4 and sdb4 partitions, extend sda3 and sdb3 to the end of the disk, and grow md0 and the volume group, of course *without* loss of data.
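A sketch of the usual sequence, assuming a RAID-1 mirror and that each partition is recreated with exactly the same start sector (that is the critical part); doing one disk at a time and letting the mirror resync gives a safety net:
Code:
# on each disk: delete partition 4, delete partition 3, recreate partition 3
# with the same start sector but ending at the end of the disk
fdisk /dev/sda
partprobe /dev/sda            # make the kernel re-read the partition table
# once both disks are done, grow the md device into the larger partitions
mdadm --grow /dev/md0 --size=max
# and let LVM see the larger physical volume
pvresize /dev/md0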
I'm currently using Windows Vista 32-bit on a RAID 1 array; I'm using the RAID provided by my motherboard, so it's fakeRAID. Anyway, I'd like to do some C development under Linux, but I'm not exactly sure how to go about installing it on a software RAID 1 array without messing up Windows. I'm not sure which Linux distro I'm going to install, so I'm hoping that information isn't important. Would I just resize my Windows partition and put Linux on the newly created partition? Do I have to worry about where Linux will put its bootloader, or will it manage that on its own? (To clarify: I didn't mean software RAID, I meant fakeRAID.)
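For what it's worth, Linux typically sees motherboard fakeRAID through dmraid, so a quick check from a live CD shows whether a given distro will treat the mirror as one disk; a minimal sketch:
Code:
sudo dmraid -r    # list the RAID sets described by the BIOS metadata
sudo dmraid -ay   # activate them; the array appears under /dev/mapper/
ls /dev/mapper/
If the installer sees a single /dev/mapper device, resizing the Windows partition and installing alongside it works much as on a plain disk; bootloader placement is the fiddly part, as it generally needs to target the mapped array rather than a raw member disk.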
We are running IPmonitor to monitor disk usage on our Linux servers. It does not seem to coincide with what is reported when running df -h. For example, on a Red Hat 5.3 server, IPmonitor shows that 85% is used on the /usr partition; however, when I do a df -h on the server it shows that 91% is used. Why would there be a discrepancy? IPmonitor uses SNMP.
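One plausible explanation (hedged, since it depends on how IPmonitor computes its percentage): ext3 reserves about 5% of blocks for root, and df factors that reserve into its Use% while an SNMP agent typically reports raw used/total, which yields a lower figure. Checking the reserve, with /dev/sda5 standing in for the /usr device (name hypothetical):
Code:
tune2fs -l /dev/sda5 | grep -i 'reserved block count'
df -h /usr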
How do I submit multiple jobs to a Linux server? The only way I know to submit and run a job on a server is using qsub, verifying the status of the job using qstat. I usually run my scripts using qsub -cwd so that the job runs in my own directory (instead of having the results sent to a scratch folder).
1. However, assuming qsub/msub are not available, is there another way to do it? What commands can I use instead? 2. I know that some jobs can run in the background; is that an alternative? How do I do it? And would I still be able to check the status of the job or delete it?
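Without a batch scheduler, plain shell job control covers most of this; a minimal sketch:
Code:
nohup ./myscript.sh > run.log 2>&1 &   # run in the background, surviving logout
echo $! > job.pid                      # remember the process ID
ps -p "$(cat job.pid)"                 # check whether it is still running
kill "$(cat job.pid)"                  # delete (terminate) the job
Several jobs can be launched this way in a loop; at/batch and screen are other common stand-ins for a scheduler.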
I am running the apache2 and Tornado web servers on the same machine with one IP address.
Apache2 listens on port 80; Tornado listens on port 8888. I want to redirect requests from one specific IP arriving on port 80 over to port 8888. I don't have the ability to change the port on the device; it is looking for a web server on port 80.
Any other web request should go to Apache.
I tried adding the following to /etc/ufw/before.rules
When I run iptables -L, the rule doesn't appear. I have disabled and re-enabled ufw, with no help.
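Two hedged notes, since the rules that were tried weren't included in the post. First, a port redirect lives in the nat table, and plain iptables -L only lists the filter table, so the rule may be working yet invisible; check with iptables -t nat -L -n. Second, in /etc/ufw/before.rules a nat rule must sit inside its own *nat section. A sketch, with 192.168.1.50 standing in for the device's IP (hypothetical):
Code:
*nat
:PREROUTING ACCEPT [0:0]
-A PREROUTING -s 192.168.1.50 -p tcp --dport 80 -j REDIRECT --to-port 8888
COMMIT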
I have a custom server built with an Intel motherboard and RAID 10. The RAID controller is on the motherboard. I'm about to install Fedora 11 x86 64-bit on it. Should I expect any issues that any of you have come across? I did a checksum check on the ISO files and those were good. Also, someone sent me this link..
[URL]
However, I have a hardware RAID, not software RAID.
I have three WD 1.5 TB hard drives. Two of them are already in a linear RAID, also called concatenated, I think (the same as JBOD). Can I add the third drive to the RAID without losing data? Update: "Using mdadm software raid."
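With mdadm, a linear array can be extended in place without touching the existing data; a minimal sketch, assuming the array is /dev/md0 and the new drive's partition is /dev/sdc1 (names hypothetical):
Code:
mdadm --grow /dev/md0 --add /dev/sdc1   # append the new device to the linear array
resize2fs /dev/md0                      # then grow the filesystem (ext3/ext4)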
I am attempting to run Ubuntu 10.04 Server on RAID 1 and am consistently hitting the same issue when trying to boot the system for the first time (after what seems to be a successful install of the OS). I am creating the RAID 1 using the directions found in the Ubuntu Server Guide. The error I receive when trying to boot the system for the first time is...
Code:
mount: mounting /dev/disk/by-uuid/f35415ee-4c14-4eb1-995f-f19fbcd760c7 on /root failed: Invalid argument
mount: mounting /dev on /root/dev failed: No such file or directory
mount: mounting /sys on /root/sys failed: No such file or directory
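A couple of things worth trying from the BusyBox/initramfs prompt this error usually drops you into, hedged because the exact cause varies (often the initramfs fails to assemble the array before looking for the root filesystem):
Code:
mdadm --assemble --scan   # try to bring the arrays up by hand
cat /proc/mdstat          # confirm the md devices are active, then exit to continue booting
# once the system is up, rebuild the initramfs so this happens automatically:
sudo update-initramfs -u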
I've had this server for a while and I've periodically received similar errors, but it's been worse lately. I'm trying to diagnose it, and similar posts have left me with 2, maybe 3, ideas about what could be wrong. Here is the setup:
Ubuntu Server 10.10 Maverick, kernel 2.6.35-30-server
Sans Digital TR8 8-bay RAID enclosure - 2 port multipliers to eSATA outputs
8x 1TB WD Caviar Black
Syba PCI-e card with SiI3124 chipset - 2 external eSATA inputs
I do not use the RocketRaid 622 card that came with the enclosure because I had problems with drivers and configuration, so I went with the SI chipset. The RAID is configured with mdadm, level 10, running an ext3 file system:
What's odd is that the RAID dropped out and completely locked up the computer multiple times, then marked one of the drives as failed. I did a re-add on the drive and it rebuilt without issue. I ran e2fsck -f to clean up the problems it had when it crashed, but as soon as I do heavy reads/writes the errors start showing up again. This is primarily a media server for music and movies, but also for backups and printing. Heavy read/write load generally means transcoding movies and music, which is what I was doing when it failed.
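Given the eSATA port multipliers in the path, it seems worth ruling out link-level errors before blaming the drives; a sketch of the usual checks, device name assumed:
Code:
dmesg | egrep -i 'ata|sata|link'   # look for hard resets or link failures under load
smartctl -a /dev/sdb               # per-drive SMART health and error log
cat /proc/mdstat                   # current array and member state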
How can we know which RAID level is appropriate for a given server, e.g. a mail, web, or database server? Also, what are the things to keep in mind before configuring RAID/LVM on live servers?
I am using Fedora 14 64-bit with two 1 TB drives. There is a RAID 1 for the important information, and I use a RAID 0 for cached files that can be regenerated on the fly. Yesterday my RAID 0 went into read-only mode.
Here are a couple of tests:
What caused the drive to be removed from the RAID 0 array /dev/md1? Is there a way to get it back into active sync?
I have tried:
But it cannot open /dev/sda2: Device or resource busy.
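"Device or resource busy" usually means something still holds the partition, often the md driver itself. A sketch of things to check and try, hedged since the commands already attempted weren't shown:
Code:
cat /proc/mdstat            # is sda2 still claimed by an md device?
mdadm --detail /dev/md1     # state of the array and its members
mdadm --stop /dev/md1       # release the members, then retry
mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2   # member names assumed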
I have a problem installing Ubuntu on an HP ProLiant ML115. This server has a 1 TB mirrored RAID setup (and one 250 GB drive). The problem is, Ubuntu apparently doesn't like it: it detects the RAID but fails to partition the hard drives, and the installation can't continue. I tried the guided partitioning using the whole disk (the detected RAID array). Here is what syslog gives me:
May 17 22:32:30 main-menu[504]: INFO: Menu item 'disk-detect' selected
May 17 22:32:30 net/hw-detect.hotplug: Detected hotpluggable network interface eth0
May 17 22:32:30 net/hw-detect.hotplug: Detected hotpluggable network interface lo
May 17 22:32:31 hw-detect: Loading PCMCIA bridge driver module: i82365
May 17 22:32:31 hw-detect: FATAL: Module i82365 not found.
[Code]...
PS: I'm very much a noob regarding this kind of stuff, so detailed instructions please; also feel free to give configuration tips.
I'm having a problem with the installation of Ubuntu Server 10.04 64-bit on my IBM xSeries 346 Type 8840. During the installation the system won't recognize any disk, so it asks which driver it should use for the RAID controller. There's a list of options, but nothing seems to work. I've been searching the IBM website for an appropriate driver, but there is no Ubuntu version (there are Red Hat, SUSE, etc.). I was thinking about downloading the correct driver onto a floppy disk to finalize the installation, but apparently there is no 'generic' Linux driver to solve the problem here.
I've recently completed a fresh install of 10.04 on a home file server and upgraded the hard drives in my storage array. My PREVIOUS hardware was:
Old version of Ubuntu (I forget which one exactly, but I know I had missed a few upgrade cycles)
3x 500 GB Seagate Barracudas (for the array)
Areca 1220 hardware RAID controller
Intel Core 2 Duo 6600
320 GB Seagate for the boot drive
I was running that hardware for about five years or so and it was rock solid. After the upgrade the hardware specs are:
Ubuntu 10.04
Areca 1220 hardware RAID controller
4x 1000 GB Samsung
Intel Core 2 Duo 6600
320 GB Seagate for the boot drive
The fresh install of Ubuntu 10.04 went remarkably well. The drivers for that raid controller are in the kernel, which is great. I was able to access the old array after upgrading Ubuntu. Now I am trying to create a new array with the four 1000 GB drives in a RAID 5. Obviously that gives a maximum storage capacity of 3 TB, greater than the 2 TB threshold that seems to be so important. I've been doing some digging and here is where my questions start:
it appears as though gparted doesn't support partitions greater than 2 TB, correct?
it also doesn't seem as though parted supports creating ext3 or ext4 filesystems, is that correct?
If this is the case, how do I create a partition with ext4 that is greater than 2 TB?
I can see the array volume in gparted (which is a relief), but it lists the size as 2.73 TiB, which I find curious because that is over 2 TB but not the full capacity of the volume. I can also get to the volume in parted. But I see in the parted documentation that using the mkpartfs command is discouraged; instead, one should use the mkpart command to create an empty partition and then use external tools like mke2fs to create the filesystem.
So, how do I proceed from here? What does the community think is the best course of action to create a 3 TB partition in ext4? Then I need an /etc/fstab entry to automatically mount the array at every boot, right?
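A sketch of the usual recipe, with /dev/sdb standing in for the Areca volume (name hypothetical): a GPT label gets around the 2 TiB ceiling of msdos partition tables, and mkfs.ext4 builds the filesystem. (Incidentally, 2.73 TiB is the full capacity: 3 x 10^12 bytes is about 2.73 x 2^40 bytes, so gparted is just reporting in binary units.)
Code:
parted /dev/sdb mklabel gpt
parted -a optimal /dev/sdb mkpart primary 1MiB 100%
mkfs.ext4 /dev/sdb1
# then mount at boot via /etc/fstab, e.g.:
# /dev/sdb1  /srv/array  ext4  defaults  0  2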
I'm setting up a web server but I have no experience with RAID. I would like to try this configuration if possible:
2x 500 GB HDD in RAID1
1x 20 GB HDD (logs and tmp)
The old 20 GB drive I would like to use to store logs and temporary files (mounted on /var/log and /tmp respectively). With this I'm trying to reduce some disk usage on the RAID drives. My thinking is that it would be better to write the web server's access/error logs to a separate drive from the one serving the files, which may increase speed... does that sound crazy?
One problem is that during the installation, if I set up the RAID automatically, it will try to include my 20 GB HDD in the RAID as well... Will it work if I set up the RAID first (with the 20 GB HDD removed) and then set the mount points on it after the installation?
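That order should work; adding the small drive afterwards is just a matter of formatting it and adding fstab entries. A sketch, assuming the 20 GB drive appears as /dev/sdc with two partitions (names hypothetical):
Code:
mkfs.ext4 /dev/sdc1   # for /var/log
mkfs.ext4 /dev/sdc2   # for /tmp
# /etc/fstab entries:
# /dev/sdc1  /var/log  ext4  defaults,noatime  0  2
# /dev/sdc2  /tmp      ext4  defaults,noatime  0  2
# note: /tmp must be world-writable with the sticky bit once mounted:
chmod 1777 /tmp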
I'm in the process of setting up my very first Ubuntu server, using 10.04 with 2 GB of RAM and Ubuntu's software RAID (mdadm). It will be a file server with 4 hard drives (3 in RAID5 and 1 as a spare). All 4 drives have 2 partitions:
Partition 1 - 100 MB
Partition 2 - the remaining drive space
I set up the first 3 drives' partition 1s as RAID1, with the 4th drive's partition 1 as a spare. This is where I'm mounting /boot (it's my understanding that /boot cannot be on a RAID5). I set up the first 3 drives' partition 2s as RAID5, with the 4th drive's partition 2 as a spare. This is where I'm mounting /. I believe so far I'm set up correctly to deal with a drive failure, and the system should operate just fine. What I don't know what to do with is swap. I want to retain the ability to operate with a drive going down, but I have also read warnings about putting swap on a RAID. How would you set up swap?
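The warnings mostly concern swap on striped arrays; swap on a RAID1 md device is the usual way to keep a box alive through a disk failure, since a plain swap partition on a dead disk will take the kernel down with it. A sketch, assuming a small third partition is carved out on two of the drives for a mirror (partition names hypothetical):
Code:
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md2
swapon /dev/md2
# /etc/fstab:  /dev/md2  none  swap  sw  0  0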
I am trying to build a storage server using software RAID. After writing RAID information to what I thought were the drives, I realized the configuration was not going to work. Since then I have modified the RAID configuration more than once, plus this is a project that was started a few months back. Between the time delays and the currently written configuration, things are a confusing mess.
What I would like to do is wipe out all the software RAID and start over. The problem is that I am unable to figure out how to get rid of it. I used DBAN to wipe all four of my 1 TB drives, which took a while. But still, the RAID configuration comes up when I try to do a new install. Where is the RAID configuration data stored, if not on the HD? How can I scrub that information off so I can start all over again?
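mdadm stores its metadata in a superblock on each member device, and BIOS/fakeRAID formats keep theirs near the end of the disk, which is how such remnants can survive a partial wipe. A sketch for scrubbing both kinds, array and device names assumed:
Code:
mdadm --stop /dev/md0                                           # stop any assembled arrays
mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
dmraid -r     # list any BIOS fakeRAID signatures that remain
dmraid -rE    # erase them (asks for confirmation)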
I'm using an Intel Mac Pro running Ubuntu Server 10.4 64-bit, and I have it working. Currently I'm just booting Ubuntu via Boot Camp, not EFI. The OS drive is 250 GB, which I know is more than enough, but it was what I had free at the time. I added three 1 TB drives to the computer, but I'm not sure how to create a RAID with them. I've done some searching but still haven't been able to get it done successfully.
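A minimal mdadm sketch for a 3-disk RAID5 on those drives, assuming they show up as /dev/sdb, /dev/sdc and /dev/sdd (names hypothetical; confirm with fdisk -l first):
Code:
# give each drive one full-size partition, then:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo mkfs.ext4 /dev/md0
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'   # assemble at boot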
I have an Areca hardware RAID array that I'm trying to format & partition on a fresh Ubuntu 10.04 LTS installation. The OS drive is not on the RAID card, it's entirely separate. The RAID is a 6TB volume so I realize I have to use parted to format it, not fdisk (which I've always relied on).
My problem is that I can't figure out how to get parted to like my settings. It seems like everything I try gives me the warning "Warning: The resulting partition is not properly aligned for best performance." Here's what I'm doing:
Code:
(parted) p
Model: Areca ARC-1280-VOL#00 (scsi)
[Code]...
What start/end settings should I use to get a properly aligned partition? How do I know? I have tried a mix of 0, 0s, 1, 1s, -0, -0s, -1, -1s, and 100% for my start/end with no success.
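The usual trick is to give the start in MiB so parted snaps to a 1 MiB boundary, which satisfies the alignment check for essentially any stripe size; a sketch, with the Areca volume assumed to be /dev/sdb:
Code:
parted -a optimal /dev/sdb
(parted) mklabel gpt
(parted) mkpart primary 1MiB 100%
(parted) align-check optimal 1    # verify partition 1 is optimally aligned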
I am trying to grow my array to make full use of my drives. I have a RAID 6 using mdadm on my home server. The array consisted of six 1.5 TB drives when it was created. Since then I've been replacing the 1.5 TB drives with new 2 TB drives, and now I have replaced the last two drives.
When I added the first 3 disks I did not use the whole disk for the RAID partition, but rather made it the same size as the 1.5 TB partitions. As it turns out, this may have been a bad idea. (But it gave me another 3 partitions of 500 GB that I turned into one disk using mhddfs.)
Now I'm trying to grow the array. I've been testing in a virtual environment, but I cannot find another method than this:
1) Fail one disk.
2) Re-partition the disk to use its whole size.
3) Re-add the drive as a spare.
4) Start the now-degraded array and let it resync.
5) Wait quite a while (approx. 5 hours).
6) Repeat from step 1 for the other disks.
7) Use mdadm --grow to enlarge the array, then resize2fs to grow the filesystem to the maximum size.
Although this is a RAID 6 and I keep some redundancy throughout, I still worry about degrading the array so many times and rebuilding the whole thing; I effectively read the whole array 3 or 4 times over doing this.
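For reference, a sketch of one iteration of that cycle plus the final grow, with /dev/sdb1 standing in for the member being cycled (names hypothetical):
Code:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# repartition /dev/sdb to use the whole disk, then:
mdadm /dev/md0 --add /dev/sdb1
watch cat /proc/mdstat           # wait for the resync; repeat for each disk, then:
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0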
I have installed Ubuntu Server LTS 10.4 on an HP MediaSmart Server. It has four disks. On the first one I have the original HPFS/NTFS partition (left untouched), a Linux ext4 partition for /, and a Linux swap partition; the rest of this first disk (sda) is part of a RAID 5 with the three other disks. This first ext4 partition is now shown as full. Is there an easy way to give it more space without losing the whole setup? I have 2 TB of data on this server and everything else is fine.
I've been running my dedicated server on Ubuntu 10.04 for 300 days non-stop so far (touch wood). I'm planning to purchase a dedicated server to host many large sites. The specs are a Sandy Bridge Xeon with 16 bays x 2 TB storage using RAID5 (Adaptec RAID 51645). I don't have experience with RAID, but I read that ext4 can only support filesystems up to 16 TB, and my plan for the system is to have 32 TB of storage. How can I make Ubuntu run this configuration?
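Two common ways around the 16 TiB ext4 ceiling of that era's tools: split the space into two filesystems (LVM makes the split flexible), or use a filesystem such as XFS that handles larger volumes. A sketch of the LVM route, with /dev/sda1 standing in for the partition the controller exposes (hypothetical):
Code:
pvcreate /dev/sda1
vgcreate vg_data /dev/sda1
lvcreate -l 50%VG    -n lv_a vg_data
lvcreate -l 100%FREE -n lv_b vg_data
mkfs.ext4 /dev/vg_data/lv_a
mkfs.ext4 /dev/vg_data/lv_b
# alternatively, one large filesystem:  mkfs.xfs /dev/sda1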
Also, can Ubuntu 10.04 recognize the Adaptec card? Going through the Adaptec website, there is no mention of drivers for Ubuntu (although many other distros are available).
I'm looking at building a small NAS setup for my home to hold all my media (DVDs, pictures, music, etc.). I'm not too concerned with backups at this point, as I have all the data elsewhere as it is, but I would like one central spot to access it.
A friend of mine turned me on to Ubuntu for this, but he's never tried it, so I'm looking to see if anyone else has done this or if there are issues with it. I've used Linux off and on for several years, but this would be my first Ubuntu install.
I'm looking at either using some older hardware I have sitting around or something cheap like a small Dell server (T310 or something). I'd get four 2 TB drives and just use them in a software RAID 5 array. It would be accessed via a few different Samba shares, I'm thinking.
EFI GUID Partition support works on both 32-bit and 64-bit platforms. You must include GPT support in the kernel in order to use GPT. If you don't include GPT support in the Linux kernel, then after rebooting the server the file system will no longer be mountable, or the GPT table will get corrupted. By default Red Hat Enterprise Linux / CentOS comes with GPT kernel support. However, if you are using Debian or Ubuntu Linux, you need to recompile the kernel: set CONFIG_EFI_PARTITION to y to compile this feature.
Is this true? Does Ubuntu really have no native GPT support in the server install? I've never compiled a kernel; is that in itself going to be a mind bender? How much doo doo am I going to get into if I haven't done it a few times? Trust me when I tell you it's no thrill formatting and reformatting an 8 TB RAID drive, or the individual disks, if it screws up. I've been on that ride way too long already, and I need things to go smoothly. This is not a play-toy server; it will be used in a business.
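Before resorting to a kernel build, it's easy to verify the quoted claim against the running kernel; stock Ubuntu kernels do ship with GPT support compiled in, so this check will likely make the recompile unnecessary:
Code:
grep CONFIG_EFI_PARTITION /boot/config-$(uname -r)
# CONFIG_EFI_PARTITION=y  means GPT support is already built in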