CentOS 5 Server :: Soft-Raid Didn't Start After Boot?
Apr 22, 2010
I have a database server (IBM x3650 M2) with about 3 TB of data on a Hitachi SAN, with LVM on top of software RAID (RAID1), which in turn sits on multipath devices (two SAN boxes in different buildings). After booting the server, multipath starts, but no md device assembles the mirror. The same configuration works under SLES 10.
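No configuration from this system is shown, so the following is only a generic sketch of what boot-time assembly of md on top of multipath usually needs on CentOS 5: an /etc/mdadm.conf that points at the multipath maps and, if the maps only appear after the normal md assembly has already run, a late re-assembly (for example from rc.local). Device names and the UUID below are placeholders.

# /etc/mdadm.conf -- scan only the multipath maps (names/UUID are placeholders)
DEVICE /dev/mapper/*
ARRAY /dev/md0 UUID=00000000:00000000:00000000:00000000

# if the maps only exist after the normal md assembly has run, re-assemble
# later in the boot, e.g. from /etc/rc.local:
/sbin/mdadm --assemble --scan
/sbin/vgchange -ay    # then re-activate the LVM volume group sitting on the mirror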
I would like to build a NAS from a PC (D510MO) running Debian. I have two HDDs (one 3.5" 1 TB and one 2.5" 500 GB). On the 3.5" HDD I already have two partitions (100 MB + 40 GB) dedicated to Win7-64. Now I want to install Debian (as a second OS) on this PC and have some kind of soft RAID or disk mirror of the 500 GB space. I am planning to create a third partition of 500 GB on the 3.5" HDD (identical to the 2.5" HDD size) in order to have a mirrored 500 GB space.
Please send me some suggestions on where I should install Debian: on the 500 GB 2.5" HDD or on the 500 GB partition of the 3.5" HDD? Will Debian boot from either HDD (3.5" or 2.5") after I create the mirror? What Linux software should I use for the mirroring (mdadm)?
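For reference, a minimal sketch of creating such a mirror with mdadm under Debian, assuming the two 500 GB partitions end up as /dev/sda3 (3.5" disk) and /dev/sdb1 (2.5" disk); the device names are assumptions, not taken from the post.

# create a RAID1 device from the two 500 GB partitions (device names assumed)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb1
# put a filesystem on the mirror and record the array so it assembles at boot
mkfs.ext3 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# watch the initial sync
cat /proc/mdstat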
I installed Ubuntu today and I have to say it's awesome; shortly after trying it I decided to remove XP. I had some problems I could solve myself, but right now my "biggest" one is that I can't uninstall Empathy. It may sound stupid, but I really need to remove it. I installed Pidgin and it works like a charm. By the way, I removed Empathy from the Ubuntu Software Center but it didn't change anything.
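If the Software Center leaves the package behind, removing it from a terminal is worth a try; this is just the standard apt route, nothing Empathy-specific:

# remove the empathy package from the command line
sudo apt-get remove empathy
# optionally clean up anything that only empathy depended on
sudo apt-get autoremove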
I had a lot of questions, but I resolved all of them in the middle of writing this post. I have another question, but it isn't as important: does anyone know how to set up the cube? I'm sure I'll end up turning it off, but I want to try it.
I have problems setting up a (2 server) cluster. I get the following messages in syslog:
Oct 14 13:02:21 korzel ccsd[7823]: Starting ccsd 2.0.115:
Oct 14 13:02:21 korzel ccsd[7823]: Built: Sep 3 2009 23:26:21
Oct 14 13:02:21 korzel ccsd[7823]: Copyright (C) Red Hat, Inc. 2004 All rights reserved.
Oct 14 13:02:21 korzel ccsd[7823]: cluster.conf (cluster name = cl_vpanel, version = 1) found.
[code]....
Tried with the "basic" hosts file as well, eg with only the localhost lines in it. Both server run centos 5.3, up-to-date.There is no real error anywhere which might explain WHY aisexec daemon won't start. When i start openais manually (/etc/init.d/openais start) it starts without any errors.
I am trying to start the clamd service, since Nagios reported "PROCS CRITICAL: 0 processes with command name 'clamd'". So I checked its log file with "tail /home/clamav/logs/clamd.log", and it said the log file is exceeding the maximum limit, so I tried to rotate the log by
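The command the post trails off with isn't shown; in case it helps, a logrotate snippet for that file might look roughly like this (the log path is taken from the post, everything else is a generic default), dropped into /etc/logrotate.d/:

# /etc/logrotate.d/clamd -- rotate the clamd log weekly, keep 4 old copies
/home/clamav/logs/clamd.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}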
I am trying to build a NAS solution for our club, which should contain various data for download. I have been looking at FreeNAS, Openfiler and other solutions, but they don't offer quite what I want. I want data security, but I'm trying to avoid software and hardware RAID. Windows Home Server has a function where it automatically copies the data from the 'main' storage disk over to the 'backup' storage drive.
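Something close to that Windows Home Server behaviour can be approximated with a nightly rsync from the main disk to the backup disk. A minimal sketch, assuming the two drives are mounted at /srv/main and /srv/backup (both paths are made up):

#!/bin/sh
# /etc/cron.daily/mirror-storage -- copy new/changed files to the backup disk each night
# drop --delete if files removed from the main disk should be kept on the backup
rsync -a --delete /srv/main/ /srv/backup/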
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
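When MegaCLI sees the enclosure but no device node appears, it is sometimes just a matter of the kernel not having rescanned the bus after the logical drive became visible. A hedged sketch of the usual checks (host numbers vary per system):

# list what the kernel currently knows about
cat /proc/scsi/scsi
# force a rescan of each SCSI host the megaraid_sas driver registered
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
# then check the kernel log and partition list for a new sd device
dmesg | tail
cat /proc/partitions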
I have also tried to create a logical RAID0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. That succeeded and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising: one logical RAID0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
On Debian and Ubuntu servers I was able to run X applications on remote servers after logging in via SSH. I tried the same with CentOS and it did not work. In /etc/ssh/sshd_config
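The post trails off before showing the config; for reference, X11 forwarding on a stock CentOS box usually comes down to these sshd_config settings plus the xauth package on the server. A sketch, to be adapted to the actual setup:

# /etc/ssh/sshd_config -- the options relevant to X forwarding
X11Forwarding yes
X11UseLocalhost yes

# on the CentOS server: xauth must be present for forwarding to work
yum install xorg-x11-xauth
# restart sshd after editing the config
service sshd restart
# then connect from the client with X forwarding enabled
ssh -X user@server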
I installed BIND 9 with chroot on CentOS 5, but the issue is that the reverse name resolution zone file wasn't created by default like the other zone files. I looked into the /var/named directory and I can't find the reverse zone file, even though I added this zone to named.conf:
zone "1.168.192.in-addr.arpa" IN { type master; file "1.168.192.testsip.com.zone"; allow-update { key "rndckey"; }; notify yes; };
I have a strange issue with my RAID5 array. It worked fine for a month, then a couple of days ago it didn't start on boot, with mdadm reporting "Input/Output error". I didn't panic; I restarted my computer, same error. Then I opened Disk Utility and it reported State: Not running, partially assembled. I don't know why, so I pressed Stop RAID Array and started it again, and voila, it reported State: Running. I checked the components list and there was nothing wrong with it. So I ran the Check Array utility, waited almost 3 hours for it to finish, and it worked since then, until this morning: I started my computer, and here we go, same error.
Screenshots (not included here) showed the initial state just after startup, the state after stopping and starting the RAID5 array, and the components list.
I can see nothing wrong there, yet I'm not sure why mdadm fails on boot. I don't really like the "Windows solution": I guess that when I check my array again it will work fine again, but then it can fail the same way later without any known reason.
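When an array works after a manual stop/start but not at boot, it is worth comparing what mdadm itself reports against what the boot-time configuration expects. A hedged set of checks, with the device name assumed to be /dev/md0:

# current kernel view of the array
cat /proc/mdstat
mdadm --detail /dev/md0
# what the superblocks on the member disks say
mdadm --examine --scan
# compare the UUID above with the ARRAY line the boot scripts use
cat /etc/mdadm.conf            # /etc/mdadm/mdadm.conf on Debian/Ubuntu
# look for I/O errors against a member disk around assembly time
dmesg | grep -i -e raid -e "I/O error"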
I'm trying to set up RAID 1 on a CentOS 5 server for a Zimbra email server. I get a partition schema error. Can I do this? The server is an HP ProLiant ML150 G3 with two 80 GB HDDs.
This is the message I get when I try to start it: "mdadm: /dev/md0 assembled from 2 drives - not enough to start the array". Below is the information I've collected. Any help on how I can get the RAID back up and running so I can get the data off it would be awesome.
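The usual next steps for "not enough to start the array" are to read the superblocks on all members to see which one dropped out, and then, if the event counts are close, force the assembly. A sketch with assumed device names; a forced assembly should be followed by a backup before anything else:

# inspect each member's superblock: look at the Events counter and the array state
mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1
# if one member is only slightly behind, try forcing the array together
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
cat /proc/mdstat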
I've started this new topic because my original one was a bit old and my circumstances have changed a bit. I'm trying to install CentOS on a system with an Adaptec 5805 hardware RAID card (real RAID, not fake RAID), and am having problems when a large (over 2 TB) array is partitioned. I have two arrays on the card: one is a RAID 1 with two 500 GB drives that I am installing the OS to, and the other is four 1 TB drives in a RAID 5, so I wind up with around a 2.7 TB array. I was having all kinds of problems when I left both arrays in place during the install, but a firmware upgrade on both the motherboard and the RAID card seemed to improve things to the point where, as long as I do not tell the installer to format the 2.7 TB array in any way, I can get the install finished and it will boot up and work. If I touch the array in any way during the install, I get a system hang after a question about booting from the CD-ROM drive.
My final test was to do a fresh install, remove and re-add both arrays so they were clean, and leave the 2.7 TB array completely out of the install process. Then after the install I set up a GPT partition using parted from the command line; I didn't even put a filesystem on it. When I rebooted I got the same hang after the CD-ROM boot prompt. I then used the install DVD to boot into rescue mode and wiped the partitioning on the 2.7 TB array using
dd if=/dev/zero of=/dev/sdb bs=1024 count=1024
which made the drive appear empty again. Now the system boots up fine. The only thing I can't tell, since it goes by too quickly, is whether there is any kind of CD-ROM prompt like when the array is partitioned. I'm wondering: when the array is partitioned as GPT, is it possible it looks like a CD-ROM and thus stops the boot somehow? Is there any other way of partitioning a 2.7 TB array, so I can see whether taking GPT out of the equation fixes the problem?
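For reference, the GPT step described above would have been something like the following with parted, the device name assumed to be /dev/sdb as in the dd command; shown only to make clear that no unusual parted options are involved:

# label the array GPT and create one partition spanning it (device name assumed)
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 0% 100%
# print the result
parted /dev/sdb print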
I'm configuring a new CentOS 5.5 server as a replacement for an old W2K server. The topology of our network is simple: one file/DHCP/DNS-relay server and workstations (PCs and some Macs), plus network printers and scanners. All the workstations have dynamic IP addresses (easier because of a lot of 'dynamic' changes: new people with their own laptops, etc.), while the server and the printers/scanners have fixed IP addresses. I edited dhcpd.conf (see underneath) and I have the file dhcpd.leases, but it doesn't start!
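The actual dhcpd.conf isn't reproduced here; for comparison, a minimal config for that kind of topology looks roughly like the following (subnet, range, addresses and MAC are placeholders). On CentOS 5, /var/log/messages usually shows the exact line dhcpd trips over when it refuses to start.

# /etc/dhcpd.conf -- minimal example; all addresses below are placeholders
ddns-update-style none;
authoritative;

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;        # pool for the workstations
    option routers 192.168.1.1;
    option domain-name-servers 192.168.1.2;   # the CentOS server relaying DNS
}

# printers/scanners keep fixed addresses outside the range, e.g. via host blocks:
host printer1 {
    hardware ethernet 00:11:22:33:44:55;      # placeholder MAC
    fixed-address 192.168.1.20;
}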
When I set up my current workstation (now at CentOS 5.6) I connected the two SATA drives in a RAID 1 configuration. Later, I realized I had a spare EIDE drive, so I installed it, partitioned it, and added it as a hot spare using "mdadm -a". So now I have three disk drives doing the work of one. I updated /etc/mdadm.conf using the output from mdadm --examine --scan, and the resulting lines are:
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=1f00f847:088f27d2:dbdfa7d3:1daa2f4e spares=1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=f500b9e9:890132f0:52434d8d:57e9fbfc spares=1
I also updated the /boot/grub/device.map file to read:
I have CentOS 5.3. I have two hard drives; the first is used as the primary, and a backup of the required things from the first drive is copied onto the second. I want to configure mirroring on this server. How do I configure this, and is RAID 1 suitable for it or not?
I'm looking to set up a CentOS server with RAID 5. I was wondering what the best way is to set it up, and how to do it so that I can add more HDDs to the RAID later on if needed.
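A hedged sketch of the usual mdadm route, with device names made up: create the RAID 5 with the disks available now, and grow it later when another disk is added. Growing a RAID 5 needs a reasonably recent kernel/mdadm, the reshape takes a long time, and the filesystem has to be resized afterwards.

# create a 3-disk RAID5 (device names assumed)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mkfs.ext3 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf

# later: add a fourth disk and reshape the array onto it
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4
# once the reshape finishes, grow the filesystem to use the new space
resize2fs /dev/md0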
I have a pre-existing setup with Windows XP Professional and CentOS 5.5 in a dual-boot configuration, with the Linux drive set up as the primary drive hosting the GRUB menu.
I am replacing these machines with new, updated ones, and they have Windows set up on a RAID0. I think it would be easiest to follow my previous setup and move the RAID to the secondary SATA ports and put the Linux drive on the primary SATA port, or should I just change the boot order in the BIOS so the secondary Linux drive boots first?
Can I move a RAID setup to secondary controller ports without breaking the RAID?
I cannot boot into Ubuntu at all. I have two kernels installed, 2.6.35-28 and 2.6.35-30. The first thing that happened today was that I wasn't able to boot into the latter. I was shown the (in)famous "BUG: soft lockup - CPU#0 stuck for 61s". At this point I could still boot into the 2.6.35-28 kernel. But after shutting down and starting again an hour later, I got the same message when trying to boot into 2.6.35-28. I have tried leaving out the boot options "splash" and "quiet" on both kernels and also adding in "noapic". No combination helps. Needless to say, booting into recovery mode doesn't work either. Up until today, I have been able to boot into both kernels with no problems.
I tried to run CentOS 5.5 in a dual-boot configuration with Windows 7. Windows 7 installed without issues within minutes (amazing performance with the SSDs). I then installed CentOS 5.4 and found several things that wouldn't work right:
- It wouldn't recognize the NTFS partitions, so I decided just to install CentOS on the box. I completed the installation and rebooted, but it wouldn't boot up after the installation. Even in a non-RAID configuration it would not boot off the SSD. I replaced the SSDs with two Seagate Barracuda 1 TB drives in a RAID 0 configuration and all went well.
I'm working in a little company, and two weeks ago one of our servers had a hard disk failure (yes, it was a Seagate 11). After spending two days without sleep trying to recover everything (and we did it!!), we decided to use software RAID on some of our servers, so that if one HD fails we can keep the system running without losing anything. Yes, I know you should normally take all these precautions beforehand so this never happens, but as long as it has never happened to you, you always think you're lucky and that it will never happen to you; then one day you discover reality.
So now this server is running CentOS with the default partition layout: one boot partition and the LVM. I'm reading everything I can find about software RAID and LVM, but I can't find out whether it's possible, with the system running, to create a software RAID without having to reinstall the whole system. Is it possible to do? If not, what are my options for making a system backup before reinstalling everything?
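It is possible without a reinstall, though it's fiddly: the usual trick is to build a degraded RAID 1 on a second disk, move the system onto it, and then add the original disk as the second half of the mirror. A compressed sketch under heavy assumptions (existing disk /dev/sda, new second disk /dev/sdb, default volume group name VolGroup00); anyone attempting this should have full backups in hand first.

# clone the partition layout of /dev/sda onto /dev/sdb (adjust types to fd afterwards)
sfdisk -d /dev/sda | sfdisk /dev/sdb

# create degraded mirrors containing only the new disk for now
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # future /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2   # future LVM PV

# move the volume group onto the mirror
pvcreate /dev/md1
vgextend VolGroup00 /dev/md1
pvmove /dev/sda2 /dev/md1
vgreduce VolGroup00 /dev/sda2
# (copy /boot onto /dev/md0 and reinstall GRUB on both disks -- steps omitted here)

# finally clear the old PV and add the original partitions into the mirrors
pvremove /dev/sda2
mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2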
I am currently trying to install vsftpd on my new Linux server (by the way, I just started using Linux today; this is my first time using it). I got the FTP server installed fine, it downloaded and everything. Then I went to open a port for vsFTP, adding this rule: "-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT". I closed the file by pressing ESC and then :wq!, which brought me back to my command line. Now when I try to start iptables with the command "service iptables start", PuTTY responds with "Applying iptables firewall rules: iptables-restore: line 1 failed [FAILED]".
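That error usually means the rule line ended up outside the structure iptables-restore expects: /etc/sysconfig/iptables has to start with a table header and end with COMMIT, and rules go in between. On a stock CentOS 5 box the file looks roughly like the sketch below, with the FTP line slotted in among the other -A RH-Firewall-1-INPUT rules; the surrounding rules here are generic defaults and may differ from the real file.

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT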
I installed 11.2 on an AMD dual-core 4200+ with 2 GB of RAM. After I enabled some effects in Configure Desktop Effects and pressed the Apply button, the computer hung, and when I restarted it, it hung again before the login screen.
In order to try Debian 6, I started with Debian 6 Live. I burnt the following file (debian-live-6.0.0-i386-kde-desktop.img) as an ISO with CDBurnerXP (from Windows XP). I tried to use it on my old PC (Pentium 4, 512 MB RAM, 120 GB of hard disk space), but it didn't start; it just got stuck at the boot menu (live, failsafe, etc.).
I need to mount my RAID array on a CentOS 5.2 Samba server.
Here are my hardware specs:
Motherboard: Tyan S2510 LE dual PIII
CPUs: Intel PIII 850 MHz, Socket 370
Memory: 4 GB Crucial 133 ECC SDRAM
OS drives: 2 x IBM Travelstar 6.4 GB 2.5" hard drives (low heat/noise)
Storage: 4 x Seagate 500 GB IDE 7200 rpm
RAID controller: 3Ware 7500-12 (RAID 5, 66 MHz PCI bus)
NIC: 3COM 3C996B-T gigabit NIC (66 MHz PCI bus)
I have the two IBMs set up as RAID 1 (mirror) and the four Seagates as RAID 5 (1.5 TB). I have installed the OS with only minor problems (the motherboard doesn't like the 2.6.18-128.1.14.el5 kernel, so I removed it from my grub.conf).
My problem is mounting the RAID array. I have done the following: partitioned the array with fdisk (fdisk /dev/sdb), then created a filesystem with the following command: mkfs.ext3 -m 0 /dev/sdb
The drive was formatted with the ext3 filesystem, but I have mounted it as ext2 because I don't want journaling to occur. I then edited my /etc/fstab like this: .....
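The actual fstab line isn't shown; a mount like the one described would look roughly like the line below. The device comes from the mkfs command above, the /home mount point is guessed from the listing further down, and the remaining fields are standard defaults.

# /etc/fstab entry mounting the ext3-formatted array with the ext2 driver (no journaling)
/dev/sdb    /home    ext2    defaults    1 2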
Then: mount -a. When I go into my "home" directory and type ls -l, I get the following:
[root@hydra home]# ls -l
total 24
drwx------ 2 zog  zog   4096 Jun 23 15:50 zog
lrwxrwxrwx 1 root root     6 Jun 23 15:46 home -> /home/
drwxrwxrwx 2 root root 16384 Jun 23 15:34 lost+found
drwxr-xr-x 2 root root  4096 Jun 23 17:18 tmp
Why is my home directory showing under home?