Server :: Weekly Resync Is Starting By Itself In RAID1
Jul 25, 2011
How can I stop the resyncing permanently, and how can I check whether normal SATA HDDs support RAID before/after buying them? Every Saturday or Sunday a resync starts by itself, even though there is no entry for resyncing in crontab. But if I run "cat /proc/mdstat" it shows the RAID1 is fine. See the output below:
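For what it's worth, a regular weekend "resync" that leaves the array clean is typically the distro's scheduled consistency check rather than a real rebuild, and it is usually driven by a cron job installed outside the user crontab. A sketch for finding and stopping it; the paths are assumptions, since they vary by distro:

Code:
# look for a distro-installed check job (RHEL-style locations)
ls /etc/cron.weekly/ /etc/cron.d/
# stop a check that is currently running on md0
echo idle > /sys/block/md0/md/sync_action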
I have two 80 GB IDE hard disks. I created a RAID1 partition on both drives using the [URL] link, and the RAID is working fine. But when I copy some data to one hard disk (md0), the data is not automatically copied to the second hard disk (md1). I want data written to one hard disk to be automatically written to the second hard disk.
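For what it's worth, a mirror is normally a single md device built from partitions on both disks, so every write lands on both drives automatically; two separate md devices (md0 and md1) will not copy to each other. A minimal sketch, assuming the partitions are /dev/hda1 and /dev/hdc1 (hypothetical names):

Code:
# one RAID1 device across both disks; writes to /dev/md0 hit both drives
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/data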
I am running kernel 2.6.18-128.el5 on a 64-bit quad-core machine with 8GB RAM. Using mdadm, I set up a RAID1 array between two Western Digital 1.5TB drives. The problem is that the resync is running VERY slowly. Here is the current status:
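One thing worth checking, assuming the stock md tunables: the kernel throttles resync speed between two sysctls, and the default floor is only 1000 KB/s, so a resync competing with other I/O can crawl. A sketch:

Code:
# current limits in KB/s
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# raise the floor so the resync is not throttled behind other I/O
sysctl -w dev.raid.speed_limit_min=50000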
I have ClearOS (CentOS) installed, with 2 x 2TB SATA HDDs (hda & hdc). At installation time, I configured RAID 1 (and LVM) between the two HDDs. After a power problem, the two HDDs were re-syncing, and I checked it using "watch cat /proc/mdstat". The speed didn't exceed 2100 KB/s. I tried the following, with no change:
I have set up a cluster, which is basically a few virtual machines with applications running in them that are accessible on the internet. My boss has asked me to send him a weekly report on this work. I am a sysadmin who understands ssh, telnet, ftp, tftp, and TCP, but I am not able to figure out what I should write in such a report. All the servers are running perfectly, the applications are running on top of them, and I am done with the work, so from my side there is no ssh or ftp activity to put into a report like this. Can someone give me a link to a sample report I could send? I am not even sure what to Google for this.
I need the address of an FTP server that has the latest .iso of Debian testing (weekly build). Unfortunately, cdimage.debian.org/cdimage/weekly-builds/ is useless to me, as it does NOT support ACTIVE FTP.
I am learning software RAID 1 with CentOS 5.5. I created the RAID without any problems and removed the first drive to check that it would still boot, which it did. I have now installed the old drive back in the system as hdc and need to resync the drives (I reused the old drive, so the partitions are correct). I thought I could use raidhotadd, but it does not seem to exist anymore. How do I resync the drives in the array (hda primary, hdc secondary) using mdadm?
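raidhotadd came from the old raidtools package; with mdadm the equivalent is --add. A minimal sketch, assuming the array is /dev/md0 and the returning partition is /dev/hdc1 (hypothetical names):

Code:
# re-add the returning disk's partition to the mirror
mdadm --manage /dev/md0 --add /dev/hdc1
# watch the rebuild progress
watch cat /proc/mdstat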
I have an Intel server which has its two SATA HDDs in an "Intel Embedded Server RAID Technology 5.4" RAID1 volume. How should I proceed with a system image in case both of those SATA HDDs fail at the same time? Should one take the first HDD of the RAID1 volume, connect it to another machine, and execute:
Code:
# ddrescue /dev/sda1 /media/External/image_of_first_hdd /media/External/log_of_first_hdd
* the HDD from the problematic RAID1 volume would be recognised as /dev/sda1 in the new machine
* /media/External/ is a mount point for a large external HDD in the new machine
* log_of_first_hdd would be the log file
...and then take the second HDD to another machine and execute:
I am self-teaching everything I need to develop a home-based web server (Linux/Apache/PHP/MySQL/HTML/CSS/etc.). It's quite an undertaking, but not beyond my abilities. I thought this question could have gone in either the Linux - Software or Linux - Hardware forum, and certainly not in the n00b section, but I figured it's best put in the Linux - Server forum, since that's what this is related to.
I have been looking into the software and hardware RAID solutions for Linux because I wanted to make sure that the boot drive of the web server I set up is mirrored, with transparent disk fail/replace/recovery. I mean, setting up a boot drive for RAID1 sounded perfectly logical to me, and why wouldn't it to anybody else? So, since I knew RAID controllers were expensive, I looked into the native software RAID support in Linux. My findings have revealed an issue with software-RAIDing a boot drive, in not only Linux but Windows as well. Apparently, if the primary drive fails (not the mirror), you have no option but to power down the system to properly replace the failed disk, reboot, play some config crap, resync the drive, do some more config crap, reboot again, and, hopefully, it'll be OK. Well, that procedure is simply out of the question, since the whole idea behind RAID is to transparently proceed as if nothing happened.
I'd like to know if it's even possible to RAID1 the boot drive for transparent and automatic fail/hot-swap/recovery WITHOUT rebooting the system and with no intervention on my part other than replacing the drive, whether it be a software RAID or hardware RAID solution. Eventually, what I'd like to do for a drive configuration is have 3 RAID volumes on the server, configured like so:
RAID volume 1 = boot drive w/ webserver installed
RAID volume 2 = database files
RAID volume 3 = flatfile storage
Each RAID volume will be a RAID1 pair of 1TB drives (total = 6 x 1TB drives)
I've seen a lot of people having failure issues with the software RAID in these forums. Is this more common than not? I'm certainly not opposed to buying a hardware RAID solution as long as they're reliable and provide transparent/automatic recovery. So what's the best way to RAID1 the boot drive for transparent/automatic failover?
I've got 2 servers (xen1 and xen2, their hostnames) with the configuration below. Each server has 4 SATA disks, 1 TB each.
16 GB DDR3, Debian Squeeze x64 installed:
root@xen2:~# uname -a
Linux xen2 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux
Storage configuration: the first 256 MB + 32 GB of 2 of the 4 disks are used as RAID1 devices for /boot and swap respectively. The rest of the space, 970 GB on all 4 SATA disks, is used as RAID10. There is LVM2 on top of that RAID10, with a volume group named xenlvm (the servers are meant to be Xen 4.0.1 hosts, but this story is not about Xen troubles). /, /var, and /home are located on logical volumes of small size (I just found out I got mixed up with LV names and partitions, but that's not the problem, I think):
I'm planning to set up an Ubuntu file server, using the 8.04 LTS server edition. The system will probably have 4 hard drives, which in the end shall form a software RAID10 system. I'd like to use LVM at some point in order to be able to make snapshots. As I read through some mdadm and LVM docs/tutorials, I came up with two possible setups:
in both cases:
a small raid1 of 2 partitions that will form /boot
a small raid1 of 2 different partitions as swap space
1. the rest will form 2 large raid1 arrays, which will be combined into a single virtual drive via lvm
2. make a raid10 out of the rest with mdadm, then make an lvm volume group consisting of just the 1 virtual raid10 device. Are there pros/cons to either solution? Is lvm as powerful as mdadm at striping? Will the first solution produce less overhead? (There is a sketch of setup 2 below.)
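A minimal sketch of setup 2, with the data partition names (sda3..sdd3), volume group name, and sizes as assumptions:

Code:
# setup 2: one RAID10 across the remaining partitions, LVM on top
mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
pvcreate /dev/md2
vgcreate vg0 /dev/md2
lvcreate -L 100G -n data vg0

One common framing of the trade-off: in setup 2 md handles all the mirroring and striping in the kernel and LVM only does volume management, whereas in setup 1 LVM itself has to stripe or concatenate across the two mirrors.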
I installed a raid1 on a Debian lenny box with only 1 drive ("--raid-devices=1") because I didn't have the other drive yet. When I got the other drive, I used "mdadm --grow /dev/md0 --raid-devices=2" and "mdadm --manage /dev/md0 --add /dev/sdb1". The original drive is sda1. I watched /proc/mdstat until it was completely synced, but after a reboot, the system will not reassemble the raid. It fails with "mdadm: no devices found for /dev/md0". This is where root is; therefore, I get nowhere. From a rescue CD I can disable the other drive, shrink back down to 1 device, and it boots fine.
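One possible cause, offered as an assumption rather than a diagnosis: on lenny the initramfs assembles the root array from /etc/mdadm/mdadm.conf, and a stale ARRAY line left over from the one-disk days can prevent assembly. A sketch:

Code:
# print the current ARRAY line for the grown two-disk array
mdadm --detail --scan
# replace the old ARRAY entry in /etc/mdadm/mdadm.conf with that output, then
update-initramfs -u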
Not sure what is going on here. The server is RAID1 through hardware RAID. It was running an unusually high load, so I rebooted it. Now it won't boot up. I am getting these errors after the CentOS boot screen:
sda: Current [descriptor]: sense key: Medium Error
Add. Sense: Address mark not found for data field
end_request: I/O error, dev sda, sector 3040555357
device-mapper: raid1: A read failure occurred on a mirror device.
device-mapper: raid1: All sides of mirror have failed.
I am a college student (CompSci) who moves around a lot with a laptop. I back it up often, but I don't want a simple USB HD that can be stolen from my dorm and/or damaged (it's already been damaged). I am building a file server with RAID 1 that will sit at my parents' house for safer backups. I just need a few pointers; I have never experimented with RAID before.
Software: Fedora 14, software RAID 1. I will only have ssh running, on a port other than 22, behind a router, with keyed entry only, so I can remotely back up my stuff.
Hardware: a new(ish) P4 mobo with two 2TB HDs (for RAID 1) and one small HD for the OS.
My questions: 1) Should I have the OS installed on a separate drive or on the two RAID drives? I am using software RAID, not hardware, so I assume I need two dedicated drives for the RAID.
2) Should I be using more than two HDs for a RAID 1 array?
3) How can I encrypt the RAID drives? As I said before, I have no experience with RAID.
4) If the OS drive fails, can I just grab a new hd and install Fedora on it to get the data off my RAID array? Or do I need to image the Fedora drive every so often?
5) If one of the RAID drives fails, is there some sort of daemon that can tell me (see the sketch after this list)? I will not be at the house physically, so I will not be able to hear scratching platters :P. Also, because a single disk in the array is 2 TB, can I just go out and get any kind of 2 TB drive to replace a failed one?
6) If the MoBo fails, can I just pop in a new one (of any kind) and continue using my same array?
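On question 5: md ships a monitor mode that can mail you when a drive fails. A minimal sketch, assuming mail delivery works on the box (the address is a placeholder):

Code:
# one-off: run the monitor by hand
mdadm --monitor --scan --mail=you@example.com --delay=1800
# or persistently: add a MAILADDR line to /etc/mdadm.conf and enable
# the mdmonitor service that Fedora ships
MAILADDR you@example.com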
I am rebuilding a bunch of servers and want to do it right. They are Dell R200s and R300s with on-board LSI SAS1068E SCSI controllers with 2 SATA drives. The only RAID level supported on these cards is RAID 1. So, to the server, we have 148GB of space to deal with. They currently run 32-bit Ubuntu 8.10; I will be installing x64 Ubuntu 10.04.
I have always seen that it is best practice to partition in such a way that /boot, /var/log, /tmp, and /home, for example, are separated out from /. Usually this is on a RAID5 or higher box. Is there any benefit to doing that sort of thing on a RAID1 box? I realize that this is in some ways a matter of opinion, but I would like the opinion of folks with experience; I'm pretty new to Linux in general.
The main services running on these boxes are Apache2, Tomcat6, MySQL, and Java.
So, I figure it's high time I learn to use this utility (or something else, doesn't matter to me). I want to backup a photo directory from an internal to external drive.
When I explore in Ubuntu, it names my external drive SimpleDrive; the destination directory on that drive I call Backup PhotoBank 31811 (I think I had a leading zero that Ubuntu drops for the date, 031811). My source drive is referred to by Ubuntu as 320 GB Filesystem, and the source directory on that drive is referred to as PhotoBank.
When I use these designations on the command line, rsync returns error messages stating that no such file or drive exists (note, I have not referenced any files within the directories). If I click on Properties for the 320 GB Filesystem, there is another name designated, a very long combination of numbers and letters (presumably the filesystem's UUID). I've tried using that name, but still get the same error messages from rsync.
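For what it's worth, rsync needs the actual mounted path, not the label the file manager shows, and names containing spaces must be quoted. A sketch, assuming Ubuntu auto-mounted both drives under /media with those labels (the exact paths are assumptions; "ls /media" will show them):

Code:
# trailing slash on the source copies the directory's contents
rsync -av "/media/320 GB Filesystem/PhotoBank/" "/media/SimpleDrive/Backup PhotoBank 31811/"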
Is there a front end for this program that is more graphical in nature? I don't mind using the command line, as I know that once I get this sorted out, it should be no problem, but I'm not too proud to resort to the ultimate in simplicity.
I am seeking to copy the files from the internal drive one time; then I plan to delete the files from the internal, load it back up, copy the new files to the external, and so forth. I'm not really looking to sync the two drives.
I have SLES10-SP3 running on an Intel SR1600URHS board with 3 hot-swap SATA disks configured using mdadm as RAID1 with a hot spare. If I pull one of the active disks, all file I/O stops for about 2.5 minutes, after which it starts again and the RAID array is rebuilt using the spare disk. Is there any way I can reduce these 2.5 minutes of inactivity? I've tried setting /sys/block/sdX/device/timeout and /sys/block/sdX/device/retries to 1 for all disks, but this hasn't made any difference. The output from messages is:
12:11:56: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
12:11:56: ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 cdb 0x1e data 0
12:11:56: res 40/00:03:00:00:20/00:00:00:00:00/b0 Emask 0x4 (timeout)
I have a low-power machine I use as an SFTP server. It currently contains two RAID 1 arrays, and I am working on adding a third. However, I'm having a bit more trouble with this array than I did with the prior arrays. My suspicion is that I have a bad drive; I am just not sure how to confirm it. I have successfully formatted both drives with EXT3 and performed disk checks on both, which did not indicate a problem.
I can see it progressing in the blocks count, but it's incredibly slow: in the course of 5 minutes it progressed from 1024/1953511936 to 1088/1953511936. Checking top, not even 10% of my CPU is being used. Are there any other performance items I could check that could be affecting this?
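One way to check the bad-drive suspicion, assuming smartmontools is installed and the suspect disk is /dev/sdc (a placeholder name):

Code:
# dump SMART health and attributes; Reallocated_Sector_Ct and
# Current_Pending_Sector are the usual red flags
smartctl -a /dev/sdc
# kick off the drive's extended self-test, then read the result later
smartctl -t long /dev/sdc
smartctl -l selftest /dev/sdc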
I have 2 servers, both with software RAID 5 for the main storage partitions: 6 x 1.5 TB drives. I observe that these arrays seem to resync quite often. cat /proc/mdstat and smartctl -a do not show any problems with the drives. Is this normal? What is the default period between resyncs? What would cause a resync of the array? The standard resync time for the array is over 900 minutes, which becomes an inconvenience for the server's users if it occurs on a weekday, as it degrades performance.
Compared to my laptop with a 5400 rpm HD, the write performance of RAID1 on an Ubuntu Lucid server is unacceptable. In the beginning, I installed Ubuntu 9.04 Server (alternate) using RAID1 with two 7200 rpm WD 1TB HDs (Green Power) and then dist-upgraded to 9.10 and then to 10.04.
I guess the write performance was initially reasonable, since the installation and data migration (copying from another computer over the LAN) didn't take too much time. However, after upgrading the server to 9.10 or so, I found that large file uploads through samba or ftp tend to block and time out. Changing the daemon or the client program made no difference, so I tried to test the read/write performance on the server itself to figure out the situation.
To my surprise, using strace I found that even a simple program like cp would eventually get blocked in a write() system call for tens of seconds. Hence, I performed another disk-write test using dd, for data sizes ranging from 50MB to 1GB. The performance test commands are listed as follows:
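A typical dd write test of this kind (the path and size here are assumptions, not the poster's exact commands) might be the following; conv=fdatasync makes dd include the final flush in the measured time, which matters for exactly the sync behaviour described below:

Code:
dd if=/dev/zero of=/mnt/raid/test.img bs=1M count=200 conv=fdatasync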
If the data to write is 150MB or less, the command returns immediately at a very high apparent speed, but the RAID disks then start syncing and stay busy, so the terminal prompt seems to freeze. I think this behavior is normal under a RAID1 configuration, isn't it?
But when the data size is 200MB, the test command blocks for seconds and the write speed is measured at about 16.6MB/s. Of course, the RAID disks still start to sync and stay busy afterwards. Next, I tested writing 1GB of data. The command blocks for about 770 seconds (<2MB/s), while the same test runs in only 17.49 seconds (60MB/s) on my laptop.
I also burned a Lucid LiveCD to boot the server and mounted the RAID device to run the test again, but the results remained similar. Does that mean that even if I re-install the system on the RAID, the problem will never disappear?
PS: the disks run in UDMA6 mode, unchanged.
On the ReadyNAS Duo, a RAID resync synchronizes the entire contents of one disk to another. It's a slow operation. So it's not something to be done lightly. By default, it is done whenever the unit is shut down uncleanly (generally the result of a hang or power interruption). It's something that can be disabled, though, such that a resync would be done only when manually requested. The authors of the software RAID driver in Linux (called "md") came up with a nice solution to the problem of slow RAID resyncs: the md driver allows you to assign a write intent bitmap to an md device, and where it's stored is configurable (the default is to store it on the md device in question, in the superblock). So when the array goes down unexpectedly (e.g., in a power outage), only the blocks that the md device was going to write will actually be resynced.
It's sort of like a journal for the md device itself. Now, the ReadyNAS Duo doesn't seem to use Linux md at all for implementing the RAID volume, but it does have an option to disable automatic resync in the event of an improper shutdown. And that leads to my question: how necessary is it to do a resync in such an event? If the Duo uses the same method as the Linux md driver, then obviously such a resync would rarely be necessary. But if it doesn't, then it would be the equivalent of using the md driver without a write-intent bitmap, and a resync would almost certainly be necessary in the event of an improper shutdown. Does the Duo (and maybe other units) use the equivalent of a write intent bitmap for RAID write operations?
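For comparison, on a stock Linux md setup the write-intent bitmap described above can be switched on after the fact; a minimal sketch, assuming the array is /dev/md0:

Code:
# add an internal write-intent bitmap (stored in the md superblock area)
mdadm --grow /dev/md0 --bitmap=internal
# /proc/mdstat now shows a "bitmap:" line for the array
cat /proc/mdstat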
Yesterday I installed a new server with a large partition for my Xen images, about 930GB. The installation took ages, and after it finished I found out why: the SoftRAID1 I configured was rebuilding the large partition.
We have had a hard disk crash in our RAID1 webhosting server running CentOS 5 and Plesk. We first realized something was wrong when our main site didn't load but showed MySQL errors. We then found out that the system was in a read-only state, something that had also happened the day before yesterday, but which we could fix with an FSCK. The system then worked well until around 18 hours later, when it crashed with the same symptoms. So we rebooted the server and wanted to do a filesystem check again, but the HDD wouldn't even load. It was gone. Unfortunately, nobody had realized that the second disk in the system had also stopped working some time ago. Fortunately, we had our main site backed up externally. So we re-installed a fresh box and mounted the two drives on the system. We checked the hard disks: one is practically empty (the older one), and the other has almost only files in lost+found, but these are all "numbered", with no real filenames.
I have three raid volumes, md0, md1, and md2. The weekly raid-check cron always causes md0 and md2 to get checked but never md1.
/etc/cron.weekly/99-raid-check
Volumes md0 and md1 are on the same 8 disks. Volume md0 is a small raid1 across all 8 disks for boot and md1 is a raid5 that occupies the rest of those 8 disks. Volume md2 is a raid5 on a separate set of 6 drives. When I check /var/log/messages* I see that every Sunday md0 and md2 get checked but md1 is never checked. However, when I run 99-raid-check manually, all three volumes get checked. There is clearly something different between how crond runs that script and how I run it, but I can't imagine what that would be.
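For reference, the RHEL-style raid-check script ultimately just writes to sysfs, so an individual array can be checked by hand; a sketch, assuming stock paths:

Code:
# manually trigger a check of md1 only
echo check > /sys/block/md1/md/sync_action
# progress appears in /proc/mdstat; mismatches are counted here
cat /sys/block/md1/md/mismatch_cnt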
I'm using the AutoMySQLBackup [URL] script to back up MySQL databases from several servers. I need a little help from someone who is a scripting guru: I would like the weekly backup to run every two weeks instead of every week, i.e. every 14 days. This is the weekly backup portion of the script:
Code:
# Weekly Backup
if [ ${DNOW} = ${DOWEEKLY} ]; then
	${ECHO} Weekly Backup of Database \( ${DB} \)
	${ECHO}
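One possible tweak, a sketch rather than a tested patch, assuming the script runs under bash: gate the weekly branch on even ISO week numbers so it fires only every other week (WNOW is a new variable, not from the original script):

Code:
# ISO week number, 01..53; the 10# prefix avoids octal parsing of leading zeros
WNOW=$(date +%V)
if [ ${DNOW} = ${DOWEEKLY} ] && [ $((10#${WNOW} % 2)) -eq 0 ]; then
	${ECHO} Weekly Backup of Database \( ${DB} \)
	${ECHO}
fi

Around the year boundary the week parity can repeat, so a stored last-run timestamp would be more robust; the parity check is just the smallest change.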
I currently have a problem running rsync on 64-bit Debian Jessie (although the problem also occurred with 64-bit Debian Wheezy). I am trying to use rsync to archive my home directory (which is on a hard disk) to a USB memory stick. The home directory is about 18GB in size and the memory stick has 32GB.
Unfortunately, rsync hangs after copying a certain number of files, and the process eventually has to be killed. Rsync was rerun but hung again at about the same point as before. This has now happened several times; each time the hang occurs at about the same point. Use of strace after the hang shows that rsync appears to be processing a pdf file at the time, although not always the same pdf file. I originally had the rsync hang problem on a PC which ran 64-bit Wheezy and which used a USB 2.0 port.
I am now running rsync on another PC, which runs 64-bit Jessie and which uses a USB 3.0 port. I have also tried three different USB sticks, two from one manufacturer and the third from another manufacturer. All give similar rsync hangs.
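One common cause of apparent rsync hangs on slow USB media is the kernel buffering gigabytes of dirty pages and then stalling writers while it flushes them; whether that is what is happening here is only a guess. A diagnostic sketch, with the sysctl values as illustrative assumptions:

Code:
# watch how much dirty data is waiting for writeback while rsync runs
watch -n 5 'grep -E "Dirty|Writeback" /proc/meminfo'
# shrinking the writeback thresholds can smooth out the stalls
# (flush in the background at 16MB, block writers at 48MB)
sysctl -w vm.dirty_background_bytes=16777216
sysctl -w vm.dirty_bytes=50331648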