Ubuntu :: Lucid Upgrade Killed Raid5 Md Array?

Jun 7, 2010

I'm a bit sick to my stomach right now. I had a raid5 array (5x1.5TB drives) and I upgraded to lucid, and now the array no longer works. Initially, on boot, it would try to mount it from fstab, and that failed consistently because the array wasn't being assembled.

Then I tried to assemble it by hand (--scan) and that seemed to cause it to mount degraded (it seems md in the process tried to use one of the disks for something else!), but when I look at its partition table, it's empty. Pretty pissed at the moment (somewhat at myself; I didn't really need to upgrade). Any ideas what went wrong?
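A cautious first step here, before anything that writes to the disks, is to see what mdadm still finds on each member; the device names below are assumptions, so substitute your own:

Code:
cat /proc/mdstat
sudo mdadm --examine /dev/sd[b-f] | grep -E 'UUID|State|Event'

If the superblocks and event counts on the members still agree, the data is likely intact and only the assembly is broken.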

View 2 Replies



Ubuntu :: Upgrade To Natty Fails To Boot (RAID5 Array)

May 20, 2011

My box has a raid5 array (mdadm) with everything on it (/boot and /) except swap, which is spread across the 4 drives. I had ubuntu 10.10 (amd64) installed with grub1; when I upgraded to natty (11.04) it automatically installed grub2. Well, boot fails: it always goes to grub rescue no matter what happens. I've installed and reinstalled grub2, and boot always fails with:

"error: file not found".

In grub rescue I can see that md0 is actually available; an "ls" of (md0)/boot succeeds, but the strange thing is that an "ls" of (md0)/boot/grub prints nonsense, as does an "ls" of (md0)/boot/usr/lib/grub/i386-pc/. When I try to load the required modules for boot (linux, raid, etc.) in grub I also always get a "file not found" error (I fsck'd md0, which says everything's fine). I have installed the latest version of grub2 and executed grub-install on all four drives.
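For what it's worth, a typical recovery sequence from a live CD is to assemble the array, chroot into it, and reinstall grub from inside the real system so it picks up the mdraid modules; this is a sketch only, and the drive names are assumptions:

Code:
sudo mdadm --assemble --scan
sudo mount /dev/md0 /mnt
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt
grub-install /dev/sda    # repeat for sdb, sdc, sdd
update-grub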

View 6 Replies View Related

General :: Raid - Recover Software RAID5 Array After Server Upgrade?

Jun 7, 2011

I recently upgraded a server from Fedora 6 to Fedora 14. In addition to the main hard drive where the OS is installed, I have 3 1TB hard drives configured for RAID5 (via software). After the upgrade, I noticed one of the hard drives had been removed from the raid array. I tried to add it back with mdadm --add, but it just put it in as a spare. I figured I'd get back to it later. Then, when performing a reboot, the system could not mount the raid array at all. I removed it from the fstab so I could boot the system, and now I'm trying to get the raid array back up.

I ran the following:

Code:
mdadm --create /dev/md0 --assume-clean --level=5 --chunk=64 --raid-devices=3 missing /dev/sdc1 /dev/sdd1

I know my chunk size is 64k, and "missing" is for the drive that got kicked out of the array (/dev/sdb1). That seemed to work, and mdadm reports that the array is running "clean, degraded" with the missing drive. However, I can't mount the raid array. When I try:

Code:
mount -t ext3 /dev/md0 /mnt/foo

I get:

mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try

[code]....
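Since --create --assume-clean does not touch the data itself, a likely culprit at this point is simply device order: the recreate told md that sdc1 is slot 1 and sdd1 is slot 2, and if the original order was different, the stripes no longer line up and the filesystem superblock looks like garbage. A hedged experiment is to stop the array and recreate with another ordering, then attempt a read-only mount:

Code:
mdadm --stop /dev/md0
mdadm --create /dev/md0 --assume-clean --level=5 --chunk=64 --raid-devices=3 missing /dev/sdd1 /dev/sdc1
mount -o ro -t ext3 /dev/md0 /mnt/foo

It may take trying each permutation, including moving the "missing" slot around; mount read-only while experimenting.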

View 1 Replies View Related

General :: Convert Full-disk RAID5 Array To Partition-based Array?

Dec 23, 2010

I have a RAID 5 array, md0, with three full-disk (non-partitioned) members: sdb, sdc, and sdd. My computer will hang in the AHCI BIOS when AHCI is enabled instead of IDE while these drives are plugged in. I believe it may be because I'm using the whole disk, and the AHCI BIOS expects an MBR to be on the drive (I don't know why it would care).

Is there a way to convert the array to use members sdb1, sdc1 and sdd1, partitioned MBR with 0xFD RAID partitions?
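There is no in-place conversion flag, but the usual approach is one member at a time: fail a disk out, partition it, add the partition back, and let the resync finish before touching the next disk. One caveat: the partition is slightly smaller than the raw disk, so md may refuse the re-add unless the per-device size is first reduced a little with mdadm --grow --size. A rough sketch (back up first; device names from the post above):

Code:
mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100% set 1 raid on
mdadm /dev/md0 --add /dev/sdb1
# watch /proc/mdstat and wait for the resync before repeating for sdc and sdd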

View 1 Replies View Related

Ubuntu :: Can't Add A New Disk To A RAID5 Array

Jun 10, 2011

I am trying to build a new array after adjusting TLER on my disks, which permanently changed some of the drives' sizes. I am not sure if the following inconsistencies are related to the newly mismatched drive sizes.

Using:

Code:
mdadm --create --auto=md --verbose --chunk=64 --level=5 --raid-devices=4 /dev/md1 /dev/sdd /dev/sde /dev/sdf /dev/sdg
Nets me (build-time was two full days):

[Code]....

On a side note, since I'm recreating my array from scratch, I was wondering if anyone here knows of any optimized settings I could use. I've got 3TB of data to transfer, so lots of test material.

These are Western Digital First Generation 2TB Green Drives (WD20EADS-00R6B0) with WDidle3 fix applied & TLER=ON. These are pre Advanced Format (aka not 4K).

Code:
mkfs.ext4 -E stripe-width=48,stride=16 /dev/md1
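For reference, those mkfs numbers are internally consistent: with 4KiB filesystem blocks, stride = 64KiB chunk / 4KiB = 16, and stripe-width = stride x (4 drives - 1 parity) = 16 x 3 = 48.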

View 9 Replies View Related

Ubuntu :: Extend Raid5 Array With One Disk

Mar 6, 2011

I wanted to extend my raid array with one disk, but I made a major error: I forgot to partition the new disk to utilize the full 640GB. I used the following commands to extend the array:

Code:
mdadm --add /dev/md0 /dev/sdf
mdadm --grow --raid-devices=6 /dev/md0
xfs_growfs /dev/md0

After noticing that something was wrong I used these commands to remove the new disk:

[Code]....

How can I repair this situation? Before starting this adventure I made a back-up of everything that was stored in the raid array.
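Assuming the array came out of this merely degraded (one slot missing), the least painful repair may be to partition the new disk properly and add it back, returning the array to a clean 6-device state; since the data was already grown across 6 devices, going back down would mean shrinking the filesystem, and XFS cannot be shrunk. A sketch, with /mnt/raid standing in for the real mount point:

Code:
parted -s /dev/sdf mklabel msdos mkpart primary 1MiB 100% set 1 raid on
mdadm --add /dev/md0 /dev/sdf1
# after recovery completes (watch /proc/mdstat):
xfs_growfs /mnt/raid   # note: xfs_growfs wants the mount point, not /dev/md0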

View 1 Replies View Related

Ubuntu :: Reshaping RAID5 Array To RAID6?

May 30, 2011

I am running lucid and have a 4+1(spare) RAID5 array made up of 1TB disks. I upgraded my mdadm to version 3.1.4 and then performed the following operation:

$ sudo mdadm --grow /dev/md3 --level=6 --raid-devices=5 --backup-file=/var/lib/mysql/md3backup

I have a 500GB drive mounted at /var/lib/mysql which is mostly empty and not part of any RAID array. The reshaping started and everything looked OK. The access lights on the 5 drives were all coming on at the same time on a regular basis. The status from /proc/mdstat showed the array being reshaped to RAID6, albeit slowly. The status showed an average speed of 4000KB/s and an estimated completion time of 4000 minutes. This all seemed reasonable. This was performed in late afternoon.

The next morning I checked the status and the average speed was down to 300-400KB/s and the estimated time to complete was 40,000 minutes. When I look at the drive lights, I have one drive whose access light is on solid and the other four drives come on intermittently. Running iotop doesn't show anything useful; mdadm and kjournald show up occasionally. The same is true for top (running on an i5 2500K Intel processor). Here is the output of cat /proc/mdstat:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sde3[4](S) sda3[3] sdc3[1] sdd3[2] sdb3[0]
987904 blocks [4/4] [UUUU]

[code]....

My biggest concern is keeping this system running for 20+ days without any hiccups.
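On the speed question: the md rebuild/reshape rate is clamped by two sysctls, and the per-disk floor defaults to 1000KB/s; raising the minimum can keep the reshape moving when other I/O competes with it. The value below is illustrative:

Code:
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
sudo sysctl -w dev.raid.speed_limit_min=50000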

View 1 Replies View Related

Ubuntu :: Mdadm RAID5 Array Wont Start?

Jul 27, 2010

after a failed upgrade from 9.10 to 10.04 I had to format my computer and do a clean install of 10.04, and now my mdadm raid5 array won't start. My array is called "The Library", and I believe the space between "The" and "Library" is causing the command disk utility uses to start the array to fail. The exact error is: An error occurred while performing an operation on "The Library" (RAID-5 Array): The operation failed

Error assembling array: mdadm exited with exit code 1: mdadm: unrecognised word on ARRAY line: Library
mdadm: unrecognised word on ARRAY line: Library

[code]....
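One workaround, assuming the array metadata itself is fine, is to identify the array in /etc/mdadm/mdadm.conf by UUID alone, so the unquoted name never reaches the parser. The UUID here is a placeholder; take the real one from mdadm --detail /dev/md0:

Code:
# /etc/mdadm/mdadm.conf
ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx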

View 1 Replies View Related

Ubuntu :: Rebuilding RAID5 Array From Failed Appliance?

Sep 16, 2010

This isn't exactly Ubuntu specific, but I do plan on using Ubuntu to try to recover this array. I've been using a Freedom9 freestor 4020 for the past few years and, other than it totally blowing up last week, it's been pretty good. I was on vacation for almost a month and a few days after I returned my NAS (freestor 4020) started acting up. I tried a few power cycles, but was dismayed to see that I could not log in via browser or SSH (SMB shares were not accessible either). A drive failure light is supposed to illuminate if a disk fails, but no dice.

I plugged all 4 drives from the NAS into an Ubuntu 9.04 Desktop system and one started throwing out all kinds of errors. Thinking that it would be a simple rebuild, I went to my local computer shop and picked up another 500GB drive (same manufacturer/part #), replaced the failed drive, and powered up the NAS again... Nothing. I left it for 12 hours, then powered it down and plugged the new drive into my linux box again to see if it had rebuilt... the drive was a virgin. What gives me hope that I can still recover the data is that Ubuntu sees "RAID components" on the drives (through disk manager and parted), and gives me the option of initializing the array.

My plan of attack is to plug all of the drives back into my Ubuntu box, initialize the RAID array via LVM, and pretty much hope for the best. The data is not uber critical, but it would be a pretty big pain in the behind to rip/upload all the software that was on it (ripping hundreds of DVD/CD images is not fun). If my Ubuntu box can make sense of this newly initialized/mounted RAID set... I'll plug in a 2TB external drive, copy the data over, and rebuild the NAS from scratch, then put my data back on (perhaps a different unit, or something running openfiler).
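Before initializing anything, it is probably safer to assemble read-only and just look, since an "initialize" path in some tools writes fresh metadata over what is there. A hedged look-first sequence (member names assumed):

Code:
sudo mdadm --examine /dev/sd[bcde]1
sudo mdadm --assemble --readonly --scan
cat /proc/mdstat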

View 2 Replies View Related

Ubuntu Servers :: Rebuild Raid5 Array After Every Restart?

Jul 8, 2011

I have a software raid 5 array; each time I reboot my server, I have to rebuild the array again. Rebuilding the array takes too long. I am using ubuntu server 10.10.
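The usual cause on Ubuntu is that the array is not recorded in the mdadm.conf baked into the initramfs, so every boot starts a fresh, incomplete assembly. A sketch of the standard fix:

Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u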

View 7 Replies View Related

General :: RAID5 Array Intermittently Rebuilding?

Jul 17, 2009

I have a 9x320G RAID5 array that I am migrating over to a 3x1.5T RAID5 array. Intermittently, a drive would drop out of the older array and it would automatically start rebuilding. I thought it was a bad cable or controller somewhere, so when I bought the three new drives, I bought a new controller for them all, too. I'm running both arrays side by side until I'm happy the new hardware is stable (one drive was DOA). Then I noticed one morning that both arrays were rebuilding themselves. This was in /var/log/messages:

Quote: Jul 5 00:30:19 mnemosyne -- MARK --
Jul 5 00:50:19 mnemosyne -- MARK --
Jul 5 01:06:02 mnemosyne kernel: md: syncing RAID array md0

[code]....
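Worth noting: if this is a Debian-based system, a cron job in /etc/cron.d/mdadm kicks off a redundancy check of every array in the early hours of the first Sunday of each month, and "md: syncing RAID array" at 01:06 on July 5th (a Sunday) fits that pattern; a check is a scrub, not a rebuild, and touching both arrays at once is expected. You can confirm what md is actually doing:

Code:
cat /sys/block/md0/md/sync_action   # "check" = scheduled scrub, "recover" = real rebuild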

View 4 Replies View Related

Hardware :: Can't Get A Drive From A Former RAID5 Array To Format?

Jun 15, 2011

I'm a bit at a loss on this one. I couldn't get a drive from a former RAID5 array to format. I did a dd to write zeros to the drive and attempted to fsck, only to be stopped every time with the error: Couldn't find ext2 superblock, trying backup blocks... fsck.ext3: Bad magic number in super-block while trying to open /dev/sda1

Smartctl shows no problems with the drive (a Seagate 750GB), but I haven't removed it and thrown it in a windows machine to do Seagate's proprietary drive diagnostics yet. Running CentOS 5.6. I've never had this problem before. The drive is not mounted and the old md device has been removed as far as I can tell. It could still be attempting to assemble the RAID5 with the 1 drive, but I didn't see it attempt to do so.

[Code]...
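Two hedged observations: fsck only checks an existing filesystem, so on a freshly zeroed partition it will always report a missing superblock until mkfs has run; and the 0.90 md superblock lives near the end of the device, so a dd from the start will not erase it unless it runs to completion. The cleaner sequence is usually:

Code:
mdadm --stop /dev/md0              # if anything auto-assembled the old array
mdadm --zero-superblock /dev/sda1
mkfs.ext3 /dev/sda1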

View 3 Replies View Related

Server :: Recovering Software RAID5 Array?

Oct 29, 2010

I've had software RAID 5 arrays for a while now, so they were set up before a RAID array could be partitioned. I had two separate RAID 5 arrays on the same set of drives. One was for / and the other for /home. I moved the / to an SSD and figured I'd expand the other RAID array by failing a drive, repartitioning it then adding it back in. After repeating for the remaining drives, I could then expand the RAID array to use the full size of the drives.

Partway through the second drive being added back in, the RAID array stopped with a kernel error. The drive I was adding and another drive both showed as failed. I couldn't restart the array so I copied the failed drive (Seagate's SeaTools did show it as faulty, but without SMART being tripped) to a new one and tried again. dd_rescue reported the drive copied correctly but I still couldn't restart the array.

So I tried the old standby of recreating the array. This allows me to start it but the ext3 file system won't mount. So I then tried my script (listed in another thread) to try every combination of drives to assemble the array and mount the file system. Still no luck.
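One detail that bites recreates of old arrays: arrays from that era carry 0.90 superblocks, while a newer mdadm defaults to 1.x metadata, which sits at a different offset and shifts where the data starts, so a recreate with default metadata will never mount even when the drive order is right. Worth pinning explicitly; a sketch only, with hypothetical device names:

Code:
mdadm --create /dev/md0 --assume-clean --metadata=0.90 --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2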

View 2 Replies View Related

Software :: Raid5 Array Not Ready For Mounting

Mar 19, 2011

Yesterday I created a raid5 array /dev/md0 consisting of 5 hard disks, named sda through sde at the time of creation. After that I stored some data in the array without any difficulties, then shut down the computer. Early this morning when starting the computer I got a message that /dev/md0 was not ready to be mounted. So I checked the raid array and discovered that the enumerator had been messing with the hard disks: hard disk sda was now sdc, etc. After I rebooted, the hard disks got their original names again: sda was sda again. When I mounted the array, no problems occurred. So it seems that the order in which the hard disks are enumerated influences the availability of the raid array. Is there a way to avoid this kind of problem with a raid array?
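The standard guard against enumeration shuffling is to never reference members by device name: mdadm can locate them by the array UUID no matter what letters they get, and the mount can go by filesystem UUID. Roughly, with placeholders for the real values:

Code:
# /etc/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx   # from: mdadm --detail /dev/md0
# /etc/fstab
# UUID=<from blkid /dev/md0>  /data  ext4  defaults  0  2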

View 4 Replies View Related

CentOS 5 :: 5.3 And LSI Megaraid 8204ELP - Won't See Raid5 Array

Jun 26, 2009

I have looked thru the forums and I am not sure if the LSI 8204ELP definitely works with CentOS 5.3, 5.2, or 5.1 or not. Can anyone who has had a positive experience with this hardware combination give some feedback? The mobo is a Supermicro C2SBX [URL]

View 2 Replies View Related

CentOS 5 :: Increase Space On RAID5 Array?

Sep 12, 2009

Ok, as the title indicates I have a RAID5 array with 4 500GB SATA drives. This is the only drive configuration on the system (i.e. the OS also resides in the RAID array). I'm running CentOS 5 and need to know how to go about increasing the space in the RAID array by replacing the drives with 4 1TB drives.
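Assuming this is mdadm software RAID, the usual drill is one drive at a time with a full resync between swaps, then grow the array and filesystem once all members are large; with the OS on the array, doing the swaps from rescue media is safer, and the bootloader must be reinstalled on each new drive. A sketch with assumed names:

Code:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# physically swap in the 1TB drive, partition it at least as large, then:
mdadm /dev/md0 --add /dev/sdb1
# after all four drives are replaced and resynced:
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0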

View 3 Replies View Related

Ubuntu Servers :: 10.04 LTS Degraded Software RAID5 Array With LVM Won't Boot?

Jul 5, 2010

I also get sent to a Busybox (initramfs) shell with no text editor and don't know how to copy all the error messages and post them here. If there is a way, let me know. I've typed it out in the meantime:

Code:
md0 : inactive sdxxxx
Attempting to start the RAID in degraded mode...
mdadm: CREATE user root not found
mdadm: CREATE group disk not found

[Code].....

This is with a 3 disk RAID5 array. I turned off the system, pulled out a drive, and started it back up. Fresh install, all I've done so far is apt-get update and upgrade.
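Ubuntu has a switch for exactly this situation; by default the initramfs stops and asks rather than starting a degraded array. Two usual ways to flip it, sketched:

Code:
sudo dpkg-reconfigure mdadm    # answer yes to booting in degraded mode
# or set it directly and rebuild the initramfs:
echo "BOOT_DEGRADED=true" | sudo tee /etc/initramfs-tools/conf.d/mdadm
sudo update-initramfs -u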

View 4 Replies View Related

General :: Restore A Superblock On A RAID5 Software Array?

Aug 10, 2010

I need to restore a superblock on a RAID5 software array, but I'm not sure if I'm meant to restore it on md0 or on a member device such as sda1. From what I read, superblocks are stored on each drive, but I'm not sure if this changes when software raid is in use.
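Two different superblocks tend to get conflated here: each member (sda1, etc.) carries an md superblock describing the array, while the filesystem superblock lives on the assembled /dev/md0. So md-level inspection targets the members, and a filesystem superblock restore targets md0; for ext2/3 the backup location below is the common default for 4KiB-block filesystems, but treat it as an assumption:

Code:
mdadm --examine /dev/sda1     # md superblock, one copy per member
e2fsck -b 32768 /dev/md0      # restore the fs superblock from a backup copy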

View 1 Replies View Related

Software :: Mdadm Shrinking RAID5 Array From 6 To 5 Devices

Feb 15, 2010

I have a problem with my mdadm RAID. I wanted to know if anyone had any experience with shrinking RAID5 arrays. I was growing the array from 5 to 6 devices; however, the grow got interrupted and it has recovered to 5 drives. The 6th drive is toast and I am unable to re-add it to the system. I would like to remove the device listed as "removed". I have tried mdadm /dev/md0 --remove detached and failed with no success. I am running Ubuntu kernel 2.6.28-11 and mdadm is v3.1.1.

Here is output of "mdadm -D /dev/md0":
/dev/md0:
Version : 0.90
Creation Time : Wed Jan 12 00:46:41 2009
Raid Level : raid5
Array Size : 4883812480 (4657.57 GiB 5001.02 GB)
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Raid Devices : 6
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Feb 15 20:25:07 2010
State : active, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K

UUID : 74fa5199:84b88e81:4ae0fbae:92643084
Events : 0.1331010
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 0 3 active sync /dev/sda
4 8 64 4 active sync /dev/sde
5 0 0 5 removed

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sde[4] sda[3] sdd[2] sdc[1]
4883812480 blocks level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]
unused devices: <none>
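Shrinking from 6 back to 5 devices is possible with mdadm 3.1+, but it has to happen in this order, because md refuses to drop a device while the array size still needs it. A rough sketch, assuming ext3/4 on /dev/md0; the target array size is 4 x 976762496 KiB (the Used Dev Size above):

Code:
umount /dev/md0
e2fsck -f /dev/md0
resize2fs /dev/md0 3700G                          # shrink the fs below the 5-device capacity
mdadm --grow /dev/md0 --array-size=3907049984     # 4 x 976762496 KiB
mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-shrink.bak
resize2fs /dev/md0                                # grow the fs back to fill the array

Once the reshape finishes, the "removed" slot 5 should disappear with it.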

View 4 Replies View Related

CentOS 5 :: Upgrading RAID5 Array With Larger Drives?

Feb 17, 2011

I have a CentOS 5 based Linux system with a 3Ware 9550SU RAID card and four 500GB drives set up in a RAID5 array (3 in the array and 1 hot spare).

I want to 'replace' these drives with four 2TB drives without data loss. My server case has a total of 8 drive bays all hot-swap and all attached to the RAID card, this means I have four empty drive bays on the RAID card.

One thought is to put the four new 2TB drives in the empty drive bays, configure them in a new RAID5 array. Then the question is now to I "mirror" the original RAID5 array over to the new one?

This is just a thought though; I am not sure it will work. In short, my question to this forum is: how do I accomplish this upgrade?
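The 3Ware card cannot mirror one unit onto another by itself, so the usual route is exactly the one proposed: build the new 4x2TB unit alongside, copy at the OS level from a quiet system (single-user mode or rescue media), then make the new unit bootable. A sketch, with the new unit assumed to appear as /dev/sdb; note a ~6TB unit needs a GPT label, not MBR:

Code:
parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%
mkfs.ext3 /dev/sdb1
mount /dev/sdb1 /mnt/new
rsync -aHAXx / /mnt/new/     # -x stays on the root filesystem
# then install grub on the new unit and update its fstab before swapping arrays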

View 4 Replies View Related

Fedora Hardware :: Highpoint Rr1740 Existing RAID5 Array On F12?

Mar 23, 2010

I'm currently in the process of getting my server moved over from a Server 2003 machine to one based on Fedora 12. My issue is that I have an existing RAID5 array on a rr1740 card. When I install this in the new system, each individual drive shows up as sdc, sdd, etc., not as one volume. I have tried installing the highpoint driver but I get an error that sata_mv cannot be unloaded. I have tried adding sata_mv to /etc/modprobe.d/blacklist to no avail.

System currently looks like this: ASUS mini ITX Atom D510 based board with 2x onboard SATA; attached to these are a 160GB OS drive and a 400GB data drive. Highpoint RR1740 PCI card with 5x500GB drives in RAID5.

View 3 Replies View Related

Fedora :: RAID5 Failure During The Boot Process, And Can't Find The Array?

Mar 31, 2011

I am hoping to retrieve data off a RAID5 array. History: My system disk was partitioned to 10 gigs and over time was filled up to 100%, locking me out of the GUI (I am a casual user). Now that I am relegated to the CLI, I need help to see if rebuilding the array and keeping the data intact is possible. I have three sata hard drives in the array, and it now appears that during the boot process it fails and can't find the array.

I did the following:
mdadm -D /dev/md0

and here's some of the output:

State: active, FAILED, Not Started

And out of three drives, it states that two have failed. I find this hard to believe as I have had no issues until my system disk was full. A coworker today helped me find some files to delete and now have the system disk down to 97%/8.8 gigs yet I can still not enter the GUI due to the array issue.

Out of the three discs, here are the results (taken also from mdadm -D /dev/md0):

Number Major Minor RAID device STATE
0 8 1 0 active sync /dev/sda1
1 0 0 1 removed
2 0 0 2 removed

Sorry for providing so little, but I am in (Repair Filesystem) mode and only have local access to the machine meaning all outputs will need to be retyped.

I am able to do anything to the box as its only purpose was media storage and serving.
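Two members marked "removed" out of three means the array cannot run, but when the failures were triggered by something transient (a full system disk wedging things, rather than real disk errors), a forced assembly is the usual next step: it reconciles the event counts and brings the freshest members back. A sketch with assumed partition names:

Code:
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
cat /proc/mdstat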

View 14 Replies View Related

OpenSUSE Install :: RAID5 Array Fails To Automount After Reboot

Apr 26, 2009

I've got a RAID5 array that doesn't want to automount after rebooting. I'm pretty familiar with linux, RAID, and mdadm, and up until now, I've had the RAID5 array working just fine. However, whenever I reboot, the array drops off and won't remount until I manually assemble and then mount the thing. I find this odd because I had everything automounting just fine back in 10.3, and even in 11.0 (I think - not sure on that). Currently, things are working, but I'd really like to not have to type

Code:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
followed by
Code:
mount /dev/md0 /mnt/data

every time I reboot. Even including this in some sort of start-up script seems kludgey... Surely there must be a more elegant way of automatically bringing up a RAID5 array after booting? I'm not sure what information you'll need, so I'm going to go ahead and include as much as I can anticipate...

So having already used the commands:

[Code]....
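The non-kludgey mechanism is /etc/mdadm.conf: when the array is listed there, the boot scripts assemble it before fstab is processed, so an ordinary fstab line mounts it; on openSUSE you may also need to rebuild the initrd (mkinitrd) if the array is wanted early. Roughly:

Code:
mdadm --detail --scan >> /etc/mdadm.conf
# and in /etc/fstab:
# /dev/md0  /mnt/data  ext3  defaults  0  2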

View 9 Replies View Related


General :: Mount Hard RAID5 Array With Software RAID0 And GPT?

Aug 8, 2009

I couldn't post in General. It said I had insufficient permissions to post there, so, this post does have to do with Windows slightly. Sorry that it's here, but I DID read the rules (I searched, and couldn't find an answer to my problem either)

Anyways, I have a RAID5 array 2.72TB (4x1TB drives) which I used in my windows installation, initialized as GPT, and I used "span" to make the single 2TB partition, and 720GB partition into one partition. I believe that Windows created a software RAID0. Ok, so now I've made the leap away from windows, and am going 100% into Linux (Debian, to be exact) and I'm trying to figure out how to mount this array. I've only done basic web/ftp/ircd server management on Linux before, and never anything with mounting drives. I'm a complete n00b at this stuff.
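A hedged pointer: a Windows "spanned" volume is a dynamic-disk (LDM) construct layered on top of the hardware RAID, so a plain mount will not see it. The ldmtool utility (from libldm, a newer tool than this post) can expose LDM volumes as device-mapper targets; the volume name below is a placeholder:

Code:
ldmtool scan
ldmtool create all                              # creates /dev/mapper/ldm_vol_* devices
mount -t ntfs-3g /dev/mapper/ldm_vol_Volume1 /mnt/windows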

View 9 Replies View Related

Server :: Recover RAID5 Storage Array DATA Using Xfs_repair?

Jul 5, 2011

I have an Acer Altos EasyStore SATA NAS box that hung; the only way to reboot was to crash the system (unplug it). Upon reboot it was not recognising the hard drives (it wanted to do a destructive reinitialize). Most of the important data was backed up; however, some was overlooked and we'd quite like to get it back. Removing the disks and placing them in a PC with enough SATA bays to cope, and booting with a live linux distribution (System Rescue CD), I can see the 4 drives are not suffering hardware errors and that the original partitions exist. Using mdadm I can assemble the arrays without error (there seem to be three, but the only one I am concerned with is the RAID5 array of about 3TB). /dev/md1p2 mounts as a loop device once an offset is entered. In turn this mounts as an XFS partition. However, despite df showing the partition to be almost full, ls -l or ls -a on the mount point shows it to be empty!

I got this far using a translation from a German language forum; unfortunately I only speak a little German, and the only other English language post on a similar matter I found within that site had no replies. The next step was to unmount the loop, then run xfs_check and xfs_repair on the file system. xfs_check returns that there are a few dir size and offset errors along with link count mismatches. This I would presume normal for a file system that has become slightly corrupted. xfs_repair (version 3.0.3) gets as far as Phase 3; it finds and corrects zero-length entries, offsets on directories and bogus inode numbers. However, the final two lines are:

Code:

realloc failed in blkent_append (2671166480 bytes)
zsh: segmentation fault xfs_repair /dev/loop1

A search on the error (leaving out the byte count) just returns the code that generates it; is anybody able to explain what it means? Also, after remounting the drive, ls and variants of it still do not return anything. Am I missing something (the root I am logged in as now would presumably have different credentials to root on the NAS box, so how do I get around this)?
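On the realloc failure itself: xfs_repair is reporting that it could not allocate another ~2.5GB of memory, which is common for a ~3TB filesystem on rescue media with limited RAM and no swap. Adding swap (on a real disk, not tmpfs) and disabling prefetch before re-running may get it through; treat the sizes as illustrative:

Code:
dd if=/dev/zero of=/mnt/scratch/swapfile bs=1M count=8192
mkswap /mnt/scratch/swapfile && swapon /mnt/scratch/swapfile
xfs_repair -P /dev/loop1

As for the credentials worry: file ownership on disk is just numeric uids, and root is uid 0 on both the NAS and the rescue system, so the live root can read everything once the filesystem itself is healthy.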

View 5 Replies View Related

Software :: RAID5 Fail To Start Array After Power Failure

Apr 23, 2010

I have a RAID5 with a spare (4 disks total). These are the steps that led me to the problem:

1. I was doing I/O on the array.
2. I pulled out a drive manually, so the spare drive took over for the failed one and started rebuilding. Then,
3. in the meantime, I pulled out the power plug of my NAS box.
4. After power-up I saw my array was not active (via the -D option of mdadm). Then
5. I executed: mdadm --assemble --scan /dev/md0 and it gave me

I checked the Linux source and found that bd_claim is a function inside fs/block_dev.c; it is failing, which makes lock_rdev (which calls bd_claim in md.c) fail, so the array cannot be started. I don't know why my RAID is not live after power on.
Please help; at the least, can I save my data?
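A bd_claim failure usually just means something already holds the member devices, typically a half-assembled array left over from the boot scripts. Stopping whatever holds them and then force-assembling is the usual path back to the data; --force may be needed here because the power cut mid-rebuild left the members unclean. Device names are assumptions:

Code:
cat /proc/mdstat       # look for an inactive md device holding the disks
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1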

View 9 Replies View Related

Slackware :: 13 Boot DVD Doesn't Recognize Adaptec 2005S RAID5 Array

Apr 6, 2010

I'm attempting to upgrade a server from Slackware 10.2 to Slackware 13.0. When I boot with the Slackware 13.0 Install DVD, fdisk will not recognize the RAID5 array; it complains about being unable to seek on /dev/sda. The system is a SuperMicro motherboard with onboard SCSI and an Adaptec 2005S Zero-Channel-RAID card; it works fine in Slackware 10.2 using the dpt_i2o driver.

View 1 Replies View Related

Debian :: Disk Health Warning - Disk Part Of RAID5 Array

Feb 17, 2016

I received the following error when I got home from work today. If this was a windows environment, my first inclination would be to boot off my dvd and then run a chkdsk on the drive to flag any bad sectors that might exist. But there's a complication for me.

Code:
This message was generated by the smartd daemon running on:
   host name:  LinuxDesktop
   DNS domain: [Empty]

The following warning/error was logged by the smartd daemon:
Device: /dev/sdc [SAT], 1 Currently unreadable (pending) sectors
Device info:
WDC WD5000AAKS-65V0A0, S/N:WD-WCAWF2422464, WWN:5-0014ee-157c5db9a, FW:05.01D05, 500 GB
For details see host's SYSLOG.

You can also use the smartctl utility for further investigation. The original message about this issue was sent at Sun Feb 14 13:43:17 2016 MST. Another message will be sent in 24 hours if the problem persists.

From gnome-disks
Code:
Disk is OK, 418 bad sectors (28° C / 82° F)

I did a bit of reading and it seems that most people suggest using badblocks to first get a list of bad blocks from the drive and save it to a file, then use e2fsck to mark the blocks listed in the badblocks file as bad on the hard drive. My problem here is that this drive is part of a RAID5 array that hosts my OS, so I wanted to confirm whether this is still the correct process: boot to my Live Debian disk, stop the raid array if it's active, then run the badblocks + e2fsck commands on the drive in question and reboot.
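One hedged correction to that plan: the filesystem sits on the md device, not on the raw member, so block numbers from badblocks on /dev/sdc1 would not map to the right e2fsck blocks anyway. The md-native approach is to let a repair scrub rewrite the unreadable sector from parity, and to replace the disk if the pending count keeps returning; md0 below is an assumption for the array holding sdc1:

Code:
echo repair > /sys/block/md0/md/sync_action    # rewrites bad sectors from redundancy
cat /proc/mdstat                               # watch the scrub progress
# if SMART's pending count doesn't clear, fail the disk out and replace it:
# mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1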

View 1 Replies View Related

Ubuntu Installation :: Upgrade From 8.10 To 9.04 Killed Laptop?

Feb 11, 2010

I have an ancient laptop, an IBM T-20 with a Pentium 3 and 250 megs of RAM, which works great for blogging and email. I have used Ubuntu on it for nearly four years now. Last night I upgraded from 8.10 to 9.04 and now it won't get to the log-in screen. The BIOS splash screen appears, then the GRUB screen, then the Ubuntu trademark screen appears, with the horizontal progress indicator scrolling from left to right. And when it gets all the way to the right, the screen goes blank. Then the screen flashes twice, as if it's trying to display in a resolution it doesn't support, and then it goes blank again. The HD activity light flashes about every five seconds and the keyboard is unresponsive.

View 1 Replies View Related






