I had a working Ubuntu 9.10 install alongside Windows 7. While trying to write an IPCOP [URL] usb-hdd image to a USB drive, I did it incorrectly and ran mkfs on my primary partition, i.e. /dev/sda1, and now I'm stuck. I was trying to write the usb-hdd image of IPCOP to the USB drive, and while doing so I gave the following commands
* Now the stupid thing I did was to give /sda1 (my HDD) instead of /sdb1 (the USB drive), so the bootable IPCOP was written onto my primary drive, and my laptop is not working now.
Now I want to get my partitions and data back, if anybody has an idea of how to get out of this situation. The one good thing is that I haven't tried to install anything on the drive since this happened, in the hope that I can still get my data back. I guess it can be recovered, but I'm confused about which data recovery software to use, and whether to run it under Linux or Windows, since my data was on an NTFS filesystem and the disk currently shows free space and an EXT4 partition.
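If it helps, here is a minimal recovery sketch, assuming you work from an Ubuntu live CD and can copy anything you find to another disk (recovery chances depend on how much of the old NTFS data the IPCOP image and mkfs actually overwrote):

Code:
# Everything below is read-only until you explicitly restore a partition
# table or copy files out. Install testdisk (it also provides photorec).
sudo apt-get install testdisk

# Option 1: search for the old NTFS partition boundaries and restore them
sudo testdisk /dev/sda        # Analyse -> Quick Search / Deeper Search

# Option 2: carve individual files, ignoring the partition table entirely
sudo photorec /dev/sda        # write the recovered files to an external drive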
I've been having some problems with my RAID 5 array, and after extensive investigation I'm fairly sure that my last resort is rebuilding the array. I tried --assemble, because it's a previously created array, but it didn't seem to like that. So I looked into --create, which will re-create the array without destroying the data if the superblocks are persistent, which they seem to be. However, here's what I get:
[Code]....
My question is: why do /dev/sdb1 and /dev/sdi1 show up both as ext2fs and as part of a RAID array?
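A minimal sketch of how one might check this, assuming the device names from the post: an old filesystem signature can survive on a partition alongside the md superblock, so both tools report what they recognise.

Code:
# Inspect the md superblocks on the members (read-only)
sudo mdadm --examine /dev/sdb1
sudo mdadm --examine /dev/sdi1

# Compare with what the filesystem probe sees; a stale ext2 signature can
# coexist with valid RAID metadata on the same partition
sudo blkid /dev/sdb1 /dev/sdi1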
I have two hard disks, sda and sdb. Is it possible to install the Debian root onto a software RAID built from partitions sda2 and sdb1, leaving all the other partitions 'normal' (non-RAID)? Do partitions sda2 and sdb1 need to be exactly the same size and in the same position?
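For what it's worth, a minimal sketch of such a mirror, assuming RAID 1 is the goal (the Debian installer can also create this for you during partitioning). mdadm sizes the array to the smaller member, so sda2 and sdb1 do not need to match exactly, and where they sit on each disc is irrelevant:

Code:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb1
sudo mkfs.ext3 /dev/md0       # then install / mount the root filesystem on /dev/md0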
I have set up a server with the OS on a 250GB disk (sdd) and 3x 1.5TB drives for a RAID5 storage setup (sda, sdb and sdc). Using Webmin I can delete and create a primary partition on each one. But while I can also format sda and sdc, it refuses to do sdb for some reason:

Executing command mkfs -t ext4 /dev/sdb1 ; partprobe ..
mke2fs 1.41.11 (14-Mar-2010)
/dev/sdb1 is apparently in use by the system; will not make a filesystem here!
But I'm not using it:

simon@ubuntu:~$ lsof | grep /dev/sdb
simon@ubuntu:~$ lsof | grep /dev/sdb1
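lsof only shows open files, so it can miss the usual culprits: the kernel's md or device-mapper layers claiming the partition. A few read-only checks that might explain the "apparently in use" message (the dmraid check only applies if that package is installed):

Code:
cat /proc/mdstat                 # is sdb1 already a member of an md array?
sudo mdadm --examine /dev/sdb1   # leftover RAID superblock from a previous setup?
sudo dmsetup ls                  # is it mapped by device-mapper (LVM / dmraid)?
sudo dmraid -r                   # leftover fake-RAID metadata on the disc?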
I turned on my laptop today and noticed a load of unfamiliar startup text, so I knew something was wrong. Now, whenever I start up my laptop, GRUB loads fine, but when I try to start Ubuntu it says the following:
Quote:
mount: mounting /dev on /root/dev failed: No such file or directory
mount: mounting /sys on /root/sys failed: No such file or directory
mount: mounting /proc on /root/proc failed: No such file or directory
[code]....
So all I'm left with is this BusyBox command prompt. I'm on a live Ubuntu CD right now, and if I try to mount /dev/sda1, either in the terminal (with the mount command) or with the GUI, it just gets stuck.
Quote:
This will provide you with a list of your drives and partitions; you need to pick the one that your root file system is installed to. It will be something like /dev/sda1 - but in my case I can't even mount /dev/sda1. When I run sudo fdisk -l, here's what I get
That's weird, because I rebooted with a live Ubuntu CD and didn't even try to mount /dev/sda1 this time. The instructions were to then try to mount the drive from the GUI, so here's what happens when I do that: it attempted to mount it for about a minute and then gave me this error message. When I tried again, here's the error it gave me. The problem seems to be that /dev/sda1 can't be mounted for some reason.
I don't know what that error message means, and I don't know what else I can do to further diagnose /dev/sda1 and find out why it can't be mounted. Ordinarily I'd just reinstall Ubuntu, but I have a couple of lab reports saved on that partition, so I'm in trouble if I can't figure out how to access it.
EDIT: At the end of that other thread someone recommends using testdisk to recover the data from the partition. All I really need to do is get those lab reports back, but I had them saved inside a Windows 7 guest in VirtualBox. Will this complicate matters a lot for me?
UPDATE: I tried to reinstall Ubuntu and it wouldn't work. It seems this is a bigger problem than I suspected. Does this mean my hard drive is corrupted? I can still use the Windows 7 partition, but I take it that the Ubuntu installer not working is a bad sign.
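A possible way forward from the live CD, hedged because it depends on whether the disc itself is failing or the filesystem is merely dirty. Note that since the lab reports were saved inside a Windows 7 guest, the file that actually needs recovering is the VirtualBox virtual disk (typically a .vdi under ~/.VirtualBox/HardDisks; that path is an assumption), which can then be attached to a new VM on a healthy machine:

Code:
# Read-only checks first
sudo fdisk -l                 # does the partition table still list sda1?
dmesg | tail -50              # repeated I/O errors here usually mean failing hardware
sudo smartctl -a /dev/sda     # SMART health (package: smartmontools)

# If the hardware looks fine, a filesystem check may make sda1 mountable again
# (only run this on an unmounted partition):
sudo fsck -f /dev/sda1

# testdisk can list and copy files out of a partition that will not mount
sudo apt-get install testdisk
sudo testdisk /dev/sda        # Advanced -> [List], then copy the .vdi to another drive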
I have a question about LVM. My /dev/sda disk is partitioned into Windows NTFS on sda1, a Linux /boot partition on sda2, and the Fedora 10 root (/) LVM partition on sda3. I have moved my Windows XP to VMware on the Linux system and would now like to add the sda1 partition to the root LVM volume group.
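A minimal sketch of how that is usually done, hedged because the volume group and logical volume names below (VolGroup00, LogVol00) are just common Fedora defaults - check yours with vgdisplay and lvdisplay - and because this wipes the NTFS data on sda1, so copy off anything you still need first:

Code:
sudo pvcreate /dev/sda1                              # turn sda1 into an LVM physical volume
sudo vgextend VolGroup00 /dev/sda1                   # add it to the root volume group
sudo lvextend -l +100%FREE /dev/VolGroup00/LogVol00  # grow the root logical volume
sudo resize2fs /dev/VolGroup00/LogVol00              # grow the filesystem to match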
I have installed Oracle Enterprise Linux on VMware with 20 GB allocated to the guest OS. Now I want to install Oracle Apps in the guest OS, so I need to extend the volume. I have extended it in VMware, but I still have to repartition inside the guest OS, and for that I am using GParted. But I am unable to extend sda1; I need all of the unallocated space allocated to sda1. Here is the screenshot. Right now, when I run df -h in a terminal, I get 18 GB of space available on sda1; I want to make it 200 GB so that I can install Oracle Apps in it. Check out my screenshot.
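A hedged sketch of the usual approach, assuming sda1 is not on LVM and the unallocated space sits directly after it (if a swap or extended partition lies in between, it has to be moved or recreated first). The root partition cannot be resized while it is mounted, so boot the guest from the GParted Live ISO and do the resize from there:

Code:
# Inside the GParted Live environment:
sudo fdisk -l /dev/sda     # confirm the layout and where the free space actually is
# In the GParted GUI: select /dev/sda1 -> Resize/Move -> drag it over the
# unallocated space -> Apply (this grows the partition and the filesystem)

# Back in the guest, verify the new size:
df -h /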
I've dual-booted Ubuntu and Windows for years now, and I've installed OSx86 on a separate drive which Grub2 picked up automagically, and everything has been working great - except I'm out of space. So I bought a 1.5 TB drive and installed Win7 into sda1 (100MB NTFS bootloader for Windows) and sda2 (50 GB NTFS Windows drive). I now want to install two or three flavors of Linux: I'm thinking Ubuntu 10.04, Debian 5.05, and (if I'm bold enough) Gentoo, each in a 50GB partition. I've already partitioned the drive a bit, putting a 1.2 TB shared NTFS partition at the end (sda10) and a 2 GB swap partition just before that (sda9). My questions are:
(1) Can all my Linux distros share that 2GB swap (as in the fstab sketch below), or does each need its own dedicated swap partition (installers generally assume you do)?
(2) Can I repartition the space in the middle of the drive without messing with Windows (sda1 & sda2) and the shared partition (sda10)?
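On question (1), sharing one swap partition between distros is normally fine, because only one of them runs at a time; a minimal sketch (the UUID is a placeholder). Two caveats: an installer that reformats the swap with mkswap changes its UUID, so the other distros' fstab entries would need updating afterwards, and sharing becomes unsafe if any of the systems hibernates to that swap.

Code:
sudo blkid /dev/sda9        # note the swap partition's UUID
# then in each distro's /etc/fstab:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  swap  sw  0  0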
So I am reaching an unfortunate conclusion. I asked Google and got no straight answer, so I conclude that it is impossible. Taking a look at GParted from my 10.04 boot disk, I see:
/dev/sda1  NTFS  74GB  boot flag
unallocated  unformatted  7.84GB  no flag
So I assume that that 8GB used to be Ubuntu.
In the process of trying to fix things, the computer has also stopped booting Windows.
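If the NTFS partition itself is intact and only the boot code in the MBR was damaged (for example, a GRUB that still points at the deleted Ubuntu partition), two hedged options for getting Windows booting again:

Code:
# Option 1: from a Windows 7 install/repair disc -> Repair your computer ->
# Command Prompt:
#   bootrec /fixmbr
#   bootrec /fixboot

# Option 2: from the Ubuntu live CD, write generic MBR boot code
# (this does not touch the partition table or the NTFS data):
sudo apt-get install lilo
sudo lilo -M /dev/sda mbr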
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

As I don't want to ruin what may be the small chance I have left to rescue my data, I would like to hear the input of this wise community first.
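Before any --create, it may be worth comparing what the superblocks still say and trying a forced assemble, which is far less destructive because it only bumps the event counters on the out-of-date members instead of writing brand-new metadata. A hedged sketch (it may still fail if the re-add as spare already overwrote the members' role information):

Code:
# Read-only: dump the superblock of every member and compare the
# "Events" counters and device roles
sudo mdadm --examine /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# Try a forced assemble first
sudo mdadm --stop /dev/md1
sudo mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# Only if that fails should --create --assume-clean be considered, and then
# the original device order, metadata version and chunk size must match exactly.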
I have two hard drives inside an Ubuntu 9.10 machine, each with a data partition on it. What I want is that, as I save files to one data partition (or delete them), the files are copied to (or deleted from) the other data partition automatically. I do not want to start using RAID or end-of-day incremental backups.
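One way to get near-real-time mirroring without RAID is an rsync loop driven by inotify; a minimal sketch, assuming the inotify-tools package is installed and with placeholder mount points:

Code:
#!/bin/sh
SRC=/mnt/data1/        # partition you work on (trailing slash matters to rsync)
DST=/mnt/data2/        # partition that should mirror it

# Bring the mirror up to date once, then re-sync on every change in SRC
rsync -a --delete "$SRC" "$DST"
while inotifywait -r -e create,modify,delete,move "$SRC"; do
    rsync -a --delete "$SRC" "$DST"
done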
How do I recreate an LVM RAID 1 partition without destroying the data on the discs? I have a 650GB data partition, a RAID 1 array with ext3. Two days ago the system (Ubuntu 9.04) started refusing to write to it, claiming no space left on device - even though there is about 102GB free and the disk is only 85% full (according to df)! Interestingly, removing a couple of GB did not help; after a reboot the disk was full again.
I did the "tune2fs -m 0" trick and then forced a file check on the next reboot with "sudo touch /forcefsck"... and the result is that the RAID partition is gone. I have the two physical drives /dev/sd*, unmounted, but /dev/md1 is no longer there. What can I do to re-create it without losing the data? I realized that I ran tune2fs on the physical partitions /dev/sd* - was I supposed to run it on /dev/md1?
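A hedged sketch of how /dev/md1 can usually be brought back without touching the data, assuming the two member partitions still carry their md superblocks (the member names below are placeholders; substitute your actual /dev/sd* partitions). And yes, tune2fs and fsck should be run against /dev/md1, never against the underlying members:

Code:
# Read-only: do the members still have RAID superblocks?
sudo mdadm --examine /dev/sda1 /dev/sdb1

# If so, re-assemble the array from them
sudo mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1
# or let mdadm find it from the superblocks on its own:
sudo mdadm --assemble --scan

# If "no space left" comes back even with free blocks, check the inodes too:
df -i /mount/point/of/md1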
My data is in /dev/sda5 (ext3). There's an empty partition, /dev/sdb5 (ext3). I want /dev/sdb5 to be a mirror image of /dev/sda5; the command to invoke should be:
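One possibility (my assumption of what was intended, not necessarily the command the poster had in mind) is a file-level mirror with rsync, since both partitions already have filesystems:

Code:
sudo mkdir -p /mnt/src /mnt/dst
sudo mount /dev/sda5 /mnt/src
sudo mount /dev/sdb5 /mnt/dst
sudo rsync -aHAX --delete /mnt/src/ /mnt/dst/   # trailing slashes matter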
After an install of SUSE 11.4, one of my RAID 0 (Intel ICH9R) units does not mount and is not recognized as being formatted NTFS, while the other RAID 0 (ICH9R) unit is recognized without problems.
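A hedged check, assuming these are Intel Matrix (isw) fake-RAID sets handled by dmraid; such sets appear under /dev/mapper rather than as plain /dev/sdX devices, and the mapper name below is only illustrative:

Code:
sudo dmraid -r                    # list the RAID metadata found on the discs
sudo dmraid -ay                   # activate all sets
ls /dev/mapper/                   # the NTFS volume should show up here, e.g. isw_xxxx_Volume0p1
sudo mount -t ntfs-3g /dev/mapper/isw_xxxx_Volume0p1 /mnt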
The ext4 file system creation in partition #1 of Serial ATA RAID isw_fjidifbhi_Volume0 (mirror) failed. I'm not sure what's wrong - how do I solve this? It happens after I enter my login details and click Next; the installer then fails on the install screen with this error.
I've been attempting to install Ubuntu on my Intel fake RAID on a new PC. The installer sees the RAID, but errors out:
Quote:
The ext4 file system creation in partition #1 of Serial ATA RAID isw_ehbgbddej_Volume0 (mirror) failed.
This seems to be the EXACT issue described in this bug report. I read all the way down the bug report, and at the bottom it says:
Quote:
Launchpad Janitor on 2010-06-25
Changed in parted (Ubuntu Lucid):
status: Fix Committed → Fix Released
A few weeks ago, a fix was "committed" and "released". I had been working with an earlier 10.04 ISO, so I downloaded 10.04 from the web site today. Still the same problem. Maybe I'm just reading the bug report wrong? If it says the fix is committed and released, I would naively assume that it is fixed. Do I need to apply a patch or something? That doesn't seem likely.
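One hedged way to check whether the released fix is actually on the CD that was downloaded: "Fix Released" means the fixed parted went into the Lucid archive, which does not necessarily mean the pressed 10.04 ISO carries it. From the live/installer session:

Code:
dpkg -l | grep -i parted      # which parted/libparted version does this CD ship?
# Compare that version with the one the bug report says contains the fix;
# if the ISO still has the older package, the fix may only be in lucid-updates.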
I am unable to hibernate my computer while using Ubuntu, and I figured out the reason: Ubuntu is not using my swap partition. I would follow the existing tutorials on setting up a swap partition after installing Ubuntu, but since the volume uses hardware RAID 0, the swap partition is not assigned a /dev/ entry (like /dev/sdxx) and I am not sure how I can mount it.
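Assuming this is a BIOS/motherboard fake-RAID set handled by dmraid, the partitions live under /dev/mapper instead of /dev/sdXN; a hedged sketch (the mapper name is illustrative):

Code:
ls /dev/mapper/                                # look for something like isw_xxxx_Volume0p5
sudo blkid /dev/mapper/isw_xxxx_Volume0p5      # confirm it is swap and note its UUID
sudo swapon /dev/mapper/isw_xxxx_Volume0p5     # enable it for this session

# Make it permanent via /etc/fstab (use the UUID from blkid):
#   UUID=....  none  swap  sw  0  0
# For hibernation, the resume device must point at it too, e.g. in
# /etc/initramfs-tools/conf.d/resume:
#   RESUME=UUID=....
# then rebuild the initramfs:
sudo update-initramfs -u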
I am trying to grow my array to make full use of my drives. I have a RAID 6 using mdadm on my home server. The array consisted of six 1.5TB drives when it was created. Since then I've been replacing the 1.5TB drives with new 2TB drives, and now I have replaced the last two.
When I added the first three disks I did not use the whole disk for the RAID partition, but rather made it the same size as the 1.5TB partitions. As it turns out, this may have been a bad idea. (But it gave me another three 500GB partitions that I turned into one volume using mhddfs.)
Now I'm trying to grow the array. I've been testing in a virtual environment how to do it, but I cannot find any method other than this:
1) Fail one disk.
2) Re-partition it to use the whole disk.
3) Re-add the drive as a spare.
4) Start the now-degraded array and let it resync.
5) Wait quite a while (approx. 5 hours).
6) Repeat from step 1 for the other disks.
7) Use mdadm --grow to enlarge the array, then resize2fs to grow the filesystem to the array's full size.
Now, since this is RAID 6 I keep some redundancy throughout, but I still worry about degrading the array so many times and rebuilding the whole thing; doing it this way I end up reading the entire array three or four times over (one pass of the procedure is sketched below).
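A hedged sketch of one pass of that procedure plus the final grow, with illustrative device names (and keep whatever partition table type the other members already use; gpt below is only an example):

Code:
# One pass: fail, repartition to full size, re-add, wait for the resync
sudo mdadm /dev/md0 --fail /dev/sdf1 --remove /dev/sdf1
sudo parted -s /dev/sdf mklabel gpt mkpart primary 0% 100%   # wipes that disk's old table
sudo mdadm /dev/md0 --add /dev/sdf1
cat /proc/mdstat                                             # watch the rebuild progress

# After every member has been enlarged:
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0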
EFI GUID Partition (GPT) support works on both 32-bit and 64-bit platforms. You must include GPT support in the kernel in order to use GPT. If you don't include GPT support in the Linux kernel, then after rebooting the server the file system will no longer be mountable, or the GPT table will get corrupted. By default Red Hat Enterprise Linux / CentOS comes with GPT kernel support. However - if you are using Debian or Ubuntu Linux - you need to recompile the kernel. Set CONFIG_EFI_PARTITION to y to compile this feature.
Is this true? Does Ubuntu really have no native GPT support in the server install? I've never compiled a kernel - is that in itself going to be a mind-bender? How much doo doo am I going to get into if I haven't done it a few times? Trust me when I tell you it's no thrill formatting and reformatting an 8TB RAID drive, or the individual disks, if it screws up. Been on that ride way too long already. I need things to go smoothly. This is not a play-toy server; it will be used in a business.
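Rather than taking the article's word for it, the claim can be checked directly on the installed system; a minimal sketch:

Code:
# Does the running Ubuntu kernel already have GPT (EFI partition) support built in?
grep CONFIG_EFI_PARTITION /boot/config-$(uname -r)
# CONFIG_EFI_PARTITION=y means GPT works out of the box and no recompile is needed.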
RAID level 1 interested me because of the redundancy across both drives, and I have successfully set it up on a couple of partitions. But I have always done it after the Linux installation: I create both partitions, use 'mdadm' to create the raidtab and the RAID device (md0, for example), and then I format the RAID device with 'mkfs' and mount it.
Up to there, it's all OK.
But my problem is mirroring the WHOLE hard disk, including the root partition. To do that, I guess I need to start with no Linux installed, create the RAID first (md0, raidtab, etc.), and only after that install Linux onto the RAID device I created.
But I'm new to the Linux world and have no idea how to do that.
I use Debian Lenny, so I need a solution that uses only the first DVD of this distribution.
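For what it's worth, the Lenny installer on the first DVD can do this by itself: in the "Partition disks" step, choose manual partitioning, mark a partition on each disc as "physical volume for RAID", then use "Configure software RAID" to build the md devices (including the one for /) before the base system is installed, so no pre-existing Linux is needed. A hedged follow-up after the install, so the box still boots if either disc dies (GRUB legacy, as Lenny ships it; run as root, device names are illustrative):

Code:
# Put the boot loader on both discs of the mirror
grub-install /dev/sda
grub-install /dev/sdb
# Sanity-check that the mirrors are in sync
cat /proc/mdstat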