Installation :: Ubuntu Can't Find Drives / Get That?
May 16, 2010
I have two SATA 3 drives, C and D, and I have Windows 7 on C. I am trying to install Ubuntu and dual-boot. My problem is that when I load into the live disk, the file system can only find the 1.8 GB left on the live CD. If I run the installer it says "no root file system is defined, please correct this from the partitioning menu".
The problem is that the menu is completely empty. If I load GParted in the live CD, it shows nothing at all except a message that says something like "no device detected". Obviously the problem is that the installer can't find anywhere to put Ubuntu. Could Windows 7 possibly be the problem?
Running Release 10.04 (Lucid), kernel Linux 2.6.32-25-generic, GNOME 2.30.2.
Before the upgrade (I had reinstalled Ubuntu because I lost audio due to drive issues, etc.), both hard drives were recognizable, and I want to put them on my desktop. After the upgrade they are not visible; they only show up if I type sudo fdisk -l:
Disk /dev/sdb: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
I tried using pmount, but no joy! How do I find them via something like XP's Control Panel or My Computer?
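There is no Control Panel equivalent, but once fdisk -l shows the disk, it can be mounted by hand and will then appear in the file manager much like My Computer. A dry-run sketch (it only prints the commands; /dev/sdb1 and the mount point are assumptions, not taken from the post):

```shell
#!/bin/sh
# Dry-run sketch: print each command instead of executing it,
# since the partition name here is a guess.
run() { echo "+ $*"; }

run sudo mkdir -p /media/disk80          # a mount point of our choosing
run sudo mount /dev/sdb1 /media/disk80   # assumes the data is on partition 1
```

Adding a matching line to /etc/fstab makes the mount permanent across reboots.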
I installed Windows XP and then Ubuntu a few months ago. I was mostly using Ubuntu, and my Ubuntu is up to date. Windows XP got a blue screen and I had to re-install it, so I used the Disk Utility and formatted my C drive as NTFS with a boot flag.
After that, when I attempted to install Windows XP on the C drive I had just formatted, Windows Setup was unable to recognize any drives! I really don't want to uninstall Ubuntu or format my whole HDD just to install Windows XP. But I also want to install Windows XP, as I have to run some applications in it!
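One thing worth checking from the Ubuntu side before wiping anything: the XP installer is picky about the partition table, so it helps to confirm the freshly formatted partition really carries partition type 7 (NTFS) and the boot flag. A dry-run sketch (prints the commands only; /dev/sda is an assumption):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: show the commands, don't execute them

run sudo fdisk -l /dev/sda       # the partition's Id column should show 7 (NTFS)
run sudo parted /dev/sda print   # check that the 'boot' flag is set on it
```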
First-time Linux user, trying to do a fresh full install from the Fedora 12 i686 DVD. I have two identical SATA drives, which Fedora fails to identify. I have reset the BIOS and changed settings in the BIOS, and it's still not finding them. I have an ASUS av8-x motherboard with an Athlon dual-core processor.
The Fedora installer won't display my two SATA hard drives. I've tried both the x86_64 live CD and DVD. On the live CD, fdisk -l displayed nothing. However, if I click "Specialized Storage Devices", a device shows up as "BIOS RAID set (stripe)" with a capacity equal to both my hard drives combined. I don't even have RAID enabled in the BIOS - it is set to AHCI. Other OS installers display the hard drives correctly.
Specs: 2x 640 GB Western Digital Caviar Black, ASUS M4A78T-E 790GX motherboard.
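A "BIOS RAID set (stripe)" appearing despite AHCI usually means stale fakeraid metadata is still sitting at the end of the disks from a previous configuration, and the installer's dmraid scan picks it up. A dry-run sketch of checking for it and erasing it (prints only; erasing is destructive if the disks really are a RAID set, and the device names are assumptions):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print the commands rather than run them

run sudo dmraid -r               # list any RAID metadata found on the disks
run sudo dmraid -rE /dev/sda     # erase stale metadata from the first disk...
run sudo dmraid -rE /dev/sdb     # ...and the second
```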
I have recently set up and installed Ubuntu 9.04 on a virtual drive using VMware 6.04, and installed the desktop GUI as well. I need to add other drives for data and logging, which I did on the VMware side. I can see the two drives in Ubuntu but cannot access them; I get the "unable to mount location" error when I try. How can I resolve this, as I need these two virtual drives to be used as data drives?
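Freshly added virtual disks are blank, so there is nothing on them to mount yet: they first need a partition table and a filesystem. A dry-run sketch, assuming the first new disk appeared as /dev/sdb and using ext3, which was common on 9.04:

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print, don't execute

run sudo fdisk /dev/sdb          # create one primary partition (n, p, 1, w)
run sudo mkfs.ext3 /dev/sdb1     # put a filesystem on the new partition
run sudo mkdir -p /data
run sudo mount /dev/sdb1 /data   # now it can be mounted and used
```

The same steps, with the other device name, cover the second data disk.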
Earlier I had two physical hard drives in my computer, one with Windows and one with Ubuntu. Now I have a new computer and have installed these hard drives in it. I run Windows 7, and I can find the Windows disk, but not the Ubuntu disk. This doesn't surprise me, as Ubuntu uses another filesystem; however, earlier I could format it with a partition manager, but now I can't even find it with that!
Running -current (13.37) 32-bit. When I start K3b I get the message: "No optical drive found. K3b did not find any optical device in your system. Solution: Make sure the HAL daemon is running; it is used by K3b for finding devices." Hald is running.
I recently installed Fedora 11 x86_64 (dual boot with XP) and am having difficulty finding two of my three hard drives to mount them. This is my setup: an 80 GB hard drive (boot drive) with two partitions, one for XP (NTFS) and one for F11 (ext4), plus 2x 250 GB hard drives, one formatted with NTFS, the other yet to be formatted (my plan is to use ext4).
All of my drives are SATA, on the same nVidia controller. After the install, I can see only the 80 GB hard drive (both partitions). What do I need to do to find the other two drives? During the install it called the partitions /dev/sda0, sda1, sda2 and sda3, but I no longer see these drives. If I knew where the drives were I could mount them, but my system just isn't seeing the drives.
This is the output of df:
Filesystem           1K-blocks  Used  Available  Use%  Mounted on
/dev/mapper/vg_user-lv_root
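As a side note, Linux never names a partition /dev/sda0 (numbering starts at 1), and df only shows filesystems that are already mounted. blkid lists every partition the kernel can actually see, with its filesystem type, which makes the mount step mechanical. A dry-run sketch with assumed device names:

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run sketch: prints the commands only

run sudo blkid                                      # every detected partition + fs type
run sudo mkdir -p /mnt/ntfs250
run sudo mount -t ntfs-3g /dev/sdb1 /mnt/ntfs250    # the NTFS 250 GB drive (assumed name)
```

If blkid shows nothing beyond sda, the missing drives are a kernel/controller detection problem rather than a mounting one.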
I just finished a build of a new GNU/Linux box with openSUSE 11.2. I have an MSI Big Bang Xpower X58 motherboard, which has two SATA controller chips: the standard Intel ICH10R for SATA 3.0 Gb/s and a Marvell 9128 for SATA 6.0 Gb/s. The BIOS recognizes the Western Digital Caviar Black 6.0 Gb/s drive on either SATA controller chip; however, I am unable to install (and boot) when the drive is connected to the Marvell-controlled ports. As you can guess, I'd like to boot from the faster interface!
1. The BIOS allows me to select the Western Digital drive as a secondary boot device, so I know, at least at the BIOS level, it's there. This is true whether I have the drive connected to the Intel or Marvell ports. (The DVD drive is the primary boot device.)
2. When trying to install openSUSE 11.2 from DVD, the installer says that it can't find any hard drives on my system when I have the drive connected to the Marvell port. The installer finds the drive fine when it is connected to the Intel port.
3. I installed everything with the drive connected to the Intel port. I switched the drive to the Marvell port afterward, and the system refuses to boot completely, stalling at the point where it starts to look for other filesystem partitions. This led me to conclude that the problem is perhaps with openSUSE rather than hardware weirdness from the system having two separate SATA controllers.
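The symptoms fit a driver that is missing from the initrd rather than broken hardware: the installed initrd knows the Intel ICH10R, but the module that drives the Marvell 9128 never gets loaded early enough to find the root filesystem. A hedged, dry-run sketch of forcing it in on openSUSE (the module name and the mkinitrd option syntax are assumptions and may differ by release):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print the commands instead of running them

run /sbin/lspci -nnk        # confirm which kernel driver binds the Marvell chip
# Boot with the drive on the Intel port, then rebuild the initrd with the
# controller module (assumed here to be ahci) forced in, and switch ports after.
run sudo mkinitrd -m ahci
```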
I'm breaking into the OS drive side with RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1) and have been testing the fail-over and rebuild process. Works great physically failing out either drive. Great! My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: Fail one of the disks out of the RAID-1, then image it to a file, saved on an external disk, using the dd command (if memory serves, it would be something like "sudo dd if=/dev/sda of=backupfilename.img") Then, re-add the failed disk back into the array. In the event I needed to roll back to one of those snapshots, I would just use the "dd" command to dump the image back on to an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on-hand, and using something like a tape drive is also not an option.
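The plan above can be sketched end to end. This is a dry-run (it prints each step rather than executing it), and the array, member, and backup path names are assumptions; note that imaging the raw device with dd only yields a consistent snapshot because the member has been failed out of the array first:

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: show each step without executing it

ARRAY=/dev/md0                     # the RAID-1 array (assumed name)
MEMBER=/dev/sdb1                   # the member partition to fail out (assumed)
IMG=/mnt/external/os-snapshot.img  # image file on the external disk (assumed)

run sudo mdadm "$ARRAY" --fail "$MEMBER" --remove "$MEMBER"
run sudo dd if=/dev/sdb of="$IMG" bs=4M conv=noerror   # image the whole member disk
run sudo mdadm "$ARRAY" --add "$MEMBER"                # re-add; the mirror resyncs
```

Restoring is the reverse: dd the image onto a replacement disk, boot it, and re-add the second disk to rebuild the mirror, as described above.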
I've used it once before, but got fed up with the boot menu asking me every time I turned my laptop on, because I wasn't using it enough. I have Windows 7 on drive C and I want to keep it on drive C. I have several 1.5 TB+ drives, and one of them is not being used. I want to dedicate it to Ubuntu and be able to dual-boot with my Windows 7 install. Is this possible? If it is, what about when this drive is not connected to my laptop? Will that mess up the boot process?
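It is possible, and the trick to keeping Windows bootable when the Ubuntu drive is unplugged is to install GRUB on the Ubuntu drive itself rather than on the Windows drive, then pick the drive from the BIOS boot menu when it is attached. A dry-run sketch (the device name is an assumption):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print, don't execute

# Point the bootloader at the Ubuntu drive, not /dev/sda, so the
# Windows drive's boot sector is left untouched.
run sudo grub-install /dev/sdb
run sudo update-grub
```

With that layout, unplugging the Ubuntu drive leaves Windows booting exactly as before.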
I am building a home server that will host a multitude of files, from MP3s to ebooks to FEA software and files. I don't know if RAID is the right thing for me. This server will have all the files that I have accumulated over the years, and if the drive fails then I will be S.O.L. I have seen discussions where someone has a RAID 1 setup but doesn't keep the drives internal (to the case); they bought 2 separate external hard drives with eSATA to minimize the risk of an electrical failure taking out both drives (I guess this is a good idea). I have also read about having one drive and then using a second to rsync the data every week. I planned on purchasing 2 enterprise hard drives of 500 GB to 1 TB, but I don't have any experience with how I should handle my data.
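One consideration: RAID 1 only protects against a drive dying; a deletion or corruption is mirrored instantly, while the periodic rsync copy is a real backup you can reach back into. A dry-run sketch of the weekly sync (paths are placeholders):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print the command that cron would run weekly

# Mirror the primary data drive onto the second drive. --delete makes the
# copy an exact mirror, so deletions propagate on the next run; drop it to
# keep deleted files around on the backup drive.
run rsync -a --delete /srv/media/ /mnt/backup/media/
```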
This is yet another "I can't see my new disk" thread - sorry, I've had a good look at others and have tried the obvious but I'm stuck. I have a working Ubuntu 9.04 installation - 3 SATA drives, in a home made box on an Intel DG965WH mobo. It's been working well for a long time (which is why I've not felt the need to upgrade the OS) but I'm running out of disc space so I've just bought 2 new 2TB drives (Seagate ST2000DL003).
The board supports up to 6 drives - in fact in previous configs I have had 6 running before.
Now, however:
- the BIOS is seeing the new discs (so they are connected and powered)
- new block devices /dev/sdd and /dev/sde are being created, but I can't use them...
- gparted lists just the three old drives, not the two new ones, in its menus
- sudo fdisk /dev/sdd returns "Unable to read /dev/sdd"
I've tried disconnecting one of the drives: /dev/sde goes away as you'd expect, but /dev/sdd still won't work. In the BIOS I have tried different settings, "Legacy" and "Native"; it makes no difference. As far as I can see, 2 TB shouldn't be too big to worry the OS.
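When fdisk can't even read a device node that exists, the kernel log usually says why (link resets, read errors, a capacity the driver mishandles), and /sys reports the size the kernel believes the drive has. A dry-run sketch of the checks (the device name comes from the post; smartctl assumes smartmontools is installed):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print the diagnostic commands

run dmesg                          # look for ata/sdd errors near the end
run cat /sys/block/sdd/size        # size in 512-byte sectors; ~3907029168 for 2 TB
run sudo smartctl -i /dev/sdd      # drive identify info, if smartmontools is present
```

A size of 0 there would mean the kernel/controller combination never read the drive's capacity, which points at the driver or a very old kernel rather than the disks.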
I suspect this is not new, but I just can't find where it was covered; maybe someone can give me a good lead. I just want to prevent certain users from accessing the CD/DVD drives and all external drives. They should be able to mount their home directories and move around within the OS, but they shouldn't be able to move data away from the PC. Any clues?
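On Ubuntu of that era, access to optical and removable drives is gated by group membership (cdrom, plugdev, floppy), so removing a user from those groups is the simplest lock-out; home directories are unaffected. A dry-run sketch (the username is a placeholder):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print, don't execute

for grp in cdrom plugdev floppy; do
    run sudo gpasswd -d alice "$grp"   # drop user 'alice' from each media group
done
```

Note this isn't airtight against a determined user (USB storage can also be blocked by blacklisting the usb-storage module), but it covers the normal desktop paths.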
So, at the moment I have a 7 TB LVM with one volume group and one logical volume. In all honesty I don't back up this information. It is filled with data that I can "afford" to lose, but... would rather not. How do LVMs fail? If I lose a 1.5 TB drive that is part of the LVM, does that mean at most I could lose 1.5 TB of data? Or can files span more than one drive? If so, would it be just one file that spans two drives, or could there be many files that span multiple drives? Essentially, I'm just curious, in a general, high-level sense, about LVM safety. What are the risks involved?
Edit: what happens if I boot up the computer with a drive missing from the LVM? Is there a first, primary drive?
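With the default linear allocation, the LV is laid out drive after drive, and the filesystem on top knows nothing about the boundaries, so losing one 1.5 TB PV usually takes down the whole filesystem, not just 1.5 TB of files. Which physical drives the LV actually occupies can be listed; a dry-run sketch:

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print the commands

run sudo pvs                         # the physical drives in the volume group
run sudo lvs -o +devices,segtype     # which PVs (and extents) each LV occupies
run sudo vgchange -ay --partial      # on boot with a PV missing: activate what's left
```

There is no "primary" drive; every PV in the group matters equally, which is why LVM across many drives raises, rather than lowers, the odds of losing everything.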
How do I install Ubuntu so that only the OS itself (hope you get what I mean) is on my C: drive and other files, like applications etc., are on my D: drive? I use a 74 GB Raptor, which is faster but smaller than a normal HDD, that will also contain Windows 7 and CrunchBang, so I just want the main parts of the OS on that drive. If it's not possible at install time, which directories should be moved from the C: drive to the D: drive if I want all applications and such on D:? And how do I configure it so future installations from Synaptic or the apt-get command install files on D:?
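Packages installed through apt land mostly under /usr (with variable data under /var), so the usual split is to keep / on the fast Raptor and mount the larger drive at /home, and optionally /usr, using the "Something else" option in the installer's partitioner. An illustrative /etc/fstab fragment with placeholder device names:

```
# fast 74 GB Raptor: root filesystem
/dev/sda1  /      ext4  defaults  0  1
# larger drive: user data, plus the bulk of installed software
/dev/sdb1  /home  ext4  defaults  0  2
/dev/sdb2  /usr   ext4  defaults  0  2
```

apt and Synaptic always install to the paths the packages declare, so mounting /usr from the big drive is what routes future installs there; there is no per-package target option.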
I was recently given two hard drives that were used as a RAID (maybe fakeraid) pair in a Windows XP system. My plan was to split them up and install one as a second HD in my desktop and load 9.10 x64 on it, and use the other for Mythbuntu 9.10. As has been noted elsewhere, the drives aren't recognized by the 9.10 installer, but removing dmraid gets around this, and installation of both Ubuntu and Mythbuntu went fine. On both systems after installation, however, the systems broke during update, giving a "read-only file system" error and no longer booting.
Running fsck from the live CD gives the error:
fsck: fsck.isw_raid_member: not found
fsck: Error 2 while executing fsck.isw_raid_member for /dev/sdb
and running fsck from 9.04, installed on the other hard drive, gives an error like:
The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device>
In both cases I set up the drives with the ext4 filesystem. There's probably more that I'm forgetting... it seems likely to me that this problem is due to some lingering issue with the RAID setup they were in. I doubt it's a hardware issue, since I get the same problem with the different drives in different boxes.
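Both errors point the same way: leftover Intel (isw) fakeraid metadata at the end of the disks, which makes the tools treat the partitions as isw_raid_member instead of ext4. Erasing the stale metadata, then repairing ext4 from a backup superblock if needed, is the usual fix. A dry-run sketch (prints only; the device names follow the post, and 32768 is merely the common first backup superblock for 4k-block filesystems):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print instead of execute

run sudo dmraid -rE /dev/sdb         # erase the stale isw RAID metadata
run sudo mke2fs -n /dev/sdb1         # -n: don't format, just list backup superblock locations
run sudo e2fsck -b 32768 /dev/sdb1   # repair using a backup superblock if the primary is bad
```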
After a bit of a rough install, I got 10.04 up and running on an Intel D845GRG motherboard. All seems to be working fine except for USB flash drives. My USB mouse and keyboard work fine, but the two sticks I have (Kingston and PQI) will not mount.
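The kernel log taken right after plugging a stick in usually narrows this down: whether the device enumerates at all, whether a /dev/sdX appears, and whether reading the partition table fails. A dry-run sketch of the checks:

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print the diagnostic commands

run lsusb           # is the stick enumerated on the USB bus at all?
run dmesg           # look at the tail right after plugging it in
run sudo fdisk -l   # does a /dev/sdX with a partition show up?
```

If fdisk sees a partition but it still won't auto-mount, mounting it by hand (sudo mount /dev/sdX1 /mnt) tells you whether the problem is the filesystem or the desktop automounter.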
Having a little problem with the 10.04 installation. I have two hard drives installed on my PC: one that had Hardy Heron and one for data. When the install program launches from a CD boot, it fails and drops me to a live session to check out the problem. I can see both drives and mount them, but if I then launch the installer, it does not give either as an option for installation.
I have recently installed an ASUS M4A77TD Pro system board which supports RAID.
I have 2x 320 GB SATA drives I would like to set up RAID-1 on. So far I have configured the BIOS for RAID-1 on the drives, but when installing Ubuntu 10.04 from the CD it detects the RAID configuration but fails to format.
When I reset all BIOS settings to standard SATA drives, Ubuntu installs and works as normal, but I just have 2 drives without any RAID options. I had this working in my previous setup, but that's because I had the OS on a separate drive from the RAID and was able to do this within Ubuntu.
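Since the board's RAID is firmware fakeraid anyway, a common workaround is to leave the BIOS in standard SATA mode and build the mirror in Linux with mdadm after installing to one disk. A dry-run sketch (prints only; device and array names are assumptions, and creating an array destroys existing data on the partitions):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print, don't execute

run sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
run sudo mkfs.ext4 /dev/md0        # filesystem goes on the array, not the disks
run cat /proc/mdstat               # watch the mirror sync
```

This gives the same redundancy as the board's RAID-1 while staying fully manageable from within Ubuntu.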