Ubuntu :: Partitions/drives Sometimes Not Showing?
May 14, 2011
I upgraded to Ubuntu 11.04 from 10.10 about a week or so ago, and since the upgrade, when I boot into Ubuntu (I am running a dual boot of Ubuntu and Windows 7), other HDDs and partitions often do not show up anywhere on my system. For example, I have an NTFS partition that I store music on so I can access it from either Windows or Ubuntu, as well as other partitions. When this issue occurs, even my DVD drive appears not to exist. I have to reboot my machine sometimes 4 or 5 times before the drives/partitions show up again. I had no issue like this before running the upgrade or when using earlier versions of Ubuntu.
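For intermittent detection like this, the kernel log usually says whether the drives were seen at all during that boot. A minimal diagnostic sketch (device names are examples):

Code:
# Did the kernel detect the disks and the DVD drive this boot?
dmesg | grep -iE 'ata[0-9]|sd[a-e]|sr0'
# Compare what the kernel knows against the partition list
cat /proc/partitions
sudo fdisk -l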
I want to re-set up a RAID array on some older drives using mdadm. That involved adding a single partition on 4 of my drives (3 older, 1 brand new - a replacement drive), using mdadm to create the array, etc. However, upon restarting the box, the 3 older drives do not show up as having partitions, and hence mdadm cannot immediately start the array because 'there are not enough disks to start'. For example, if I do an ls command before I restart, I will see:
Code:
ls -ltr /dev/sd*1
/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
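A common reason a freshly created md array will not start after a reboot is that it was never recorded in /etc/mdadm/mdadm.conf, so nothing assembles the partitions at boot. A sketch of making the array persistent, using the four partitions from the listing above (the RAID level is an assumption - the post does not say which was used):

Code:
# Create the array (level assumed for illustration)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# Record the array so it is assembled automatically at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# Rebuild the initramfs so early boot knows about the array
sudo update-initramfs -u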
Objective: To dual boot Ubuntu 11.04 and Windows 7
Problem: When I use a Live USB and try to install Ubuntu 11.04, two of my partitions do not show up.
System: C2D 3.1 GHz, 6 GB of RAM, Nvidia 8800 GT
More info: I have a self-built machine which is currently running Windows 7 64-bit, with 4 hard drives:
2 x 1 TB (mirrored)
1 x 500 GB (single)
1 x 320 GB* (Windows 7 is installed here) <-- both do not show up in the installer
*Split into 247 GB and 73.4 GB partitions
I have what is a weird problem, at least I think it is. I deleted some files and now my partitions do not show up in GParted; instead, the entire disk shows up as unallocated space. I am still able to run every partition - one of which is Ubuntu and another of which is Windows - without any other apparent problems. Here is my fdisk -l:
I just installed Ubuntu 10.10 - I'm using it now - but my 2nd partition, which has all my backed-up files on it, is not showing up, so I'm asking: how do I get to use it? And another thing: while I was installing, the backup partition was shown as having no MB used.
There are 4 drives in my PC. I have now installed Natty on it, and I can't see any of the other drives except the filesystem. Is there any tool that can make all my partitions visible? I used a live CD to install Ubuntu 11.04.
I shall start off by saying that I have just jumped from Windows 7 to Ubuntu and am not regretting the decision one bit. I am, however, stuck with a problem. I have spent a few hours googling this and have read some interesting articles (probably way beyond what the problem actually is), but I still don't think I have found the answer. I have installed:
I am running the install on an old Dell 8400. My PC has an Intel RAID controller built into the motherboard. I have 1 HDD (without RAID), which houses my OS install, and then 2 x 1 TB drives (just NTFS-formatted drives with binary files on them, nothing more) in a RAID 1 (mirroring) array. The Intel RAID controller recognizes the array at boot as it always has (irrespective of which OS is installed); however, unlike Windows 7 (where I was able to install the Intel RAID controller driver), Ubuntu does not see the array. Does anyone know of a resolution (which doesn't involve formatting and/or use of some other software RAID solution) to get this working, which my searches have not led me to?
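The Intel controller on a board of that era is firmware ("fake") RAID, which Linux handles through dmraid rather than a vendor driver. A possible approach, assuming the metadata on the disks is intact, is to activate the existing array instead of reformatting; this sketch does not touch the data on the disks:

Code:
sudo apt-get install dmraid
# List any firmware-RAID metadata found on the disks
sudo dmraid -r
# Activate the array; it should appear under /dev/mapper
sudo dmraid -ay
ls /dev/mapper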
Complete Linux newbie here. I've got Ubuntu up and running, and I looked in Storage Device Manager and noticed something strange. My computer has one 80 GB drive with two partitions (XP and Ubuntu) and one 500 GB drive where I store my data. Here's my fdisk -l:
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
I have a CentOS 5.5 system with 2 x 250 GB SATA physical drives, sda and sdb. Each drive has a Linux RAID boot partition and a Linux RAID LVM partition. Both pairs of partitions are set up with RAID 1 mirroring. I want to add more data capacity, and I propose to add a second pair of physical drives - this time 1.5 TB drives, presumably sdc and sdd. I assume I can just plug in the new hardware, reboot the system, and set up the new partitions, RAID arrays, and LVMs on the live system. My first question:
1) Is there any danger that adding these drives to arbitrary SATA ports on the motherboard will cause re-enumeration of the "sdX" series in such a way that the system gets confused about where to find the existing RAID components and/or the boot or root file systems? Can anyone point me to a tutorial on how the enumeration of the "sdX" sequence works and how the system finds the RAID arrays and root file system at boot time? (See the sketch after question 3.)
2) I intend to use the majority of the new RAID array as an LVM "data volume" to isolate "data" from "system" files for backup and maintenance purposes. Is there any merit in creating "alternate" boot partitions and "alternate" root file systems on the new drives so that the system can be backed up there periodically? The intent is to boot from the newer partition in the event of a corruption or other failure of the current boot or root file system. If this is a good idea, how would the system know where to find the root file system if the original one gets corrupted? i.e. At boot time, how does the system know which root file system to use and where to find it?
3) If I create new LVM/RAID partitions on the new drives, should the new LVM be part of the same volume group, or would it be better to make it a separate volume group? What are the issues to consider in making that decision?
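For what it's worth on questions 1 and 2: the boot process does not depend on sdX order. md arrays are found by scanning all disks for the UUID in their superblocks, the root file system is whatever the root= argument on the kernel line in grub.conf points at (commonly an LVM path or a UUID), and fstab can pin file systems by UUID as well. On question 3, both choices are a single command each: sharing one volume group lets free space move between system and data, while a separate group means the new mirror can be rebuilt or moved without touching the system VG. A sketch of both, assuming the new array comes up as /dev/md2 (a hypothetical name) and the existing VG is called VolGroup00 (check with vgs):

Code:
# Arrays are identified by superblock UUID, not device name:
sudo mdadm --detail --scan
# e.g. ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:...
# File systems can be pinned by UUID in /etc/fstab too:
sudo blkid /dev/md2

# Question 3, option A: extend the existing volume group
sudo pvcreate /dev/md2
sudo vgextend VolGroup00 /dev/md2
# Question 3, option B: keep data isolated in its own volume group
sudo vgcreate vg_data /dev/md2
sudo lvcreate -n lv_data -l 100%FREE vg_data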
I have quite a lot of test operating systems installed, some with their own home or boot partitions. Now the Places section in Nautilus is an utter MESS. I have not been able to find any working solution for hiding these partitions.
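One approach that works with the udisks-based desktops of this era is a udev rule marking the partitions as hidden, which removes them from Places without unmounting anything. A sketch, with example partition names; the rule file name is arbitrary:

Code:
# /etc/udev/rules.d/99-hide-partitions.rules (partition names are examples)
KERNEL=="sda5", ENV{UDISKS_PRESENTATION_HIDE}="1"
KERNEL=="sda6", ENV{UDISKS_PRESENTATION_HIDE}="1"

Then reload with sudo udevadm trigger (or reboot).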
When I tried to install Ubuntu 9.10 or 10.04 (from CD or USB drive), and selected manual partitioning, the installer would not show all my drives.
However, when booting the live CD/USB, GParted and the Disk Utility did recognize all drives and partitions.
It turned out that one of my drives was marked as RAID partition, although I never used RAID!
Here is the symptom:
When you run the installer and select "manual partitioning", the resulting list of drives and partitions is incomplete. In my example it was:
sda
- sda1
sdc
- sdc1
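The command referred to in the next paragraph was presumably the dmraid metadata erase, along these lines - double-check the drive letter, and note this removes only the RAID metadata block, not the partition data:

Code:
sudo dmraid -rE /dev/sdb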
You may have multiple drives with the RAID metadata on it. In that case you need to repeat the above command for all those drives. Just make sure you don't wipe out your existing RAID, if you have one.
Reboot the system and see if it works.
P.S.: Also check your BIOS settings - do you have drives configured as RAID?
I have a load of partitioned hard drives, and even though they are referenced in fstab I still get bookmark icons, and they are also visible in 'Computer'. How do you remove these icons? I would also like the ones not referenced in fstab to be mountable only via password - Lucid seems to just mount them when you click. I did change this as described in another post, but it also made USB keys require a password.
Code:
gksu gedit /var/lib/polkit-1/localauthority/10-vendor.d/com.ubuntu.desktop.pkla
Then change ResultActive=yes to ResultActive=auth_admin_keep.
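For reference, the stanza being edited looks roughly like this afterwards; the section header and action list vary between releases, so match on the udisks mount action rather than copying this verbatim (a sketch, based on the edit above):

Code:
[Mounting, checking, modifying and unmounting file systems]
Identity=unix-group:admin
Action=org.freedesktop.udisks.filesystem-*
ResultActive=auth_admin_keep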
I am trying to install Ubuntu 9.10 on my computer. On reaching the prepare-disk section, it shows me that there is no operating system installed, yet I have XP installed. It is also not showing the partitions on my computer, yet I have 4 partitions. I can't do without Ubuntu on my machine.
I had a 500 GB hard drive that I wanted to use on my Ubuntu system as a media storage drive. The drive originally had two partitions on it: one was a Dell recovery partition and the other was a Windows Vista partition. Using the Palimpsest disk utility that comes with Ubuntu, I deleted both partitions, created an ext3 partition using 100% of the space, and copied my data to the drive. After I finally got fstab to load the drive, I found another problem. First of all, when GRUB loads, two options it offers are:
Windows Vista (loader) on sdc1
Windows Vista (loader) on sdc3
Aside from that, 100% of the drive is not being used by the ext3 partition. It is showing 434.6 GB available on the drive. fdisk is not showing any other partitions on this drive, so A) why are the Windows loader options showing up under GRUB, and B) why do I not have 500 GB available? Here is a copy of the output of fdisk -l:
Code:
Disk /dev/sda: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
[code]....
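Two separate things are likely going on, and neither shows up in fdisk. GRUB's os-prober found leftover Vista boot-sector remnants and generated stale menu entries, which a menu rebuild clears. And most of the "missing" space is the decimal-versus-binary gigabyte difference (500 GB is roughly 465 GiB) plus ext3's default 5% of blocks reserved for root. A sketch of both fixes; the device name is an example:

Code:
# Regenerate the GRUB menu so the stale Windows entries disappear
sudo update-grub
# On a pure data partition, shrink the reserved blocks from 5% to 1%
sudo tune2fs -m 1 /dev/sdc1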
I have been trying to upgrade my server to Ubuntu 10.04 since it came out, but I hit a roadblock with my hardware RAID. I have two JBODs that work perfectly in Ubuntu 9.10 x64, but they show as separate, unformatted partitions (one per HDD) in Ubuntu 10.04 x64. Here's the relevant portion of my fstab:
I've been through a lot of the posts already, but nothing seems to solve my problem. I have Ubuntu 9.10 and Windows 7 dual installed (Windows was installed first). Everything has been working fine until a few weeks ago when I accidentally left a USB drive plugged in when I restarted my computer from Windows.
Ever since then, whenever I restart my computer from Windows, grub2 fails (it does not fail when I restart from Ubuntu). I get a varying message like: Grub loading. The symbol ' ' not found. Aborted. Press any key..., where the part between the single quotes is usually different each time. When this happens I have to reinstall Grub2 from a live disk, which is becoming a bit of a pain.
I've been reading around, but I don't really have a great understanding yet of how hard drives and partitions work in general, and so I haven't been able to work out what the source of this problem is.
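One explanation consistent with the symptoms (an inference, not something confirmed here) is that GRUB's notion of which drive it lives on was recorded while the USB stick was plugged in, so whenever the BIOS enumerates drives differently, GRUB cannot find its modules. Reinstalling to the MBR of the real boot disk from the installed system, rather than from the live disk each time, may make it stick; /dev/sda is an example and must be a whole disk, not a partition:

Code:
# From the installed Ubuntu (not the live CD)
sudo grub-install /dev/sda
sudo update-grub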
I have 2 drives formatted NTFS, which I'm mounting via /etc/fstab to ~/Movies/ and ~/Music/, and an ext4 partition on my primary drive for games, mounted to ~/Roms/, and I would like these drives to NOT show up in the side panel of Nautilus.
I've been doing some looking around, and what I've found so far is that, supposedly, if you mount a partition/drive somewhere besides /media/, Nautilus will ignore it. I'm finding this not to be the case, and it's driving me bonkers. Here's my fstab:
Code:
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
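The actual entries are cut off above; for the mounts described, they would look roughly like this (UUIDs and the username are placeholders to replace with the output of sudo blkid):

Code:
# UUIDs below are placeholders - use the values from 'sudo blkid'
UUID=XXXX-XXXX     /home/user/Movies  ntfs-3g  defaults,uid=1000,gid=1000  0  0
UUID=YYYY-YYYY     /home/user/Music   ntfs-3g  defaults,uid=1000,gid=1000  0  0
UUID=placeholder   /home/user/Roms    ext4     defaults                    0  2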
Is there a way that I can get Dolphin to mount partitions? When I try to, it gives me an error at the bottom of the window saying I don't have permission to. I also can't mount partitions in other programs, like Amarok, because of the same issue.
Error Message:
An error occurred while accessing 'Windows 7', the system responded:
org.freedesktop.Hal.Device.PermissionDeniedByPolicy: org.freedesktop.hal.storage.mount-fixed auth_admin_keep_always <-- (action, result)
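This error means HAL's PolicyKit (the 0.9 series used by KDE systems of this vintage) requires admin authentication for internal drives. A sketch of the usual fix, granting the named action to your own account; the username is a placeholder:

Code:
polkit-auth --user youruser --grant org.freedesktop.hal.storage.mount-fixed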
I've been struggling with this one for a while - I have three SATA hard drives installed on my system:
/dev/sda - an 80 GB disk with three partitions: one NTFS for WinXP, one ext4 for Fedora 11 x86_64, and a boot partition
/dev/sdb - a 250 GB disk with one partition, ext4
/dev/sdc - a 250 GB disk with one partition, NTFS
I can mount any partition on /dev/sda without problems - everything works exactly as expected. Attempting to mount a partition from one of the other disks results in something like the following (this is for sdc1):
Code:
[User@machine ~]$ sudo mount -t ntfs /dev/sdc1 /mnt/shared
[sudo] password for User:
ntfs-3g: Failed to access volume '/dev/sdc1': No such file or directory
ntfs-3g 2009.11.14 integrated FUSE 27 - Third Generation NTFS Driver
XATTRS are on, POSIX ACLS are off
Copyright (C) 2005-2007 Yura Pakhuchiy
Copyright (C) 2006-2009 Szabolcs Szakacsits
Copyright (C) 2007-2009 Jean-Pierre Andre
Copyright (C) 2009 Erik Larsson
Usage: ntfs-3g [-o option[,...]] <device|image_file> <mount_point>
Options: ro (read-only mount), remove_hiberfile, uid=, gid=, umask=, fmask=, dmask=, streams_interface=.
Please see the details in the manual (type: man ntfs-3g).
Example: ntfs-3g /dev/sda1 /mnt/windows
Ntfs-3g news, support and information: http://ntfs-3g.org

The /mnt/shared directory is created; the "failed to access" error is related to the disk.
Here is the output from fdisk:
Code:
[User@machine ~]$ sudo fdisk -l

Disk /dev/sdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0f970f96

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       30401   244196001   8e  Linux LVM

Disk /dev/sdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x1050104f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       30401   244196001    7  HPFS/NTFS

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x8e538e53

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        6375    51200000    7  HPFS/NTFS
/dev/sda2   *        6375        6400      204800   83  Linux
/dev/sda3            6400        9729    26743361   8e  Linux LVM

Disk /dev/dm-0: 21.4 GB, 21428699136 bytes
255 heads, 63 sectors/track, 2605 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 5955 MB, 5955911680 bytes
255 heads, 63 sectors/track, 724 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/dm-2: 250.0 GB, 250059348992 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0f970f96

   Device Boot      Start         End      Blocks   Id  System
/dev/dm-2p1             1       30401   244196001   8e  Linux LVM

Disk /dev/dm-3: 250.0 GB, 250056705024 bytes
255 heads, 63 sectors/track, 30400 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-3 doesn't contain a valid partition table
And also from blkid (this does not match the output from above, but I don't know if this is actually related to the problem or how to fix it):
Code:
[User@machine ~]$ sudo blkid
/dev/sda1: UUID="506412E06412C91C" TYPE="ntfs"
/dev/sda2: UUID="811bf259-33d5-4db2-9851-e93b47dcbcc8" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda3: UUID="U3AJLH-Lhm1-b0lf-HDX9-ZK1V-ezqU-sb0YGQ" TYPE="lvm2pv"
/dev/dm-0: UUID="f5733171-0753-4f53-834b-cc693ffb0aed" TYPE="ext4"
/dev/dm-1: TYPE="swap"
/dev/mapper/vg_machine-lv_root: UUID="f5733171-0753-4f53-834b-cc693ffb0aed" TYPE="ext4"
/dev/mapper/vg_machine-lv_swap: TYPE="swap"
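An inference from the two outputs above, not something stated in the post: /dev/dm-2 carries the same size and disk identifier (0x0f970f96) as /dev/sdb, and /dev/dm-3 matches /dev/sdc, which suggests device-mapper - typically dmraid reacting to stale fakeRAID metadata - has claimed both raw disks. That would explain the ntfs-3g "No such file or directory" error: the partition has to be reached through the mapper node instead. A diagnostic sketch; the mapper name is a placeholder:

Code:
# What has device-mapper claimed?
sudo dmsetup ls
# Any fakeRAID metadata on the disks?
sudo dmraid -r
# Mount through the mapper node (exact name under /dev/mapper will differ)
sudo mount -t ntfs /dev/mapper/mapped_set_p1 /mnt/shared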
Running: Fedora 13, kernel 126.96.36.199-56.fc13.i686, desktop: Xfce 4.6.2
Problem: I have a 1 TB drive (SATA) with 2 partitions. The partitions/volumes don't show on the desktop. The volume names are MASSIVE and BACKUP.
Facts:
1. When I log in using GNOME, both volumes (MASSIVE & BACKUP) are listed on my desktop.
2. My /etc/fstab has long since been edited to reflect these drives.
3. Using Xfce: Although the volumes don't show on the desktop, they are indeed mounted. I can see them when performing a "df -h" as well as when opening "File System" and drilling down to /media - they are both there. I can also access the contents of both volumes within "File System" (using Thunar).
4. Using Xfce: I can access both volumes via the command line.
5. Using Xfce: External drives are not affected. I have one external USB drive that shows, and when I plug in my Android phone, it too shows on the desktop.
After some exploring, I installed the following: xfce4-mount-plugin-0.5.5-4.fc12.i686 (Oct 01 22:17:29). I then added "Mount Devices" to my panel, but... as you've probably figured out, they're already mounted.
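Since the volumes are already mounted, the mount plugin cannot help; what decides whether they appear is xfdesktop's icon configuration. A sketch of the relevant xfconf properties (names as of xfdesktop 4.6; treat them as assumptions and verify with xfconf-query -c xfce4-desktop -l):

Code:
# Desktop icon style must be "File/launcher icons" (value 2)
xfconf-query -c xfce4-desktop -p /desktop-icons/style -s 2
# Show icons for removable and mounted volumes
xfconf-query -c xfce4-desktop -p /desktop-icons/file-icons/show-removable -s true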
I want to change one of our PowerEdge 6450 servers over to Linux, specifically CentOS; however, I cannot seem to install the operating system. When I boot to setup, go through the first couple of pages, and then reach the page that asks where to install CentOS, no drives are shown in the list and I can't continue. I'm guessing there are missing drivers for the SCSI controller or something along those lines, but I'm not sure, and I can't find any answers on Google either.
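A way to confirm the missing-driver theory (hedged - the exact controller in a 6450 varies by configuration) is to check from the installer's shell whether the kernel sees the controller at all, and if a vendor driver exists, hand Anaconda a driver disk at the boot prompt:

Code:
# On another console (Ctrl+Alt+F2) during the install:
lspci | grep -iE 'scsi|raid'
# If a driver disk is needed, start the installer with:
#   linux dd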
I have 4 virtual disks on an AX4-5i. For some reason, RHEL shows 16 partitions in /proc/partitions. I get lots of I/O errors in dmesg. Some of those 16 partitions "work"; others just give (when using fdisk, for example): Unable to read /dev/sdb. On the EMC I have 4 ports, so it might somehow be related to this.
[root@db ~]# /etc/init.d/iscsi restart
Logging out of session [sid: 1, target: iqn.1992-04.com.emc:cx.sl7e2101800014.a0, portal: 192.168.10.101,3260]
Logging out of session [sid: 2, target: iqn.1992-04.com.emc:cx.sl7e2101800014.b0, portal: 192.168.10.103,3260]
Logging out of session [sid: 3, target: iqn.1992-04.com.emc:cx.sl7e2101800014.a1, portal: 192.168.10.102,3260]
Logging out of session [sid: 4, target: iqn.1992-04.com.emc:cx.sl7e2101800014.b1, portal: 192.168.10.104,3260]
Logout of [sid: 1, target: iqn.1992-04.com.emc:cx.sl7e2101800014.a0, portal: 192.168.10.101,3260]: successful
Logout of [sid: 2, target: iqn.1992-04.com.emc:cx.sl7e2101800014.b0, portal: 192.168.10.103,3260]: successful
Logout of [sid: 3, target: iqn.1992-04.com.emc:cx.sl7e2101800014.a1, portal: 192.168.10.102,3260]: successful
Logout of [sid: 4, target: iqn.1992-04.com.emc:cx.sl7e2101800014.b1, portal: 192.168.10.104,3260]: successful
Stopping iSCSI daemon:
iscsid dead but pid file exists [OK]
Starting iSCSI daemon: [OK] [OK]
Setting up iSCSI targets:
Logging in to [iface: default, target: iqn.1992-04.com.emc:cx.sl7e2101800014.a0, portal: 192.168.10.101,3260]
Logging in to [iface: default, target: iqn.1992-04.com.emc:cx.sl7e2101800014.b0, portal: 192.168.10.103,3260]
Logging in to [iface: default, target: iqn.1992-04.com.emc:cx.sl7e2101800014.a1, portal: 192.168.10.102,3260]
Logging in to [iface: default, target: iqn.1992-04.com.emc:cx.sl7e2101800014.b1, portal: 192.168.10.104,3260]
Login to [iface: default, target: iqn.1992-04.com.emc:cx.sl7e2101800014.a0, portal: 192.168.10.101,3260]: successful
Login to [iface: default, target: iqn.1992-04.com.emc:cx.sl7e2101800014.b0, portal: 192.168.10.103,3260]: successful
Login to [iface: default, target: iqn.1992-04.com.emc:cx.sl7e2101800014.a1, portal: 192.168.10.102,3260]: successful
Login to [iface: default, target: iqn.1992-04.com.emc:cx.sl7e2101800014.b1, portal: 192.168.10.104,3260]: successful
[OK]
[root@db ~]#
I've googled and read docs but have no idea what to do. Does anyone have experience with EMC/RHEL/iSCSI? I just don't understand where those 16 partitions come from (and why half of them don't work / give lots of errors). For example, Citrix Xen on a similar network configuration can see and use the same EMC through iSCSI nicely, so there shouldn't be network/auth issues (not using CHAP, btw). I get the feeling the problem is on the RHEL iSCSI configuration side.
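The arithmetic in the log above points at multipathing, though the post never says so: 4 LUNs reached through 4 portals produce 16 SCSI devices, and on an active/passive array like the AX4-5i, the paths through the non-owning storage processor return I/O errors, which matches "half of them not working". The standard RHEL answer is dm-multipath, which folds the 16 devices into 4 and routes around the passive paths:

Code:
# Install device-mapper multipath
yum install device-mapper-multipath
# Edit /etc/multipath.conf so the EMC LUNs are not blacklisted, then:
chkconfig multipathd on
service multipathd start
# Each LUN should now appear once under /dev/mapper
multipath -ll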
I recently installed Fedora 12 for use with Amahi HDA. Before installing on the hard drive, I used the live CD to test it out. While using the live CD I could see all my HDDs: my file system, my 2nd hard drive, and my RAID 0 configuration (2 x 250 GB drives), and I could browse all my files on those drives.
After installing the full version on my hard drive, my RAID drives are showing up as separate drives. I have an Asus P4P800 board using the onboard SATA RAID. I know it's fakeRAID and not true hardware RAID.
My goal is to restore the RAID in Fedora and make those drives active. However, I don't want to lose any of the data on those drives. To make sure I wasn't being an idiot, I rebooted with the live CD again and verified that I could see the RAID array.
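The live CD sees the array because it activates dmraid automatically at boot; doing the same by hand on the installed system is non-destructive as long as the metadata is intact (a sketch - verify the set name dmraid reports before mounting anything):

Code:
# List the fakeRAID sets and their member disks
sudo dmraid -r
sudo dmraid -s
# Activate them; the striped set appears under /dev/mapper
sudo dmraid -ay
ls /dev/mapper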