I have a 2 TB disk in an external SATA dock, formatted with a single ext3 (Linux) partition, which doesn't show up in the Windows 7 Computer Management -> Disk Management utility, even as a raw/blank disk. I've verified that there's nothing wrong with the disk by connecting it to my Linux machine and mounting it, and I've verified that the dock is functioning properly by connecting a different FAT32-formatted disk, which mounts flawlessly as expected.

I realize that I can't actually read the ext3 partition without additional software (e.g., Ext3IFS), but why doesn't the disk show up at all? Is there some sort of stupid anti-Linux filter built in? Is there any way to force Windows to recognize the disk, so that I can at the very least use direct block access with it?
Background: I want to clone an identical 2 TB disk onto this one. Due to my hardware layout, it's much easier to have the source disk attached to one machine and the destination disk connected to another, and do the clone over the network (the network is not a bottleneck with switched gigabit Ethernet), than it is to hook them both up to one machine.

I did this once before when both machines were running Linux, but I've since upgraded the destination machine and decided to switch back to Windows for regular desktop use. I've got Cygwin installed, and have verified that the same basic method (dd + nc) will work, but I can't do anything if Windows doesn't even consider the destination disk to exist.

I only have one eSATA port on each machine. Opening them up just to do this clone is a rather large annoyance. Also, since this is my backup disk, I'd like to eventually automate the cloning from the active disk to another one that I regularly swap with a third disk that I store off-site.
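For context, the basic dd + nc method looks roughly like this; the device names and port are placeholders, and the last three lines are a harmless file-based stand-in so the pipeline itself can be verified without touching real disks:

```shell
#!/bin/sh
# Network clone sketch (assumed device names /dev/sdX, /dev/sdY, port 9000).
# Destination machine (Cygwin side) listens and writes the stream to the raw disk:
#   nc -l -p 9000 | dd of=/dev/sdX bs=1M
# Source (Linux) machine reads the disk and streams it over:
#   dd if=/dev/sdY bs=1M | nc destination-host 9000
# File-based stand-in to sanity-check the dd pipeline locally:
dd if=/dev/urandom of=/tmp/src.img bs=1M count=4 2>/dev/null
dd if=/tmp/src.img bs=64k 2>/dev/null | dd of=/tmp/dst.img bs=64k 2>/dev/null
cmp -s /tmp/src.img /tmp/dst.img && echo "clone verified"
```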
I don't understand disk sizes in Linux. I have a 500 GB drive formatted ext4. I have run "tune2fs -m 0" on it to reduce the space reserved for root to 0%.
I'm using Ubuntu 10.04 that comes with a Disk Utility. When I run "System->Administration->Disk Utility (palimpsest)" the disk shows up as 500GB (see picture). But when I run df -h it shows up as 459GB. So, I don't understand the discrepancy.
When I run df I get the following:
Question: Why is Disk Utility showing me something different than "df"?
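Most of the 500 GB vs 459 GB gap is units: Disk Utility and the drive vendor count decimal gigabytes (10^9 bytes), while df -h counts binary gibibytes (2^30 bytes); the remaining few GB are ext4 metadata (inode tables, journal, superblocks). A quick sanity check:

```shell
# 500 "marketing" gigabytes expressed in the binary units df -h uses:
awk 'BEGIN { printf "500 GB = %.1f GiB\n", 500e9 / 2^30 }'
# -> 500 GB = 465.7 GiB
```

So df would report about 466 "GB" even on an empty, overhead-free disk; ext4's own metadata accounts for the remaining drop to 459.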
I am on F15 32-bit with GNOME 3. I keep getting "A Hard Disk Is Failing" warnings from Disk Utility, very frequently. Is this a serious issue? I ask because I know this was a bug in Palimpsest back in F13/F14. Also, how can I disable all notifications from this application?
I'm working on setting up a new NAS. I installed Karmic desktop on a 160 GB HD using the default settings.
Now I've added three 1TB drives and want to make them a RAID-5 array with LVM on that, and 1 ext4 partition. I want to use LVM so I can add drives and expand the array later.
So far I've been using Disk Utility (Palimpsest) and it's been great! A wonderful addition to Karmic! I got the RAID-5 array set up with no problems using Disk Utility. So now I have a 2000 GB RAID-5 array set up in Disk Utility, and I need to get LVM set up.
Problem is: I don't see any sign of LVM in Disk Utility. I've been googling all night and I can't find any documentation for setting up LVM in Disk Utility, just people saying that it's supported.
I tried installing the lvm2 package, rebooting, and then looking around again. No luck.
So, what am I missing? Should there be LVM options in Disk Utility? Where is it? Is there a better/easier way to configure lvm?
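As far as I can tell, Karmic's Palimpsest can display LVM volumes but has no UI for creating them, so the lvm2 command-line tools are the practical route. A sketch with made-up names (nas_vg, data); the run wrapper only prints the commands here so the sequence can be previewed safely — drop it and run as root to execute for real:

```shell
#!/bin/sh
# Preview-only wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }
run pvcreate /dev/md0                    # mark the RAID array as an LVM physical volume
run vgcreate nas_vg /dev/md0             # create a volume group on top of it
run lvcreate -l 100%FREE -n data nas_vg  # one logical volume using all the space
run mkfs.ext4 /dev/nas_vg/data           # format the LV as ext4
```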
Can someone point me toward a step-by-step guide to setting up RAID 5 using Disk Utility and 3 spare drives? I have the main OS files on an 80 GB drive, and I would like to set up the 3 drives as RAID 5. Just shooting in the dark now. A screenshot is attached. [URL]...
Has anybody ever used Disk Utility to set up software RAID? Here I am running terminal commands (I'm a terminal junkie) and I just happen to stumble across instructions that indicate "Or you can just set it up through Disk Utility."
Sure enough in disk utility, it looks like all of the configurable options are there. It makes me wonder, though... is this kind of GUI functionality something that isn't really solid? Or does it operate predictably and effectively?
When I use disk utility to expand my RAID array it creates a partition on my 1.5TB drive which it would like to add to the RAID 5.
However, none of the existing drives in the RAID are partitioned, so what I think has happened is that the partitioning overhead has made the new component about 2 million bytes smaller than the others, and thus it can't be added.
How can I specify the exact bytes for my hard drive partition so that I can add this to the array?
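parted can place partitions at exact sector boundaries, so you can read an existing member's size in 512-byte sectors with blockdev and size the new partition to match. The device names and the sector count below are made-up examples:

```shell
#!/bin/sh
# Read an existing RAID member's size in 512-byte sectors, e.g.:
#   sudo blockdev --getsz /dev/sdb1      # suppose it prints 2930272002
# Create a matching partition on the new disk, starting at sector 2048:
#   sudo parted /dev/sdd unit s mkpart primary 2048s $((2048 + 2930272002 - 1))s
# The end sector that makes the partition exactly 2930272002 sectors long:
echo $((2048 + 2930272002 - 1))
```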
I have an annoying problem with the GNOME Disk Utility. Whenever I want to mount a filesystem, it is mounted as a removable disk under /media. For instance, if I want to mount a RAID array /dev/md0 at the mountpoint /music, it gets mounted as /media/music. That's not what I want; I want it mounted at the desired mountpoint, directly under the root.
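One common fix, as far as I know, is to give the filesystem a fixed /etc/fstab entry; with an entry present, the mount goes to the listed mountpoint instead of an auto-created one under /media. A sketch, assuming an ext4 filesystem on /dev/md0:

```
# /etc/fstab -- mount the RAID array at /music instead of /media/music
# (create the mountpoint first: sudo mkdir /music)
/dev/md0  /music  ext4  defaults  0  2
```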
I found it impossible to format a partition during Ubuntu 10.04 installation. My storage configuration is as follows: 1 TB (500 GB x 2) AHCI RAID 0 (so-called "fake RAID"), covering the 4 partitions below.
Code:
/dev/mapper/pdc_dgbbagea1   9621688  5872752   3260168  65% /
/dev/mapper/pdc_dgbbagea4 945587172 95673304 802259056  11% /home
/dev/mapper/pdc_dgbbagea3   9698380  1363364   7846240  15% /opt
Partition 2 is the swap partition, and the root partition was originally ext3.
Since there is not enough space for upgrading, I am trying to format the root partition and install a completely fresh OS. After booting the system from the Live or Alternate disk, I try to switch the root partition from ext3 to ext4 and format it. However, the formatting process always fails after a couple of tries. Even when I quit the installation and use the "Disk Utility" tool to check and adjust the partition information, it reports that the device is busy.
I have an HDD with Windows and three NTFS partitions. I installed Ubuntu 10.10 on a new partition and left the old partitions on the disk, because they contain a lot of my personal data. While looking for how to mount partitions on startup, I happened upon Palimpsest Disk Utility, selected the "bootable" checkbox on sda2, and applied it. I then saw that it was wrong and unchecked it, but after this the NTFS filesystem on the sda2 partition was damaged. Windows shows the partition as RAW.
My system locked up while copying files last night. My RAID array will not start. I did verify my UUIDs. (Lesson learned.) I do not understand a few things:
1. Why do different drives show "active sync" on different drives?
2. Why does "Disk Utility" tell me the RAID is not running, yet when I try to assemble the RAID, mdadm returns: mdadm: device /dev/md0 already active - cannot assemble it
When I try to start the RAID using "Disk Utility":
Code:
Error assembling array: mdadm exited with exit code 1:
mdadm: cannot open device /dev/sdd1: Device or resource busy
mdadm: /dev/sdd1 has no superblock - assembly aborted

So, I examine sdd1:

Code:
sudo mdadm -E /dev/sdd1
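The "already active" and "Device or resource busy" messages usually mean the array was auto-assembled at boot with only some of its members. A hedged sketch of what tends to work in that situation (member names are assumptions; check /proc/mdstat for yours):

```shell
# Stop the half-started array, then reassemble it explicitly:
#   sudo mdadm --stop /dev/md0
#   sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
# If a member is still reported busy, see what currently holds it:
#   cat /proc/mdstat
#   sudo lsof /dev/sdd1
```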
Palimpsest Disk Utility was working fine and able to read the SMART status of my hard drives until I rebooted my machine. After rebooting, Palimpsest Disk Utility reports that SMART is not available. Is there any way to get it working again?
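As a cross-check, smartctl from the smartmontools package queries the drive directly, which tells you whether the drive itself still answers SMART or whether it is udisks/Palimpsest that is confused (the device name here is an example):

```shell
# Identity and overall health, straight from the drive:
#   sudo smartctl -i -H /dev/sda
# Some controllers leave SMART disabled after a reboot; it can be re-enabled:
#   sudo smartctl -s on /dev/sda
```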
I have what I think are hybrid GUID/MBR disks, which I created by splitting already-MBR/NTFS disks via GParted, leaving unallocated space, then creating HFS partitions from the unallocated space within OS X. I want to delete those HFS partitions and re-extend the NTFS over them, but I can't, because GParted sees the disks as somehow unchangeable; I assume OS X has done something to them. I now can't extend or do anything to the disks via either the OS X Disk Utility or GParted. What can I do?
I run Ubuntu Netbook 10.04 on my EeePC 1005HA. I'm going to get a SSD for it eventually, but I can't afford one right now so it's running from a 200GB hard disk I scavenged off a dead laptop.
I went into power management and set the option that says "spin down hard drives whenever possible", but this accomplished a whole lot of nothing - whenever the computer is on, the drive's spinning. I ran hdparm -y and the drive clicked off, then promptly spun back up after a few seconds. Iotop shows occasional tiny bursts of activity from "jbd2/sda1-8", which I don't really know how to interpret, but I don't have anything weird installed so I'm assuming this is normal system operation.
Now, what I need is some sort of application, utility, command - anything - that forces the computer to keep all filesystem changes in RAM with the drive shut down; every five/ten minutes or so (this would hopefully be configurable) it spins up the drive, dumps the filesystem changes to it, and spins it down again.
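Much of this behaviour already exists as kernel writeback tunables combined with the ext3/ext4 journal commit interval; the laptop-mode-tools package bundles exactly this. A sketch with example values (roughly 10 minutes):

```shell
# Let dirty data sit in RAM for ~10 minutes before writeback (values in centiseconds):
#   sudo sysctl vm.dirty_writeback_centisecs=60000
#   sudo sysctl vm.dirty_expire_centisecs=60000
# Stretch the ext4 journal commit interval to match (seconds):
#   sudo mount -o remount,commit=600 /
# Alternatively, install laptop-mode-tools, which manages these together with hdparm spindown.
```

The jbd2 activity mentioned above is the ext4 journal flushing; lengthening the commit interval is what lets the drive stay spun down between flushes.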
I realize this presents data loss risks related to crashing and poweroffs when the cache hasn't been dumped to disk, but I'm willing to risk it as Linux never really crashes at all, and since it's a netbook power failures won't cause unexpected shutdowns.
I had installed Ubuntu 10.04 in dual-boot mode with Windows 7; I then had to migrate from Windows 7 to XP and back to Windows 7 again. Each time after installation I updated GRUB. The last time, while updating GRUB, the PC went down due to a power failure. I updated GRUB again after the supply was restored, and it was successful.
On login I am facing a weird problem: my Windows drives are not appearing in Ubuntu. To access them I have to plug in a pen drive and access that first from an application; otherwise the drives don't even appear in the Places drop-down list. If I don't plug in the pen drive, they don't appear in applications at all.
The other problem is that Disk Utility shows unallocated, allocated and free space above the physical capacity of the hard disk. My hard disk is 160 GB, yet Disk Utility shows 18446744 TB of unallocated space. The default partitions I made are 26 GB, 14 GB, and others of 40 GB each.
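That 18446744 TB figure is a clue in itself: it is 2^64 bytes, i.e. a size field that has wrapped around to the maximum 64-bit value, which points at a corrupted partition table (plausibly from the interrupted GRUB update) rather than a real size:

```shell
# 2^64 bytes expressed in decimal terabytes:
awk 'BEGIN { printf "%.0f TB\n", 2^64 / 1e12 }'
# -> 18446744 TB
```

So the number is almost certainly an underflowed/garbage value, and repairing the partition table (e.g. with testdisk) is the direction to investigate.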
Gnome-disk-utility doesn't show the filesystem type, mount point, filesystem label, size, etc. of my / filesystem.
I am running Debian Squeeze using lvm2. I have two HDDs, each with one primary partition, and those partitions are used as PVs. I have two VGs, and each VG has its own PV.
There are some LVs, and all of them except the LV holding the swap space are formatted with XFS. Now gnome-disk-utility shows everything about my /home LV, another LV containing a whole Ubuntu installation, etc.; only the / LV (and the swap LV, but I don't know what it is supposed to show there) is missing nearly all information.
Otherwise the system is running perfectly well, and the Debian / LV is shown normally in Ubuntu's disk-utility, as are all other LVs.
DebianCopy is a copy of my Debian installation (different fs label and UUID). DebianII (again with a different UUID and label) is a copy too, but there I tried out newer (testing) versions of udisks/lvm2/udev. Right after the upgrade it showed everything as it should, with the additional advantage that the newer udisks version showed my VGs; after a reboot, however, it showed the same behaviour as before, or even worse, because the information about other LVs was missing too.
In the end I even modified the fstab. Originally it contained the /dev/mapper/vgbay... entries and I replaced them with LABEL=... and finally with UUID=..., but it didn't make any difference either.
Before doing a Clonezilla project I opened Disk Utility, which shows the first partition labeled as sdb2 and the second partition as sdb1. Is this normal? I will add that this is a Windows drive, but I wanted to back it up before installing Debian on it. How will the disk partition labeling affect partition naming in Debian?
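For what it's worth, partition numbers come from slot order in the partition table, not from physical order on disk, so sdb2 sitting before sdb1 is legal and not unusual on Windows disks (e.g. when a partition was added later). Debian derives its device names from the same table slots, so the numbering will carry over unchanged. You can confirm the layout with sfdisk (device name assumed):

```shell
# List partitions with their start sectors; numbering follows table slots, not disk order:
#   sudo sfdisk -l /dev/sdb
```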