Fedora :: Can't Access Any Of The Other Hard Drives From The Other Drives?
Jul 5, 2011
I have Fedora 14 installed on my main internal drive, plus one Fedora 14 and one Fedora 15 install on two separate USB drives. When I boot into any of these drives, I can't access any of the other hard drives; all I can see are the boot partitions. Is there any way of mounting the other partitions so I can access the information? Edit: I guess even an explanation of why I can't view them would be good too.
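In a case like this the other installs' partitions can usually be mounted by hand once you know their device names. A minimal sketch, where /dev/sdb1 and /mnt/other are assumptions; substitute whatever blkid actually reports:

```shell
# List every partition the kernel can see, with filesystem type and UUID
sudo blkid

# Create a mount point and mount one of the other installs' partitions
sudo mkdir -p /mnt/other
sudo mount /dev/sdb1 /mnt/other

# Browse, then unmount when done
ls /mnt/other
sudo umount /mnt/other
```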
I installed Linux on my Y drive, and all went well until I tried to boot into XP again. I can't access or install an operating system on my other three hard drives, C, X, and Z. I think that during the install my hard drives were changed to something other than NTFS, but Linux won't access them either.
When I use my Windows XP or Windows 7 disc, it says the drive has 0 MB free and it can't install until I delete the partition and reformat. I obviously don't want to do this, because I don't want to lose all of my data. When I go to Places > My Computer it lists my CD drive, Filesystem, and the Y drive; it doesn't show my other three hard drives. Under Palimpsest Disk Utility I can see my other three drives, but I can't access the data on them yet.
I am building a home server that will host a multitude of files, from MP3s to ebooks to FEA software and files. I don't know if RAID is the right thing for me. This server will have all the files that I have accumulated over the years, and if the drive fails then I will be S.O.L. I have seen discussions where someone has a RAID 1 setup but doesn't keep the drives internal to the case; they bought 2 separate external hard drives with eSATA to minimize the risk of an electrical failure taking out both drives (I guess this is a good idea). I have also read about having one drive and then using a second to rsync data to every week. I planned on purchasing 2 enterprise hard drives of 500 GB to 1 TB, but I don't have any experience with how I should handle my data.
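The rsync approach mentioned above can be sketched roughly like this; the source and destination paths are assumptions, with the second drive assumed mounted at /mnt/backup:

```shell
# Mirror the file store onto the second drive.
# -a preserves permissions/timestamps; --delete keeps the copy exact
# (files removed from the source also disappear from the mirror)
rsync -a --delete /srv/files/ /mnt/backup/files/
```

Run weekly from cron (e.g. a line in /etc/crontab), or by hand. Note that --delete means an accidental deletion propagates to the backup on the next run, so some people keep dated snapshots instead.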
I have booted up from openSUSE 11.3 on a USB stick. When I go into Dolphin (the file explorer) and try to open a hard drive, I get the error:
Code: An error occurred while accessing 'MyHardDrive', the system responded: org.freedesktop.Hal.Device.PermissionDeniedByPolicy: org.freedesktop.hal.storage.mount-fixed auth_admin_keep_always <-- (action,result)
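The PermissionDeniedByPolicy error means the live-USB user isn't authorized by HAL/PolicyKit to mount fixed (internal) disks. A common workaround is to mount as root instead; a sketch assuming the partition is /dev/sda1:

```shell
# Mount the internal drive manually with root privileges,
# bypassing the desktop's HAL mount policy
sudo mkdir -p /mnt/MyHardDrive
sudo mount /dev/sda1 /mnt/MyHardDrive
```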
Last night I installed Ubuntu on my PC. While installing, there was a window reporting a GRUB install error; it said "GRUB can't be installed here" and gave me the choice to select where to install it. The problem is that I have 2 hard drives, one of 80 GB and the other of 160 GB. On the 80 GB drive I already have Windows XP installed, and on the 160 GB drive I was installing Ubuntu. I chose to install GRUB on the 160 GB drive, and now when I turn on my PC I can only access Ubuntu; it's as if my drive with Windows XP just disappeared.
I'm working on creating a bootable Linux CD to distribute a sandbox environment to customers that will work on multiple PCs. One requirement of this environment is that the user must not have any access to the underlying hard drives in the computer, to prevent any accidental and/or malicious damage. I can prevent the disks from automounting with a few custom udev rules, but is there any way to prevent/block the user from manually mounting the hard drives after boot-up?
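One approach along the same udev lines is to drop all permissions on the disk device nodes themselves, so an unprivileged user can't even read them, let alone mount them (root always can, so the live user account must not have sudo). A sketch; the rule file name is hypothetical, and if the live medium itself shows up as an sd* device you would need to exclude it from the match:

```shell
# Hypothetical rule file: zero out permissions on internal disk nodes
sudo tee /etc/udev/rules.d/99-no-disks.rules <<'EOF'
KERNEL=="sd[a-z]*", SUBSYSTEM=="block", MODE="0000"
EOF

# Reload udev rules and re-trigger events so the change takes effect
sudo udevadm control --reload-rules
sudo udevadm trigger
```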
I suspect this is not new, but I just can't find where it was treated; maybe someone can give me a good lead. I just want to prevent certain users from accessing CD/DVD drives and all external drives. They should be able to mount their home directories and move around within the OS, but they shouldn't be able to move data away from the PC. Any clues?
I have a fresh installation of Fedora 11 and I am having a hard time figuring out how to automount my storage drives. Each time I log in and try to access my various storage drives, GNOME makes me authenticate as root before mounting them. /etc/fstab lists only logical volumes but not my storage drives. What can I do to make sure these automount when I log in?
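Adding the storage drives to /etc/fstab is what makes them mount at boot without an authentication prompt. A sketch, where the UUID, mount point, and ext4 filesystem are placeholders for whatever blkid reports on your system:

```shell
# Find each drive's UUID and filesystem type
sudo blkid

# Append an entry for the drive (UUID and paths here are placeholders)
echo 'UUID=xxxx-xxxx  /mnt/storage  ext4  defaults  0 2' | sudo tee -a /etc/fstab

# Mount everything in fstab now, to verify the entry works before rebooting
sudo mount -a
```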
I have two identical 73 GB Ultra320 SCSI drives; a Fedora web server is installed on one drive with all the web files etc. I wish to make an exact clone of that drive that will boot and run everything as the current drive does now. Is there a program I could download or purchase that would boot and make such an exact clone?
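For a bootable exact clone of identical drives, plain dd run from a live CD is the usual free option. A sketch assuming the source disk is /dev/sda and the target is /dev/sdb; double-check the names first, because dd will happily overwrite the wrong disk:

```shell
# Copy the whole disk: partition table, bootloader, and all partitions
sudo dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync

# Flush buffers before removing either drive
sync
```

Clonezilla is a common bootable alternative that does the same job with a menu-driven interface.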
I am trying to figure out how to get the UUID for some of my external hard drives. The internet revealed a couple of promising leads; this is what I have tried so far:
ls -l /dev/disk/by-uuid -> didn't list the hard discs
blkid -> didn't list the hard discs
lsusb -v -> listed the hard disc but no UUID
A normally formatted USB key is listed with a UUID. The external hard discs are fully encrypted by TrueCrypt (RealCrypt). I have been reading not so great things about that itself, but for now I don't have a promising alternative that I can use with Windows as well. Google searches don't seem to cast any new light on this for me. I'd be open to suggestions if there's a better way to get a definite ID for a hard drive; I just need to be able to mount it with RealCrypt.
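This behavior makes sense: a filesystem UUID lives inside the filesystem, and a fully TrueCrypt-encrypted disk exposes no readable filesystem to get one from. For a stable identifier that doesn't depend on the filesystem at all, the persistent names under /dev/disk/by-id may do the job:

```shell
# These symlinks are built from each drive's model and serial number,
# so they stay stable across reboots and port changes, encrypted or not
ls -l /dev/disk/by-id/
```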
I ran smartctl and it says "unrecognized" hard drive. It recognized the hard drive the first time I ran the test, but now it's unrecognized. How come? I don't know how or why this happens. How can I get rid of this problem and the annoying icon?
I tried to run Windows and started getting errors. This is strange: I've been running Linux the whole day, and I last used the Windows hard drive two days ago. Now it's screwing up. Is Windows screwing up the hard drive? The HDD has too many bad sectors, yet I didn't touch it and didn't do much on it. I do not understand!
I have a CentOS 5.5 system with 2x 250 GB SATA physical drives, sda and sdb. Each drive has a Linux RAID boot partition and a Linux RAID LVM partition. Both pairs of partitions are set up with RAID 1 mirroring. I want to add more data capacity, and I propose to add a second pair of physical drives, this time 1.5 TB drives, presumably sdc and sdd. I assume I can just plug in the new hardware, reboot the system, and set up the new partitions, RAID arrays and LVMs on the live system. My first question:
1) Is there any danger that adding these drives to arbitrary SATA ports on the motherboard will cause the re-enumeration of the "sdX" series in such a way that the system will get confused about where to find the existing RAID components and/or the boot or root file systems? It would also help if anyone could point me to a tutorial on how the enumeration of the "sdX" sequence works and how the system finds the RAID arrays and root file system at boot time.
2) I intend to use the majority of the new raid array as an LVM "Data Volume" to isolate "data" from "system" files for backup and maintenance purposes. Is there any merit in creating "alternate" boot partitions and "alternate" root file-systems on the new drives so that the system can be backed up there periodically? The intent here is to boot from the newer partition in the event of a corruption or other failure of the current boot or root file-system. If this is a good idea - how would the system know where to find the root file-system if the original one gets corrupted. i.e. At boot time - how does the system know what root file-system to use and where to find it?
3) If I create new LVM/RAID partitions on the new drives, should the new LVM be part of the same volume group, or would it be better to make it a separate volume group? What are the issues to consider in making that decision?
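For question 3, the mechanics of either choice look roughly like this; the md device name and volume-group names are assumptions:

```shell
# Option A: grow the existing volume group with the new RAID array
sudo pvcreate /dev/md2
sudo vgextend VolGroup00 /dev/md2

# Option B: give the new array its own volume group for data
sudo pvcreate /dev/md2
sudo vgcreate vg_data /dev/md2
sudo lvcreate -n lv_data -l 100%FREE vg_data
```

A separate volume group keeps the data volumes independent of the system drives, which fits the stated goal of isolating "data" from "system"; extending the existing group is simpler but ties the two sets of disks together.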
I installed Fedora 10 a short time after it came out. Now I am having some problems unmounting these drives on restart or shutdown: it hangs at the 'unmounting file system' stage. I've looked into this matter and discovered that those drives are automatically mounted and shown in the GNOME file browser, although, as /etc/fstab indicates, they are not mounted by it. I must have done something to have all the hard drives shown in the file browser, and now Fedora seems to be unable to unmount them.
Code:
#
# /etc/fstab
# Created by anaconda on Mon Sep 7 20:25:11 2009
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
First-time Linux user, trying to do a fresh full install from the Fedora 12 i686 DVD. I have two identical SATA drives, which Fedora fails to identify. I have reset the BIOS and changed settings in the BIOS, but it is still not finding them. I have an ASUS AV8-X motherboard with an Athlon dual-core processor.
The Fedora installer won't display my two SATA hard drives. I've tried both the x86_64 live CD and DVD. On the live CD, fdisk -l displayed nothing. However, if I click "Specialized Storage Devices", a device shows up as "BIOS RAID set (stripe)" with a capacity equal to both my hard drives combined. I don't even have RAID enabled in the BIOS; it is set to AHCI. Other OS installers display the hard drives correctly.
Specs:
2x 640 GB Western Digital Caviar Black
ASUS M4A78T-E 790GX motherboard
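A "BIOS RAID set" appearing with RAID disabled in the BIOS usually means stale fakeraid metadata from an earlier configuration is still written on the disks, and anaconda is honoring it. A sketch of the usual cleanup, assuming the disks are /dev/sda and /dev/sdb; erasing the metadata should not touch partition data, but back up first anyway:

```shell
# See what fakeraid metadata the installer is picking up
sudo dmraid -r

# Erase the stale metadata from each disk
sudo dmraid -rE /dev/sda
sudo dmraid -rE /dev/sdb
```

Alternatively, booting the installer with the nodmraid option tells anaconda to ignore BIOS RAID metadata entirely.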
I am currently trying to configure a set of hard drives as a RAID configuration. My system is running with Red Hat Enterprise Linux Client release 5.1 as the base OS. I am booting from CD. I am trying to image a set of drives that have not been imaged before. When the GUI dialog window for disk setup is displayed, it shows a default disk layout including a LVM slice. In the disk layout is a /boot partition already. It is not what I would like so I edit it to be the size for my system and make it the primary partition. I also select it to be a software RAID. I then add three more partitions for my drive 'A' all of type software RAID and NOT primary partitions.
At this point my drives have the correct number of partitions, except for showing the LVM slice. I select 'RAID' again, followed by 'Clone a drive to create a RAID device ...', followed by 'OK'. I then get a dialog to select the source and target. I select my drive 'A' to be the source and 'B' to be the target, followed by 'OK'. An error dialog is received stating that all the partitions are not of type software RAID. The disk partitions are all of type software RAID except the extended LVM slice. I cannot get past this point, and I am following a procedure written some time ago by a person who is not available.
I recently installed Fedora 11 x86_64 (dual boot with XP) and am having difficulty finding two of my three hard drives to mount them. This is my setup: an 80 GB hard drive (boot drive) with two partitions, one for XP (NTFS) and one for F11 (ext4), plus 2x 250 GB hard drives, one formatted with NTFS and the other not yet formatted (my plan is to use ext4).
All of my drives are SATA, on the same nVidia controller. After the install, I can see only the 80 GB hard drive (both partitions). What do I need to do to find the other two drives? During the install, it called the partitions /dev/sda0, sda1, sda2 and sda3, but I no longer see these drives. If I knew where the drives were I could mount them, but my system just isn't seeing the drives.
This is the output of df:
Code:
Filesystem            1K-blocks   Used   Available  Use%  Mounted on
/dev/mapper/vg_user-lv_root
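df only lists mounted filesystems, so the first step is to check whether the kernel sees the other two disks at all, and then mount them manually; the device names below are assumptions:

```shell
# Every block device the kernel knows about
cat /proc/partitions
sudo fdisk -l

# If the NTFS drive shows up (say as /dev/sdb1), mount it via ntfs-3g
sudo mkdir -p /mnt/ntfs
sudo mount -t ntfs-3g /dev/sdb1 /mnt/ntfs
```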
My sister-in-law (SIL) has a Western Digital ShareSpace 4 TB network-attached storage device that was set up in a RAID 5 configuration (four 1 TB hard drives). To make a long story short: after a blinking red light on the device and a call to Tier 2 technical support, one of the guys mentioned using Fedora as a way to retrieve the data from the hard drives (it seems the hard drives are good but the actual NAS device crapped out).
I will not have access to the hard drives till this weekend and of course Tier 2 technical support is closed on the weekends. I have almost no Linux knowledge but can follow instructions pretty darn well. I am looking to install Fedora 14 Desktop Edition 64 bit on my desktop sometime tonight or tomorrow. Once I have Fedora installed, how would I mount the hard drives and have Fedora read the RAID 5 array?
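These NAS boxes generally use Linux software RAID (md) internally, so once all four drives are attached to the Fedora machine, reassembly is typically a matter of the following; this is a sketch, and the resulting md device name and mount point will vary:

```shell
# Scan all attached drives for RAID superblocks and assemble the array
sudo mdadm --assemble --scan

# See what was found
cat /proc/mdstat

# Mount the assembled array read-only to be safe while recovering data
sudo mkdir -p /mnt/nas
sudo mount -o ro /dev/md0 /mnt/nas
```

Some WD NAS firmwares layer LVM on top of the RAID, in which case a `sudo vgscan` and `sudo vgchange -ay` step is needed before mounting, and the mountable device will be under /dev/mapper instead of /dev/md0.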
I recently finished installing Fedora 9 on a ProLiant ML330 G6 server, but I configured the SATA hard drives to be viewed as four separate hard drives. I have been asked to merge the drives so they are seen as one 800 GB hard drive. My biggest fear concerns Samba: we had set up Samba to share folders from Fedora 9, giving specific users access to specific files saved on the ProLiant server; will those settings be lost? Also, could you call that a file server, or do you have to enter any more settings? And if anyone could point me to a tutorial on Logical Volume Management and RAID specifically for Fedora 9, that would help.
I'm trying to dual boot Vista 64 (already installed) and Fedora 10 x86_64. I am running a Dell XPS 410 with 2 SATA hard drives in RAID 0 (ICH8DH). I started the process by shrinking my C drive on disk0, leaving 64.45 GB of unallocated space. Next I rebooted into the Fedora install DVD, and when I get to the blue graphical install screen I get a message asking if my drive is GPT, and saying that if it is, it may be corrupted. I click No, and it comes up with a message telling me I have to initialize my drive if I want to use it (I have to click No twice), and that if I do it I will lose all my data.
I can click No and keep proceeding through the install until I get to the partition setup screen, but no hard drives or partitions are shown. I've tried googling the problem and get bits and pieces of information scattered in different places, but nothing conclusive for my problem, I think. As far as my background goes, I'm new to the Linux community, but give me a thorough guide and I'll do fine (I hope). I've been using Fedora on a separate laptop for 2 days now.
I currently have a clean installation of Windows on the primary drive (13 GB) and I want to install Fedora 11 on the other (7 GB). Should I install GRUB on the Windows hard drive, or will Windows hate on me for that? Earlier I tried installing GRUB on the slave with the Fedora system, but I had trouble configuring GRUB in a way that it would understand, and I ended up messing up the MBRs of both hard drives.
I have been battling with FC10 and software RAID for a while now, and hopefully I will have a fully working system soon. Basically, I tried using the F10 live CD to set up software RAID 1 between 2 hard drives for redundancy (I know it's not hardware RAID, but the budget is tight) with the following table:
I set these up by using the RAID button in the partition section of the install, except swap, which I set up using the New Partition button, creating 1 swap partition on each hard drive that didn't take part in RAID. Almost every time I tried to do this install, it halted due to an error with one of the RAID partitions and exited the installer. I actually managed it once, in about...ooo 10-15 tries, but I broke it. After getting very frustrated I decided to build it using just 3 partitions
and left the rest untouched. This worked fine: after completing the install and setting up GRUB, I rebooted into the install. I then installed GParted and cut the drives up further to finish my table on both hard drives. I then used mdadm --create ... etc. to create my RAID partitions. So I now have
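For reference, the manual mdadm step described above looks roughly like this for one mirrored pair; the partition and array names are assumptions:

```shell
# Build a RAID-1 array from one partition on each disk
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# Record the array configuration so it is assembled automatically at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
```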
I would like to install Fedora 11 on an ASUS P5L-VM 1394 motherboard with a 3 GHz Pentium 4 CPU. This is an LGA775-socket mobo with an Intel 945G chipset. Two SATA hard drives are plugged into SATA ports. An IDE DVD drive is plugged into the IDE/ATA port. Using the 32-bit Fedora 11 installation disk, I have seen two cases:
1) No hard drive is recognized. When I get to the disk configuration screen, there are no options to choose from.
2) By monkeying around with the BIOS settings or switching the SATA ports the disks are connected to, I can get an alternative mode in which no drivers are found for the DVD drive either.
Currently, a version of Ubuntu is installed. UPDATE: The board was purchased in a P3-PH4C barebones, which for unknown reasons requires a different BIOS image than the regular P5L-VM 1394. Updating to the most recent BIOS does not resolve the problem. Once the installation procedure fails to recognize the hard drives, going into a shell and examining the boot-up log shows that the kernel recognized both hard drives. So it comes down to why the installation procedure is not recognizing them.
I am writing because yesterday my fourth hard drive within 2 years crashed. Is that normal? One crashed 2 years ago, one in winter 2009, and 2 just within the last 2 weeks. What can be the reason for so many crashes? I heard maybe the power supply? How can I find out if that's broken? The voltages, at least in the BIOS, seem normal. The SATA controller? How do I know if it's broken? Can I just buy one PCI-E card with SATA adapters? Is it the motherboard? There's not much more in my computer... As well, it's weird that my good old 160 GB drive never crashed, only the bigger ones. Here is a typical error from mount and dmesg:
Code: mount: wrong fs type, bad option, bad superblock on /dev/sdb2, missing codepage or helper program, or other error
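To help narrow down whether the drives, the cabling, or the power supply is at fault, the SMART data from each drive is the first place to look; a sketch assuming /dev/sdb is one of the failing drives:

```shell
# Overall pass/fail verdict from the drive's own self-assessment
sudo smartctl -H /dev/sdb

# Full attribute table: a rising Reallocated_Sector_Ct points at the
# drive itself, while UDMA_CRC_Error_Count points at cabling or the
# controller rather than the drive
sudo smartctl -a /dev/sdb
```

If several different drives fail the same way on the same machine, CRC errors across all of them would support the cabling/controller/PSU theory rather than bad luck with drives.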
I'm breaking into the OS-drive side of RAID-1 now. I have my server set up with a pair of 80 GB drives mirrored (RAID-1) and have been testing the fail-over and rebuild process. It works great when physically failing out either drive. My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: fail one of the disks out of the RAID-1, then image it to a file saved on an external disk using the dd command (if memory serves, something like "sudo dd if=/dev/sda of=backupfilename.img"), then re-add the failed disk back into the array. In the event I needed to roll back to one of those snapshots, I would just use dd to dump the image back onto an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple of notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on hand, and using something like a tape drive is also not an option.
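A couple of refinements to the dd idea above, as a sketch where the device and file names are assumptions: compressing on the fly keeps the image much smaller than the raw disk, and conv=noerror,sync keeps the copy going past any unreadable sectors:

```shell
# Image the failed-out mirror member to a compressed file on the external disk
sudo dd if=/dev/sda bs=1M conv=noerror,sync | gzip > /mnt/external/os-backup.img.gz

# Restore the snapshot onto a replacement disk later
gunzip -c /mnt/external/os-backup.img.gz | sudo dd of=/dev/sda bs=1M
```

One caveat with this plan: while the failed-out disk is being imaged, the array is running degraded with no redundancy, so the window should be kept as short as practical.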
I have a SATA drive that worked fine. Then I installed two more hard drives into my system. When these hard drives are installed, if I try to access the SATA drive in Linux, it will start lightly clicking and then the drive will become unavailable. If I power on the machine without the other two hard drives then it works fine. What could be causing this to happen? I don't think it's heat because the two hard drives are far away from the SATA drive.