After installing SUSE 11.4, one of my RAID 0 arrays (Intel ICH9R) does not mount and is not recognized as being formatted NTFS, while the other RAID 0 array (ICH9R) is recognized without problems. Why?
I'm having some problems with a hosted openSUSE 11.2 server. It was running fine until I did a "zypper up" to apply patches. This included a kernel update.
On reboot the system fails to mount the / partition, giving the following error:
Unrecognized mount option "defaults.noatime" or missing value. mount: wrong fs type, bad option, bad superblock on /dev/md2.
Through an Ubuntu rescue disk (this is what Hetzner provides) the disk can be mounted without problems.
(I installed a fresh openSUSE 11.2 with a similar configuration and got the same result after the update.)
The server is a hosted installation from Hetzner in Germany with just the basics for LAMP setup.
The disk setup, using software RAID 1, is as follows:
swap  /dev/md0 (/dev/sda1 /dev/sdb1)
/boot /dev/md1 (/dev/sda2 /dev/sdb2)
/     /dev/md2 (/dev/sda3 /dev/sdb3)
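The option string in the error looks suspicious: fstab mount options must be comma-separated, so "defaults.noatime" (with a period) would produce exactly this failure. A sketch of the fix from the rescue system, demonstrated here on an example copy of the line rather than the real /etc/fstab:

```shell
# Mount options in /etc/fstab are comma-separated; "defaults.noatime"
# (period) should be "defaults,noatime" (comma). Fix it with sed,
# shown here against an example copy of the fstab line:
printf '/dev/md2 / ext3 defaults.noatime 1 1\n' > /tmp/fstab.example
sed -i 's/defaults\.noatime/defaults,noatime/' /tmp/fstab.example
cat /tmp/fstab.example
```

After mounting the real root filesystem from the rescue system, the same sed against its /etc/fstab (and a reboot) should get past this error.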
I've been trying a new installation of openSUSE 11.3 on an Intel P55 system configured with RAID 5: one non-RAID disk as the boot device and a RAID 5 array as the data device. However, the installation always hangs at the "Searching for Linux partitions" step, and I could see that vgscan was the last process to run. Is this a known bug? Does anyone else have the same issue?
I need to copy data from a single HD which used to be part of a Linux RAID 1. I've googled around, but can't find any clue about how to mount partitions from this single HD.
Background: The HD comes from a Linux-based NAS box, a Synology DS207+. The NAS uses ext3 as its filesystem. Both NAS disks are fine, but the rest of the NAS hardware is dead and not worth repairing or replacing.
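Since a RAID 1 member holds a complete copy of the filesystem, one common approach is to assemble the lone disk as a degraded array and mount it read-only. A sketch, assuming the disk shows up as /dev/sdb and the data partition is /dev/sdb3 (Synology usually puts its system partitions first, but these device names are assumptions, not taken from this box):

```shell
# Assemble the single RAID 1 member as a degraded array
# (--run forces assembly even though the mirror partner is missing):
mdadm --assemble --run /dev/md0 /dev/sdb3
# Mount it read-only so the data can be copied off safely:
mount -o ro -t ext3 /dev/md0 /mnt/nas
```

Check `fdisk -l /dev/sdb` first to see which partition actually holds the data volume.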
I was looking to do a fresh install of 11.2 and reuse my home partition from 11.1. Running the GNOME Live version, I wanted to see how SUSE would configure my computer. It recognized everything fine, except it didn't show my current home partition, which is ext3. Is that because openSUSE 11.2 has switched to ext4 as the default for root and home? I was hoping to use my old home with 11.2. Is there any way to make the switch without losing my settings? During the live install the partitioner didn't use my current home partition; it was going to make a new one.
So I opened the partitioner in YaST to see why it didn't use my current home, and it shows no mount point for my ext3 home partition. Would changing the mount point on my ext3 partition to /home make the 11.2 installer recognize it as my home? Or will I have to copy my current home elsewhere, delete the old home, format the unallocated space as ext4, and copy the old home onto the new ext4 partition for the installer to recognize it? In short: my current home is ext3, and the 11.2 installer wants to make a new home on ext4. How do I keep my current home settings? I haven't installed yet, just tried a live run.
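For what it's worth, setting the mount point is exactly the idea: ext3 is fully supported under 11.2, so it should be enough to tell the installer to mount the existing partition at /home without formatting it. The resulting /etc/fstab line would look something like this (the device name here is a made-up example, not taken from this system):

```
/dev/disk/by-id/ata-EXAMPLE-DISK-part2  /home  ext3  defaults  1 2
```

The critical part in the installer's partitioner is leaving the "format" option unchecked while assigning the /home mount point.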
I have 2 identical disks originally configured as a pair for a server. Each disk has 2 partitions, /dev/sdb1 and /dev/sdb2. The sdb1 partitions were configured as a RAID 1 mirror; the sdb2 partitions were non-RAID and used as extra miscellaneous space. Further, the RAID setup is also encrypted using dm-crypt/LUKS. Now I want to redeploy each of the disks for new purposes. One of the disks I want to deploy exactly as before (keeping the partitions and content), but without it being part of a RAID array.
I've successfully deployed this disk into a new system, and I am mounting the /dev/sdb1 partition as /dev/md0 because the partition is set to RAID autodetect. (Actually I am using cryptsetup and mounting via the device mapper.) Can I get rid of the autodetect setting on this partition without losing the data or breaking the encryption? I just want to mount the partition as a standalone encrypted disk. Is it as simple as doing cryptsetup luksOpen /dev/sdb1 and then mounting it via the mapper? Or do I need to change the partition in some way? Or do I simply continue to operate it as a 'broken' RAID array?
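One possible approach, offered as a sketch rather than a definitive answer: the 0xfd partition type is what triggers RAID autodetection, so changing the type to plain Linux (0x83) with fdisk touches only the partition table, not the data or the LUKS header. After that the volume should open directly:

```shell
# In fdisk, change the partition type from 'fd' (Linux raid autodetect)
# to '83' (Linux); this edits only the partition table entry:
#   fdisk /dev/sdb   ->  t, select partition 1, type 83, then w
# Then open and mount the LUKS volume directly, no md device involved:
cryptsetup luksOpen /dev/sdb1 standalone
mount /dev/mapper/standalone /mnt
```

The mapper name 'standalone' is arbitrary. If the old md metadata still causes trouble, `mdadm --zero-superblock /dev/sdb1` removes it, but take a backup first since that step is destructive to the RAID metadata.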
This is strange. I moved OS 11.1 from an old 150 GB PATA drive over to a 500 GB SATA using Parted Magic. The old and new partitions were
Code:
OLD:
/dev/sda1 -  19.99 GB, mounted as / (root partition)
/dev/sda2 -  97.82 GB, mounted as /home
/dev/sdb1 -  29.52 GB, Windows XP
NEW:
/dev/sda1 -  29.30 GB, mounted as /
/dev/sda2 - 292.97 GB, mounted as /home
/dev/sda3 -  45.82 GB, Windows XP
I used the "Clonezilla" tool on the Parted Magic live CD to move and resize the partitions. To my delight, everything appeared to transfer just fine. I can boot into OpenSUSE 11.1 (though not into Windows, but that's not really important; I'll figure that out later), but my /home partition won't mount. I'm set to autologin, and I get the expected error: "can't access /home/stephen" (or something like that). Here's the weird thing. I can ALT-F3, get a terminal and manually "mount /dev/sda2 /home", go back to ALT-F7 and log right in, so I know the disk is fine. (I've already 'fsck'd everything, by the way, and they're clean.)
I've used Yast's partitioner about a dozen times, trying "device by ID" and other settings. I always get the same thing when I reboot. On this last reboot, when it refused to log into /home, I ALT-F3'd, logged in as root, did a "cat" on "/etc/fstab" and entered the device-by-id line exactly as I saw it there and it mounted the /home directory just fine! ALT-F7, logged into KDE. I'm typing this in KDE now. Works fine. I so rarely need to reboot this machine that I can manually mount the /home partition, if need be, but (obviously) I'd like it to be mounted automatically during the boot.
I don't see anything obviously wrong here. The fact that I can take that second line and do a manual "mount" shows me that the device ID is at least correct. Just to be clear, here's what I entered in virtual terminal 3 as root to get my home partition to mount:
Code: mount /dev/disk/by-id/ata-Hitachi_HDP725050GLA360_GEA534RV0DJ4LA-part2 /home
and it worked fine. Exact same line.
I installed Mac OS X 10.6 and Windows 7 Ultimate, and made 4 partitions. OS X and 7 installed fine, but when I tried to install SUSE, it stopped at 92%. I get this error:
I need some assistance mounting a UFS2 partition as read and write. If it's not possible, then I may have to copy a few hundred GBs of data. Currently using the command: Code: mount -r -t ufs -o ufstype=ufs2 /dev/sdb /Data That's read-only.
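As far as I know, Linux's UFS write support is experimental and does not cover UFS2, so a read-write mount is unlikely to work; copying the data off is the practical route. A sketch (the destination path is an example):

```shell
# UFS2 can only be mounted read-only on Linux; mount it and copy
# the data to a writable filesystem instead:
mount -r -t ufs -o ufstype=ufs2 /dev/sdb /Data
rsync -a /Data/ /srv/copied-data/
```

If full read-write access is a hard requirement, mounting the disk from a FreeBSD system (where UFS2 is native) is the reliable alternative.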
When I installed openSUSE 11.2, I configured it to mount all of my Windows/NTFS partitions. However, one problem is that only root can write to them. I tried to change the permissions to '777', but even as root I can't change permissions: chmod doesn't work, and neither does using Nautilus (as root). I even tried unmounting the partition and then doing a chmod. That didn't work either.
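This is expected behavior rather than a bug: NTFS has no Unix permission bits, so chmod on an ntfs-3g mount is a no-op. Ownership and permissions are set at mount time instead, via mount options. A sketch of an /etc/fstab entry, assuming the partition is /dev/sda1, a mount point of /windows/C, and a uid/gid of 1000/100 (all of these values are assumptions to adapt):

```
/dev/sda1  /windows/C  ntfs-3g  defaults,uid=1000,gid=100,umask=0022  0 0
```

With uid/gid set to your user, you get write access without touching chmod at all; umask=0000 would make it world-writable, roughly the '777' behavior you were after.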
Now, trying to mount the partition, I get the error below. This is the partition Ubuntu 9.10 is installed on, and on reboot I get "error: no device" with a long string.
mount: can't find /dev/sda6/mnt in /etc/fstab or /etc/mtab
So now that I believe I've successfully mounted the partition, how do I point the bootloader to it?
/dev/sda6 on /media/11076e45-e27d-470b-bb6d-6894f7809a0c type ext4 (rw,nosuid,nodev,uhelper=devkit)
I'm trying to update an 11.1 installation. As I start the installation I get the message: "openSUSE 11.0 partition /dev/sda5: Change the mount-by to any other method for all partitions." The message also said to go back, make the change, and reboot. What is this all about, and how do I do it?
I'm running Karmic Server with GRUB2 on a Dell XPS 420. Everything was running fine until I changed 2 BIOS settings in an attempt to make my Virtual Box guests run faster. I turned on SpeedStep and Virtualization, rebooted, and I was slapped in the face with a grub error 15. I can't, in my wildest dreams, imagine how these two settings could cause a problem for GRUB, but they have. To make matters worse, I've set my server up to use Luks encrypted LVMs on soft-RAID. From what I can gather, it seems my only hope is to reinstall GRUB. So, I've tried to follow the Live CD instructions outlined in the following article (adding the necessary steps to mount my RAID volumes and LVMs). [URL]
If I try mounting the root LVM as '/dev/vg-root' on /mnt and the boot partition as '/dev/md0' on /mnt/boot, then when I run
$ sudo grub-install --root-directory=/mnt/ /dev/md0
I get these errors:
grub-setup: warn: Attempting to install GRUB to a partition instead of the MBR. This is a BAD idea.
grub-setup: error: Embedding is not possible, but this is required when the root device is on a RAID array or LVM volume.
Somewhere in my troubleshooting, I also tried mounting the root LVM as '/dev/mapper/vg-root'. This results in a grub-install error:
$ sudo grub-install --root-directory=/mnt/ /dev/md0
Invalid device 'dev/md0'
Obviously, neither case fixes the problem. I've been searching and troubleshooting for several hours this evening, and I must have my system operational by Monday morning. That means if I don't have a solution by pretty early tomorrow morning... I'm screwed. A full rebuild will be my only option.
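For what it's worth, the usual workaround for this particular GRUB2 error is to embed GRUB in the MBR of each underlying disk rather than in /dev/md0, which is a partition-level device where embedding is impossible. A sketch, assuming /dev/sda and /dev/sdb are the RAID members; these device names are illustrative, not taken from this system:

```shell
# With the root LVM mounted on /mnt and the /boot RAID on /mnt/boot,
# install GRUB2 to the MBR of each member disk so either one can boot:
sudo grub-install --root-directory=/mnt /dev/sda
sudo grub-install --root-directory=/mnt /dev/sdb
```

Installing to both members means the machine still boots if one disk of the mirror fails.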
I currently have a simple bash script set up via cron to backup my data (rsync) to an internal hard drive at regular intervals. I leave this "backup" hard drive unmounted, and it is mounted and unmounted as needed with the bash script. If I were to encrypt this "backup" drive (via Luks, or some other means), is there a way to get my backup script to work without me having to be there to enter a password?
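Yes, this should be possible without storing a passphrase in the script: LUKS supports multiple key slots, so you can add a random keyfile readable only by root and let the cron job open the volume non-interactively. A sketch, assuming the backup partition is /dev/sdc1 and a mapper name of 'backup' (both hypothetical):

```shell
# One-time setup: create a root-only keyfile and add it to a LUKS
# key slot (you are prompted once for an existing passphrase):
dd if=/dev/urandom of=/root/backup.key bs=512 count=4
chmod 0400 /root/backup.key
cryptsetup luksAddKey /dev/sdc1 /root/backup.key

# In the cron script: open, mount, rsync, unmount, close -- no prompt:
cryptsetup luksOpen --key-file /root/backup.key /dev/sdc1 backup
mount /dev/mapper/backup /mnt/backup
rsync -a --delete /home/ /mnt/backup/home/
umount /mnt/backup
cryptsetup luksClose backup
```

The obvious trade-off is that anyone who gains root on the live system can read the keyfile, so this protects the backup drive against theft of the disk, not against compromise of the running host.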
How can I mount vfat partitions automatically after boot? After login, all vfat partitions should be mounted and their icons shown on the desktop. How can this be done? udisks is installed. If I click a vfat partition in PCManFM it prompts for a password to mount. I don't want to click anything; the partitions should be mounted automatically and show their icons on the desktop.
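One common approach is a static /etc/fstab entry, which mounts the partition at boot and lets an ordinary user remount it without authentication. A sketch, assuming the partition is /dev/sda5, a mount point of /media/vfat, and a uid/gid of 1000/100; all of these values are assumptions to adapt to the actual system:

```
/dev/sda5  /media/vfat  vfat  auto,user,uid=1000,gid=100,umask=022  0 0
```

The uid/umask options matter because vfat has no Unix permissions; without them the mounted files belong to root. Whether an icon appears on the desktop then depends on the file manager's settings for showing mounted volumes.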
If I have Windows installed on RAID 0, and then install VirtualBox and install all my Linux OSes in VirtualBox, will they effectively be RAID 0 installs without needing to install RAID drivers?
I cannot install 11.3 on a machine with an Intel RAID controller. I have tried RAID 1 using the card, and also setting the disks to individual RAID 0 and letting SUSE RAID them. With the card doing it, the machine crashes as soon as it tries to boot the first time; with SUSE doing the RAID I just get 'GRUB' on the screen. It seems a lot of people are having similar problems; does anyone have any pointers? 11.2 installs fine. I would file a bug report, but every time I go to the page it's in Czech.
Basically, I am able to connect to my other Windows machine using rdesktop. I want to be able to mount the Windows machine's partition (particularly my media folder, C:\Users\myaccount\Video). I know rdesktop ipaddr -r disk: ... mounts a partition onto the local machine; however, I can't figure out what the specific commands are. I tried the following, but with no luck
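One thing worth noting: rdesktop's -r disk: option shares a local Linux directory into the remote Windows session, not the other way around. To see a Windows folder from Linux, a CIFS/Samba mount is the usual route. A sketch, assuming the Windows machine is at 192.168.1.10 and the Video folder has been shared under the name 'Video' (address, share name, and account are all hypothetical):

```shell
# Create the Windows share first (right-click the folder -> Sharing),
# then mount it over the network; you are prompted for the password:
sudo mkdir -p /mnt/winvideo
sudo mount -t cifs //192.168.1.10/Video /mnt/winvideo -o username=myaccount
```

This avoids rdesktop entirely for file access and gives you a normal mounted directory.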
I was installing openSUSE 11.2 alongside Windows XP, but during the installation the power suddenly went out, and since then openSUSE gives me a "corrupt partition" error. I am also unable to log into XP. So I decided to reinstall Windows, but I got an "invalid partition table" error after the first restart of the Windows XP installation.
I tried the Windows system Recovery Console and ran the fixmbr and fixboot commands, but that didn't work. I have two Windows partitions (one for Windows and one for data); I don't want to format the second partition.
How can I install Windows? My plan was to install Windows XP first, then openSUSE again.
I have a dual-boot setup with WinXP and openSUSE 11.2. I have both the XP and SUSE partitions on a 160 GB HDD, and then a hardware RAID 1 array of two 320 GB HDDs. The RAID array contains all my media/data files on an NTFS partition. For some reason SUSE shows both individual 320 GB disks mounted in the file system, but not the RAID array. If I attempt to browse either of the disks, I get an error and can't view them. How do I mount the RAID NTFS partition?
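Seeing the member disks individually usually means the controller is BIOS RAID (fakeRAID) rather than true hardware RAID; in that case the array has to be activated with dmraid before it can be mounted. A hedged sketch (the mapper name below is an example of what dmraid typically generates, not the real one):

```shell
# List any BIOS-RAID (fakeRAID) sets the controller has defined:
sudo dmraid -s
# Activate them; the array then appears under /dev/mapper/:
sudo dmraid -ay
# Mount the NTFS partition from the activated set (name is an example):
sudo mount -t ntfs-3g /dev/mapper/isw_example_Volume01 /mnt/media
```

Never mount the individual member disks read-write while the array is in use; that can corrupt the mirror.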
I started with:
sda1  Windows restore
sda3  extended
sda5  swap
sda6  / Mandriva
sda7  / SUSE 11.3
sda8  / SUSE 11.2
I then made some changes with GParted (from PartedMagic 5.5) to create an NTFS partition, to simulate a situation where someone may want to delete that partition and use the free space for Linux. I then deleted that partition (sda2), then sda5 (swap), and, taking some screenshots, went about resizing partitions to use that free space and then recreate swap; the intention being to create a basic guide on how to go about this. I have previously only had my swap at the end of the extended partition, and deleting it and recreating it later had caused little trouble. I realize that a resize/move operation would have been a better choice. What I was not expecting were the partition number changes that occurred.
Code:
root@PartedMagic:~# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
The default partition manager which the openSUSE 11.4 DVD uses (Expert Partitioner) is not creating any logical partition with the / mount point because another system is already using it. Is there any way to fix this?
I have got a middle-aged server which I upgraded with a simple SATA RAID controller based on a VT6421A chipset. I attached two Samsung 750 GB hard disks and created a nice RAID 1 array directly after the POST screen. Ubuntu recognizes it, as does Windows (which I would never ever use... ;-)). SuSE 11.1 (we need this OS for conformity) will simply just recognize both disks in the partitions overview; the "RAID" section remains empty.
Are there any hints out there on how I can enable the whole RAID stack in openSUSE? Do I need to integrate other drivers/modules to get things working?
A friend of mine gave me an old Dell PowerEdge SC 1430 tower server. It has three hard drives in it of about 160 GB each. The system has a Promise FastTrak TX1430 RAID card in it, and I am not sure: would that be hardware RAID, or would you consider it BIOS RAID? In any event, I have the very latest OpenSuse 11.2 install DVD. I can boot from the DVD, do a fresh install, and the system starts to where I can sign on as the user. When I look at the partitioning, it has /dev/sda, /dev/sdb, and /dev/sdc, each being a hard drive. These are just listed under the hard drives, not under RAID.
The problem is that when I cold shut down the machine and restart from a power-up, the system comes up and cannot find the boot disk. So obviously I am doing something wrong. I am used to having one or two plain SATA drives in a machine and no RAID at all. I have no experience installing Linux on a machine with RAID, and again I'm not sure if this is BIOS or hardware RAID.
Now that I have set up a 4x750 GB onboard RAID 10, I'd like to install SUSE 11.2 (with GRUB dual boot). From the BIOS I've partitioned my arrays so I have:
- Array 1 = 30 GB NTFS, 30 GB XFS, 30 GB Solaris, then an extended partition for each OS's swap, temp, etc.
- Array 2 = 30 GB NTFS (D: My Documents), 30 GB (/home), 30 GB (unformatted), and then a big partition for game installs and a VMware partition (Win98 for my old games, WinXP 32-bit, etc.)
As a (sad) matter of fact, installing WinXP64 and/or Vista64 works perfectly: I see the array as partitioned by the BIOS, and everything works.
But I can't get SUSE 11.2 (64-bit) installed on the drive; for now I have it running in VMware... on Windows :s (while I'd prefer the opposite). When I boot with the SUSE 11.2 DVD (downloaded), it says it can't install as there is NO hard disk!! OK, fine: I plug a PATA hard disk back in, start the install, then move the partitioning to the SATA. But even so, I can't start SUSE. I then launch the repair/recovery on the install DVD, but it says there is no root partition, no SUSE partition to fix.