CentOS 5 Server :: ISCSI Volumes No Longer Mount After Reboot?
Sep 22, 2011
We have a CentOS 5.6 server mounting two iSCSI volumes from an HP P2000 storage array. Multipathd is also running, and this has been working well for the few months we have been using it. The two volumes presented to the server were created in LVM and worked without problem. We had a requirement to reboot the server, and now the iSCSI volumes will no longer mount. From what I can tell, the iSCSI connection is working OK, as I can see the correct sessions, and if I run 'fdisk -l' I can see the iSCSI block devices, but the OS isn't seeing the filesystems. No LVM command shows the volumes at all, and 'vgchange -a y' only lists the local boot volume, not the iSCSI volumes. My concern is that the output of 'fdisk -l' says 'Disk /dev/xxx doesn't contain a valid partition table' for all the iSCSI devices. Research suggests that 'vgchange -a y' should activate any VGs that aren't showing, but it doesn't work.
There's a lot of data on these iSCSI volumes, and I'm no LVM expert. I've read that some people have had problems where LVM starts before iSCSI and things get a bit messed up. I can't tell whether that's the case here, but if there's a way of switching the order round that might help, I'm prepared to give it a go. There was absolutely no indication of any problems with these volumes before the reboot, so corruption is highly unlikely.
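For what it's worth, a rough first-pass troubleshooting sequence for this situation, assuming the iSCSI sessions really are up (all device names below are placeholders):

Code:
# Confirm the sessions are logged in
iscsiadm -m session

# Make LVM rescan all block devices for physical volumes
pvscan
vgscan

# The iSCSI PVs/VGs should now be listed
pvs
vgs

# Reactivate anything that was found
vgchange -a y

# If the PVs still don't appear, check for a restrictive device
# filter in /etc/lvm/lvm.conf that excludes the iSCSI disks
grep -n "filter" /etc/lvm/lvm.conf

One aside: if the PVs were originally created on the whole disks (pvcreate /dev/sdb) rather than on partitions, the "doesn't contain a valid partition table" message from fdisk is normal and harmless.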
I'm trying to do a fresh install of Ubuntu 10.10 Server 64-bit but am stuck at the Partitioning Disks step, and Ubuntu 10.04 LTS gives the same error. The installation wizard shows me a menu:
!! Partitioning disks
This menu allows you to configure iSCSI volumes
iSCSI configuration actions
I am trying to connect one of our RHEL 5.4 servers to an IBM iSCSI storage unit. The server is equipped with two single-port QLogic iSCSI HBAs (TOE). RHEL detected the HBAs and installed the driver itself (qla3xxx). I have configured the HBA IP addresses in the range of the storage's iSCSI host ports. Each HBA connects to a different controller on the storage. I discovered the storage with 'iscsiadm -m discovery' for both controllers, and it went through fine. The problem is that whenever the server restarts with both HBAs connected to the storage, it does not detect the volumes mapped to it, and I have to run "mppBusRescan" and "vgscan" each time to detect them. If only one path is connected, it is fine.
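One common workaround (a sketch only; adjust paths for wherever the IBM RDAC/mpp tools are installed) is to run the same rescan late in boot, after the iSCSI and multipath services have started, for example from /etc/rc.local:

Code:
# /etc/rc.local -- runs last in the RHEL 5 boot sequence
# Rescan the mpp/RDAC bus, then let LVM pick up the volumes
/usr/sbin/mppBusRescan      # adjust the path if mppBusRescan lives elsewhere
/sbin/vgscan
/sbin/vgchange -a y
/bin/mount -a

This doesn't explain why the dual-path case fails at boot, but it does automate the manual recovery.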
I inherited a 3ware 9550SX running a version of Gentoo with a 2.6.28.something kernel. I started over with CentOS 5.6 x86_64. tw_cli informs me that the 9-disk RAID 5 is healthy. The previous admin used LVM (?) to carve up the RAID into a zillion tiny pieces and one big piece. My main interest is the big piece. Some of the small pieces refused to mount until I installed the CentOS Plus kernel (they are reiserfs). The remainder seem to be ext3; however, they are not mounted at boot ("refusing activation"). lvs tells me they are not active. If I try to make one active, for example:

root> lvchange -ay vg01/usr

I get:

Refusing activation of partial LV usr. Use --partial to override.

If I use --partial, I get:

Partial mode. Incomplete logical volumes will be processed.

and then I can mount the partition, but not everything seems to be there.
Some of the directory entries look like this:

?--------- ? ? ? ? ? logfiles

Is it possible that the versions of the kernel and LVM that were on the Gentoo system are causing grief for an older kernel (and possibly older LVM) on CentOS 5.6, and that I might have greater fortunes with CentOS 6.x? Or am I missing something fundamental? This is my first experience with LVM, so it's more than a little probable.
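"Refusing activation of partial LV" means LVM believes at least one physical volume in vg01 is missing, so before forcing --partial it's worth comparing what the VG expects with what the kernel can actually see (vg01 is from the post; everything else is illustrative):

Code:
# PVs LVM can find, and which VG they belong to
pvs

# "Cur PV" in the summary should match the number of PVs listed above
vgdisplay vg01

# Are all the underlying block devices even present?
cat /proc/partitions

# Only once every PV is visible, activate normally
vgchange -a y vg01

As an aside, the "?--------- ? ? ? ? ?" entries are what ls prints when it can read a name out of a directory but cannot stat the inode behind it, which fits a filesystem mounted from an incomplete logical volume; getting all the PVs visible again is much safer than working under --partial.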
I built a CentOS 5 Linux cluster with GFS storage using a local RAID volume and shared it with gnbd_export/import on two web servers. Now I need to expand that storage onto another server's local volume. I saw the picture in the manual, but I don't know how to create that scheme.
I can use gnbd_export on the second server and gnbd_import on the first. In that case I will have two volumes on the first server, and I can expand the volume group, logical volume, etc. on it.
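Roughly, that would look like the following (every name here - the export name, server name, VG/LV names, sizes - is a placeholder; and since this is GFS shared between nodes, clustered LVM (clvmd) needs to be running so both servers agree on the metadata):

Code:
# On the second server: export its local volume over gnbd
gnbd_export -d /dev/sdb1 -e export2

# On the first server: import it; it appears as /dev/gnbd/export2
gnbd_import -i server2

# Fold it into the existing VG, then grow the LV and the filesystem
pvcreate /dev/gnbd/export2
vgextend gfsvg /dev/gnbd/export2
lvextend -L +500G /dev/gfsvg/gfslv
gfs_grow /mnt/gfs        # gfs_grow operates on the mounted filesystem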
I have a fairly simple iSCSI setup using two devices, but they have swapped names on different machines. I'm running CentOS 5.3 ia64, using iscsi-initiator-utils-6.2.0.868-0.18.el5.
vm1:

[root@vm1 ~]# fdisk -l

Disk /dev/xvda: 4194 MB, 4194304000 bytes
255 heads, 63 sectors/track, 509 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
[Code].....
Any way to get iSCSI to mount the devices under consistent device names?
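Kernel names like /dev/sdb are handed out in discovery order, so they are not stable across hosts or reboots. The usual answer is to address the LUNs by one of the persistent symlinks udev already creates (the exact paths below are examples; list the directories to see yours):

Code:
# Names derived from the disk's WWID
ls -l /dev/disk/by-id/

# Names derived from the iSCSI portal/target/LUN path
ls -l /dev/disk/by-path/

# Then use those in /etc/fstab instead of /dev/sdX, e.g.:
# /dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2009-01.com.example:vol1-lun-0  /data  ext3  _netdev  0 0

Filesystem LABEL= or UUID= entries in fstab work just as well, since they don't depend on the kernel name either.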
I have servers installed with RHEL 4 2.6.9-89.0.9 ELsmp. I tried using UUID and LABEL in /etc/fstab to automount USB drives to mount points that I specify after reboot. Unfortunately, it just does not work on any of my RHEL 4 servers. After every reboot, /etc/fstab is automatically modified and all configurations related to my USB drives are changed, regardless of whether I use UUID or LABEL in /etc/fstab. However, it works on RHEL 5. But upgrading is not an option in my environment. I have been googling around looking for alternatives, but everything seems to point back to using UUID or LABEL in /etc/fstab. Has anyone tried something that works?
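The fstab rewriting on RHEL 4 is usually fstab-sync (part of HAL), which edits /etc/fstab whenever hotplug devices come and go - which would explain the entries changing on every reboot. A hedged workaround is to sidestep fstab entirely and mount by filesystem label late in boot (the label and mount point here are made up):

Code:
# One-off: label the filesystem on the USB drive
e2label /dev/sdb1 usbbackup

# /etc/rc.local -- mount by label, so the kernel device name doesn't matter
mkdir -p /mnt/usbbackup
mount LABEL=usbbackup /mnt/usbbackup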
I have an iSCSI target on my Ubuntu 10.04 LTS server that is giving me access to 3 x 2TB volumes. I am using LVM to create 1 big volume and using it as a place for my backup software to write to. Upon initial setup I tested to see if the connection would still be there after a reboot, and it was. After going through the trouble of configuring my backup software (ZManda) and doing a few very large backups I had to reboot my SAN (OpenFiler) for hardware reasons. I took my server down while performing the maintenance and brought it back up only after the SAN work was done. Now, the LVM volume is listed as "not found" when booting.
Using iscsiadm I can see the target, but LVM doesn't know of any volumes that exist on those targets. I need to get it back up and running and then troubleshoot the reboot issue.
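A rough recovery sequence, assuming the LVM metadata on the LUNs survived the SAN reboot (VG/LV names are placeholders):

Code:
# Log back in to every discovered target
iscsiadm -m node --loginall=all

# Rescan for LVM metadata on the new block devices
pvscan
vgscan
vgchange -a y

# The logical volume should have a device node again
lvs
mount /dev/backupvg/backuplv /mnt/backup

For the reboot problem itself, check whether the targets' node records have node.startup set to automatic, and make sure the LVM volume is only mounted after open-iscsi has logged in (e.g. an _netdev mount).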
I am currently trying to connect to a NAS unit that I have. I have installed open-iSCSI and have made the following changes to the iscsid.conf file:
# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
node.session.auth.username = "user"
node.session.auth.password = "12 digit pass"
[Code].....
My goal is to get the iSCSI initiator to connect after a reboot. Mounting after connecting will, I'm sure, be another challenge.
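With open-iSCSI, login-at-boot is controlled by node.startup, either as the default in /etc/iscsi/iscsid.conf or per node record (the target IQN and portal below are placeholders):

Code:
# Default for newly discovered targets, in /etc/iscsi/iscsid.conf:
node.startup = automatic

# Or update an already-discovered node record:
iscsiadm -m node -T iqn.2001-05.com.example:nas -p 192.168.1.50 \
    -o update -n node.startup -v automatic

# For the later mounting step, _netdev in /etc/fstab makes the
# mount wait for the network, e.g.:
# /dev/sdb1  /mnt/nas  ext3  _netdev  0 0

One possible gotcha, as an aside: iscsid.conf values are normally written without quotes, so the quoted CHAP username and password shown above may not match what the target expects.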
What is the maximum number of logical volumes for a volume group in LVM? Is there any known performance hit for creating a large number of small logical volumes vs. a small number of large volumes?
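For the record: with the LVM2 metadata format there is no fixed limit - a VG's "MAX LV" defaults to 0, meaning unlimited - while the old LVM1 format capped a VG at 255 LVs. On performance, the usual experience is that a huge number of small LVs costs time in scanning and activation rather than in runtime I/O, since each LV is just a device-mapper table. You can inspect and set the cap per VG (vg00 is a placeholder):

Code:
# 0 here means "no limit" under LVM2 metadata
vgdisplay vg00 | grep -i "max lv"

# Optionally impose your own ceiling
vgchange -l 128 vg00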
I've had everything but /boot on LVM LUKS encryption since I installed 11.4 on my netbook. Suddenly it won't accept my password and boot. Nothing had been updated since the last successful boot. The only possibly different thing that occurred was that I had plugged in my Android phone to charge before it booted up. Anyway, the specific error it gives when I enter the password (and I'm absolutely sure it's the correct password):
Code:
No key available with this passphrase.

Here is everything else on the screen:

Code:
doing fast boot
Creating device nodes with udev
[number (not sure if relevant/unique)] fb: conflicting fb hw usage inteldrmfb vs VESA VGA - removing generic driver
Volume group "system" not found
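Before assuming the LUKS header is damaged, two things worth ruling out: a keyboard-layout mismatch (the early-boot prompt may be using a US keymap while the passphrase was set under a different layout), and the header itself, which can be checked from a live CD. The device name below is a placeholder:

Code:
# From a live/rescue system
cryptsetup luksDump /dev/sda2        # sane header? key slots present?
cryptsetup luksOpen /dev/sda2 probe  # try the passphrase here

# If it opens, the LVM inside should be findable
vgscan
vgchange -a y system                 # "system" is the VG named in the error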
I want to create an iSCSI connection which mounts the /home directory to a share on my NAS via iSCSI. Does anyone know if this is possible on a RHEL 5.4 machine? I am building the server from scratch and then creating the iSCSI mount point in /etc/fstab. After the /home directory is mounted on the mail server, I will copy all the mailboxes over to the /home directory via iSCSI.
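This should be possible on RHEL 5.4 with iscsi-initiator-utils; the detail that matters is marking the fstab entry _netdev so netfs mounts it only after the network and iSCSI services are up. A sketch with placeholder addresses and device names:

Code:
# Discover and log in to the target
iscsiadm -m discovery -t sendtargets -p 192.168.1.20
iscsiadm -m node --loginall=all

# After creating a filesystem on the LUN, in /etc/fstab:
# /dev/sdb1  /home  ext3  _netdev  0 0

# Make sure the services that log in and mount run at boot
chkconfig iscsi on
chkconfig netfs on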
The first server I installed went fine. The second server, installed with the same config, goes to a "kernel panic - not syncing: no init found - try passing init= option to kernel" error. I tried reinstalling, but it keeps hitting that error after the post-install reboot. The storage is iSCSI, connected via an Intel server adapter that allows booting from iSCSI. I'm not sure if that's the cause of the problem, as the first server is connected to the same iSCSI storage and installed just fine.
Is there a way I can make sure the iSCSI module installs during installation? I think it is installed, since the installer was able to copy the files and set up /dev/sda, but I just want to make sure it installs during setup.
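On CentOS/RHEL 5 the installer builds the initrd, and if /dev/sda was iSCSI-backed at install time the iSCSI bits should have been pulled in. One way to check from rescue mode (CentOS 5 initrds are gzipped cpio archives; the kernel version in the path is an example):

Code:
mkdir /tmp/ird && cd /tmp/ird
zcat /boot/initrd-2.6.18-*.img | cpio -idmv
grep -ri iscsi init lib/ 2>/dev/null   # look for iscsi modules/login steps

If nothing iSCSI-related shows up in the failing server's initrd but it does in the working one, that difference would point straight at the panic.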
I have never done this before, but I stood up a new Debian (testing) x64 system. It only has 146GB available on RAID 1, so I created a 500GB iSCSI LUN on my NAS device on the network, and I'm really confused about how to attach my Debian box to the iSCSI LUN I created. Right now the OS is installed entirely locally, but I would like the iSCSI LUN to be the /home directory for mail storage. Is this possible, or do I need to mount the LUN to a newly created folder/mount point that is locally attached?
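Once the initiator logs in, the LUN is just another block device, so it can carry /home directly (or be mounted elsewhere first while the data is copied over). A sketch using Debian's open-iscsi package, with made-up addresses and names:

Code:
apt-get install open-iscsi

# Find and log in to the target
iscsiadm -m discovery -t sendtargets -p 192.168.1.5
iscsiadm -m node -T iqn.2011-01.com.example:nas.lun0 -p 192.168.1.5 -l

# A new disk appears (watch dmesg), e.g. /dev/sdb
mkfs.ext3 /dev/sdb

# Log in automatically at boot
iscsiadm -m node -T iqn.2011-01.com.example:nas.lun0 -p 192.168.1.5 \
    -o update -n node.startup -v automatic

# /etc/fstab -- _netdev defers the mount until networking is up:
# /dev/sdb  /home  ext3  _netdev  0 2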
I have 9.10 and notice that when I look in Places none of my volumes/partitions are mounted - if I click on them I have to enter my user password to authenticate to gain access. My problem is that (with some help) I have set up rsync so it runs when I shut down my PC and backs up my Home folder from a partition on sda to a partition on sdb - this is great but sometimes it works and sometimes it doesn't.
I have done some tests and discovered that if I use my PC and never manually mount my backup sdb partition, the rsync does not work (I also have GAdmin-Rsync so I can run a backup manually, but this also will not run if I do not mount the sdb volume). However, if I do mount the sdb backup partition and shut down/restart, then the backup works. What I need is for my sdb backup partition to be automatically mounted every time I switch on - can this be done? I'm sure I had this working in 9.04 (auto-mounting), but 9.10 seems not to like it.
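An /etc/fstab entry will mount the backup partition at every boot, with no desktop authentication involved. Assuming the partition is /dev/sdb1 and ext3 (check both with blkid):

Code:
# Find the partition's UUID
sudo blkid /dev/sdb1

# Create a permanent mount point
sudo mkdir -p /media/backup

# Append a line like this to /etc/fstab (the UUID is an example):
# UUID=d8e8fca2-dc0f-4a1a-b2fe-000000000000  /media/backup  ext3  defaults  0 2

# Test without rebooting
sudo mount -a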
I got a new computer today, a Compaq Presario CQ56, with Windows 7 preinstalled, and I want to dualboot with Ubuntu, but I'm having some problems. I'll explain what I've done so far, in case I did something wrong.
When I popped in my 10.10 Live CD, I was surprised to see there was no "install alongside existing operating system" option, which I thought I remembered from the last time I installed Ubuntu, so I chose manual partitioning and continued. I'd never done this before, so I quickly learned I had to first create a (couple of) partition(s).
After freeing up a large amount of space and rebooting twice as recommended, I booted off the Live CD and opened GParted, intending to create a Ubuntu system partition, one for swap, and one for my home folder. When I right-clicked on my unallocated space, I got the error message "It is not possible to create more than 4 primary partitions". After a little googling, I found I had to eliminate one of my primary partitions and create a new extended partition, which I could then partition further. Noticing I had a partition that didn't seem important, HP_TOOLS, I googled and found this.
When I inserted my flash drive, it didn't automount. When I tried to mount by right-clicking and clicking "Mount", nothing happened. I also cannot mount the HP_TOOLS partition, nor any other. They don't mount when I click on them in the Places menu, and they don't mount when I right-click and choose Mount.
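When the desktop Mount action fails silently like this, running the same mount from a terminal usually surfaces the actual error message (the device name below is a guess; identify the right one from the fdisk listing first):

Code:
sudo fdisk -l                  # find the flash drive, e.g. /dev/sdc1
sudo mkdir -p /mnt/usb
sudo mount /dev/sdc1 /mnt/usb  # any error printed here is the real cause
dmesg | tail                   # the kernel's view of the device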
My main question is "Help! How do I install Ubuntu?", but I think if I can create these partitions, which requires mounting the drives (I think), I can figure the rest out.
A screenshot of my GParted window is attached, if that helps. (I couldn't see how to insert it without a URL)
I have a media server running OES2 on SLES 10.x. After the network admin applied a patch to the system, the machine will no longer mount the NSS volumes. I am relatively experienced with Linux, but I primarily use Red Hat and Fedora, so all these Novell tools are a bit foreign to me. I need to get the data off the volumes and restore the drives (ext3) ASAP. I don't want to screw around with eDirectory or OES2 unless I have to.
After applying Ubuntu updates I am now unable to mount volumes; it tells me that I am denied. I have checked my user permissions and I am allowed to mount. I have also downgraded MountAll, to no avail.
I have one server with JBoss and Tomcat installed, and I have to start these services manually every time I reboot the server. How can I get JBoss and Tomcat to start automatically when the server reboots?
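On a Red Hat-style distribution the usual approach is a small init script registered with chkconfig (on Debian/Ubuntu, update-rc.d plays the same role). Everything below - install paths, service users - is an assumption to adapt; a minimal sketch:

Code:
#!/bin/bash
# /etc/init.d/appservers -- start/stop JBoss and Tomcat
# chkconfig: 345 85 15
# description: JBoss and Tomcat application servers
case "$1" in
  start)
    su - jboss  -c '/opt/jboss/bin/run.sh > /dev/null 2>&1 &'   # adjust paths/users
    su - tomcat -c '/opt/tomcat/bin/startup.sh'
    ;;
  stop)
    su - jboss  -c '/opt/jboss/bin/shutdown.sh -S'
    su - tomcat -c '/opt/tomcat/bin/shutdown.sh'
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac

Then register it once:

Code:
chmod +x /etc/init.d/appservers
chkconfig --add appservers
chkconfig appservers on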
I seem to get logged into the NetApp filer fine - it shows the session - but I don't get a device. And don't panic: the system is CentOS 5.4, but the system is named OpenVMS (don't ask . . . ). iscsiadm shows me logged into the NetApp and the session is up.
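When the session is up but no disk appears, the usual suspects are LUN mapping on the filer (the igroup needs this initiator's IQN, with the LUN mapped to it) or the host simply not having rescanned. Two hedged things to try on the CentOS side (the host number is an example):

Code:
# Ask open-iscsi to rescan all logged-in sessions
iscsiadm -m session -R

# Or poke the SCSI layer directly for the iSCSI host adapter
echo "- - -" > /sys/class/scsi_host/host3/scan

# Then look for the new disk
fdisk -l
dmesg | tail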
One of my mailservers running postfix has suddenly stopped sending mail and has been generating the following errors:
Jan 7 12:03:08 postfix/sendmail[3560]: warning: premature end-of-input on /usr/sbin/postdrop -r while reading input attribute name
Jan 7 12:03:08 postfix/sendmail[3560]: fatal: root(0): unable to execute /usr/sbin/postdrop -r: Success
[Code].....
Things I have tried to fix this problem that didn't work:
1) Stopped postfix, uninstalled and reinstalled.
2) Did a complete filesystem relabel with a touch /.autorelabel and reboot.
3) Did a restorecon -F -R on /etc/postfix, /var/spool/postfix and /usr/sbin/post*
Nothing above has worked, and I have no idea why postfix works with SELinux disabled but fails with it enabled.
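Given that it only fails when SELinux is enforcing, the next step is to read the actual AVC denials and, if they look benign, build a local policy module from them. Also worth checking: postdrop relies on being setgid to the postdrop group, and a lost setgid bit produces exactly this kind of failure. A sketch:

Code:
# postdrop should be setgid, group postdrop
ls -l /usr/sbin/postdrop      # expect something like: -r-xr-sr-x root postdrop

# What exactly is SELinux denying?
ausearch -m avc -c postdrop
grep postdrop /var/log/audit/audit.log

# Build and load a local module covering those denials
grep postdrop /var/log/audit/audit.log | audit2allow -M localpostfix
semodule -i localpostfix.pp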