I seem to get logged into the NetApp filer fine: it shows the session, but I don't get a device. And don't panic, the system is CentOS 5.4, but the machine is named OpenVMS (don't ask . . . ). iscsiadm shows me logged into the NetApp and the session is up.
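For reference, when a session is logged in but no block device appears, a rescan of the session will sometimes surface the LUN. A sketch, assuming open-iscsi's iscsiadm (the host number in the sysfs path is an assumption):

```
# Show session detail; look for an "Attached scsi disk" line
iscsiadm -m session -P 3

# Ask the initiator to rescan the logged-in sessions for new LUNs
# (supported as -R/--rescan in newer open-iscsi releases)
iscsiadm -m session -R

# On older initiators, the same rescan can be forced through sysfs
# (host0 here is a hypothetical host number; check /sys/class/scsi_host/)
echo "- - -" > /sys/class/scsi_host/host0/scan

# Any new device should then show up here
ls -l /dev/disk/by-path/
```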
The first server I installed went fine. The second server, installed with the same config, fails with "Kernel panic - not syncing: No init found. Try passing init= option to kernel." I tried reinstalling, but it keeps hitting that error on the reboot after the install. The storage is iSCSI, connected via an Intel Server Adapter that allows booting from iSCSI. I'm not sure whether that is the cause of the problem, but the first server is connected to the same iSCSI storage and installed just fine.
Is there a way I can make sure the iSCSI module gets installed during installation? I think it does, since the installer is able to copy the files and set up /dev/sda, but I just want to be certain it is installed during setup.
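One way to check whether the iSCSI pieces actually made it into the installed system is a sketch along these lines (module and package names are the usual CentOS 5 ones, so treat them as assumptions for other distros):

```
# Are the initiator modules loaded on the running system?
lsmod | grep -i iscsi

# Is the initiator package present? (CentOS/RHEL name)
rpm -q iscsi-initiator-utils

# For iSCSI root, the modules must also be inside the initrd
zcat /boot/initrd-$(uname -r).img | cpio -it | grep -i iscsi
```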
I have never done this before, but I stood up a new Debian (testing) x64 system. It only has 146GB available for RAID1, so I created a 500GB iSCSI LUN on my NAS device on the network, and I'm really confused about how to attach my Debian system to the iSCSI LUN I created. Right now the OS is installed entirely on local disks, but I would like the iSCSI LUN to hold the /home directory for mail storage. Is this possible, or do I need to mount the LUN on a newly created folder/mount point on the locally attached storage?
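For reference, the usual open-iscsi sequence looks roughly like this. This is a sketch: the portal address, target IQN, resulting device name, and filesystem choice are all assumptions.

```
# Discover targets offered by the NAS (hypothetical portal address)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the discovered target (hypothetical IQN)
iscsiadm -m node -T iqn.2010-01.com.example:mail-lun -p 192.168.1.50 --login

# A new block device appears (e.g. /dev/sdb); format it once
mkfs.ext3 /dev/sdb

# Copy the existing /home contents over, then mount the LUN as /home
mount /dev/sdb /mnt
cp -a /home/. /mnt/
umount /mnt
mount /dev/sdb /home
```

So yes, it is possible: once logged in, the LUN is just an ordinary block device that can be mounted wherever you like, including directly on /home.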
I've been trying to figure out how to set up a partition on my CentOS server as an iSCSI target so I can access it from another CentOS client. I've been reading the manuals and various pages on the web, and nothing is very clear. I just want to be able to create a partition on my server, define it as a target, and then have my client's initiator mount it.
I have installed CentOS 5 and want to configure an iSCSI target. I know that an iSCSI target is built into CentOS 5. How can I use the iscsiadm utility to configure the iSCSI target? Please describe it in detail.
1. I want to make 4 drives (4 drives or 4 LUNs) on the iSCSI target, each with 5GB of space.
2. Please describe this, and highlight where the disk space (5GB) is specified.
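With the iscsitarget (IET) package, the size is not set in the target config at all: it comes from the size of the backing store. A sketch under that assumption, using four 5GB sparse files as hypothetical backing stores (paths and IQN are made up):

```shell
# Create four 5 GB sparse backing files -- this is where the 5GB is decided
for i in 1 2 3 4; do
  truncate -s 5G /tmp/lun$i.img
done

# A hypothetical /etc/ietd.conf entry exposing them as LUNs 0-3
cat <<'EOF'
Target iqn.2010-01.com.example:storage.disk1
    Lun 0 Path=/tmp/lun1.img,Type=fileio
    Lun 1 Path=/tmp/lun2.img,Type=fileio
    Lun 2 Path=/tmp/lun3.img,Type=fileio
    Lun 3 Path=/tmp/lun4.img,Type=fileio
EOF
```

To use real disk partitions instead of files, the Path= would simply point at a block device, whose size again determines the LUN size.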
I have a large iSCSI device, ~17TB usable, and I've created an XFS file system directly on the device. I mount the device without problems and am able to touch and remove files without issue. But when I start my application, within about 5-10 minutes I start seeing the following entries appear in /var/log/messages.
I built a CentOS 5 Linux cluster with GFS storage on a local RAID volume and share it via gnbd_export/gnbd_import with two web servers. Now I need to expand that storage onto another server's local volume. I saw the diagram in the manual, but I don't know how to create that scheme.
I can use gnbd_export on the second server and gnbd_import on the first. In that case I will have two volumes on the first server, and I can expand the volume group, logical volume, etc. on it.
I have a server which is connected to iSCSI storage and gets its hard disks from that storage. Sometimes I have to add new disks to this server. Every time I add a disk and run /etc/init.d/iscsi restart on the server, the disks don't have the same device names as before the iSCSI restart.
It should be possible to give the disks persistent names using udev rules, so I tried creating rules in /etc/udev/rules.d/99-static-iscsi-names.rules, e.g.:

# /dev/sdc
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s %p", RESULT=="360a98000503355344c4a576864467877", NAME="sdc%n"

In /etc/rc.local I added /sbin/start_udev, and in /etc/scsi_id.config I added the line vendor="NETAPP",model="LUN",options=-g
I have a fairly simple iSCSI setup using two devices, but they have swapped names on different machines. I'm running CentOS 5.3 ia64, using iscsi-initiator-utils-6.2.0.868-0.18.el5.
vm1:
[root@vm1 ~]# fdisk -l
Disk /dev/xvda: 4194 MB, 4194304000 bytes
255 heads, 63 sectors/track, 509 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
[Code].....
Is there any way to get iSCSI to attach the devices with consistent device names?
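One workaround that avoids custom udev rules entirely: rather than relying on /dev/sdX names, refer to the disks through the symlinks udev already maintains, which are stable across reboots and hosts. A sketch (the fstab entry shown is hypothetical):

```
# These names encode the target IQN/LUN or the SCSI serial, so they stay
# the same regardless of which /dev/sd* node the kernel hands out
ls -l /dev/disk/by-path/
ls -l /dev/disk/by-id/

# Use them directly in fstab or LVM, e.g. (hypothetical entry):
# /dev/disk/by-path/ip-192.168.0.1:3260-iscsi-iqn.2010-01.com.example:lun0-lun-0  /data  ext3  _netdev  0 0
```

Filesystem UUIDs or labels (mount by UUID=... or LABEL=...) achieve the same independence from kernel naming order.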
I am trying to install iscsi-target on CentOS 5.4 i386. When I go into the iscsi-target directory and type make, I get the following error (translated from a German locale):

[root@cluster-storage iscsitarget-1.4.19]# make
cc: /lib/modules/2.6.18-164.el5/build/include/linux/version.h: No such file or directory
cc: no input files
/bin/sh: line 0: [: -le: unary operator expected   (repeated 8 times)
/bin/sh: line 0: [: -lt: unary operator expected
make -C usr
make[1]: Entering directory `/opt/iscsitarget-1.4.19/usr'
.....
make: *** [kernel] Error 2
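The missing version.h usually means the kernel build tree for the running kernel isn't installed, so the module build has nothing to compile against. A sketch of the usual fix on CentOS 5 (package names and the KSRC path are the standard ones, but verify them on your system):

```
# Install a compiler and the headers matching the running kernel
yum install gcc kernel-devel-$(uname -r) openssl-devel

# Then point the IET build at the kernel source tree and retry
make KSRC=/usr/src/kernels/$(uname -r)-$(uname -m)
make install KSRC=/usr/src/kernels/$(uname -r)-$(uname -m)
```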
I'm trying to configure an iSCSI/DRBD high-availability cluster, and I'd like to know which is the better option: OpenAIS or Heartbeat. I've seen that both are included in the CentOS repos, yet OpenAIS requires adding two additional repos (EPEL and the Clusterlabs repo) to install Pacemaker.
We have a CentOS 5.6 server mounting two iSCSI volumes from an HP P2000 storage array. Multipathd is also running, and this has been working well for the few months we have been using it. The two volumes presented to the server were created in LVM and worked without problems.

We had a requirement to reboot the server, and now the iSCSI volumes will no longer mount. From what I can tell, the iSCSI connection is working fine: I can see the correct sessions, and if I run 'fdisk -l' I can see the iSCSI block devices, but the OS isn't seeing the filesystems. No LVM command shows the volumes at all, and 'vgchange -a y' only lists the local boot volume, not the iSCSI volumes. My concern is that the output of 'fdisk -l' says 'Disk /dev/xxx doesn't contain a valid partition table' for all the iSCSI devices. Research suggests that running 'vgchange -a y' should automatically activate any VGs that aren't showing, but it doesn't work.
There's a lot of data on these iSCSI volumes, and I'm no LVM expert. I've read that some people have had problems where LVM starts before iSCSI and things get a bit messed up, but I can't tell whether that is the case here. If there's a way of switching that order around, I'm prepared to give it a go. There was absolutely no indication of any problems with these volumes before the reboot, so corruption is highly unlikely.
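If the cause is ordering (LVM scanning before the iSCSI sessions come up), the volumes can usually be brought back by hand once the sessions exist. A sketch (the VG/LV names and mount point are hypothetical):

```
# Confirm the iSCSI sessions are logged in
iscsiadm -m session

# Re-scan for physical volumes and volume groups on the now-present disks
pvscan
vgscan

# Activate whatever was found, then mount
vgchange -ay
mount /dev/myvg/mylv /mnt/data
```

Note that 'doesn't contain a valid partition table' is expected output when LVM physical volumes were created on the whole disk rather than on a partition, so that message alone does not indicate corruption.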
I'm attempting to setup VNC in way that emulates our other OpenSUSE installs - where you can connect to servername:1 and be presented with a graphical login screen. From there, you login and have a full desktop session.
I followed some steps from this site (very easy): [URL] ... and it worked like a charm. I get the graphical login screen, I can login, I can open terminal, some apps, move windows around, no problem. But, here's the problem: SOME windows refuse to appear.
For example, if I try to open "Add/Remove Software", it first prompts me for the root password, but then all I get is a "busy" mouse cursor and an entry on the task bar at the bottom that says "Starting Add/Remove Software" for about 5 seconds, and then it dies off. The same thing happens for "Software Updater" and for almost EVERY entry under the SYSTEM > ADMINISTRATION menu. Which basically means I can't change any settings via the GUI, which is kind of the point! =)
I can't find any logging that helps me diagnose this, the windows just never appear. The behavior is the same whether I login as a user or root from the login screen (but, obviously, the root user doesn't prompt me for the password prior to running the apps). I've tried lots of different tweaks on the configs mentioned in the link above, but nothing makes any difference.
Edit: The version is CentOS 5.4 x86_64. I performed a yum update and a reboot; no changes.
I found two problems when installing CentOS 5.2 x86_64 that have me puzzled. The errors are as follows:

1. Jan 9 20:55:36 Linux kernel: powernow-k8: vid trans failed, vid 0x8, curr 0xa
Jan 9 20:55:36 Linux kernel: powernow-k8: transition frequency failed
This error message keeps appearing on the console; I have tried the BIOS default setup.

2. Jan 15 21:31:14 Linux kernel: Buffer I/O error on device sdb2, logical block 2498412546
This error message always appears at boot time, 8 times altogether.

My system:
AMD Opteron 270 x2 (2000MHz, Socket 940, dual-core)
TYAN S2882-D (AMD 8131+8111 chipset)
Qimonda ECC Reg DDR400 2G x2
CentOS 5.2 x86_64
I have an intermittent issue with Samba. I can access my Samba share from Windows XP and Vista using Windows networking, and even by mapping the share to a drive. The problem is that the files and folders disappear at random, and I can only access them again if I reopen the share from scratch in Windows Explorer. SELinux is disabled and the firewall ports for Samba are open. The following software is installed:
I have an iSCSI target on my Ubuntu 10.04 LTS server that is giving me access to 3 x 2TB volumes. I am using LVM to create 1 big volume and using it as a place for my backup software to write to. Upon initial setup I tested to see if the connection would still be there after a reboot, and it was. After going through the trouble of configuring my backup software (ZManda) and doing a few very large backups I had to reboot my SAN (OpenFiler) for hardware reasons. I took my server down while performing the maintenance and brought it back up only after the SAN work was done. Now, the LVM volume is listed as "not found" when booting.
Using iscsiadm I can see the target but LVM doesn't know of any volumes that exist using those targets. I need to get it back up and running and then troubleshoot the reboot issue.
iscsi-client OS: SLES10 SP3 i586
I configured this SLES10 SP3 box as an iscsi client via YaST (yast2 iscsi-client), but neither 'lsscsi' nor 'fdisk -l' shows any iscsi disk.
# rcopen-iscsi restart
Closing all iSCSI connections: Logging out of session [sid: 1, target: iqn.2010-03.com.ibm:sn.135026430, portal: 192.168.0.1,3260]
I'm trying to increase the size of an iSCSI device. On the LUN side, the provisioned size has already been expanded; I grew the device from 15GB to 30GB.
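After growing the LUN on the array, the initiator has to be told to re-read the device size before anything on top can be grown. A sketch with open-iscsi (the device name and mount point are assumptions, and which grow command applies depends on what sits on the device):

```
# Rescan the logged-in sessions so the kernel picks up the new 30GB capacity
iscsiadm -m session -R

# Confirm the kernel now sees the new size
fdisk -l /dev/sdb

# Then grow whatever sits on top, e.g. an XFS filesystem mounted at /data:
xfs_growfs /data
# ...or, if the device is an LVM physical volume:
pvresize /dev/sdb
```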
I want to create an iSCSI connection that mounts the /home directory on a share on my NAS via iSCSI. Does anyone know if this is possible on a RHEL 5.4 machine? I am building the server from scratch and then creating the iSCSI mount point in /etc/fstab. After the /home directory is mounted on the mail server, I will copy all the mailboxes over to /home via iSCSI.
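Yes, this works on RHEL 5.4. The one catch with fstab is that the mount must wait for networking and the iscsi service, which is what the _netdev option is for. A hypothetical entry (the by-path name and filesystem type are assumptions):

```
# /etc/fstab entry for an iSCSI-backed /home; _netdev defers the mount
# until the network (and the iscsi init script) is up
/dev/disk/by-path/ip-192.168.1.50:3260-iscsi-iqn.2010-01.com.example:mail-lun-0  /home  ext3  _netdev  0 0
```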
I have just set up a highly available SAN with 2 nodes running Debian 7.

Replication and high availability work fine, but I have a problem with iSCSI.

I have created an iSCSI initiator on each of two Windows Server 2012 machines. My problem is that when I create a file or a directory on one server, I have to take the disk offline and then online again before the change appears on the other, which is not really usable.

So I would like to know if there is a way to make this happen automatically.
I'm running 10.04 64-bit diskless on ESXi, installed as a minimal virtual machine. I want this server to access an iSCSI drive. The machine can view the iSCSI shares with iscsiadm, and can even log into the drive. When I do an iSCSI login, this appears in /var/log/messages:
Aug 17 11:08:21 ubuntutest kernel: [1123295.329972] scsi4 : iSCSI Initiator over TCP/IP
So, it appears that open-iSCSI is working correctly. But no new /dev/sd* nodes appear, and nothing new appears in /dev/disk/by-path. I'd expect to see /dev/disk/by-path/ip-XXXXXXXXX. fdisk -l shows nothing but the boot drive. My guess is that the "minimal kernel" doesn't include some necessary module or driver.
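If a missing module really is the cause, that is checkable: the SCSI disk driver (sd_mod) must be present for /dev/sd* nodes to be created for a logged-in session. A sketch:

```
# Is the SCSI disk driver loaded? Load it if not.
lsmod | grep -w sd_mod || modprobe sd_mod

# Session detail: does the target report any attached LUNs at all?
iscsiadm -m session -P 3

# Force a rescan of the session's SCSI host for devices
iscsiadm -m session -R
```

If -P 3 shows the session but no attached SCSI devices even after a rescan, it is also worth checking that the target is actually exporting a LUN to this initiator's IQN.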
I currently want to connect to a NAS unit that I have. I have installed open-iscsi and have made the following changes to the iscsid.conf file:
# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
node.session.auth.username = "user"
node.session.auth.password = "12 digit pass"
[Code].....
My goal is to get the iSCSI initiator to connect automatically after a reboot. Mounting after connecting will, I'm sure, be another challenge.
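For the reconnect-at-boot part, open-iscsi logs in at startup to any node record marked automatic. A sketch (the target IQN and portal address are assumptions):

```
# Mark the node record so iscsid logs in at startup
iscsiadm -m node -T iqn.2010-01.com.example:nas.lun0 -p 192.168.1.50 \
    --op update -n node.startup -v automatic

# Make sure the initiator service itself starts at boot (Debian/Ubuntu style)
update-rc.d open-iscsi defaults
```

Setting node.startup = automatic in iscsid.conf before running discovery has the same effect for all newly discovered targets.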
I'm having trouble connecting to my QNAP iSCSI target from Ubuntu (10.04). I've followed the instructions in the QNAP manual, but something goes wrong when I need to list the nodes. When I enter
On Windows I installed the iSCSI initiator, which tells me my PC's initiator ID. I then take that ID and enter it on my storage, which automatically presents the extra disk to me. How can I find out the ID assigned to my Ubuntu PC?
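On Ubuntu with open-iscsi, the equivalent of the Windows initiator's ID is the initiator name (IQN), which is stored in a config file. A sketch:

```
# The initiator name the storage side needs to whitelist
cat /etc/iscsi/initiatorname.iscsi
# Typically prints a line of the form: InitiatorName=iqn.1993-08.org.debian:01:xxxxxxxxxxxx
```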