Red Hat :: Mapping /proc/scsi/scsi Entries With Respective Device Names In /dev/ Directory
Apr 11, 2011
My understanding was that /proc/scsi/scsi and /proc/partitions get populated in the same order, i.e. the first entry in /proc/scsi/scsi corresponds to the first entry in /proc/partitions, and so on for the rest.
So, with this assumption, my project related the first entry of /proc/scsi/scsi with the first entry of /proc/partitions to get its total size, and did the same for all entries.
But I observed a difference in the following scenario:
1) The first 4 entries in /proc/scsi/scsi are SAN LUNs attached to my system, whose actual device names in /dev/ are sda, sdb, sdc and sdd.
2) The last 4 entries are the internal HDDs on the same system. In /dev/, their respective device names are sde, sdf, sdg and sdh.
(Output attached at end of the thread)
But in /proc/partitions, the device order is different.
You can see their respective sizes in the /proc/partitions output as well.
So, my question is: in this particular scenario I can't relate the first entry of /proc/scsi/scsi with the first entry of /proc/partitions, i.e. scsi0:00:00:00 is not /dev/sde, because it is actually /dev/sda.
It seems that my assumption is wrong in this scenario.
Is there any way or mechanism to figure out the actual /dev/ device name for an entry in /proc/scsi/scsi?
How should my application relate /proc/scsi/scsi entries with their respective device names and sizes?
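One approach that should work on 2.6 kernels is to skip /proc/partitions entirely and read the mapping out of sysfs, where each block device links back to its SCSI address (Host:Channel:Id:Lun). The sketch below assumes that sysfs layout; the lsscsi utility, where installed, reports the same mapping directly.
Code:
# Sketch: map each SCSI disk to its H:C:T:L address and size via sysfs
# (assumes a 2.6 kernel where /sys/block/sdX/device links to the SCSI device).
for blk in /sys/block/sd*; do
    name=$(basename "$blk")
    hctl=$(basename "$(readlink -f "$blk/device")")   # e.g. 0:0:0:0
    sectors=$(cat "$blk/size")                        # size in 512-byte sectors
    echo "$hctl  /dev/$name  $((sectors / 2048)) MB"
done
The H:C:T:L string printed here is the same address /proc/scsi/scsi reports as Host/Channel/Id/Lun, so an application can key on it instead of relying on ordering.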
When I run "cat /proc/scsi/scsi" I get "cat: /proc/scsi/scsi: No such file or directory". I've tried this on two different installs on two different machines.
Some of our workstations have LTOs attached and they seem to drop off every now and again; the only thing which picks them up again (besides a reboot) is the famous rescan-scsi-bus script from here.
The thing is that I'd like non-root users to be able to run this script, which in turn needs root access to write to /proc/scsi/scsi.
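One way to allow that, assuming the script is installed at /usr/bin/rescan-scsi-bus.sh (adjust the path and the group name to match your setup), is a sudoers entry so those users can run only that one command as root:
Code:
# Added with visudo; lets members of the "operators" group run the rescan
# script as root without a password (group name and path are placeholders).
%operators ALL=(root) NOPASSWD: /usr/bin/rescan-scsi-bus.sh
The users would then invoke it as: sudo /usr/bin/rescan-scsi-bus.sh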
I am facing the same error on most of the HCL servers. The problem is that it sometimes throws the error while booting and sometimes does not. The error is:
Feb 13 13:17:25 fe13s kernel: Adapter 0: Bus A: The SCSI controller was reset due to SCSI BUS noise or an invalid signal. Check cables, termination, termpower, LVDS operation, etc.
Feb 13 13:17:30 fe13s kernel: Adapter 0: Bus B: The SCSI controller successfully recovered from a SCSI BUS issue. The issue may still be present on the BUS. Check cables, termination, termpower, LVDS operation, etc
Feb 13 13:29:15 fe13s kernel: Adapter 0: Bus B: The SCSI controller successfully recovered from a SCSI BUS issue. The issue may still be present on the BUS. Check cables, termination, termpower, LVDS operation, etc ...
Since May 12, 2009, our LifeKeeper system has logged the error "lifekeeper error: DEVICE FAILURE on SCSI device '/dev/add'", but it kept running normally. Then last week it failed over to the standby server. The disk is still running, and the error still comes up.
One of my servers contains two scsi enclosures. After hot removing a disk and hot adding another the new disk gets assigned a device file but the enclosure (in sysfs) doesn't want to see it.
How can I force the enclosure to recognise the hot-added disk?
Situation in sysfs:
The device link is dead after hot-removing the old disk and stays missing after hot-adding the new one. The hot-added device gets assigned a device file and is reachable. Even ...
I am attaching an LTO-3 tape drive to my RHEL5 Linux machine. Every time, I have to restart the machine for the tape drive to be detected. Is there any way to rescan the buses to detect newly attached SCSI devices? Solaris has "devfsadm" and "iostat" for this; I need the same kind of thing in Linux.
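On 2.6 kernels the usual trick, without any extra tools, is to write to the scan file of each SCSI host adapter in sysfs. Host numbers vary per machine, so the loop below is only a sketch:
Code:
# Ask every SCSI host adapter to rescan; "- - -" is a wildcard for
# channel, target and LUN.
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
# A newly detected tape drive should then show up as /dev/st0 / /dev/nst0.
The rescan-scsi-bus script mentioned earlier in this thread wraps the same mechanism.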
I created a new disk on our SCSI SAN. I then ran the following command for each host: echo "- - -" > /sys/class/scsi_host/host1/scan. In dmesg it shows it found a device, /dev/sdg, but when I do fdisk -l it never lists /dev/sdg. I just did this the other day on another server and it worked fine like that. This is RH 4.8.
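A few things worth checking in that situation; the commands below assume the new LUN really did come up as sdg:
Code:
grep sdg /proc/partitions     # is the kernel exporting the block device at all?
ls -l /dev/sdg                # was the /dev node created?
# On an older RHEL4 box the node can be created by hand if udev missed it,
# using the major/minor shown in /proc/partitions, e.g. mknod /dev/sdg b 8 96
fdisk -l /dev/sdg             # query the disk directly instead of relying on
                              # the bare "fdisk -l" scan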
I am developing for a Linux-based device for which the hotplug option is deactivated. As part of optimizing the code, we also don't want to create device files for unused devices. We understand that both USB-attached and fixed SCSI hard disks create device files like /dev/sda, /dev/sda1, /dev/sdb, /dev/sdb1, etc. Is this understanding correct?
In the case of USB-attached SCSI devices, would the driver create this device file entry? How is it created? Can somebody please tell me how it is created automatically? If I attach a fixed SCSI hard disk before boot-up (and create the device file /dev/sda1), would the USB SCSI device driver create device files starting from /dev/sdb automatically?
Suppose during a script's execution I attach one or two new disks, with different vendor IDs, to the host machine. How do I know which disk corresponds to which file in the /dev/ directory? I just want to perform some operations on those devices from a script, so how will I know which file in /dev corresponds to which disk (same size but different vendor ID)?
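udev already records the vendor/model/serial of each disk, so a script can look a disk up by identity rather than by /dev/sdX order. A sketch, assuming a reasonably recent udev (older distros ship scsi_id/vol_id instead of udevadm); the sysfs attributes at the end work on any 2.6 kernel:
Code:
# Persistent names that encode vendor/model/serial, independent of probe order:
ls -l /dev/disk/by-id/
# Query one node's identity from a script:
udevadm info --query=property --name=/dev/sdb | grep -E 'ID_VENDOR|ID_MODEL|ID_SERIAL'
# Plain sysfs alternative:
cat /sys/block/sdb/device/vendor /sys/block/sdb/device/model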
I've got the F13 LiveCD that I was able to boot and use with the "nomodeset" boot option. From the desktop I'm trying to perform an Install to Hard Drive. I've read the Install from LiveCD post regarding the creation of a /boot partition and a / root partition. I've tried creating them with and without the LVM group. But every time I attempt to install I get...
An error occurred mounting device proc as /proc: mount failed: (9, None). This is a fatal error and the install cannot continue.
Hardware is a Sager 8887 (P4, 3.06HT, 60GB HDD, Radeon 9000 graphics adapter)
I have a Linux system running on an older Sun V20z. The drives are mirrored in a software RAID1. The motherboard has interfaces for only IDE and SCSI. The system is old and is no longer able to handle the load we're putting on it. I also have a much newer Sun X4100. This system is presently unused and has a pair of SAS drives in it. The new server only has SAS and SATA connections on the motherboard, though. I'm trying to think of the best way to clone the V20z onto the X4100. I don't mind breaking the mirror, knowing I can re-establish it later. I prefer not to do a fresh OS install followed by a tape restore; I would much rather break the mirror and clone one of the SCSI drives to a SAS drive. I do have a USB-to-SATA adaptor for migrating external drives. Anyone know if this will work with a SAS drive? Any pointers on the best way to migrate this? I'm thinking that even if the cloning is successful, I'm going to have to muck with GRUB to get it to boot from the SAS drive.
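If both boxes can sit on the same network at once, one low-tech option is to boot the X4100 from rescue media and stream a raw image of the broken-out mirror member over the wire. The device names, hostname and the netcat invocation below are placeholders (netcat's listen syntax differs between versions), and GRUB will still need to be reinstalled on the target afterwards:
Code:
# On the X4100, booted from rescue media, target SAS disk assumed to be /dev/sda:
nc -l 9000 | dd of=/dev/sda bs=1M      # some netcats want "nc -l -p 9000"

# On the V20z, streaming the detached mirror member (assumed /dev/sdb here):
dd if=/dev/sdb bs=1M | nc x4100-rescue 9000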
I installed OpenSuse 11.4 (x86_64) a couple of days ago. One of my drives, a PC-DVD RAM (Creative), is not working. This worked under 11.3. The SCSI drive is connected to a PCI/SCSI adapter (AHA-2904). The message I see at boot is: ata_id[443] HDIO_GET_IDENTITY FAILED /dev/sr1. I also see the message ata_piix not found, and cannot find an option in the kernel to provide this. This causes the system to wait a long time and slows down boot dramatically.
Does Debian have any particular tools or nuances for installing new hardware? I saw some stuff on Ubuntu which is related... but you know.
The dmesg file shows that it's being recognized, but I don't think it's actually being used (i.e. there is no driver installed). This is what's in the dmesg file:
Unless someone has already compiled a driver for Debian Lenny for this hardware, I'm going to have to compile my own I guess. The driver package seems to come with something called mptlinux-4.00.43.00-src.tar.gz which I'm guessing can be compiled for any Linux, but looking at the instructions, it's pretty much beyond me how to get started. It talks about using kernel source to build a module and such and such.
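For what it's worth, most out-of-tree SCSI/RAID driver sources build against the running kernel's headers with the standard kbuild invocation. The directory name inside the tarball and the package names below are assumptions; the mptlinux tarball may ship its own Makefile or dkms configuration that does this for you:
Code:
# On Debian Lenny, install the toolchain and matching kernel headers:
apt-get install build-essential linux-headers-$(uname -r)

tar xzf mptlinux-4.00.43.00-src.tar.gz
cd mptlinux-4.00.43.00                      # hypothetical directory name
# Standard out-of-tree module build against the running kernel:
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
# Then copy the resulting .ko under /lib/modules/$(uname -r)/ and run depmod -a.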
I have installed an old HP scanner on my Fedora 11 system, and it works great. The scanner is connected through SCSI, and it gets recognized at boot time if the scanner is ON. However, if the scanner is OFF, I need to turn the scanner ON and reboot the workstation for the SCSI device to be recognized. Is there a way that I can rescan the SCSI device without rebooting?
I am trying to use a USB pen drive. The pen drive shows up under Computer but not as a drive in GParted. When I look in the log file I find the drive:
Code: May 29 10:11:46 CQ60 kernel: [ 112.942602] scsi 6:0:0:0: Direct-Access USBest USB2FlashStorage 0.00 PQ: 0 ANSI: 2
I do not know what this means, but it looks like the drive shows up as SCSI and not USB, or does it? I need a clue to get it working like a normal USB device.
I'm trying to install the Xubuntu 10.04 i386 alternate build on a Pentium 2 Compaq PC with a SCSI hard drive. I run the installer, but when loading it chokes trying to read/write the SCSI hard drive and fails to install.
Do I need SCSI drivers, or are they included and my install disc is corrupted? Windoze 98 and NT4 work fine on this PC.
I seem to be having a problem with the SCSI disk that I'm trying to install. The computer it is attached to was running Win XP and the SCSI disk worked fine. When we changed the computer over to Fedora 10, the SCSI disk was detected by the computer but is not accessible.
I have 3 WD external USB hard drives. Two are the same and are rotated for backup purposes, and my system assigns them both to /dev/sdb (only one is plugged in at a time); this is fine. The 3rd drive is used for a different purpose and may or may not be plugged in at the same time as one of the others. How can I make the 3rd drive be allocated to /dev/sdc all of the time, whether one of the other drives is plugged in or not?
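The kernel hands out sdX names in probe order, so /dev/sdc itself can't reliably be pinned, but a udev rule can give the third drive a stable alias that works regardless of what else is plugged in. The serial string below is a placeholder; the real one comes from udevadm info --query=property --name=/dev/sdX:
Code:
# /etc/udev/rules.d/99-backup-disks.rules  (sketch)
# Match the third drive by its serial number and add stable symlinks.
SUBSYSTEM=="block", KERNEL=="sd?", ENV{ID_SERIAL}=="WD_Elements_XXXXXXXX", SYMLINK+="backup3"
SUBSYSTEM=="block", KERNEL=="sd?1", ENV{ID_SERIAL}=="WD_Elements_XXXXXXXX", SYMLINK+="backup3-part1"
Mount points and fstab entries can then reference /dev/backup3-part1 (or, simpler still, the filesystem's LABEL or UUID) instead of /dev/sdc1.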
I installed Ubuntu Server 10.10 today and I am trying to mount the SCSI CD-ROM drive in a Dell PowerEdge 2850. I can see the device in /dev listed as scd0, but when I try to mount it on /media/cdrom or /mnt/cdrom I get a long list of I/O errors: Buffer I/O error on device sr0.
I have a virtual machine with Solaris 10. On this virtual machine I have to configure an iSCSI initiator.
The problem is that I have no physical SCSI devices to connect to my LAN.
Therefore I have to create a Virtual SCSI device on my host laptop and configure it as an iSCSI server (iSCSI target) and share this virtual disk on the LAN.
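One way to do that on a Linux laptop is the scsi-target-utils package (tgtd/tgtadm). The sketch below exports a file-backed virtual disk as an iSCSI LUN; the file path, size and IQN are placeholders, and the Solaris initiator side still has to be configured separately:
Code:
# Create a 2 GB backing file and export it via tgtd (scsi-target-utils).
dd if=/dev/zero of=/srv/iscsi/disk1.img bs=1M count=2048
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2011-04.com.example:virt.disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       -b /srv/iscsi/disk1.img
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL   # allow any initiator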
I need help with how SCSI and multipathing work in Linux. From the docs I have read, I understand that by using multipathing we can assign multiple paths to a SAN partition; if there is a problem, one path will fail over to another. However, I am not clear on how Linux recognizes the SAN partitions using the multipath drivers. For example, I have an HP ProLiant server with the following mounts:
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p3      59G   11G   46G  20% /
/dev/cciss/c0d0p1     494M   27M  443M   6% /boot
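A note on that output: the /dev/cciss devices are local Smart Array logical drives, so they are not multipath devices themselves. For SAN LUNs, each path shows up as its own /dev/sdX device and device-mapper-multipath groups the paths by the LUN's WWID into a single /dev/mapper device, which is what gets mounted. A minimal sketch of how that looks (names are examples):
Code:
multipath -ll                 # list multipath maps and the sdX paths behind them
# Minimal /etc/multipath.conf:
#   defaults {
#       user_friendly_names yes   # maps appear as /dev/mapper/mpath0, mpath1, ...
#   }
# Partitions on a map then show up as /dev/mapper/mpath0p1 and are mounted like
# any other block device; if one sdX path fails, I/O continues on the rest.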
I just installed CentOS 5 on an HP DL380 server, and it has two 72.8 GB SCSI hard drives. The problem I am having is that only one hard drive is being recognized, and it is not being recognized as SCSI. This is what I get from fdisk -l:
Disk /dev/cciss/c0d0: 72.8 GB, 72833679360 bytes
255 heads, 63 sectors/track, 8854 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

            Device Boot      Start         End      Blocks   Id  System
/dev/cciss/c0d0p1   *             1          13      104391   83  Linux
/dev/cciss/c0d0p2                14        8854    71015332+  8e  Linux LVM
As you can see, the system doesn't even see the second hard drive. How do I get both hard drives to be seen, and how do I get them to be recognized as SCSI?
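On a DL380 the disks sit behind a Smart Array controller, so they are never exposed as plain /dev/sdX SCSI devices; the cciss driver only shows the logical drives configured on the controller. The second physical disk should appear (as /dev/cciss/c0d1, or as part of a mirrored c0d0) once it is added to a logical drive in the array BIOS/ORCA or with HP's CLI, if that is installed. A couple of ways to inspect the current state (the proc path and tool name are assumptions about this setup):
Code:
cat /proc/driver/cciss/cciss0        # logical drives the cciss driver exposes
hpacucli ctrl all show config        # HP's array CLI, if the hpacucli package
                                     # is installed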