General :: Share 1 SCSI Storage With 2 RHEL5 Servers?
Dec 8, 2010
I have one SCSI storage array, a JetStor SATA 416S, split into two halves; each half is a 12 TB RAID5 volume. Is it possible to dedicate each half to a different RHEL5 server? I have an LSI22320-R Ultra320 dual-channel SCSI adapter in each server.
The first server sees both halves, however the second server doesn't. On the first server:
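If the cabling and termination check out, it may just be that the second server's HBA was never rescanned after the array was carved up; RHEL5 lets you trigger a rescan through sysfs without rebooting. A minimal sketch, assuming the LSI adapter shows up as host0 (check the listing first):
Code:
# list the SCSI hosts the kernel registered
ls /sys/class/scsi_host/
# rescan all channels, targets and LUNs on the assumed host0
echo "- - -" > /sys/class/scsi_host/host0/scan
# see what turned up
cat /proc/scsi/scsi
If the array supports per-host LUN mapping, that is also the place to make sure each half is presented to only one server.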
I've been asked to investigate presenting the same SAN LUN to two or more RHEL5 hosts. The hosts provide independent applications, so they're not clustered from an application perspective. The shared storage location would be used as a common area for imports/exports. We're hoping to reduce file transfer times between the hosts by eliminating the need to copy the files between two storage locations. Some of our hosts run Advanced Server and some are standard. Is there a file system I can use that will allow multi-host access without running Advanced Server with clustering services on all hosts?
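If running cluster services everywhere is off the table, one common compromise is to present the LUN to a single host and re-export the import/export area over NFS to the rest; it avoids the corruption you would get from mounting ext3 on several hosts at once. A rough sketch, with host names and the mount point as placeholders:
Code:
# on the host that owns the LUN, in /etc/exports
/export/shared  hostB(rw,sync,no_root_squash) hostC(rw,sync,no_root_squash)
# apply it:  exportfs -ra
# on the other hosts
mount -t nfs hostA:/export/shared /mnt/shared
True simultaneous block-level access would need a cluster filesystem such as GFS, which is exactly the clustering dependency you're trying to avoid.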
We just set up an HP DL380 with CentOS 5 and ran all of the latest updates. I am trying to attach a SCSI-attached Compaq array (no model number). I can see the array from the BIOS and created a RAID group on it from there. However, from LVM and lvscan, I don't see it at all. I checked dmesg and there are no errors. Also, interestingly, /proc/scsi/scsi is empty.
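If that Compaq array hangs off a Smart Array controller, CentOS 5 drives it with the cciss driver rather than the SCSI midlayer, which would explain an empty /proc/scsi/scsi; the logical drives then appear under /dev/cciss/ instead of /dev/sdX. A quick check, assuming that's the controller in play:
Code:
# is the Smart Array driver loaded?
lsmod | grep cciss
# logical drives created in the controller BIOS show up here
ls -l /dev/cciss/
cat /proc/driver/cciss/cciss0
LVM won't report anything until a PV is actually created on the logical drive (e.g. pvcreate on /dev/cciss/c0d0), so lvscan coming back empty is expected on a freshly built RAID group.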
My Goal: I'm migrating data from the OpenSUSE server to the 2008 Storage Server. The idea is that all the data will reside on the 2008 Storage Server in an NFS share, which will be mounted on the OpenSUSE server at the same location where the data originally sat.
I'd like this to be completely transparent to everyday use -- all programs designed to interact from a few other servers / computers will still work without modification, and, more importantly, all file permissions can remain the same. As far as I've seen, SMB mounting doesn't fully preserve the *nix file permissions. I'm hoping the NFS mount will, but please tell me if I'm wrong. SMB is so much more straightforward and already working. I should also note speed is a giant plus for NFS.
The Players:
Windows Server 2003 - the Active Directory server
Windows Storage Server 2008 - the new file server
OpenSUSE 10.3 - the workhorse server
I have no control over the operating system choice. I have no control over the AD server. I have limited, intermittent access to the person who does have full control of the AD server. I have full control over the file server and the workhorse server.
What I've done: The Storage Server is running Services for Network File System (NFS). It has a 6.5ish TB RAID partition configured for NFS and Samba/CIFS sharing, which is used for the data storage. The Samba/CIFS share is mountable and accessible, but I haven't found a way to maintain proper user permissions on the files. When I connect, the owner and group for every file are root:root, which led me to NFS in hopes this would fix the problem.
The AD server admin has installed Services for Unix, and we imported the OpenSUSE passwd and group files. Either this didn't work, or it takes a bit more configuration. Tomorrow morning we're going to take a deeper look into the SFU configuration with the AD.
Right now I can mount the NFS share on the OpenSUSE server under the root account:
Command results truncated to what I think is relevant info. Ask if you need more.
Code:
My Problems:
I'm hoping the "Permission denied" error is solely because the UID and GID mappings are incorrect. I have, however, tried to create an NFS share with full permissions to "Everyone" and anonymous access, and I get the same results.
Also I've run into another roadblock. Why do non-root accounts fail to execute the mount /mnt/datadir command?
Code:
AFAIK 1048 is not a restricted port, so regular users should have access to bind to the port. I also have a Gentoo machine where regular users are allowed to execute the command. The 'mount' command shows that the NFS share was mounted with the username executing the command too.
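For the non-root mount failure: mount(8) is a privileged operation regardless of which source port the client ends up binding, so the port number isn't the issue. The usual way to let ordinary users mount an NFS share is the user option in /etc/fstab; a sketch, with the server name and export path as placeholders:
Code:
# /etc/fstab
storageserver:/datadir  /mnt/datadir  nfs  noauto,user,rw,soft  0 0
# any local user can then run
mount /mnt/datadir
(On the Gentoo box where it works, there is most likely already a user or users entry for that mount point.)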
Server NFS Config:
Code:
The following are the settings on localhost
The above was written yesterday and not posted in hopes I could solve the problem this morning. I failed.
Update: The domain admin installed an NIS server on the AD server. I installed an NIS server on a Gentoo Linux box. They weren't able to communicate. The Linux box thought the Windows NIS service wasn't running and Windows couldn't find the Linux machine. NIS led to a dead-end right about there. I also tried to manually link the user accounts to UNIX UIDs via the AD user properties window to no avail.
I'm interested in buying new hardware for my company. The old server (now 10 years old) should be replaced with a new one. Until now I have been looking at different hardware suppliers, boards, and various other options. I found a Tyan board [URL]. The hardware spec is quite interesting and the board would fulfill our requirements.
Will both storage devices be supported by Ubuntu or Debian?
Some of our workstations have LTO drives attached and they seem to drop off every now and again; the only thing that picks them up again (besides a reboot) is the famous rescan-scsi-bus script from here.
The thing is that I'd like non-root users to be able to run this script, which in turn needs root access to write to /proc/scsi/scsi.
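A narrow sudo rule is probably the cleanest way to let them run it without handing out root; a sketch, assuming the script lives at /usr/local/bin/rescan-scsi-bus.sh and the users are in a group called operators (both placeholders):
Code:
# add via visudo
%operators ALL=(root) NOPASSWD: /usr/local/bin/rescan-scsi-bus.sh
# members of operators can then run
sudo /usr/local/bin/rescan-scsi-bus.sh
Make sure the script itself isn't writable by those users, since the rule effectively lets them run its contents as root.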
I am trying to disable USB storage on our servers. What I did is the following:
1> modprobe -vr usb_storage
2> blacklist usb_storage
It is working fine, but root can load the module into the kernel again [ modprobe -v usb_storage ]. I want to restrict this as well. My requirement is that not even root can access USB storage.
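A blacklist entry alone only stops the module from loading automatically; an install override in modprobe's configuration also neuters an explicit modprobe. A sketch (the file name is arbitrary):
Code:
# /etc/modprobe.d/nousbstorage
blacklist usb_storage
install usb_storage /bin/true
# "modprobe usb_storage" now runs /bin/true instead of loading the module
There is no way to lock root out completely, though: root can always edit that file back, insmod the module directly, or boot other media, so a hard guarantee against root has to come from physical or BIOS-level controls.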
We have set up a dedicated server with CentOS 5.4 (kernel 2.6.18-164.el5PAE). On it we have mounted an external storage device of over 1 TB, and we have also shared this storage device via a Samba share.
Sometimes when we access this large storage from a Windows machine the server hangs and we are forced to restart it. Over the past few days this has been happening often. How can we track down this issue, and is there a specific area we should look at to avoid it?
I have about 50 servers with at least 200 GB free on each host. I need close to 10 TB of continuous shared storage between almost half of the servers for a new app I am building.
I was looking into buying a storage device, but was wondering if I could get any use out of the free space on all the servers, grouping all of the free storage from the hosts into one redundant storage array.
I am facing the same error on most of the HCL servers. The problem is that it sometimes throws an error while booting and sometimes it doesn't. The error is:
Feb 13 13:17:25 fe13s kernel: Adapter 0: Bus A: The SCSI controller was reset due to SCSI BUS noise or an invalid signal. Check cables, termination, termpower, LVDS operation, etc.
Feb 13 13:17:30 fe13s kernel: Adapter 0: Bus B: The SCSI controller successfully recovered from a SCSI BUS issue. The issue may still be present on the BUS. Check cables, termination, termpower, LVDS operation, etc
Feb 13 13:29:15 fe13s kernel: Adapter 0: Bus B: The SCSI controller successfully recovered from a SCSI BUS issue. The issue may still be present on the BUS. Check cables, termination, termpower, LVDS operation, etc.
My understanding was that /proc/partitions gets populated in the same fashion as /proc/scsi/scsi, i.e. the device described by the first entry of /proc/scsi/scsi corresponds to the first entry of /proc/partitions, and so on for the rest.
With this assumption, in my project I related the first entry of /proc/scsi/scsi to the first entry of /proc/partitions to get its total size, and did the same for all entries.
But I observed some differences in the following scenario:
1) The first 4 entries in /proc/scsi/scsi are SAN LUNs attached to my system, for which the actual device names in /dev/ are sda, sdb, sdc and sdd.
2) The last 4 entries are the internal HDDs on the same system. In /dev/, their respective device names are sde, sdf, sdg and sdh.
(Output attached at end of the thread)
But in /proc/partitions, the device order is different.
You can see their respective sizes in the /proc/partitions output as well.
So my question is that in this particular scenario I can't relate the first entry of /proc/scsi/scsi with the first entry of /proc/partitions; scsi0:00:00:00 is not /dev/sde, because it is actually /dev/sda.
It seems that my assumption is wrong in this scenario.
Is there any way or mechanism to figure out the actual device name in /dev/ for an entry in /proc/scsi/scsi?
How should my application relate /proc/scsi/scsi entries to their respective device names and sizes?
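The two proc files aren't guaranteed to list devices in the same order, so the H:C:T:L address has to be resolved through sysfs (or with lsscsi, which does exactly this mapping). A sketch, noting that the exact sysfs layout differs slightly between kernel versions (older kernels expose a block:sdX symlink instead of a block/ directory):
Code:
# simplest: lsscsi prints the SCSI address next to the device node
lsscsi
# or walk sysfs (layout as on recent kernels)
for dev in /sys/class/scsi_device/*; do
    addr=$(basename "$dev")
    name=$(ls "$dev/device/block" 2>/dev/null)
    size=$(cat /sys/block/$name/size 2>/dev/null)   # in 512-byte sectors
    echo "$addr -> /dev/$name ($size sectors)"
done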
When I enter "cat /proc/scsi/scsi" I'm returned with "cat: /proc/scsi/scsi: No such file or directory". I've tried this on two different installs on two different machines.
I have a desktop with Ubuntu and I've set up Samba to share files with my Windows 7 laptop. I can access my home folder just fine except for my NTFS storage partitions on the desktop's HDD and my home folder's Downloads folder (which times out whenever I try and open it).
Is there an alternative way to share files between Linux and Windows 7?
I have a large 12 TB RAID array attached to one of my Ubuntu server machines. The RAID volume is formatted with NTFS. The problem is that I cannot mount this volume in Ubuntu. I can read it normally if I attach it to a Windows machine. This is the output from "sudo fdisk -l":
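A 12 TB volume is almost certainly GPT-partitioned, and the classic fdisk only understands MBR, so its output is misleading here; parted can read the GPT label, and the NTFS filesystem itself is mounted with ntfs-3g. A sketch, assuming the array shows up as /dev/sdb and the data lives on the first partition:
Code:
sudo parted /dev/sdb print        # shows the GPT partitions fdisk cannot
sudo apt-get install ntfs-3g      # if not already present
sudo mkdir -p /mnt/raid
sudo mount -t ntfs-3g /dev/sdb1 /mnt/raid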
We've recently created 3 new RHEL5 servers and added them to the existing workgroup which all our other 7 RHEL4 servers run on. (All servers have been added into DNS). We seem to be having a few issues pinging 3 of the RHEL5 servers.
Below is an example of our ping tests:
WinsSrv1: can ping all but RHELSrv1
WinsSrv2: can ping all RHEL servers
WinsSrv3: can ping all but RHELSrv2
WinsSrv4: can ping all but RHELSrv2 & RHELSrv4
WinsSrv5: can ping all RHEL servers
There is no real pattern as to which servers we can ping. We can ping all RHEL4 servers with no issues. All RHEL5 servers can ping themselves with no issues.
We have configured software-based RAID5 with LVM on our RHEL5 servers. Please let us know if it is a good idea to configure software RAID on live production servers. What are the disadvantages of software RAID compared to hardware RAID?
I'm running Ubuntu 9.10 with KVM. I've used a howto to configure my network, and it seems to work fine. I've installed Virtual Machine Manager, and when I go to create my virtual machine, I see that the image is automatically created in /var/lib/libvirt/images. I have a totally separate path for my images. How do I configure a different image directory?
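libvirt tracks image locations as storage pools, so the usual fix is to define an extra directory-backed pool pointing at the custom path and pick it when creating guests in virt-manager. A sketch, with the pool name and path as placeholders (the dashes are the positional placeholders older virsh versions expect):
Code:
virsh pool-define-as vmimages dir - - - - /srv/vm-images
virsh pool-build vmimages
virsh pool-start vmimages
virsh pool-autostart vmimages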
I want to install Ubuntu Server on my PowerEdge 1955 with EMC storage, but at the partitioning step it says: "the following partitions will be formatted: ...<here goes all the LVMs in the storage>...". I have already tried all the options in the partitioning step without success.
I am trying to create a Linux cluster with two servers and one storage array. I have mounted some filesystems on both servers, but when I create a file on server (1), I don't see the file on server (2).
Is that a problem with the Linux or storage configuration?
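Most likely it's neither, but the filesystem choice: ext3 and friends assume a single mounting host, so each server caches metadata independently and never sees the other's writes (and mounting the same ext3 volume read-write on two nodes will eventually corrupt it). Concurrent access to shared block storage needs a cluster-aware filesystem such as GFS2 or OCFS2 with its lock manager running on both nodes. A rough OCFS2 sketch, with the device and cluster name as placeholders:
Code:
# on both nodes, after describing the cluster in /etc/ocfs2/cluster.conf
/etc/init.d/o2cb online mycluster
# format once, from one node only
mkfs.ocfs2 -L sharedvol /dev/mapper/shared_lun
# then mount on both nodes
mount -t ocfs2 /dev/mapper/shared_lun /shared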
I'm planning on setting up a home file server. I was wondering what platform would be recommended for something like this. The server would be used mainly for media storage which would be shared between an HTPC and a couple of desktops and laptops. I was thinking of just getting whatever motherboard has the most SATA headers on it (which currently seems to be something P55-based) and setting up a RAID5 fakeraid with some 1.5 or 2 TB drives, with the OS in RAID1 on whatever drives I have lying around. Is there anything flawed with this approach? P55 boards with 10 SATA headers are currently upwards of $200, which is kind of pricey. Is there a more economical route that I should consider? Also, are there any known problems with setting up a fakeraid like this using certain motherboards' SATA controllers?
My 11.04 installation is running beautifully in a VM on ESXi. I'm trying to add storage, so I added the disks, assigned them to the VM in VMware, then tried to mount them when I received the error code...
My setup: HP MicroServer, booting VMware from a USB drive. Hardware RAID card (Adaptec RAID 2405), 2 x 250 GB HDD in RAID1 (datastore1) with the VMs, 2 x 2 TB HDD in RAID1 (datastore2) - the storage I'm trying to add.
A search of the above errors yielded many results, all of them different scenarios from mine.
Can ANYONE point me in the right direction on how to use storage on multiple servers as a single cluster? I thought a storage cluster was for that, but after much googling, and even more help from here, I don't think that achieves my goal. My goal is to have multiple servers share a file system, acting as somewhat of a network RAID, so if node A goes down the files are available on other nodes, and hopefully, when the capacity of the nodes is reached, I can add nodes to expand the "cluster".
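One stack that matches this description is GlusterFS: each server contributes a local directory (a "brick"), a replicated or distributed-replicated volume keeps copies on more than one node, and the volume can be grown later by adding bricks. A minimal sketch, with hostnames and brick paths as placeholders:
Code:
# after installing glusterfs-server on all nodes, from node1:
gluster peer probe node2
gluster volume create sharedvol replica 2 node1:/bricks/b1 node2:/bricks/b1
gluster volume start sharedvol
# clients (or the nodes themselves) mount it like any network filesystem
mount -t glusterfs node1:/sharedvol /mnt/shared
# capacity is expanded later by adding another replica pair
gluster volume add-brick sharedvol node3:/bricks/b1 node4:/bricks/b1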
I am going to install Oracle RAC on two servers with shared SAN storage (servers and storage are IBM). OS = RHEL 5u5 x64.
We used the multipathing mechanism and created multipath devices, i.e. /dev/mapper/mpath1. Then I created the raw device /dev/raw/raw1 on top of this /dev/mapper/mpath1 block device, as per the prerequisites for Oracle Cluster. Everything looks good, but we faced the following performance issue.
When we run the command
#dd if=/dev/zero of=/dev/mapper/mpath1 bs=1024 count=1000
the write rate is approx. 34 MB/s. But if we run the command
#dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=1000
the write rate is very slow, around 253 KB/s.
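That gap is expected rather than a multipath problem: writes to the /dev/mapper device go through the page cache, so dd returns as soon as the data is buffered, while the raw device does unbuffered, synchronous I/O, and bs=1024 turns the test into a thousand individual 1 KB round trips to the array. A fairer comparison uses a larger block size and direct I/O on the block device, roughly:
Code:
dd if=/dev/zero of=/dev/mapper/mpath1 bs=1M count=100 oflag=direct
dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=100
Oracle issues large, aligned I/Os against the raw device, so the bs=1024 figure says little about how the database will actually behave.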
I need some advice, tips, or maybe your own experiences about building a home data storage box or NAS. Here are some thoughts / requirements I think it should have:
It should be expandable. I'll stick in a couple of 1 TB HDDs and a little later I'll stick in some more.
It should integrate easily with both Ubuntu and Windows 7. Ideally it'll be an integrated part of the filesystem.
I'm thinking some sort of RAID for backing up my data. RAID 1 seems like such a waste, but then again, these days HDDs are cheap.
And when I do add more HDDs, I'd like them to appear as one big storage unit instead of separate drives.
Any suggestions and tips on how to go about this are welcome. Questions are plenty: should I go with server hardware, or is a bigger ATX case and standard hardware enough? I'll need some pointers, so keep 'em coming.
I have collected a number of computers over the years, and now I would like to put them to good use. I considered UEC, but many of them do not support hardware virtualization, and all I really need is storage. Across all the machines, I estimate that I have 4-5 terabytes of storage, all going to waste because each one has relatively little storage space. Is there any way I could set up a redundant storage solution that utilizes these machines in a networked system?
I run Debian on my old computer to use it as a server. Everything is configured properly so that it functions as a web server. Now that summer is coming closer, I will not be home most of the time, and I was thinking of using part of my server to upload/download files. Is there a nice package that provides an easy interface for such a task? I am referring to something like the wikimedia package, but just for downloading/uploading files.
I'm trying to delete directories (long story, Mac temp files there, Windows not cooperating) on a server connected to an HP Modular Smart Array 20 set up as RAID5. The system is currently running Windows. I've booted from a 9.10 LiveCD but can't see the external drives. Is it correct that I need to install mdadm to "see" those drives from the LiveCD? From a different machine (Linux) I can mount the drive using Samba like so:
I have admin privileges on the Windows OS. In Linux (or Windows beforehand), is it possible to take ownership of the directories so that I can do rm -f -r <dir>?
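mdadm only assembles Linux software RAID, so it won't reveal a hardware Smart Array volume; from the LiveCD you would need the controller driver (cciss for Smart Array hardware) plus an NTFS driver instead. Since the Samba route already works from the other Linux box, deleting over the mounted share with an account that has delete rights on those directories is probably the simpler path (NTFS ownership is enforced by the Windows server, not by the client). A sketch, with the server, share and account as placeholders:
Code:
sudo mount -t cifs //winserver/share /mnt/msa -o username=Administrator
rm -rf /mnt/msa/path/to/stale/dirs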