I am interested in turning my home server into something that I can store backups on. I do photography and therefore have a lot of photos. I use Mac OS X for my photo editing, so it must be accessible from my MacBook. I am new when it comes to network storage servers, so what would be the best solution for me to be able to back up my photos seamlessly? I would like it easy enough that others can back up files without any terminal commands and such. What would you suggest? CIFS? RAID? iSCSI?
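For what it's worth, the usual low-friction answer for a Mac-friendly backup share is Samba (CIFS): RAID is about redundancy inside the server, not about access, and iSCSI is overkill for drag-and-drop backups. A minimal sketch of a share definition, with the share name, path, and username as made-up placeholders:
Code:
# /etc/samba/smb.conf (fragment)
[photos]
    path = /srv/backups/photos
    read only = no
    valid users = photouser
The MacBook can then connect from Finder via Go > Connect to Server with smb://your-server/photos, no terminal needed.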
I built a CentOS 5 Linux cluster with GFS storage using a local RAID volume and share it with gnbd_export/import on two web servers. Now I need to expand that storage onto another server's local volume. I saw the picture in the manual, but I don't know how to create that scheme.
I can use gnbd_export on the second server and gnbd_import on the first. In that case I will have two volumes on the first storage server and I can expand the volume group, logical volume, etc. on it.
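A hedged sketch of that scheme, with made-up device, export, and volume names (and gnbd_serv already running on the exporting host):
Code:
# on the second server: export its local RAID volume
gnbd_export -e storage2 -d /dev/sdb1
# on the first server: import it and grow the existing stack
gnbd_import -i second-server
pvcreate /dev/gnbd/storage2
vgextend myvg /dev/gnbd/storage2
lvextend -l +100%FREE /dev/myvg/mylv
gfs_grow /mnt/gfs
Whether this is wise depends on the cluster layout; losing the second server would then take part of the volume group with it.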
Greetings, Fellow Knights Of The Penguin Clan....! I am having issues with device mapper seeing a 275 GB RAID 5 LUN from my SAN storage. I'm using an IBM 2145 SAN Volume Controller. I am able to see a 40 GB RAID 10 device, though.
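Not an answer, but a hedged diagnostic sketch that usually shows whether the LUN has reached the host at all (host numbers vary per system):
Code:
# rescan every SCSI host for newly mapped LUNs
for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done
# what device-mapper multipath currently sees
multipath -ll
# kernel messages about the new LUN
dmesg | tail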
It's been a real battle, but I am getting close. I won't go into all the details of the fight that I have had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1 TB WD HDD drives, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into the live "Try Ubuntu" mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the RAID components and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, grub2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell grub to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but it shows the two component disks of the array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL]. To recap: my problem is that after grub2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do but have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
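For reference, the local-top approach mentioned above usually boils down to a small script such as the sketch below (the general pattern, not the exact script from the linked blog), saved as /etc/initramfs-tools/scripts/local-top/dmraid, made executable, and baked in with update-initramfs -u:
Code:
#!/bin/sh
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
# activate the fakeraid sets before the root filesystem is mounted
/sbin/dmraid -ay
exit 0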
I am trying to get RAID 5 set up as storage. I have Karmic with MythTV set up on the primary drive and wish to use the RAID 5 just for storage with a JFS filesystem.
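A minimal sketch of what that usually involves with mdadm (the drive names are assumptions; substitute your own):
Code:
sudo apt-get install mdadm jfsutils
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.jfs /dev/md0
sudo mkdir -p /mnt/storage && sudo mount /dev/md0 /mnt/storage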
I have never done this before, but I stood up a new Debian (testing) x64 system. It only has 146 GB available for RAID 1, so I created a 500 GB iSCSI LUN on my NAS device on the network, and I am really confused about how to attach my Debian system to the iSCSI LUN I created. Right now the OS is installed entirely locally on the machine, but I would like the iSCSI LUN to be the /home directory for mail storage. Is this possible, or do I need to mount the LUN to a newly created folder / mount point that is locally attached?
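It is possible; the usual pattern is to log in to the LUN with open-iscsi, put a filesystem on it, and mount it at /home via fstab. A hedged sketch with the portal address, IQN, and device name as placeholders:
Code:
apt-get install open-iscsi
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2010-01.com.example:nas.lun0 -p 192.168.1.50 --login
# the LUN appears as a new disk, e.g. /dev/sdb; format and mount it
mkfs.ext3 /dev/sdb
echo "/dev/sdb  /home  ext3  _netdev  0 0" >> /etc/fstab
mount /home
Mounting over the existing /home works, but copy any existing home directories onto the LUN first.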
I have the latest Ubuntu (v10) and am trying to set up a RAID 5 to use as storage. I have three 1 TB drives along with the 160 GB OS drive. Is what I want to do possible, and is there a GUI interface to perform this, or clear instructions on how to accomplish it? I am a novice when it comes to Linux but am trying to wean myself off of Microsoft.
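As far as a GUI goes, Disk Utility (Palimpsest) in that Ubuntu generation can reportedly create an mdadm RAID array graphically; otherwise the mdadm sketch after the Karmic/MythTV post above covers creation. Whichever way the array is built, it helps to record it so it assembles at boot (array name and mount point below are assumptions):
Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
echo "/dev/md0  /srv/storage  ext4  defaults  0 2" | sudo tee -a /etc/fstab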
I am in a situation where I am stuck with an LVM cleanup process. Although I know a lot about AIX LVM, this is the first time I am working with Linux LVM2. The problem is that I created two RAID arrays on the storage, which appeared as mpath0 and mpath1 devices (multipath) on RHEL. I created logical volumes and volume groups and everything was fine until I decided to clean the storage arrays and ran the following script:
#!/bin/sh
cat /scripts/numbers | while read numbers
do
    lvremove -f /dev/vg$numbers/lv_vg$numbers
    vgremove -f vg$numbers
    pvremove -f /dev/mapper/mpath${numbers}p1
done
Please note that numbers was a file in the same directory, containing the numbers 1 and 2 on separate lines. The script worked well and I was able to delete the definitions properly (however, I now think I missed a parted command to remove the partition definition from the mpath devices). When I created three new arrays, I got devices mpath2 to mpath5 on Linux, and then I created vg0 to vg2. By mistake, I ran the above script again for cleanup purposes, and now I get the following error message:
Can't remove physical volume /dev/mapper/mpath2p1 of volume group vg0 without -ff
Now, after searching my mind, I realize that I have messed up (particularly because the mpath devices did not map in sequence to the vg devices; the mapping was mpath2 to vg0 and onwards). How can I clean up the LVM definitions? Should I go for the pvremove -ff flag or investigate further? I am not concerned about the data; I just want to clean up these pv/vg/lv/mpath definitions so that LVM is cleaned up properly and I can start over with new RAID arrays from the storage.
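Since the data does not matter, a hedged cleanup sketch; double-check each name against the first two commands before running anything destructive:
Code:
# confirm which mpath device each VG actually sits on
pvs -o pv_name,vg_name
multipath -ll
# force-remove the LVM metadata, then the partition and the multipath map
vgremove -ff vg0
pvremove -ff /dev/mapper/mpath2p1
kpartx -d /dev/mapper/mpath2
parted /dev/mapper/mpath2 rm 1
multipath -f mpath2
# repeat for the remaining vg/mpath pairs, then recreate from scratch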
I recently bought a new system that has an Intel Matrix Storage Manager "RAID controller" (ICH10R/DO) on it. I'm a bit baffled over what this really is. I see at [URL] that this controller is supported by the Linux dmraid and mdadm commands and has been supported in the 2.6 kernel for quite a while. This looks as though it is some sort of convergence of a brain-dead hardware chipset that requires software installed in the OS to manage it. Kind of reminds me of the wimpy Windows modems of the past. Here is how I deployed it: I set up two Seagate ST31500541AS disks as a mirrored pair in the hardware controller interface (Ctrl-I setup after POST).
I installed Fedora 12 in the usual fashion, though it was confusing considering I expected Anaconda to see a single "RAID device". I went ahead and set up the two native /dev/sda and /dev/sdb as a mirrored RAID device during installation (mdadm under the covers). Recently, Palimpsest has been insisting that I have a disk problem, with a "Disk has many bad sectors" error on /dev/sdb. When I run "dmraid -s" it tells me that the meta device is OK, and I see no hardware errors in the messages log. I'm not having kernel panics as others seem to have had on RHEL 5.x.
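A hedged way to separate a genuinely failing disk from a Palimpsest false alarm is to look at the SMART counters and the md state directly (assuming the mirror is /dev/md0):
Code:
# reallocated/pending sector counts straight from the drive
smartctl -a /dev/sdb | grep -i -E "reallocated|pending|overall"
# state of the software RAID mirror and of the isw metadata
cat /proc/mdstat
mdadm --detail /dev/md0
dmraid -s
If the reallocated or pending counts are climbing, the drive is failing regardless of what dmraid reports.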
I am running SuSE 11.3 (2.6.34.7-0.7-desktop) on a Dell laptop. I am using an external NAS (QNAP-809pro) that connects to the laptop via iSCSI. When my laptop boots I get an error that stops the boot process and gives me the filesystem repair terminal; there I have to comment out the iSCSI lines from /etc/fstab and reboot normally. This is my fstab with the iSCSI mount lines commented out:
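The usual workaround is to keep the boot from blocking on those lines before the iSCSI session exists, e.g. an fstab entry along these lines (the by-path device is a placeholder; _netdev defers the mount until networking is up, and nofail, where supported, stops a missing device from dropping you into the repair shell):
Code:
/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2004-04.com.qnap:ts-809-lun-0-part1  /mnt/qnap  ext4  _netdev,nofail  0 0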
The intention is to have this system dual-boot. When I first put it together, I decided to set up a RAID 5 array spanning 3 SATA drives. I installed Windows 7 first and decided I'd get to Linux later. I left 150 MB or so at the beginning of the array for /boot, and about 200 GB at the end for my Linux install. Now I'm getting to the Linux install. My distro of choice is Fedora 12. I start the setup, and at the point where it's time to partition, the installer tells me that it's unable to find any suitable storage devices.
I Ctrl-Alt-F2 to a console and run fdisk -l. fdisk reports three individual drives which all already have partitions. All have free space. None of it makes sense. So I turned to Google and found some threads explaining that this chip doesn't run a true RAID; rather, it's what's been referred to as fake RAID, which means it depends on the Windows driver in order to actually present the array to the OS, and that the best way to get by that on Linux is to break the array and use LVM instead.
That's all well and good, but I lose two things in doing that. First, I lose the resiliency of RAID 5, and second, well, what does that do to my Windows install? I've considered moving all of my data from Windows to other machines and then just starting from scratch, but I'd really much prefer a method of using the chip's fake RAID in Linux. Is there a driver or module which I can install to make this happen?
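A hedged check before writing the fakeraid route off entirely: dmraid has at least partial support for Intel (isw) sets, and if activating the set produces a mapped device, the installer can sometimes be pointed at that instead of the raw disks. From the installer console:
Code:
dmraid -s            # does dmraid recognise the BIOS RAID set?
dmraid -ay           # try to activate it
ls /dev/mapper/      # an isw_*_VolumeN style device should appear if it worked
RAID 5 support in dmraid was still shaky in that era, so treat this as an experiment, not a guarantee.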
I'm trying to find out what network hardware is required for a successful iSCSI setup. For example, do you need two network cards that support it? Or do you just need one iSCSI network adapter for the storage box, while the other machine has a normal NIC? What type of switch would be needed as well for decent transfer rates?
I am trying to set up cluster storage with the Rocks Cluster operating system. I have installed Rocks Cluster on the main server, and I want to connect the client with PXE boot. When I start the client, it boots via PXE into compute-node mode, but it asks where the files are to be stored. I have given the path, but then the server says it is not able to find the directory path.
Steps:
1) insert-ethers
2) The client is started with PXE boot.
3) It detects DHCP.
4) Finally, it asks where to boot from: CD-ROM, hard disk, NFS, etc.
I then chose NFS and gave the LAN IP of the server. The server is detecting the client, but the client is not finding the filesystem directory inside the export partition directory; it is not taking the path.
PATH: /export/rock/install. It is not finding this path, so it is not able to start the OS from PXE boot. Is there any solution, a manual for Rocks, or any other solution you can point me to?
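A hedged check from the frontend: confirm the directory actually exists and is exported over NFS to the compute nodes (the standard Rocks tree is normally /export/rocks/install, so the missing "s" in the path above may just be a typo):
Code:
# on the Rocks frontend
ls -ld /export/rocks/install
exportfs -v             # is /export listed for the private network?
showmount -e localhost  # what the NFS server is actually exporting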
I cannot find the network storage drive on my MS network using Ubuntu. I can find other computers using xSMBrowser, but not the hard drive connected to my router (LAN). I have tried Samba and a few other tools.
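A hedged way to check whether the drive on the router speaks SMB at all is to query it directly by IP rather than by browsing (replace the address with your router's):
Code:
smbclient -N -L //192.168.1.1     # list the shares the device offers
nmblookup -A 192.168.1.1          # what NetBIOS names it answers to
If smbclient lists shares, the problem is browsing/name resolution rather than the device itself.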
I'm trying to increase the size of an iSCSI device. On the LUN side, the provisioned size has been expanded already. I expanded the size of this device from 15 GB to 30 GB.
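For reference, after growing the LUN the initiator usually has to rescan before it sees the new size, and whatever sits on top (partition, LVM, filesystem) has to be grown separately. A hedged sketch with placeholder device and volume names:
Code:
iscsiadm -m node -R                    # ask open-iscsi to rescan its sessions
echo 1 > /sys/block/sdb/device/rescan  # or rescan the specific block device
fdisk -l /dev/sdb                      # confirm the new size
# then grow what is on top, e.g. for LVM + ext3:
pvresize /dev/sdb
lvextend -l +100%FREE /dev/vgname/lvname
resize2fs /dev/vgname/lvname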
I have mounted a Windows network share using the GNOME desktop environment, via Places -> Connect to Server. The network share is OK, I have the icon on my desktop, and I can see all the files. I want to be able to use this network share in the console as well, so I need the mount point. What is the location on the filesystem where this network drive gets mounted? I find nothing in /mnt and nothing in /media; also, using mount to look at the registered mounts, there is no entry for the network drive. Nevertheless, I have this network drive open on my desktop now, and have an option to unmount it. I know that using the mount.cifs command you can specify the mount point.
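On GNOME of that era the Connect to Server mounts go through gvfs rather than the normal mount table, which is why nothing shows up in /mnt, /media, or the mount output; they normally live under a hidden per-user FUSE directory. A quick check, plus the mount.cifs route if you want a real mount point of your choosing:
Code:
ls ~/.gvfs/
# or mount it explicitly instead
sudo mount -t cifs //server/share /mnt/share -o username=youruser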
1. 11.4 x64
2. Solaris SMB server
3. Gigabit LAN
4. Mounted shares from that server (fstab entries)
Write speed: 80-100 MB/s. Read speed is extremely slow: 3-5 MB/s (really funny; our administrator was shocked, but I'm not amused, I need a fast LAN for work). But when I reboot into Windows 7 I get 60-80 MB/s in both directions; read and write are fine. What happened? The kernel is updated and all the latest updates are applied (excluding kopete, because I use the old kopete with the animated tray icon). I have tried many tweaks like "noatime" and "directio", and also put a conf file in /etc/modprobe.d with: options cifs CIFSMaxBufSize=130048
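One more knob worth trying (hedged, since the root cause here is unclear): set the read/write request sizes on the mount itself so they match the buffer size configured in modprobe.d, for example in fstab:
Code:
//server/share  /mnt/share  cifs  credentials=/etc/samba/creds,rsize=130048,wsize=130048,noatime  0 0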
I mount the share on my Windows server with the following command:
Code: mount -t cifs -o username=Administrator,password='mypassword',rw,dir_mode=0777,file_mode=0777,nobrl,uid=1000,gid=100 //10.8.0.1/users /mnt/
On my 11.3 computer it works well; I open and copy files just like on a local filesystem. At the same time, it's not working well on my 11.4 computer: the share mounts without errors, I see all the files, and I can copy them from the server to the local computer. But when I try to copy files to the server, I sometimes receive messages like "Error writing file ...". Not always, but in most of my attempts. Here is the relevant part of my /var/log/messages file:
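To see what the client is actually choking on, CIFS debugging can be turned up briefly while reproducing a failing copy; a hedged diagnostic sketch:
Code:
echo 7 > /proc/fs/cifs/cifsFYI   # verbose CIFS logging
cp testfile /mnt/                # reproduce the failure
dmesg | tail -n 50               # read the client-side errors
echo 0 > /proc/fs/cifs/cifsFYI   # turn logging back off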
I am running RHEL 5.5; it's a fresh install and we are testing Xen virtualization. We want to use our iSCSI SAN for the VMs. I have created the initiator IQN and discovered the target address. We are connected to the target, but there is no new block device in /dev.
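A hedged set of checks: confirm the session actually carries a LUN and that the kernel attached a disk to it; if the target has no LUN mapped to this initiator's IQN, the login succeeds but no /dev entry ever appears:
Code:
iscsiadm -m session -P 3     # sessions, including attached SCSI devices
iscsiadm -m session -R       # force a rescan of the sessions
fdisk -l
tail /var/log/messages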
So, after having spent the past half year preparing to abandon Windows and come over to Debian, I finally made the switch last night, only to realize I forgot one important thing... I didn't figure out how to map the network drive on my Windows server (currently learning to replace this with Debian as well) to my Debian system.
I have read about 15 links but keep getting the following error: Mount Error (6): No such device or address
Here is what I'm trying to enter into my terminal (with important bits removed for security of course)
mount -t cifs //xxx.xxx.xxx.xxx/Network_Storage/ -o username=xxx,password=xxx /mnt/cifs
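mount error(6) often means the share path is not being found as given; two hedged things to try are making sure the CIFS mount helper is installed and dropping the trailing slash from the share name:
Code:
apt-get install smbfs     # provides mount.cifs on that Debian generation (cifs-utils on newer ones)
mount -t cifs //xxx.xxx.xxx.xxx/Network_Storage /mnt/cifs -o username=xxx,password=xxx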
I used the following command, nothing special:
mount -t cifs //192.168.55.53/windows$/Home /mnt/ -o user=username%password
It works well once mounted, but the mounting itself takes a terrible 1-2 minutes. After it mounts successfully, file transfer speed looks normal.
I'm trying to set up a permanent CIFS mount from my NAS, but it keeps prompting for a password despite guest access being set on the share. fstab is as follows:
Code: //192.168.0.253/media/ /mnt/nas1_media/ cifs guest,_netdev 0 0
if I do
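If the guest keyword alone still triggers a prompt, spelling out an empty credential pair sometimes behaves better; a hedged variant of the same line:
Code: //192.168.0.253/media/ /mnt/nas1_media/ cifs username=guest,password=,_netdev 0 0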
I'm trying to do a fresh install of Ubuntu 10.10 Server 64-bit but am stuck at the disk partitioning part, and Ubuntu 10.04 LTS gives the same error. The installation wizard gives me a menu:
!! Partitioning disks
This menu allows you to configure iSCSI volumes
  iSCSI configuration actions
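When the installer jumps straight to the iSCSI item it usually means it found no local disk to partition; a hedged check from the installer's second console (Alt-F2, then Enter) shows whether the controller and driver saw the disks at all:
Code:
ls /dev/sd* 2>/dev/null
dmesg | grep -i -E "ahci|raid|scsi" | tail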
I am trying to connect one of our RHEL 5.4 servers to IBM iSCSI storage. The server is equipped with two single-port QLogic iSCSI HBAs (TOE). RHEL detected the HBAs and installed the driver itself (qla3xxx). I have configured the HBA IP addresses in the range of the storage's iSCSI host ports. Each HBA connects to a different controller of the storage. I discovered the storage using the iscsiadm -m discovery command for both controllers, and it went through fine. But the problem is that whenever the server restarts with both HBAs connected to the storage, it does not detect the volumes mapped to it, and to detect them I need to run "mppBusRescan" and "vgscan" each time. If only one path is connected, it is fine.
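As a stop-gap (hedged; the cleaner fix is sorting out the RDAC/iscsi startup order), the same commands can be run automatically late in boot, for example from rc.local, followed by activating and mounting the volume groups:
Code:
# /etc/rc.d/rc.local additions
mppBusRescan
vgscan
vgchange -ay
mount -a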
I have a fairly simple iSCSI setup using two devices, but they have swapped names on different machines. I'm running CentOS 5.3 ia64 and using iscsi-initiator-utils-6.2.0.868-0.18.el5.
vm1:
[root@vm1 ~]# fdisk -l

Disk /dev/xvda: 4194 MB, 4194304000 bytes
255 heads, 63 sectors/track, 509 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
[Code].....
Is there any way to get iSCSI to present the devices with consistent device names?
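The usual answer is to stop referring to /dev/sdX at all and use an identifier that travels with the disk: a filesystem UUID or label, or the persistent links udev creates under /dev/disk/. A hedged sketch:
Code:
ls -l /dev/disk/by-path/ /dev/disk/by-id/   # stable per-target paths
blkid                                       # filesystem UUIDs
# then in /etc/fstab, for example:
# UUID=1234-abcd  /data  ext3  _netdev  0 0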
We have a CentOS 5.6 server mounting two iSCSI volumes from an HP P2000 storage array. Multipathd is also running, and this has been working well for the few months we have been using it. The two volumes presented to the server were created in LVM and worked without problem. We had a requirement to reboot the server, and now the iSCSI volumes will no longer mount. From what I can tell, the iSCSI connection is working OK, as I can see the correct sessions, and if I run 'fdisk -l' I can see the iSCSI block devices, but the OS isn't seeing the filesystems. Any LVM command does not show the volumes at all, and 'vgchange -a y' only lists the local boot volume, not the iSCSI volumes. My concern is that the output of 'fdisk -l' says 'Disk /dev/xxx doesn't contain a valid partition table' for all the iSCSI devices. Research shows that the vgchange -a y command should automatically activate any VGs that aren't showing, but it doesn't work.
There's a lot of data on these iSCSI volumes, and I'm no LVM expert. I've read that some have had problems where LVM starts before iSCSI and things get a bit messed up, but I don't know if this is the case here (I can't tell); if there's a way of switching this round that might help, I'm prepared to give it a go. There was absolutely no indication there were any problems with these volumes, so corruption is highly unlikely.
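A hedged sequence to try once the iSCSI sessions are up; note that 'doesn't contain a valid partition table' is normal when LVM uses the whole disk as a PV, so that message alone is not a sign of corruption. It is also worth checking that lvm.conf has no filter excluding the iSCSI/multipath devices:
Code:
iscsiadm -m session     # sessions present?
multipath -ll           # multipath maps present?
pvscan                  # does LVM now see the PVs?
vgscan
vgchange -ay
grep "filter" /etc/lvm/lvm.conf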
I just made a fresh install of openSUSE 11.4 Tumbleweed and have the latest updates. However, fstab lines I've used in the past are not working.
Here's an example of two:
//IPADDRESS/share /home/user/mount cifs credentials=/home/user/.scripts/.creds,_netdev,uid=client_user,gid=users 0 0
//IPADDRESS/share /home/user/mount cifs guest,_netdev,uid=client_user,gid=users
I can execute the command
Code: sudo mount /home/user/mount
and it works, but I want all my fstab lines to automount at boot, as on my other machines.
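A hedged workaround while the boot ordering gets sorted out: on openSUSE, /etc/init.d/after.local runs at the end of the boot sequence, so re-issuing the mount there usually picks up the _netdev entries once the network is up:
Code:
#!/bin/sh
# /etc/init.d/after.local: mount any CIFS fstab entries skipped earlier in boot
mount -a -t cifs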
I have an SMB share being mounted during boot using an /etc/fstab entry. All that seems to work fine, but on shutdown or reboot I found that the system hangs for a variable period trying to unmount the share. It appears from the log that the unmount happens after the network connections are closed. Is there some way around this, or is there some other way I should be mounting the share so that it closes successfully at restart or shutdown?
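One hedged approach is to make sure the entry is flagged as a network filesystem so the shutdown scripts unmount it while the network is still up, with a lazy unmount as a fallback if the ordering still misses it:
Code:
# fstab: _netdev marks the mount as needing the network
//server/share  /mnt/share  cifs  credentials=/etc/samba/creds,_netdev  0 0
# fallback, run before networking stops at shutdown:
umount -a -t cifs -l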