Ubuntu Servers :: Connect To Raid-1 Device And Set Path For Samba?
Feb 10, 2010
I currently have 3 drives installed to be used as a file server.
One holds the Ubuntu OS.
The other is the file server drive, with 1 additional drive for backup using RAID1.
2 questions:
1) How do I get to the drive or RAID device to put files on it using the command line (the 2 drives connected to the RAID1 device are sda & sdc)?
2) How do I set the path in Samba to connect to this RAID drive?
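For question 1, a minimal sketch, assuming the array is /dev/md0 and already formatted ext4 (the mount point /srv/share is a placeholder):

Code:
sudo mkdir -p /srv/share
sudo mount /dev/md0 /srv/share
# make the mount survive reboots:
echo '/dev/md0 /srv/share ext4 defaults 0 2' | sudo tee -a /etc/fstab

For question 2, the Samba share then simply points its path at that mount point, e.g. in /etc/samba/smb.conf:

Code:
[share]
    path = /srv/share
    read only = no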
I have three WD 1.5 TB hard drives. Two of them are already in a linear RAID, also called concatenated I think (the same as JBOD). Can I add the third drive to the RAID without losing data? Update: "Using mdadm software RAID."
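With mdadm, a linear array can in fact be extended in place. A sketch, assuming the array is /dev/md0, the new drive's partition is /dev/sdc1, and the filesystem is ext3/ext4 (back up first regardless):

Code:
sudo mdadm --grow /dev/md0 --add /dev/sdc1   # append the new member to the linear array
sudo resize2fs /dev/md0                      # grow the filesystem into the new space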
I've decided to toy around with LVM and mdadm this weekend. I can get everything working, and all is well, until I restart. After that, I no longer have any /dev/md0 device, which causes an error during the auto-mount process. I've looked through several HOWTOs, as well as the LVM/mdadm man pages, and I believe I've tracked it down to mdadm's "assemble" step, which is needed so that LVM can see the md0 device.
I'm not exactly sure how to make this happen during the boot process so that the LVM-mapped drive is available when fstab is read. In case it helps, this is a base install of 10.10 Server 64-bit. I have four drives: the first is used for the OS and is not in the RAID array (nor LVM). The second and third are RAID1 (/dev/md0), and there is a volume group associated with /dev/md0. The last is LVM, but not RAID, and has its own volume group.
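On Ubuntu the usual fix is to record the array in mdadm.conf and rebuild the initramfs, so the array is assembled before LVM scans for physical volumes at boot. A sketch:

Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u   # bake the ARRAY definition into the early boot environment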
I am finally, and happily, ditching Windows IIS, SQL Server, and ASP in favor of LAMP. Not only will I save a bunch of money on operating systems, but I've found PHP and MySQL development to be much faster than their Microsoft counterparts. Currently I have two W2008 and two Ubuntu servers running and doing virtually parallel tasks. I want to can the W2008 machines, but I am not 100% sure of my Ubuntu mirrors. Everything seems to be working fine. I've copied tons of data back and forth as a primitive test, but sometimes things work fine for all the wrong reasons. Here's where I get confused.
Question 1: Do I need to partition the RAID device (md0) and then format it? From my experience this is necessary to get the device to mount.
Question 2: In this case, was it also necessary to format the individual drive partitions?
Question 3: If I do a daily cat /proc/mdstat, is that all I need to do to check the drive status?
Question 4: Is there any other check I can do to assure that the mirrors are created, mounted, and operating correctly?
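For questions 3 and 4, a few checks that are commonly combined (a sketch; the mail address is a placeholder):

Code:
cat /proc/mdstat                    # quick look: [UU] means both mirrors active
sudo mdadm --detail /dev/md0        # per-member state, sync status, failure counts
# mdadm can also watch arrays and mail on failure events:
sudo mdadm --monitor --scan --daemonise --mail=root@localhost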
I am trying to create a new mdadm RAID 5 device /dev/md0 across three disks where such an array previously existed, but whenever I do, it never recovers properly and tells me that I have a faulty spare in my array. More specific details below. I recently installed Ubuntu Server 10.10 on a new box with the intent of using it as a NAS sorta-thing. I have 3 HDDs (2 TB each) and was hoping to use most of the available disk space as a RAID5 mdadm device (which gives me a bit less than 4 TB).
I configured /dev/md0 during OS installation across three partitions on the three disks - /dev/sda5, /dev/sdb5 and /dev/sdc5, which are all identical sizes. The OS, swap partition etc. are all on /dev/sda. Everything worked fine, and I was able to format the device as ext4 and mount it. Good so far.
Then I thought I should simulate a failure before I started keeping important stuff on the RAID array - no point having RAID 5 if it doesn't provide some redundancy that I actually know how to use, right? So I unplugged one of my drives, booted up, and was able to mount the device in a degraded state; test data I had put on there was still fine. Great. My trouble began when I plugged the third drive back in and re-booted. I re-added the removed drive to /dev/md0 and recovery began; things would look something like this:
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow, this went terribly wrong and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
Code:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code: mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe-small chance I have left to rescue my data, I would like to hear the input of this wise community.
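Before attempting a forced --create, it is usually worth recording what each member thinks about the array, because --create --assume-clean only preserves data if the device order and layout exactly match the original. A sketch:

Code:
for d in /dev/sd[abcd]2; do
    sudo mdadm --examine "$d"    # note Array UUID, event counts, and each device's slot
done
# a forced assemble is generally tried before any re-create:
sudo mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2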
Last night I updated to 9.10; all good, except I can no longer access my Samba shares!
Here is the info from log.smbd after I started it this afternoon:
Code:
smbd version 3.4.0 started.
Copyright Andrew Tridgell and the Samba Team 1992-2009
[2010/01/12 16:35:57, 1] param/loadparm.c:6355(map_parameter)
Unknown parameter encountered: "executable"
[2010/01/12 16:35:57, 0] param/loadparm.c:7449(lp_do_parameter)
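That "Unknown parameter" line suggests smb.conf contains an option this Samba build no longer recognizes; testparm will flag every offending line. A sketch (the init script name is from 9.10-era Ubuntu):

Code:
testparm -s /etc/samba/smb.conf   # flags unknown/deprecated parameters
# comment out the offending lines, then restart:
sudo /etc/init.d/samba restart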
I've done this before, maybe ten times. This time is different. I don't know why, but Samba will not allow any clients to connect.
I've done:
- installed samba
- set up the samba shares
- have a samba user/passwd
- authentication = users
- punched in the samba stuff for the firewall
- workgroup is set right
What the heck? I cannot get any client to connect. Not even the server machine can connect to itself through a client. What am I missing here???
Just for the record, I'm trying to connect to \\SERVER:
Code:
Code:
I removed a whole lot of comment lines in the config file.
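A few generic checks that usually narrow down "nothing can connect" (a sketch; substitute your own share and user names):

Code:
testparm -s                               # validate smb.conf and show the effective config
smbclient -L localhost -U sambauser       # can the server enumerate its own shares?
sudo netstat -tlnp | grep -E ':139|:445'  # is smbd actually listening?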
I'm currently setting up a Dell server with hardware RAID 1 on a SAS 6/iR. I have 4 SAS drives installed on the server, configured as RAID 1 as stated below:
array 1: slot 0 & 1
array 2: slot 2 & 3
During the installation, the installer detected array 2 as sda and array 1 as sdb, so I proceeded with the installation on array 2. After completing the installation, the first reboot led me to a "grub-rescue" prompt. Following the guide at [URL], I noticed that the boot folder has changed to (hd1,1), which I believe means it is now on sdb1. The default root device shows prefix=(hd0,1)/grub.
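From the grub-rescue prompt, the standard recovery is to point prefix and root at wherever the grub folder actually lives, then reinstall GRUB so it sticks. A sketch, assuming (hd1,1) is correct as found above (verify the target disk with care):

Code:
grub rescue> set prefix=(hd1,1)/grub
grub rescue> set root=(hd1,1)
grub rescue> insmod normal
grub rescue> normal
# once booted, make it permanent:
sudo grub-install /dev/sdb
sudo update-grub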
If I try to connect to my Samba server (share) from my Windows XP (or Vista, I've tried both), it says that the network share cannot be found. I've installed all the RPMs on my Fedora 10 box necessary for running a Samba server.
After that, I configured the smb.conf file as follows:
Quote:
[root@*********** samba]# cat /etc/samba/smb.conf
#======================= Global Settings =====================================
[global]
# ----------------------- Network Related Options -------------------------
workgroup = GROUP
[code]....
There is no iptables definition, or any other firewall installed, on either the server or the client. I've read through a lot of howtos and manuals, but was not able to find the problem.
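On Fedora, SELinux blocks Samba exports even with the firewall off, so it's worth ruling out before digging deeper. A sketch (the boolean below is broad and meant for testing only):

Code:
getenforce                                # "Enforcing" means SELinux is active
sudo setsebool -P samba_export_all_rw on  # temporarily allow exporting any path
# if that fixes it, label just the share instead:
sudo chcon -R -t samba_share_t /path/to/share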
I just recently constructed a computer to create a nice Fedora Linux server to replace a Mac Mini server that I have been using for a few years. I'm attempting to create a Samba share for my machines: a Windoze 7 machine as well as a couple of MacOS 10.6 machines. I've set up Samba, started the service, allowed Samba in the firewall, and used system-config-samba to set up a share with a user. I created a user called "space" and a share /media/peliculas/Movies (a mounted hard drive). When I type \\IP_ADDRESS in the address bar in Windows 7, I get the option to log in. I log in correctly and I see two directories, "space" and "Movies". Unfortunately, when I click on one of them I get the following error message:
Windows cannot access \\IP_ADDRESS\Movies. Check the spelling of the name. Otherwise, there might be a problem with your network. To try and identify and resolve network problems, click Diagnose. I also get something similar when I attempt to connect with my Mac machine. Using "Connect to Server" I type in smb://IP_ADDRESS and log in; I have the option to mount "Movies" or "space". If I select either of these, I get the following error message: There was an error connecting to the server "IP_ADDRESS". Check the server name or IP address, and then try again. If you are unable to resolve the problem, contact your system administrator.
Also, if I type 'smbclient -U space -L IP_ADDRESS' I can see "Movies" under "Sharename". At this point I'm not sure what else to do, and I have been trying to figure this out for the last few days (losing sleep due to being baffled). The only thing I can think of is that I have something wrong with the smb.conf file. Here is what it currently displays:
#======================= Global Settings =====================================
workgroup = WORKGROUP
server string = Samba Server Version %v
log file = /var/log/samba/log.%m
max log size = 50
[code].....
One last thing: when I check the log files, there are error messages that state: smbd/service.c:1009(make_connection_snum) '/home/space' does not exist or permission denied when connecting to [peliculas]. Error was Permission denied. (Windoze 7 and MacOS 10.6 log files.)
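That log line says smbd itself cannot traverse the path (both the user's home and the Movies mount are implicated), which on Fedora is almost always Unix permissions and/or SELinux labels on the mounted drive. A sketch using the paths from the post:

Code:
sudo chmod -R a+rX /media/peliculas/Movies               # let smbd read and traverse
sudo chcon -R -t samba_share_t /media/peliculas/Movies   # SELinux label for Samba shares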
It's been a real battle, but I am getting close. I won't go into all the details of the fight I've had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1 TB WD hard drives, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into "Try Ubuntu" mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the RAID components and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I had set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, grub2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell grub to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but shows the two component disks of the array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL].. To recap: my problem is that after grub2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do, but I have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
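A sketch of the kind of local-top script that blog approach describes (the filename is arbitrary; it just has to be executable and live in that directory), so dmraid activates the set before local filesystems are mounted:

Code:
#!/bin/sh
# /etc/initramfs-tools/scripts/local-top/dmraid  (sketch; filename is arbitrary)
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
/sbin/dmraid -ay    # activate fakeraid sets early in boot
exit 0

Then make it take effect:

Code:
sudo chmod +x /etc/initramfs-tools/scripts/local-top/dmraid
sudo update-initramfs -u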
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu ETERNUS DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. That succeeded and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising, given one logical RAID0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS? What were the results?
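One generic thing worth trying when a controller's tools see a logical drive but the OS doesn't (a sketch, not specific to this controller): force the kernel to rescan the SCSI hosts so it re-probes for the exported LUN.

Code:
echo "- - -" | sudo tee /sys/class/scsi_host/host*/scan   # rescan all channels/targets/LUNs
cat /proc/scsi/scsi                                       # did a new device appear?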
I'm trying to use Windows 7 to connect to a Samba server (running Ubuntu 11.04).
The server is named Mars. Below is my Samba configuration file. I can ping the server and connect to it via RDP and SSH, so I know it's not a network connectivity issue. What else could it be?
Code:
[global]
log file = /var/log/samba/log.%m
server string = %h server (Samba, Ubuntu)
winbind enum users = no
force group = nobody
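Before suspecting Windows 7 itself, two server-side checks are worth running (a sketch; replace the user name):

Code:
testparm -s                    # does the config parse, and are any shares actually defined?
smbclient -L mars -U someuser  # enumerate shares the way a client would

Also worth confirming the account exists in Samba's own password database (sudo smbpasswd -a someuser); being able to ssh in does not prove a Samba password was ever set.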
Currently I have 5 HIDs I would like to use on the system at any given time. However, they get different device paths depending on what order I plug them in, which wreaks havoc on qjoypad and also on my music software, which looks for a specific device number that may or may not be the same as it was last time.
For example, I have 2 Rock Band drum controllers which I use to send MIDI signals. I need them always assigned to the same device path, so that I don't have to change the settings in my music software every time I run it to make sure it's looking at the right controller.
Or if I am going to play games with a gamepad while I also have the drums hooked up, the gamepad may be assigned to /dev/input/js2, so I create my qjoypad profile. Say the next day I reboot with the drums not plugged in: the gamepad is now /dev/input/js0, and the qjoypad profile won't work because it is looking for the gamepad at /dev/input/js2. So I would either have to create a new profile, or hook the drums back up in the same order, or something.
It's just a mess...
Is there any way to tell Ubuntu to say "/dev/input/js0 is always Playstation gamepad", "/dev/input/js1 is always drum pad 1", etc.?
Or some way to do away with /dev/input scheme altogether and somehow link directly to the name of the device?
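udev rules can pin stable symlinks per device, keyed by USB vendor/product ID. A sketch (the IDs below are placeholders; find the real ones with lsusb or udevadm info):

Code:
# /etc/udev/rules.d/99-gamepads.rules  (hypothetical IDs)
SUBSYSTEM=="input", KERNEL=="js*", ATTRS{idVendor}=="054c", ATTRS{idProduct}=="0268", SYMLINK+="input/psgamepad"
SUBSYSTEM=="input", KERNEL=="js*", ATTRS{idVendor}=="1bad", ATTRS{idProduct}=="0005", SYMLINK+="input/drums1"

Reload with sudo udevadm control --reload-rules, replug, and point qjoypad at /dev/input/psgamepad. The kernel also keeps name-based symlinks under /dev/input/by-id/, which may be enough on its own.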
In Linux, is there a way to remember/change the path to a USB device? In my case, I need Linux to remember that my USB serial adapter stays on /dev/ttyUSB0, but when I unplug it and plug it back in, it switches to /dev/ttyUSB1. I'm using a Debian-based distro (Mint).
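The same udev approach works for serial adapters. A sketch (the vendor/product IDs are placeholders for whatever lsusb reports):

Code:
# /etc/udev/rules.d/99-usbserial.rules  (hypothetical IDs)
SUBSYSTEM=="tty", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", SYMLINK+="ttyUSB_adapter"

Then point software at /dev/ttyUSB_adapter; /dev/serial/by-id/ also offers ready-made persistent names.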
A new hard disk and a reinstall later, I find myself faced with 2 problems. Firstly, I followed [URL]..., which seemed to work fine; across the network I can "see" all the workgroup computers. Now when I try to log in to Karmic (or the other Linux box, Jaunty), it can't find the network path. I tried turning off the firewalls; still no go. The two Linux boxes can chat merrily, and the 2 Windows boxes can chat, but not to each other. However, after fiddling a bit, on Karmic I now get:
Quote: Could not display "network:///". Nautilus cannot handle "network" locations.
So firstly, how do I reinstall everything, then how do I deal with Windows?
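One way to get back to a clean slate on the Ubuntu side (a sketch; this discards the current smb.conf, so save a copy first):

Code:
sudo apt-get purge samba samba-common   # remove the packages and their config
sudo apt-get install samba
# then make the workgroup match the Windows boxes in /etc/samba/smb.conf:
#   workgroup = WORKGROUP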
I wish to prevent the Samba messages (mainly nmbd and winbindd) from appearing in the system log (/var/log/messages). I want to allow Samba logging to the standard Samba logfiles, but prevent the syslog getting clogged up by Samba. I added syslog = 0 to smb.conf and reloaded the config, but the messages were still appearing. I also tried the following (and restarted the syslog via /sbin/service syslog restart): # Suppress messages from samba.
For interest's sake, the messages I'm getting are below (I'm not concerned about the messages themselves; I can chase them up at my leisure via the Samba logs):
Mar 18 09:58:29 SERVER nmbd[3808]: query_name_response: Multiple (2) responses received for a query on subnet xx.yy.z.zz for name DOMAIN<1d>.
Mar 18 09:58:29 SERVER nmbd[3808]: This response was from IP xx.yy.z.zz, reporting an IP address of xx.yy.z.zz.
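If smb.conf's syslog = 0 doesn't do it, the messages can be dropped at the syslog daemon instead. A sketch for rsyslog's legacy filter syntax (assuming rsyslog is in use; plain sysklogd cannot filter by program name):

Code:
# /etc/rsyslog.d/samba.conf  (hypothetical filename); '~' discards matching messages
:programname, isequal, "nmbd"       ~
:programname, isequal, "winbindd"   ~

Then restart the daemon, e.g. /sbin/service rsyslog restart (or "syslog" on older releases).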
Just set up a home server with Ubuntu 10.10 Desktop. I've set up a software RAID 1 device using System/Administration/Disk Utility, which seems to work well. However, when I reboot the machine and try to access the drive, I get the error "authentication is required to start this raid device" and then I have to enter my password, after which all is good.
Let's say this system has 3 hard drives. Drives #1 and #2 are RAID 0 and Windows 7 lives there. It is a hardware RAID, not software.
On Drive #3 Ubuntu has been installed using WUBI - it boots up and works okay - but it does not see the RAID array.
Do I just need a linux driver to be able to see & mount my "Windows" RAID0 array? Or is this even possible? Can anyone point me in the right direction?
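If it's motherboard "hardware" RAID, it is typically fakeraid, which Linux reads through dmraid rather than a dedicated driver. A sketch from inside the WUBI install (the mapper name is a made-up example; yours will differ):

Code:
sudo apt-get install dmraid
sudo dmraid -ay                 # activate the set; it appears under /dev/mapper/
ls /dev/mapper/
sudo mount -t ntfs-3g /dev/mapper/isw_xxxxxxxx_Volume01 /mnt   # example name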
I am running Ubuntu 10.10 64-bit. I have a RAID array consisting of two 1 TB HDDs, controlled by my on-board RAID controller, and a dual-boot of Ubuntu 10.10 and Windows. The RAID array is mapped in /dev/mapper; here is the output of sudo dmraid -ay:
Code:
RAID set "pdc_dedfhcfdee" already active
RAID set "pdc_dedfhcfdee1" already active
RAID set "pdc_dedfhcfdee2" already active
RAID set "pdc_dedfhcfdee3" already active
I'm having trouble with Ubuntu 10.10 and stable device names. When I installed Ubuntu, the root drive was the only one in the machine; it obviously got /dev/sda.
After the base installation, I installed three additional 2 TB drives to make a RAID-5 array. Ubuntu renamed the root drive to /dev/sdd. While annoying, I lived with it.
After creating a single partition set to "Linux raid autodetect" on each drive, I created the RAID-5 array:
Code:
All was going well until a reboot. When rebooting, Ubuntu decided to make the root drive /dev/sda this time, and now mdadm --detail /dev/md0 reports:
Code:
How do I fix the array and make the device names stable?
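mdadm doesn't actually care about /dev/sdX names if the array is recorded by UUID, and mounts can be made name-independent the same way. A sketch:

Code:
sudo mdadm --examine --scan                        # ARRAY line keyed by the array's UUID
sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
sudo blkid /dev/md0                                # filesystem UUID for fstab
# /etc/fstab entry referencing the UUID instead of a device name:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/array  ext4  defaults  0  2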
I have a RAID 0 setup with 2 x 1 TB drives. I have an ASUS P8P67 LE motherboard and am using Intel RST for the RAID setup. I'm utterly ignorant of RAID, so please forgive any mistakes. I already had Windows 7 installed and was attempting to dual-boot Ubuntu.
I installed Ubuntu from CD. The RAID was picked up properly as only one drive by Ubuntu, so it picked up the Windows MBR and the main Windows partition. I resized the main partition and used the "install Ubuntu and Windows 7 side by side" option. Installation went fine, but once I restarted the PC I was welcomed by a grub rescue screen with the message: "error: no such device e196.....". Edit: I used the Windows 7 disc to repair the Windows bootloader, so I can now boot into Windows 7.
Before doing so I used gparted on the live CD to check the partitions on the drives. The only ones present were the MBR and Windows ones, so Ubuntu seemingly didn't install, although GRUB did. I was advised by someone on the Ubuntu IRC chat to avoid trying to reinstall Ubuntu at that point, just in case there was an error in the partitioning process. I've since checked the state of the partitions from within Windows, and there's the MBR partition, the Windows partition AND the partition that I created for Ubuntu. 965 MB of the partition that I created is listed as used space as well.
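One possible reason gparted showed nothing: it was pointed at a raw member disk instead of the assembled set. From the live CD, activating the fakeraid first and inspecting the mapped device shows the real partition table. A sketch (Intel RST sets usually appear with "isw_" names; the name below is an example):

Code:
sudo dmraid -ay
ls /dev/mapper/                                     # look for the isw_* device
sudo parted /dev/mapper/isw_xxxxxxxx_Volume0 print  # table of the array, not a member disk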
I look after a small office where the computers run Ubuntu. Sometimes they phone me for help. For that reason, I decided to install Ubuntu alongside my Slackware system. I seem to have problems with the LILO configuration. Ubuntu is installed on software RAID:
/boot = md0 (RAID 1 of sda1+sdb1)
/ = md1 (RAID 0 of sda2+sdb2)
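For LILO to boot from a RAID1 /boot, the stanza usually points at the md device and tells LILO to write boot records to the members. A sketch (kernel/initrd names are placeholders):

Code:
# fragment of /etc/lilo.conf
boot = /dev/md0
raid-extra-boot = mbr-only        # write the MBR of each RAID1 member
image = /boot/vmlinuz-ubuntu      # placeholder kernel name
    initrd = /boot/initrd.img-ubuntu
    root = /dev/md1
    label = Ubuntu

Run lilo as root after every change. Note that /boot must stay on the RAID1 (md0): LILO can read a mirrored member directly, but could not boot from the striped md1.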
This is the output I get when I try to compile Samba 4.0.0 alpha 7 on Ubuntu, using the spec file provided in the Samba packages:
bin/mergedobj/samba-util.o: In function `file_lines_parse':
(.text+0x595c): undefined reference to `_talloc_steal'
bin/mergedobj/samba-util.o: In function `data_blob_talloc_named':
[code]....
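Undefined talloc symbols at link time usually mean the build mixed headers from one talloc with libraries from another (system copy vs. bundled copy). A hedged sketch of the usual first steps on Ubuntu:

Code:
sudo apt-get build-dep samba        # pull in the build dependencies of the packaged samba
sudo apt-get install libtalloc-dev  # current system talloc headers and library
make clean && ./configure && make   # reconfigure so a single talloc is detected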
After booting, my RAID1 device (/dev/md_d0 *) sometimes ends up in some funny state and I cannot mount it.
* Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.
Code:
# mount /opt
mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
[code]...
In /proc/partitions the last entry is md_d0 at least now, after reboot, when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.)
Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan:
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...]
and added it in /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets automatically mounted!
I have 2 identical disks, /dev/sda and /dev/sdb, with a RAID-1 configuration (/dev/md0) on the /dev/sda3 and /dev/sdb3 partitions. On top of /dev/md0 I am running LVM. There are also partitions sda4 and sdb4 following sda3 and sdb3 respectively, but the data there is not important. What I want to do is delete the sda4 and sdb4 partitions, extend sda3 and sdb3 to the end of the disk, and grow md0 and the volume group, of course *without* loss of data.
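A sketch of the usual sequence for a mirror (do one disk at a time and let the resync finish between disks; keep backups regardless, since repartitioning a live array member is unforgiving):

Code:
# 1) drop one member, repartition it, and re-add (then repeat for the other disk)
sudo mdadm /dev/md0 --fail /dev/sda3 --remove /dev/sda3
# delete sda4 and grow sda3 to the end of the disk, keeping sda3's start sector identical
sudo mdadm /dev/md0 --add /dev/sda3
watch cat /proc/mdstat              # wait for the resync before touching /dev/sdb
# 2) once both members are larger, grow the array and the LVM physical volume
sudo mdadm --grow /dev/md0 --size=max
sudo pvresize /dev/md0              # the volume group gains the new extents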