Everything is fine until here, but after a reboot the device won't mount on /orac; it says the special device is not available. I found that the md02 device is not in an active state.
I tried deleting and recreating it, but it still won't persist across a reboot.
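A minimal sketch of making the array persist, assuming its member partitions are intact (the device names below are examples, adjust to yours): assemble it once, then record it in /etc/mdadm.conf so it is started automatically at boot instead of being re-created.
Code:
# assemble the existing array from its members
mdadm --assemble /dev/md02 /dev/sdb1 /dev/sdc1
# append the scanned ARRAY definition so it survives reboots
mdadm --examine --scan >> /etc/mdadm.conf
# verify it is active, then mount
cat /proc/mdstat
mount /orac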
I've set up a test machine with Ubuntu 10.04. It has two drives forming a single RAID 1 array using mdadm, and that RAID array is where /home is mounted. I like to break things just to see what happens and to know how to resolve it before it happens for real. So I physically removed one of the drives that made up the RAID 1 (while the machine was powered off).
I then rebooted the machine. I thought that since it was mirrored, /home would mount correctly using the other mirror. What actually happened was that the Ubuntu splash screen said there was a problem mounting /home. I skipped the mounting, logged in, and looked at /proc/mdstat. It reported one drive as inactive; it did not report the missing drive.
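A sketch of forcing a degraded mirror back into service, assuming the surviving member is /dev/sda1 (a made-up name) and the array is /dev/md0:
Code:
# stop the half-assembled, inactive array
mdadm --stop /dev/md0
# re-assemble it and run it even though one member is missing
mdadm --assemble --run /dev/md0 /dev/sda1
# or, if it is already assembled but sitting inactive:
mdadm --run /dev/md0
mount /home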
I am using Ubuntu 10.10. I have a system set up on a 1 TB hard drive. I also have another 1 TB hard drive which I'd like to use to mirror the first one, so that if the primary drive fails I can boot and operate from the mirrored drive. I've read that this is possible using RAID; however, I am confused about whether it can be set up with a drive that already has an Ubuntu system on it. Also, from what I can make out, the motherboard does not have a RAID option.
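One common approach, sketched only and assuming full backups exist: build a degraded RAID 1 on the empty disk, copy the system over, then add the original disk as the second half of the mirror. The device names below (/dev/sda as the current system disk, /dev/sdb1 as a partition on the new disk) are assumptions:
Code:
# create a one-disk (degraded) mirror on the new drive
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext4 /dev/md0
# copy the running system onto it
mount /dev/md0 /mnt
rsync -axH / /mnt/
# once you can boot from the array, attach the old disk's partition as the second mirror
mdadm --add /dev/md0 /dev/sda1
You would still need to update fstab and reinstall the bootloader so the system boots from the array; the exact steps depend on the partition layout. No motherboard RAID option is needed for any of this, since mdadm is pure software RAID.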
I planned to set up RAID 1 mirroring for my small home environment. I selected two new hard disks and connected them to my system. I inserted my Fedora DVD and set up RAID in the graphical installer, following the Red Hat docs. I installed successfully onto /dev/sda and /dev/sdb, and it works fine. For testing purposes I removed one hard disk, /dev/sda. My system didn't boot; it shows a GRUB error. Why did this happen? Since I have configured RAID mirroring, why is the system not booting from the second hard disk, /dev/sdb?
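The usual cause is that GRUB was installed only to the MBR of /dev/sda, so with that disk gone the BIOS has nothing to load, even though the mdadm mirror itself is fine. A sketch for legacy GRUB (what Fedora shipped at the time), assuming /boot is the first partition on each disk:
Code:
# from a root shell, put GRUB on the second disk's MBR as well
grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
Mapping /dev/sdb to (hd0) here reflects the fact that, with sda removed, the surviving disk becomes the first BIOS disk.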
I have created a software RAID 5 configuration on the second hard drive and it's working fine. I edited the fstab file so it mounts automatically at boot, but when I reboot the computer the RAID doesn't come up; I have to re-create the array by typing the "mdadm --create" command again and mount it manually. Is there any way I can do this once, without retyping the commands after every reboot? I am also using Red Hat 5.
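mdadm --create should only ever be run once; after that the array should be assembled, not re-created. A sketch of making it persist on Red Hat 5, assuming the array is /dev/md0:
Code:
# record the array so it can be assembled automatically at boot
mdadm --detail --scan >> /etc/mdadm.conf
# from now on, assemble rather than re-create
mdadm --assemble --scan
mount -a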
I need to prevent udev from creating the /dev/v4l/by-path/* and /dev/v4l/by-id/* files when I connect my webcam. The problem is that Kopete won't display the video while these files are present. It works fine if I remove them, but I'd rather they not be created in the first place, since they seem to be completely useless anyway.
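An untested sketch: the by-path and by-id links come from the persistent-v4l udev rules, so a later rule that clears the symlink list for video4linux devices should stop them from being created. Something like the following in a new file such as /etc/udev/rules.d/99-no-v4l-links.rules (the file name is my own choice):
Code:
# drop all symlinks udev would otherwise create for v4l devices
SUBSYSTEM=="video4linux", SYMLINK=""
Alternatively, copying the distribution's persistent-v4l rules file into /etc/udev/rules.d/ under the same name and emptying it overrides it entirely, since rules in /etc take precedence over same-named files shipped by the system.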
After booting, my RAID 1 device (/dev/md_d0 *) sometimes goes into some funny state and I cannot mount it.
* Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.
# mount /opt
mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
[code]...
In /proc/partitions the last entry is md_d0 at least now, after reboot, when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.)
Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan:
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...]
and added it in /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets automatically mounted!
I have 2 identical disks, /dev/sda and /dev/sdb. I have a two-disk RAID configuration (/dev/md0) on the /dev/sda3 and /dev/sdb3 partitions, and on top of /dev/md0 I am running LVM. There are also partitions sda4 and sdb4, following sda3 and sdb3 respectively, but the data there is not important. What I want to do is delete the sda4 and sdb4 partitions, extend sda3 and sdb3 to the end of the disk, and grow md0 and the volume group, of course *without* loss of data.
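A rough outline only, and definitely take a backup first: because a mirror can run on one member while the other is rebuilt, each disk can be repartitioned in turn, after which the array and the LVM physical volume are grown. This sketch assumes /dev/md0 is a two-disk mirror of sda3 and sdb3:
Code:
# 1. drop one member, delete sdb4 and enlarge sdb3 (keep sdb3's start sector!), then re-add it
mdadm /dev/md0 --fail /dev/sdb3 --remove /dev/sdb3
# ... repartition /dev/sdb with fdisk/parted here ...
mdadm /dev/md0 --add /dev/sdb3
# wait for the resync to finish, then repeat the same steps for /dev/sda
cat /proc/mdstat
# 2. grow the array into the enlarged partitions
mdadm --grow /dev/md0 --size=max
# 3. grow the LVM physical volume; the volume group gains the space
pvresize /dev/md0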
I just got 512 MB of RAM, so I thought I would switch off all the programs I most likely won't need. The big ones, I think, might be gdm, GNOME, and Metacity. I think a plain X session will suit my purpose.
I just set up a home server with Ubuntu 10.10 desktop. I've set up a software RAID 1 device using System/Administration/Disk Utility, which seems to work well. However, when I reboot the machine and try to access the drive, I get the error 'authentication is required to start this RAID device' and have to enter my password, after which all is good.
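That prompt usually means the array is not being assembled automatically at boot, so the desktop asks for privileges to start it on demand. A sketch of making it automatic on Ubuntu, assuming the array appears as /dev/md0:
Code:
# record the array in mdadm's config and rebuild the initramfs
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
sudo update-initramfs -u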
I recently did an apt update and upgrade on my CLI-only Lenny server. Upon reboot I get an "ATA softreset failed (device not ready)" error for all of my SATA drives. I noticed the upgrade changed the kernel to "Linux debian 2.6.26-2-amd64" (I do have a 64-bit CPU). Once loaded to a command prompt I can assemble my RAID 6 array with "mdadm --assemble" across /dev/sda to sdd and then mount it with mount -a, but transfers to the array are horribly slow, around 1 MB/s. Upon every reboot I get the same errors and have to assemble my array again.
We ran out of space on our server hard drive, so I installed 2 x 1 GB drives, set them up as a software RAID 1 array, copied the contents of /home to it, and mounted it as /home for testing. Everything was OK, so I unmounted it, deleted the contents of the /home folders (don't worry, we're backed up), then remounted the array. Everything was fine until we rebooted. Now I can't access the array at all; during booting the error "mount: special device /dev/md1 does not exist" comes up twice, and manually trying to mount it gives the same issue. The relevant line from fstab reads:
/dev/md1 /home ext3 defaults 0 0
However, webmin shows only md0, the RAID 0 device on which the OS was originally installed. There is no /dev/md1 device file. The mdadm.conf file reads as follows:
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid0 num-devices=2 uuid=76fd4050:fb820568:c9bd3a59:ad3e70b0
So it's not listed; I'm assuming this is significant. Am I right, and whether I am or not, what can I do?
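You are probably right: anaconda only wrote out the array that existed at install time. A sketch of recovering md1, assuming its member partitions are still intact:
Code:
# see what arrays the on-disk superblocks describe
mdadm --examine --scan
# append the ARRAY line it prints for md1 to /etc/mdadm.conf, then:
mdadm --assemble --scan
mount /home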
I would like to remove almost all of the privacy-evading data that a digital camera generates, such as EXIF data: camera brand, camera model, date taken, exposure time, flash fired, focal length, location (if you are using an iPhone with location tagging enabled), metering mode, etc.
Could you please write a script which does that job for multiple files? Exiv2 seems to reduce the file size more than Jhead, so I'll use the exiv2 command. Generally, this is what I want the script to do: retrieve the (modified) date of a file.
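A minimal sketch along those lines, assuming "exiv2 rm" is the action used to strip the metadata and that the point of reading the date is to restore each file's modification time afterwards (the files are whatever you pass on the command line):
Code:
#!/bin/bash
# strip EXIF/IPTC data from every image given as an argument,
# preserving each file's original modification time
for f in "$@"; do
    # remember the current mtime by copying it onto a temporary reference file
    ref=$(mktemp)
    touch -r "$f" "$ref"
    # delete the metadata with exiv2
    exiv2 rm "$f"
    # put the original modification time back, then clean up
    touch -r "$ref" "$f"
    rm -f "$ref"
done
Run it as, for example, ./strip_exif.sh *.jpg.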
It's been a real battle, but I am getting close. I won't go into all the details of the fight I've had, but I've almost made it to the finish line. Here is the setup: an ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB of RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1 TB WD hard drives, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into the live "Try Ubuntu" mode, opening a terminal, and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the RAID components and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I had set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, grub2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell GRUB to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but it shows the two component disks of the RAID array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL]. To recap: my problem is that after grub2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do but have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
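For reference, the kind of local-top script that approach uses is roughly the following (a sketch only; it assumes the dmraid binary is copied into the initramfs, which the dmraid package's hooks normally take care of). It would go in /etc/initramfs-tools/scripts/local-top/dmraid, be made executable, and be followed by update-initramfs -u:
Code:
#!/bin/sh
# initramfs local-top hook: activate BIOS (fakeraid) sets before root is mounted
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
/sbin/dmraid -ay
exit 0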
I have installed Ubuntu Studio 9.10 on my Dell Dimension 1100 desktop and I'm trying to set up RAID 1 because I'm constantly worried that my hard disk is going to fail. I have 2 drives: one 40 GB and one 80 GB. So I created a 40 GB partition on my 80 GB drive, and I want to RAID this partition with the 40 GB drive. Is this possible? And am I right in thinking that I can RAID everything, including /boot?
I have recently installed an Asus M4A77TD Pro system board, which supports RAID.
I have 2 x 320 GB SATA drives I would like to set up RAID 1 on. So far I have configured the BIOS for RAID 1 on the drives, but when installing Ubuntu 10.04 from the CD it detects the RAID configuration but fails to format.
When I reset all BIOS settings back to standard SATA drives, Ubuntu installs and works as normal, but I just have 2 drives without any RAID. I had this working in my previous setup, but that's because I had the OS on a separate drive from the RAID and was able to set it up within Ubuntu.
I have a system that has the following partitions:
Now, sdc is a new drive I added. I would like to pool that new drive with the RAIDed drives to give myself more space on my existing system (and structure). Is this possible, since my RAID already has data on it?
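It depends on what is sitting on top of the array. If the RAID is (or can be put) under LVM, the usual way to pool extra space without disturbing existing data is to add the new drive as another physical volume in the same volume group; without LVM the options are much more limited. A sketch with hypothetical names (vg0 for the volume group, /dev/sdc1 for a partition on the new disk):
Code:
# prepare the new disk and add it to the existing volume group
pvcreate /dev/sdc1
vgextend vg0 /dev/sdc1
# then grow whichever logical volume (and its filesystem) needs the space, e.g.:
lvextend -L +100G /dev/vg0/home
resize2fs /dev/vg0/home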
I have a server that was running a hardware ISW RAID on the system (root) disk. This was working just fine until I started getting sector errors on one of the disks. So I shut down the system, removed the failing drive, and installed a new drive (same size). On reboot I went into the Intel RAID setup; it did show the new drive, and I was able to set it to rebuild the RAID. Continuing the reboot, everything came up just fine except the RAID 1 on the system disk. I have tried many times to get the system to rebuild the RAID using dmraid, but to no avail; it would not start a rebuild. In order to get the system back up and make sure that the disk was duplicated, I was able to 'dd' the working disk to the new disk that was installed. At present the system does not show a RAID setup on the system disk (this comprises the entire 1 TB disk, with two partitions: sda1 as / and sda2 as swap). Problem: I have decided to forego the Intel RAID and just use mdadm. I have a test system set up to duplicate the server setup (not the software, but the disk partitions).
Code:
[root@kilchis etc]# fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
We have some servers that run in very harsh environments (a research vessel) and need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc.); however, we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (I'm relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAIDed devices (from a live CD):
Code: dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img
then reverse the process on the new PC, finally using mdadm --assemble to re-create and rebuild the array.
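That general approach should work. A sketch of the restore side on the replacement box, booted from a live CD, with the image on an external disk mounted at /mnt/external and the target disk as /dev/sda (both assumptions):
Code:
# write the compressed image back onto the new disk
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096
# re-read the partition table, then let mdadm find and start the arrays
partprobe /dev/sda
mdadm --assemble --scan
cat /proc/mdstat
The second disk of the mirror can then be partitioned to match and re-added with mdadm --add so the array rebuilds onto it.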
Currently I am backing up the MBR, the C: partition, and the other partitions to an external USB HDD, and from there I restore them if needed. I use SystemRescueCd and commands like dd if=/dev/sd* of=/mnt/PC_name/backupmbr.1 count=1 bs=512 and ntfsclone --save-image --output /mnt/PC_name/PC_name_c.img /dev/sd*1, etc. However, I want to clone the HDD in a way that omits the external USB drive: I want to connect the new HDD to the PC and do the cloning directly from one disk to the other.
My questions are:
- Can you provide me with the exact command?
- Does it make a difference if the disk is SATA or IDE?
- Can I copy the disk even if the old disk won't boot?
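For a straight disk-to-disk clone, something like the following is typical (a sketch; /dev/sda as the old disk and /dev/sdb as the new one are assumptions, so double-check with fdisk -l first, because dd will happily overwrite the wrong disk). It copies the MBR, partition table, and all partitions in one pass, works the same whether the disks are SATA or IDE, and does not require the source disk to be bootable, only readable:
Code:
dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync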
I have a Linux server that runs the Sybase DB. Sybase suggests using character devices to access raw devices rather than O_DIRECT to block devices, or cooked FS's. So, I went ahead and configured /etc/sysconfig/rawdevices as such:
This works fine. I set 'chkconfig rawdevices on' and all is well. But I read that this method is deprecated and went about trying to accomplish the same thing via udev rules. I already use a udev rule in /etc/udev/rules.d/60-raw.rules to set permissions on these devices, i.e.:
ACTION=="add", KERNEL=="raw*", OWNER="sybase", GROUP="sybase", MODE="0660"
That works fine. I even set symbolic links:
KERNEL=="raw1", SYMLINK+="vg01/rtempdb"
KERNEL=="raw2", SYMLINK+="vg01/rtestdb1"
KERNEL=="raw3", SYMLINK+="vg01/rfakedb2"
But I cannot seem to get the actual device creation piece to work within udev (it only works using rawdevices). I've tried:
ACTION=="add", KERNEL=="vg01/tempdb", RUN+="/bin/raw /dev/raw/raw1 %N"
No errors, but nothing happens; the device just doesn't get created. I've also tried doing it by passing major and minor numbers. Is it possible to get all of this into udev rules, or am I stuck with rawdevices? I'm also utterly confused as to the future of rawdevices: the raw man page said it was deprecated, and now at v5.5 that note has been taken out. Also, RHEL 5.3 dropped support for rawdevices in initscripts, only to add it back in 5.4. I'm an admin, not a DBA, so I cannot say whether this is a good or bad way, only that it is the way the vendor supports and recommends, so it is the way I must go; I'm just trying to make it work as "un-deprecated" and cleanly as possible.
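One likely problem is that KERNEL for an LVM volume is the underlying dm-N name, never "vg01/tempdb", so that rule can't match. A hedged sketch that matches on the device-mapper environment keys instead; it assumes the lvm2/device-mapper udev rules on your release export DM_VG_NAME and DM_LV_NAME (newer setups do; on older ones matching on the dm-N kernel name or the major/minor numbers may be necessary):
Code:
# bind raw devices to LVM volumes by VG/LV name rather than by kernel name
ACTION=="add", SUBSYSTEM=="block", ENV{DM_VG_NAME}=="vg01", ENV{DM_LV_NAME}=="tempdb", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", SUBSYSTEM=="block", ENV{DM_VG_NAME}=="vg01", ENV{DM_LV_NAME}=="testdb1", RUN+="/bin/raw /dev/raw/raw2 %N"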
I'm putting a server together and have run into a boot-up problem. (I thought about putting this in the server forum, but it might be a more generic problem that others have seen and know how to rectify.) The install seems to have gone just fine. I have the /boot partition on an internal IDE drive. The rest of that drive and another are mirrored in a RAID 0 configuration (using the Linux software RAID) for data storage. The swap partition is part of the RAID 5 SCSI array that also holds the / (root) partition.
After installation it would not finish the booting process. I suspected that GRUB didn't like all the RAID arrays and such, but it seems to be fine; I can say that because the machine will boot into rescue mode with the GUI splash screen and I have access to the whole directory tree. I have already searched online and, following prudent advice, ran yum update while chrooted into /mnt/sysimage. That only took overnight to download and most of this morning to complete. Still no dice. I used vim to delete the rhgb quiet options in grub.conf so I could see where the kernel seems to be hanging.
Right after the "Creating initial device nodes" message is a line about my generic PS/2 wheel mouse. So I tried a USB mouse and got more output, then tried swapping to a USB keyboard. That got a little further, with more information about input devices, but it still stops. I also tried a PCI video card just to make sure the onboard video wasn't the problem; no change. So, if someone in the Fedora community knows what loads or is configured right after the mouse and keyboard, I might be able to figure out what's causing the computer to hang during the boot process.
I am running RHEL 5.5; it's a fresh install and we are testing Xen virtualization. We want to use our iSCSI SAN for the VMs. I have created the initiator IQN and discovered the target address. We are connected to the target, but there is no new block device in /dev.
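A few things worth checking with the standard iscsiadm commands (a sketch): whether the session is actually logged in, and whether a LUN has been mapped to this initiator on the SAN side; if the LUN was added after login, a rescan is needed before the sd* device appears.
Code:
# confirm the session is logged in
iscsiadm -m session
# rescan the session(s) for newly mapped LUNs
iscsiadm -m session --rescan
# watch for the new disk being attached
dmesg | tail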
I have three WD 1.5 GB hard drives. Two of them are already in a linear RAID, also called concatenated I think (the same as JBOD). Can I add the third drive to the RAID without losing data? Update: I am using mdadm software RAID.
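With an mdadm linear (concatenated) array this should be possible in place, since mdadm's grow mode can append a device to a linear array. A sketch, assuming the array is /dev/md0 and the new drive's partition is /dev/sdc1 (names are guesses), with the usual caveat of having a backup:
Code:
# append the third drive to the linear array
mdadm --grow /dev/md0 --add /dev/sdc1
# then grow the filesystem on top, e.g. for ext3/ext4:
resize2fs /dev/md0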
I look after a small office where the computers run Ubuntu. Sometimes they phone me for help. For that reason, I decided to install Ubuntu alongside my Slackware. I seem to have problems with the LILO configuration. Ubuntu is installed on software RAID:
/boot = md0 (RAID 1 of sda1+sdb1)
/     = md1 (RAID 0 of sda2+sdb2)
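For what it's worth, LILO can boot a kernel living on a RAID 1 /boot by pointing boot= at the md device and telling it to write boot code to both member disks. A very rough lilo.conf fragment with hypothetical paths; it assumes Ubuntu's md0 /boot is mounted at the location image= points to when lilo is run, and that Ubuntu's initrd carries the md modules it needs for the RAID 0 root:
Code:
boot=/dev/md0
# put boot code on the MBR of both underlying disks
raid-extra-boot=mbr-only
image=/mnt/ubuntu/boot/vmlinuz
    initrd=/mnt/ubuntu/boot/initrd.img
    root=/dev/md1
    label=ubuntu
    read-only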
I have 70 folders as output from some software; they are called folder1, folder2, ..., folder70. I want to find a way to automatically copy the contents of each folder into one big folder, so all the files end up in one folder without the directories. I was thinking of something using the mv command, but I'm not sure how to do this. OK, I think I have answered my own question. I did this:
# cp folder*/* bigfolder
I used cp in case it went wrong; it worked, so I deleted the previous directories.
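For future reference, a variant that copes with a very large number of files (where the shell glob in cp folder*/* could hit the argument-length limit) and moves rather than copies; GNU mv's -t option is assumed, and note that identically named files from different folders will collide:
Code:
# move every regular file from the numbered folders into bigfolder
find folder*/ -type f -exec mv -t bigfolder/ {} +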
I'm trying to make a webpage that will display the bash variables I have in a file. These variables are used in a bash script that runs on my server. The file looks like this:
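Assuming the file is plain VAR=value assignments (which it must be if a bash script sources it), one simple approach is a small CGI script that sources the file and prints the values as HTML. A sketch with made-up variable names and path, to be adapted to the real file:
Code:
#!/bin/bash
# cgi-bin script: display the variables defined in /path/to/vars.conf (hypothetical path)
source /path/to/vars.conf
echo "Content-type: text/html"
echo ""
echo "<html><body><h1>Current settings</h1><ul>"
# BACKUP_DIR and MAX_JOBS are example variable names only
echo "<li>BACKUP_DIR = ${BACKUP_DIR}</li>"
echo "<li>MAX_JOBS = ${MAX_JOBS}</li>"
echo "</ul></body></html>"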
I have 5 FTP users who upload files (and subdirectories) into their home directories. I need to mirror these directories between them and with a "master" directory (accessible by a 6th user). Files can contain spaces or other special characters. All the files are on the same filesystem, and I want to use hard links because I don't want to waste 5 times the space of a single file. I tried with find, but I cannot handle the spaces with it.
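One approach that is safe with spaces and special characters, with hypothetical paths: GNU cp can hard-link an entire tree in one go instead of copying it.
Code:
# hard-link user1's whole tree into the master directory (repeat per user)
cp -al /home/user1 /home/master/
If you do stick with find, the -print0 / xargs -0 pair is the standard way to keep file names with spaces intact.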
I'm not entirely a newbie, but this seems like such a simple question that I'm not sure where else to ask it. I checked through the various HOWTOs and searched already and didn't find a clear answer, and I want to know for sure before we start investing in hardware. Is it possible to create a RAID 1 (mirroring only) array with 3 live drives, rather than with 2 live plus a spare? Our goal is to have 3 drives in a hot-swap bay and be able to pull and replace one drive periodically as a full backup. If I do:
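Yes, mdadm supports this: a RAID 1 array can have any number of active members, so three live mirrors (rather than two plus a spare) is just a matter of --raid-devices=3. A sketch with hypothetical partition names:
Code:
# three-way mirror: every member holds a full, live copy of the data
mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1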