Fedora :: Eth0,1,2,3 Become Eth4,5,6,7 After Cloning Via Raid 1 Rebuild?
Aug 10, 2011
I've cloned a machine by removing an HDD from a RAID mirror set, putting it into another machine, powering on, and fsck'ing a few times. Everything is great, apart from the Ethernet port numbering, which has gone a bit wonky.
The cloned machine must keep a specific configuration using the eth0,1,2,3 naming convention; however, when I boot the freshly cloned machine, eth4,5,6,7 show up and eth0,1,2,3 are missing (as it detects the new machine's Ethernet ports).
Is there some configuration file I can delete so that, after a reboot, Fedora rebuilds it using eth0,1,2,3 populated with the correct hardware addresses? I need to clone this machine lots and lots of times, and the manual way I've figured out is a little long-winded.
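On udev-based Fedora releases the persistent NIC naming typically lives in /etc/udev/rules.d/70-persistent-net.rules, which maps MAC addresses to ethX names. A minimal sketch, assuming the clone uses that mechanism:
Code:
rm /etc/udev/rules.d/70-persistent-net.rules   # drop the MACs inherited from the source machine
reboot                                         # udev regenerates the file, numbering the new NICs from eth0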
I've got a server running a software RAID for SATA disks on a P5E motherboard.
I had to add a lot of memory to this server, and then I had to flash the BIOS. This reset the software RAID on the disks, so when I boot I get a kernel panic because it doesn't find anything...
How do I rebuild the RAID? I can boot from a live CD, or anything else, but I don't know how to do it without losing my data.
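If this is Linux md RAID rather than BIOS RAID, the superblocks on the disks usually survive a BIOS flash and the array can be reassembled from a live CD. A minimal sketch, with the device names as assumptions:
Code:
mdadm --examine /dev/sda1 /dev/sdb1   # check whether the md superblocks are still there
mdadm --assemble --scan               # assemble every array the superblocks describe
cat /proc/mdstat                      # verify the arrays came up before mounting anything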
I went to set up my Linux box and found that the OS drive had finally died. It was an extremely old WD Raptor drive in a hot box full of drives, so it was really only a matter of time before it just quit on me. Normally this wouldn't be such a big deal; however, I had just recently constructed an md RAID5 array of three 1TB disks to act as an NFS mount for basically all of my important files. Maybe 2-3 weeks before the failure I had finished moving all of my most important stuff onto that array. Now, I know that the array is intact. All the required data is sitting on those disks. Since only the OS-level disk failed on me, I should be able to get a new disk in there, reinstall Ubuntu, and then rebuild that array. How exactly do I go about doing that with mdadm? Do I create the array from the /dev devices like when I initially built the array?
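Reassembling (not recreating) is the usual route here, since the RAID5 superblocks live on the three data disks themselves. A minimal sketch after the Ubuntu reinstall, assuming the members are /dev/sdb1, /dev/sdc1 and /dev/sdd1:
Code:
mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1             # confirm all three members are intact
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1   # assemble, don't --create
mdadm --detail --scan >> /etc/mdadm/mdadm.conf            # persist so it assembles at every boot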
My home-backup server with 8×2TB disks won't boot anymore. Two disks failed at the same time, and I rebuilt the RAID 6 array without any problem, but now I can't boot the OS. I'm using Ubuntu Server 10.10. I've taken screenshots of the displays rather than copying everything here. The problem at boot:
And the GRUB config: It's not a production server, but I would like to have it back online. I've tried for the last two days (just a couple of hours a day) but without success. It was suggested that I do "mount -o remount,rw /" and then edit /etc/fstab, but I get a file-doesn't-exist error.
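If the root filesystem is read-only or its fstab is missing, editing from a live CD may be simpler than remounting in place. A minimal sketch, assuming the root array assembles as /dev/md0:
Code:
mdadm --assemble --scan   # bring the arrays up from the live CD
mount /dev/md0 /mnt       # mount the server's root filesystem
nano /mnt/etc/fstab       # fix or comment out the broken entries
umount /mnt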
I have a 5-disk RAID 5 array composed of SATA A:0,1; SATA B:0,1; and SATA C:0, and one of the disks (SATA A:0) recently went bad on me. I have an ICP RAID controller that is about 5 years old. I replaced SATA A:0. After rebooting, I went into the controller and verified that it saw the disk in the hard-disk info section. There I noticed that in the "status" section, the SATA C:0 and SATA B:1 disks were listed as being "in array", the SATA A disks were blank, and the SATA B:0 disk was listed as "fragment". When I go into the "repair array" section, the controller tells me that there are no arrays in failure, in error, or in need of a rebuild.
This puzzles me, as I thought the controller would know that the array needs to be rebuilt after replacing the disk, and I don't see a way to initiate a rebuild. If I just let the server boot after replacing the disk, I'm told that there are the correct number of disks in the RAID 5 and that it is ready; however, the screen then goes blank, I get a blinking cursor, and the system seems to hang. There are no activity lights on any of the drives associated with the RAID 5, which makes me think that the system is not rebuilding the array at this point.
I have never performed a rebuild of a RAID array. I am collecting resources that detail how to rebuild a RAID 5 array when one drive has failed. Does the BIOS on the RAID controller card start to rebuild the data onto the new drive once it is installed?
It's from a Synology box with three disks, one of which is damaged. But this disk wasn't in use. (Take a look at the RAID size of 493 GB, and at the two available disks of 250 GB each.) The other disks held a linear RAID. When this disk failed, the Synology device told me that the volume had crashed, but it looks like that disk was not mounted into this volume. Quote:
DiskStation> mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90
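To see which disks actually carry a superblock for md2, something like the following may help (the partition names are assumptions; Synology typically puts data volumes on the third partition of each disk):
Code:
cat /proc/mdstat                                # which members each md device is currently using
mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3   # print the superblock, if any, on each candidate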
I have recently installed Ubuntu 10.04.1 LTS Server on my Intel "fakeraid" (software RAID) (2×250 SATA). To test my RAID 1, I turned off one HD and started the system. The first screen (the Intel software screen) showed Status = Degraded, but the system started normally with just one HD. Then I shut the system down and turned the HD back on, so the first screen (the Intel software screen) showed Status = Rebuild. If I enter the software RAID panel, the following message is shown: "Volumes with "Rebuild" status will be rebuilt within the operating system". The system starts normally... but this status message stays there permanently, even when I restart the system again.
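Since Ubuntu handles Intel fakeraid through dmraid, checking and triggering the rebuild from within the OS might clear the flag. A sketch, hedged because --rebuild support depends on the dmraid version and metadata format, and the set name and device below are placeholders:
Code:
dmraid -s                           # show the RAID set and its current status
dmraid -R isw_example_Volume0 /dev/sdb   # ask dmraid to rebuild onto the returned disk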
I'm wondering how to attach a new SATA hard disk to a software array that had two disks, one of which has crashed (this is mirroring mode = RAID 1). The situation is this: I unplugged the crashed disk, bought a similar one, and plugged it in. What should I do next?
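Assuming Linux md with the surviving disk at /dev/sda and the new blank disk at /dev/sdb, the usual sequence is to copy the partition layout and add the new partition back into the mirror:
Code:
sfdisk -d /dev/sda | sfdisk /dev/sdb   # replicate the partition table onto the new disk
mdadm /dev/md0 --add /dev/sdb1         # add it to the mirror; the resync starts automatically
cat /proc/mdstat                       # watch the rebuild progress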
I recently installed a server with software RAID. I tested it by powering it down, unplugging one drive, and powering it up. Magically, it worked! I found out later that I have to manually re-add the individual devices, like sda2 to md1 and sda4 to md2. I got all of them added and rebuilt, but my question is: is there a way to make it so that if I "removed" a drive and put it back, the system would sense the returned drive and rebuild based on some internal table?
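By default md won't re-add a returned member on its own, but a write-intent bitmap makes manual re-adds nearly instant, because only the blocks written while the member was absent get resynced. A sketch of the idea:
Code:
mdadm --grow --bitmap=internal /dev/md1   # one-time: enable a write-intent bitmap on the array
mdadm /dev/md1 --re-add /dev/sda2         # after reattaching, only the dirty blocks resync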
I have been using F12 for a while now, and I applied updates as they became available. After the latest update, my laptop gets a lot of kernel crash errors and hangs more often. I tried to upgrade to F13 or F14 but was not successful at all. My only hope now is to go back to the earlier kernel. How do I revert to kernel 2.6.31.5? My current kernel is 2.6.32.26-175.fc12.i686.PAE, GNOME 2.28.2, 1 GB memory, Intel PIII Mobile 1.13 GHz.
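The older kernel usually stays installed and selectable in the GRUB menu; if it was removed, a specific version can be reinstalled. A sketch, where the exact release number is an assumption to check against the repos:
Code:
rpm -q kernel-PAE                          # list which PAE kernel versions are still installed
yum --showduplicates list kernel-PAE       # see which versions the repositories still offer
yum install kernel-PAE-2.6.31.5-127.fc12   # example release; substitute whatever the repos carry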
I've tried to install Fedora 11, both 32- and 64-bit, on my main machine. It could not install, as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am currently running Intel RAID software under W7. It works fine. But I'm wondering: when I attempt to install F11, will my current RAID set-up cause problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?
My problem is with inkscape-0.48.0-1.fc14.3.src.rpm on Fedora 14 64-bit. I try to rebuild the source RPM inkscape-0.48.0-1.fc14.3.src.rpm and this is my error (I think something is wrong with pkgconfig):
Code:
attributes.cpp:20:32: fatal error: glib.h: No such file or directory
compilation terminated.
arc-context.cpp:21:28: fatal error: gdk/gdkkeysyms.h: No such file or directory
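Missing glib.h and gdkkeysyms.h point at absent -devel build dependencies rather than a pkgconfig problem. yum-builddep can pull in everything the spec file's BuildRequires lists:
Code:
yum install yum-utils                           # provides yum-builddep
yum-builddep inkscape-0.48.0-1.fc14.3.src.rpm   # install all declared build dependencies
rpmbuild --rebuild inkscape-0.48.0-1.fc14.3.src.rpm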
Does anyone know of a way to do a mass build of SRPMs? For example, suppose I wanted to build loads of packages but with different default installation paths or compiler flags. Are there any tools to do this?
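For clean, repeatable mass builds there is mock; for a quick local pass, a shell loop over rpmbuild works, overriding macros per run. A sketch (the directory, prefix, and flags are placeholders):
Code:
for srpm in ~/srpms/*.src.rpm; do
    rpmbuild --rebuild \
        --define '_prefix /opt/custom' \
        --define 'optflags -O2 -g' \
        "$srpm"
done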
I tried to install the Catalyst 9.9 driver on Fedora 11 64-bit. Only when I was finished with the guide did I read the comments saying it wouldn't work on the 2.6.30 kernel. I had written over my old xorg.conf, blacklisted radeon and radeonhd, and restarted my machine. When I restarted, X wouldn't work, of course, and all I got was some red colour at the top of the screen, with no access to a terminal via ALT+F2-F6. The only way to get access is to add "telinit 1" to the startup line in GRUB.
I've tried
Code:
X -configure
and
Code:
system-config-display --reconfig
The first doesn't work and the second doesn't seem to be installed... Since I'm on wireless, the network has problems connecting... I can pull a cable, but not right now...
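A possible way back to a working X from single-user mode: move the Catalyst xorg.conf aside so X autodetects, and undo the blacklist (the file paths are assumptions based on where such guides usually put things):
Code:
mv /etc/X11/xorg.conf /etc/X11/xorg.conf.catalyst   # let X fall back to autodetection
sed -i '/radeon/d' /etc/modprobe.d/blacklist.conf   # remove the radeon/radeonhd blacklist lines
yum reinstall xorg-x11-drv-ati                      # restore the open-source driver if it was replaced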
When I first switched from windoze to Fedora, I trimmed a bit of space off the end of the HDD, formatted it to ext3, and installed Fedora 14 there. I have now completely rebuilt the machine and put a 2TB drive in. My intention was to upgrade to Fedora 15, but after a few weeks of trying to get the new GNOME into anything resembling useful shape, I gave up and decided to go back to the reliable 14.
I tried the old drive, and everything worked great, so I thought: no problem, clone that over to the new drive and the job's done, no need to mess about for weeks getting all my settings back. I booted from the old drive with both connected and ran GParted. It sees both drives but won't let me copy the old partition; it complains that 'LVM is not yet supported'. I tried booting from a GParted ISO with the same result.
How can I get this sorted? I've got work needing done, and I don't have time to start from scratch (*AGAIN*).
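Since GParted won't copy LVM partitions, one workaround is a raw whole-disk copy with dd, growing the volumes afterwards. A sketch, assuming the old drive is /dev/sda, the new one is /dev/sdb, and the volume names are placeholders to check against what lvs reports:
Code:
dd if=/dev/sda of=/dev/sdb bs=4M                # raw clone, LVM metadata included
# grow the PV's partition with fdisk/parted first, then:
pvresize /dev/sdb2                              # let LVM see the extra space
lvextend -r -l +100%FREE /dev/vg_main/lv_root   # grow the root LV and resize its filesystem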
I'm trying to clone two RAID sets using dd (I have done this successfully many times in the past); this time, however, I'm running into issues.
dd stops with an 'input/output error'.
dmesg shows:
A little above this I find:
Background: this is a ProLiant server with an XFS filesystem on it. Last week it showed some XFS errors; I tried to do a repair, but it didn't work. So I thought I would clone the RAID set from a good 'source' server (we need this for test purposes).
Before cloning I deleted the RAID and created it again. Everything looked OK and all disks showed as GOOD, but then the dd copy failed with the above message. I'm sure the source RAID set is in good condition. Any thoughts on what could be wrong here and how I may be able to recover it?
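An input/output error from dd usually means unreadable sectors on the source, whatever the controller says about disk status. Telling dd to carry on, or using GNU ddrescue, can recover the readable parts; a sketch with assumed device names:
Code:
dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync   # skip bad blocks, padding with zeros
# or, better, ddrescue logs its progress and retries bad areas:
ddrescue -f /dev/sda /dev/sdb rescue.log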
I get the following error message when I try to access any of the sections in Add/Remove Software:
"No package cache is available. The package list needs to be rebuilt. This should have been done by the backend automatically
failed to search names: failure: repodata/5374141db0e227497be1e7ced5f1c45dafbe2f1899874f961f 7c54478b112bb9-primary.sqlite.bz2 from updates: [Errno 256] No more mirrors to try[
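The usual cure is to throw away the corrupted cached metadata and let yum fetch it fresh:
Code:
yum clean all    # discard the broken repodata cache
yum makecache    # rebuild the package cache from the mirrors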
Looking today at my logs in /var/log/messages, I found "device eth0 entered promiscuous mode". I don't remember putting eth0 in promiscuous mode, and I'm connected to the net through a router. How do I turn that off?
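The flag can be checked and cleared from the command line (a packet-capture tool such as tcpdump or Wireshark is the usual culprit for setting it):
Code:
ip link show eth0              # PROMISC in the flags means it's still on
ip link set eth0 promisc off   # clear it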
I have Fedora 10 and it has been working fine for over six months. Today I did the system update. After the update, I rebooted the system. Now it hangs, displaying the message "Not cloning cgroup for unused subsystem ns".
I cloned F14 with Clonezilla from an 80GB to a 320GB HDD (both SATA disks), and then resized the partitions with GParted. But I cannot boot into Fedora on the new/bigger disk; it stops and the display reads "Loading stage 1.5", if I remember correctly. I tried to fix it with the live CD, but with no effect.
Then I found the Super Grub Disk live CD, and with that I tried their fix, which was the same as with the Fedora live CD I tried before; again, no effect. Then I played around with Super Grub and found the option to boot GNU/Linux indirectly, and with that method I got results: I found my menu.lst file, chose the kernel I wanted, and it boots into the desktop.
But I need a more permanent solution, because now I always have to use the same procedure with the Super Grub Disk CD to boot into my Fedora 14.
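Since the indirect boot reaches the desktop, GRUB legacy can be reinstalled to the new disk's MBR from there. A sketch, assuming the 320GB disk is /dev/sda and /boot is its first partition:
Code:
grub-install /dev/sda   # rewrite stage1/stage1.5 on the MBR
# or interactively from the grub shell:
grub
grub> root (hd0,0)
grub> setup (hd0)
grub> quit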
Has anyone found a good cloning program, like PQ Magic etc.? I looked at Clonezilla but could not find a download for Fedora/Red Hat (would installing it via tarball work?).
I have an old Athlon XP 3000 machine that I keep around as a file server. It's currently got three 1TB drives which I had set up as mdadm RAID 5 on FC10. The machine's original drive held the superblock info for the RAID array, and it just had a massive heart attack. I've searched, my biggest source being URL... I can't tell if I can reassemble the superblock info lost with the original hard drive or if I've lost it all...
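With mdadm-created arrays the superblocks sit on the member disks themselves, so losing the OS drive normally only loses /etc/mdadm.conf, which can be regenerated. A sketch from the rebuilt system:
Code:
mdadm --examine --scan                     # read the superblocks still on the three 1TB disks
mdadm --assemble --scan                    # assemble the array from what was found
mdadm --detail --scan >> /etc/mdadm.conf   # recreate the config on the new OS disk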
What are the bare minimum configuration files that would be needed to rebuild a RHEL server? We are thinking about creating a generic base image and then just copying over the necessary files (fstab, hosts, networking, etc.) to get a failed system back up and running in the least amount of time possible. I am fairly new to Linux and have suggested that we keep a share on a redundant server, such as /server_configs/Svr_name/*.* (names are subject to change, and *.* would be all of the pertinent config files to make a fresh build customized enough to emulate the failed server). Is this even possible and/or plausible?
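As a starting point, a cron-able tar of the usual suspects might look like this (the file list is an assumption to extend per server):
Code:
HOST=$(hostname -s)
mkdir -p /server_configs/$HOST
tar czf /server_configs/$HOST/etc-backup.tar.gz \
    /etc/fstab /etc/hosts /etc/resolv.conf \
    /etc/sysconfig/network /etc/sysconfig/network-scripts/ \
    /etc/passwd /etc/group /etc/shadow
# restore on the rebuilt machine with: tar xzf etc-backup.tar.gz -C /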
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of four discs. One of the discs was dropping out, so I decided to replace it. Somehow, this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
Code:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe-small chance I have left to rescue my data, I would like to hear the input of this wise community.
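Before anything destructive, it may be worth saving the existing superblocks and trying a forced assemble first, which often brings back arrays whose members were wrongly marked faulty. A sketch with the same device names:
Code:
mdadm --examine /dev/sd[abcd]2 > superblocks.txt   # record event counts and device order first
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2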
5.10 Breezy configured as a machine controller. It works great: eth0 has a fixed IP to communicate with the controller comms board. This is not easy at all to alter; the comms board is hard-coded to listen on eth0 for commands.
I can use eth1 as the default gateway and ping google.com, etc. But when I now attempt to communicate with the controller with netcat, e.g.
Code:
echo !HH | nc 192.168.1.6 80
I obviously never get an answer, since the request is sent via eth1. Using the -g option with netcat doesn't work either. I had a look at iptables, but it doesn't seem able to do what I want. How can I still use eth0 as my communication port to the controller while eth1 is the default gateway?
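A host route should do it: the default gateway only applies where no more specific route exists. A sketch, using the controller address from the example above:
Code:
route add -host 192.168.1.6 dev eth0   # traffic for the controller leaves via eth0
# equivalent with iproute2:
ip route add 192.168.1.6/32 dev eth0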