Software :: 2nd Node Tried To Take Over Resources But Failed
Sep 9, 2010
On an HA cluster, the 2nd node decided that the 1st node was down even though it wasn't. As a result, the 2nd node tried to take over the resources but failed, because they were still in use by the first node. This left the first node in a fuzzy state, and I had no choice but to kill the heartbeat service and reboot the server to resolve the issue. There were no network issues and all the hardware is fine. Are there any known bugs? Is there a way to prevent this from happening again?
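For reference, the heartbeat timing knobs live in /etc/ha.d/ha.cf; below is a sketch of the kind of settings that are usually tuned to avoid a false "peer is dead" verdict. The node names, interfaces, and timeout values are only illustrative, not my actual config:

# /etc/ha.d/ha.cf -- illustrative values only, tune for your environment
keepalive 2          # send a heartbeat every 2 seconds
warntime 10          # warn when a heartbeat is late
deadtime 30          # only declare the peer dead after 30 seconds of silence
initdead 60          # allow extra time right after boot
bcast eth0           # primary heartbeat path
bcast eth1           # redundant path, e.g. a dedicated crossover cable
auto_failback off    # do not yank resources back automatically
node node1
node node2

Heartbeat can also drive a STONITH device so the peer is fenced (power-cycled) before its resources are taken over, which is the usual protection against the "resource still in use" situation described above.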
I don't have much experience in clustering, and I'm deploying a cluster system on CentOS. I don't know what counts as a good failover time, i.e. how long it should take for one node to fail and another node to take over its resources so the service keeps running. Is 1s fast, is 10s slow, or ...
I have created a simple menu-driven script for our Operations team to take care of basic monitoring and managing of our production application from the back end. The script worked fine when tested in the UAT environment, but when deployed to production it behaved oddly. When the operator chooses an option from the menu, he is shown the output and at the end is prompted to return to the main menu with Ctrl+C. In production this return does not happen for some strange reason and the program just sits there. The session becomes unresponsive after that and I'm forced to terminate it by closing PuTTY. I tried enabling debug mode (set -x) as well and still was not able to find any useful hints/trails as to why.
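For reference, the relevant part of the menu loop looks roughly like the sketch below. It is a simplified stand-in, not the production script; the point is the SIGINT trap, so Ctrl+C drops the operator back to the menu instead of leaving the session hanging:

#!/bin/bash
# Return to the menu on Ctrl+C instead of letting SIGINT kill the script.
trap 'echo; echo "Returning to main menu..."' INT

while true; do
    echo "1) Show application status"
    echo "2) Tail application log"
    echo "q) Quit"
    read -r -p "Choice: " choice
    case "$choice" in
        1) ps -ef | grep -i '[m]yapp' ;;   # placeholder command
        2) tail -f /var/log/myapp.log ;;   # Ctrl+C here ends the tail and returns to the menu
        q) exit 0 ;;
        *) echo "Unknown option" ;;
    esac
done

Ctrl+C is delivered to the whole foreground process group, so the child command dies while the trapped script carries on with the loop.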
So far I can ping a virtual IP and manually relocate it between the nodes, but I haven't figured out how to do this automatically. So this is my question: how can I set up the cluster so that it automatically fails a service over to another node in case one node fails?
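For context, with classic heartbeat (v1 style) the automatic failover of an IP plus a service is driven by /etc/ha.d/haresources. A minimal sketch; the node name, address, and service are placeholders, not my real values:

# /etc/ha.d/haresources -- must be identical on both nodes
# Format: <preferred-node> <resource1> <resource2> ...
node1 IPaddr::192.168.1.100/24/eth0 httpd

When node1 dies, heartbeat on the other node brings up the virtual address and runs the httpd script with "start"; ha.cf and authkeys also need to be set up consistently on both nodes.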
I have an OpenSuSE 11.1 box that is running MySQL and Apache. My database is only about a megabyte in size and I only have a few users per day on my site. How is it that, with 8GB of RAM, over 5GB is being used?
Does anyone know of any decent web guides that will help me set up a DNS service running on 127.0.0.1 that will automatically forward requests like [url] to httpd running on 127.0.0.1, yet forwards requests to [url] to a 'proper' DNS service?
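For what it's worth, dnsmasq seems to be one common way to do this: it answers selected names itself and forwards everything else upstream. A minimal /etc/dnsmasq.conf sketch, where the local name and the upstream resolver address are made up:

# /etc/dnsmasq.conf
listen-address=127.0.0.1          # only serve the local machine
address=/mysite.local/127.0.0.1   # answer this name with the local httpd's address
server=192.168.1.1                # forward everything else to the 'proper' DNS server

Then point /etc/resolv.conf at nameserver 127.0.0.1 so local lookups go through dnsmasq first.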
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3861 user      20   0  904m 128m  33m S  0.7  6.4  1:11.52  xulrunner-bin
 1323 user      20   0 1555m  95m  31m S 13.5  4.8  4:06.87  gnome-shell
 3494 user      20   0 1028m  50m  21m S 12.8  2.5  1:43.32  evolution
I was just wondering what the difference is between RES, SHR, and VIRT.
1) VIRT always seems to be the highest. Is this using the paging file system (i.e. virtual memory on the hard disk, the swap space)?
2) Is the RES memory the actual physical RAM memory?
3) Is shared memory sharing memory with other processes?
4) Just a final question. As I am running on an HP Mini 210, memory and CPU are resources I don't have in abundance. So if I were to compare, for example, 2 different browsers, i.e. Firefox and Midori, what should I benchmark between the 2 to find which one uses fewer resources?
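For a rough comparison of two browsers, resident memory (RES/RSS) and accumulated CPU time are usually more telling than VIRT, since VIRT counts address space that may never touch physical RAM. A small sketch of sampling both from a script; the process names firefox and midori are just the examples mentioned above and may differ from the actual binary names:

#!/bin/bash
# Print RSS (resident, KiB), VSZ (virtual, KiB), %CPU and CPU time per browser.
for app in firefox midori; do
    echo "== $app =="
    ps -C "$app" -o pid,rss,vsz,pcpu,time,comm --no-headers \
        | awk '{rss += $2} {print} END {print "total RSS (KiB):", rss}'
done

Run it with both browsers open on the same set of pages and compare the totals.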
I have a fairly powerful media system standing in the living room. Most of the time it is idle, maybe playing some music. At the same time I have a PC in my office that is less powerful. Sometimes I'll run programs over ssh with the -XC options (sketched below). This way a program seemingly runs on my office PC while it is actually running on the media system. However:
1. Sound doesn't transfer to the office PC.
2. When using a program I constantly need to be aware of where I save my files; sometimes they end up on the remote computer, sometimes they don't.
Is there an alternative way to use the resources on the media system, e.g. run a program that is stored on the office PC but have it execute on the media system?
Both systems run Ubuntu (office 10.04 - media system 10.10).
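For reference, the current workflow looks roughly like the first command below; the second is one possible (untested) way to push the sound back to the office PC via PulseAudio, assuming the office PC loads module-native-protocol-tcp and accepts the connection. The hostnames office-pc and media-system and the player are placeholders:

# On the office PC: run a program on the media system, display it locally.
ssh -XC user@media-system rhythmbox

# One option for audio: point the remote program at the office PC's PulseAudio server.
ssh -XC user@media-system "PULSE_SERVER=office-pc rhythmbox"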
I just recently upgraded to 10.04 (LTS), and I have been using the stardict app for a long time with nothing but success on each of the laptops and computers I have had over the past 5 or so years. However, now on one laptop, after starting stardict, using the panel applet I create on my xfce4-panel (as per normal), and looking up a word (highlight and press the Shift key, which I use for quick word search in stardict), the app begins using 100% of the CPU and I am forced to kill the process.
The problem is intermittent, meaning it only occurs about half of the time I am using stardict, while the other half of the time it operates normally. I have been trying to see if there is some sort of pattern, perhaps other apps running at the time the problem occurs, but I have not noticed anything that looks suspect, since I generally fire up stardict right after booting and have had the problem happen without any other apps open (just testing to try to find some sort of pattern and locate the potential issue).
I use a standard set of dictionary files on each of my machines, and currently have stardict working fine on 2 other laptops, both running 10.04 with the same setup (Xubuntu, pretty much the same apps and settings), with no sign of this problem on either laptop.
I installed Ubuntu 10.10 a few days ago, which ran very well from the live CD, but after I upgraded to 11.04, Ubuntu uses just under 100% of my CPU within a few minutes of logging in, and my RAM usage increases by about 5MB per second starting the second I log in. I am using Classic GNOME and it seems to do this whether I have Metacity or Compiz turned on. Does anyone know what is going on, or a way to lower either my CPU or RAM usage?
I have a RHEL cluster with two nodes. The cluster has been working fine, but suddenly this morning I am not able to ssh to one of the nodes; it fails with the following error: debug2: we sent a hostbased packet, wait for reply / Connection closed by 10.125.104.162. After some time the second node also started behaving like this. Now ssh within a node and between the nodes does not work, although I can still open a PuTTY session. Error from /var/log/messages: vsftpd(pam_unix): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost=
After a few days of hard work on Red Hat Cluster and Piranha I have hopefully got about 90% done, but I am stuck with the iptables rules. I am attaching a full screenshot of my Piranha server and my network; please have a look and tell me what else to do on the Piranha server:
a) From the firewall (Linksys), which IP shall I forward port 80 to (192.168.1.66 or 192.168.1.50)?
b) Currently it looks like HTTP requests are not being forwarded from the virtual server to the real server; what iptables rules shall I write? (Please have a look at the iptables rules.)
There is also a link to my Piranha server setup. I guess I am stuck somewhere where I need an expert's eye to catch it, so please look at all the pictures, the ifconfig output, and the iptables rules. ifconfig:
eth0      Link encap:Ethernet  HWaddr 00:0F:3D:CB:0A:8C
          inet addr:192.168.1.66  Bcast:192.168.1.255  Mask:255.255.255.0
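From what I have read, the firewall should forward port 80 to the virtual IP that Piranha/LVS advertises rather than to a real server, and on the director the usual prerequisites are packet forwarding plus NAT for the real-server network when LVS runs in NAT mode. A generic sketch only, with placeholder addresses that are probably not the right ones for this network:

# Enable routing on the LVS director (persist it via /etc/sysctl.conf).
echo 1 > /proc/sys/net/ipv4/ip_forward

# NAT-mode LVS only: masquerade traffic from the private real-server subnet
# (192.168.10.0/24 is a placeholder).
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o eth0 -j MASQUERADE

# Make sure port 80 to the virtual IP is not dropped by an existing ruleset
# (192.168.1.50 is only a guess at the VIP).
iptables -I INPUT -p tcp -d 192.168.1.50 --dport 80 -j ACCEPT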
I have a problem with kmix, which uses 100% of my processor without stopping, and this is causing the temperature to rise above 80 degrees Celsius. The problem appeared after one of the updates in the 11.4 series.
I don't know if anyone has used Damn Small Linux (DSL), but they've got a cool little system resource monitor built into the desktop in the latest version -- not the old wmnet and wmcpu they had before, and not the ones that come with Ubuntu. Attached is a screenshot; it's in the upper right corner, outlined in red.
I was wondering if anyone knew which app this is. I checked their package list and didn't see it; I don't know if they modified wmnet/wmcpu or something, but I think it'd be cool to have this on my desktop. Especially if it's from DSL, because that is a tiny distro that runs fast, so it can't take up too much room.
I've been using 11.04 for a few weeks now on my laptop. All of a sudden this morning, when I booted, it told me that my laptop doesn't have enough resources to run Unity, even though I'd been using it the whole time, and it dropped me back to the old interface. Does anyone know why it did this?
I have 2 front ends that receive traffic (HTTP servers) and should run some scripts from crontab; some of the scripts should only run on 1 server at a time (the active one) and others should run on both. Regarding HTTP, since it is load-sharing, I think I can't use heartbeat, right? Is heartbeat only for active-standby, or can it also be used as a watchdog in an active-active setup? I have a Cisco CSS to load-share the HTTP traffic, and I can write a watchdog script for Apache. Regarding the cron control, I was thinking of making a script that replaces the crontab file with whichever one is correct.
When heartbeat starts, what parameter is passed to the scripts listed as resources? A "start" on the active node and nothing on the standby? Always "start"? How should I configure haresources to do this, and what is the best way? I have another situation, building an NFS server on Solaris 10: I have 2 servers with shared disks (a Sun array); can I use heartbeat for this too? Is it possible to set it up in such a way that if the NFS server fails over, the clients don't need to reconnect?
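For reference, my understanding is that heartbeat v1 calls each resource script like an init script: "start" on the node that acquires the resource group, "stop" when it releases it, and nothing at all on the standby until a failover. A skeleton of such a script, with placeholder commands for whatever the service actually needs:

#!/bin/sh
# /etc/ha.d/resource.d/mynfs -- skeleton resource script for heartbeat v1.
case "$1" in
    start)
        echo "acquiring resources"
        # e.g. export the shares: exportfs -a    (placeholder)
        ;;
    stop)
        echo "releasing resources"
        # e.g. exportfs -ua                      (placeholder)
        ;;
    status)
        echo "running"
        ;;
    *)
        echo "Usage: $0 {start|stop|status}"
        exit 1
        ;;
esac
exit 0

For NFS clients to survive a failover without remounting, the same virtual IP has to move with the service, and typically the NFS state directory has to live on the shared storage as well.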
For some reason, after the computer has been on and idle for a little while, Xorg starts using all available CPU resources. When I come back to use the computer, the screensaver will either be running nice and smooth, or the screen will be black. Either way, I can't get back into X. I Ctrl+Alt+F1 into the console (or ssh in if that's not responsive) and top shows me Xorg is hogging all the available CPU. I kill -3 the Xorg process and the computer comes back to the KDM login screen. What the %^&*??? It's a little irritating. I'm using Bouncing Cows and SETI@home when idle, but both do their job and close when they detect a key press. Even via ssh, after I've already tried waking the computer with a key press, it shows the screensaver closed and SETI idle.
I have Debian/Lenny installed with all the latest updates and ATI's driver installed with direct rendering enabled (350 fps for the cows, >2000 fps for glxgears). It's a Dell Inspiron 8600: 1.8GHz, 2GB RAM, 80GB HD, ATI 9600 Pro/128MB.
A few months ago I had a problem with Apache on my server. Some requests required a few gigabytes of memory, much more than was available. I set ulimit -v XXX and this fixed the problem for a single request.
I still have a problem with multiple requests. I'm using mod_perl, which runs several processes to serve requests; we can assume there is one process per request. ulimit cannot handle this, because it can only limit virtual memory per process, not for a group of processes.
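One approach I am considering, sketched below, is to cap the whole Apache process group with a memory cgroup instead of per-process ulimits. This assumes the cgroup-v1 memory controller is mounted and the libcgroup tools are installed; the group name and the 4G limit are placeholders:

# Create a memory cgroup and cap it (cgroup v1, memory controller).
cgcreate -g memory:/apache
echo 4G > /sys/fs/cgroup/memory/apache/memory.limit_in_bytes

# Start httpd inside that cgroup so all of its children inherit the limit.
cgexec -g memory:/apache /etc/init.d/httpd start

Within mod_perl itself, Apache2::SizeLimit is another commonly used way to recycle children that grow too large.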
Is it possible to prioritize internet resources for a particular application or a set of applications and to control their maximum bandwidth, etc.?
I have a download manager and a BitTorrent client, and I want to prioritize resources for the BitTorrent client, followed by the download manager, but I want them to collectively stay below a manually defined maximum speed, since the other apps also need some bandwidth.
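My understanding is that Linux shapes traffic per interface with tc rather than per application, so per-application control usually means marking each application's packets first (for instance with iptables' owner match, which assumes each app runs as its own user) and then mapping the marks onto HTB classes. A rough sketch with made-up rates, interface, and user names:

# Cap total upload at 900kbit on eth0, with the torrent client getting priority.
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1:  classid 1:1  htb rate 900kbit ceil 900kbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 600kbit ceil 900kbit prio 1   # torrent
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 200kbit ceil 900kbit prio 2   # download manager
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 100kbit ceil 900kbit prio 3   # everything else

# Mark packets by owning user, then send each mark to its class.
iptables -t mangle -A OUTPUT -m owner --uid-owner torrentuser -j MARK --set-mark 10
iptables -t mangle -A OUTPUT -m owner --uid-owner dluser      -j MARK --set-mark 20
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 10 fw flowid 1:10
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 20 fw flowid 1:20

Note that this really only shapes upload; inbound "control" with tc is approximate, and for a single application a userspace tool like trickle (e.g. trickle -d 300 -u 50 <command>) is sometimes simpler.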
I am not an expert and haven't been managing my server for very long. My server is running kind of slow, so someone suggested running the 'top' command via the shell, and I found a few things using major resources, but I don't know what they are or how to fix them. Can someone suggest some things?