Fedora :: Unable To Get Enough Cycles To Terminate Anything?
Mar 3, 2011
I have a machine running Fedora 14 with a bunch of movies stored on it as '.iso' images. It is connected to my home theater. I used to use VLC to play these movies and it worked great for a long time. Starting about 4 months ago, about an hour or an hour-and-a-half into a movie, the audio suddenly "disappears" and the machine goes CRAZY, with almost 100% of the CPU spent writing to swap. At first I thought the machine was locked, but it isn't; it's just so doggone busy writing swap that I am unable to get enough cycles to terminate anything. I found the swap activity through System Monitor - it was ABRUPT. Sound stopped, and the machine became preoccupied with swap.
I have removed/reinstalled VLC, the machine has undergone a couple of kernel updates, and I have removed/reinstalled a number of things associated with audio (CD ripper, mpeg stuff, etc.), yet the problem persists. I don't know what happened or when (update-wise). Anybody got any ideas? While a solution would be great, I'd also be happy with a couple of decent suggestions on what to look for.
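Not a fix, but for things to look for, here is a rough sketch of commands that show swap traffic and the biggest memory users while the thrashing is happening (assuming a terminal or SSH session is still usable):
Code:
# Watch swap traffic once a second; the si/so columns jump when the box starts thrashing
vmstat 1
# List the processes holding the most resident memory, biggest first
ps aux --sort=-rss | head -n 15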
In Fedora 13 64-bit, Ctrl+C does not terminate the running program in a terminal window, but in Ubuntu this shortcut key works. If I hit Ctrl+Z, the running program is suspended and pushed into the background jobs list, which is something I definitely don't want. What is the shortcut for terminating a program in a terminal window? And what is the shortcut key for cancelling a command I have typed but not yet run, no matter where the cursor is in the command? Ctrl+U works, but only if the cursor is at the last character of the command.
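As a side note, Ctrl+C only delivers SIGINT if the terminal's interrupt character is still bound to it; a quick, hedged check/reset from the shell:
Code:
# Show the current special-key bindings (intr is normally ^C)
stty -a | grep intr
# Re-bind the interrupt character to Ctrl+C if something changed it
stty intr ^C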
Just now Rhythmbox has stopped working. I can't restart it, so I thought I could kill the process and start it again. Is there something like the Windows Task Manager in openSUSE, or another way to list all processes and kill one? I googled and found a few old threads saying that there is a performance monitor which is able to do that, but I can't find that either.
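A minimal command-line sketch for finding and killing a stuck process (the process name rhythmbox is assumed here):
Code:
# Find the stuck process and its PID (the [r] trick stops grep matching itself)
ps aux | grep [r]hythmbox
# Ask it to exit cleanly, then force it if it ignores the request
kill <PID>
kill -9 <PID>
# Or kill it by name in one go
killall rhythmbox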
The script waits in a loop for the subshell to finish, then does some processing and starts over. If I kill the script, the subshell is not killed. I can trap the TERM signal and do some cleanup, but I need to know the subshell's process ID.
I'm using BusyBox, so ps does not accept any parameters, and I cannot source the subshell so that it can access its parent's environment.
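A rough sketch of the usual pattern, assuming the work can be started in the background: capture the subshell's PID with $! (which BusyBox sh supports), wait on it, and kill it from a TERM trap; do_work is a hypothetical stand-in for whatever the subshell runs.
Code:
#!/bin/sh
cleanup() {
    # Kill the background subshell if it is still running, then exit
    [ -n "$child" ] && kill "$child" 2>/dev/null
    exit 1
}
trap cleanup TERM INT

while true; do
    ( do_work ) &   # start the subshell in the background
    child=$!        # PID of the subshell
    wait "$child"
    # ... processing between runs goes here ...
done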
I've somehow got it into my head that it's possible to share CPU cycles, though I've no idea where from. So basically that's what I'm asking - is it actually possible to tell one system to 'donate' its unused CPU time, cycles, or whatever they are, for another's use?
I have an issue with my web server. We are running Red Hat Enterprise Linux 3.0 with Apache 2.0 and Tomcat 5.5. The situation has arisen that the httpd sessions never terminate. New connections keep being created and never die. I have restarted the Apache services to reset the connections and have even rebooted the server, to no avail. Yes, that does the trick of getting the websites operational, but it is not a solution.
I have searched and searched here and on www.google.com/linux to no avail. I have looked through the apache.org bug tracker and can't find anything like what I am experiencing. This happened 6 months ago and I got lucky and it stopped; however, the situation has resurfaced. I have reviewed the logs and found nothing that provides any insight.
During the business day the number of httpd connections continues to grow, and I decided to let it run to see how high it would get before the websites would crash. That magical number is 203. Now that it is later in the evening, about 2 hours since I restarted the httpd services, I only have 59 connections. However, based on the traffic on these 2 websites, I'm fairly certain there aren't many connections after 2000 hours in the evening.
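For monitoring, a hedged sketch of how the connection count and worker count can be watched from the shell (port and process name assumed):
Code:
# Established connections to port 80 right now
netstat -ant | grep ':80 ' | grep -c ESTABLISHED
# Number of httpd worker processes currently alive
ps aux | grep -c '[h]ttpd'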
I have a couple of questions regarding the screen blanker on the GNOME desktop. I used to use a 1024x768 display with the previous openSUSE distribution. With 11.3, I discovered the new "auto-configure" X feature. The default screen mode was 1600x1200, but I changed it to 1280x1024. My gfx board is a Matrox G400 DH. Hardware acceleration is disabled because of a missing mga_dri.so (it falls back to software rendering). I find some screen blanker modules are using almost all the CPU cycles. Animations are very slow, and it can take a long time before a keyboard hit or mouse movement makes it leave the blanker.
So the questions:
- Is there a way to define another (smaller) screen resolution just for the blanker?
- Who should I try to convince to add mga_dri.so back into Mesa?
- When the monitor goes to sleep (DPMS), the blanker keeps running, uselessly consuming CPU time. (I can see that because at the first mouse/keyboard event the monitor wakes up and shows the blanker still running.) Is there a way to configure the blanker to stop running while the monitor is sleeping?
- There are some modules which load images from the hard disk (not the diaporama, which loads images from ~/images). But the image shown is always the default built-in one. Where is the blanker trying to load images from?
I am running openSUSE 11.3 and keep up on maintenance. Ever since upgrading to 11.3 I find that CPU cycles are being eaten for apparently nothing. Looking at the System Monitor, I frequently find that Xorg is using 24%, and often more than that. What can be done to reduce the CPU usage, or fix the problem?
I'm using the TightVNC server on a Linux machine. Often my clients close their vncviewers with the close button ('X') instead of exiting their sessions gracefully from the OS. How can I terminate the server when they do that?
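There may not be a setting that does this automatically; the manual equivalent is to kill the stale display yourself (the display number :1 is assumed):
Code:
# Kill the Xvnc server behind display :1 so the session actually ends
vncserver -kill :1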
I have a script to establish a reverse tunnel with another machine. My problem is stopping the tunnel. If I just kill the PID stored in sshtunnel.pids, ssh does not release the ports on the server side, so any new connection will fail for several minutes. Is there any way to signal ssh to exit gracefully?
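One approach that does release the remote port cleanly is to run the tunnel through an SSH control socket and later ask that master connection to exit; a minimal sketch (socket path, port, and host are placeholders):
Code:
# Open the reverse tunnel in the background with a control socket
ssh -f -N -M -S /tmp/sshtunnel.sock -R 2222:localhost:22 user@remotehost
# Later, tell that ssh instance to shut down gracefully and free its forwards
ssh -S /tmp/sshtunnel.sock -O exit user@remotehost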
I have a new UEC (Ubuntu 9.10) server up and running. I'm running a self-contained solution, so the cluster and node are on the same machine. I know this isn't ideal, but I only have one server. I followed https:[url]... and https:[url].... When I try to run a VM image (Ubuntu 9.10 amd64), it goes from pending to shut-down to terminated. I know others have had this problem, but I haven't seen any solutions. I'm hoping there might be one out there that I've missed. I'm running on an AMD 64-bit quad core with 8GB DDR3 RAM. I am not seeing any errors in the logs.
"Servers aren't meant to have GUIs because they are a serious waste of CPU clock cycles." I encountered this line from somewhere here in ubuntu forums, but how could I install tools like mysql wrokbench and stuff, w/c will make my life a lot easier as an administrator? or is there such thing as remote administration?
I'm planning on setting up a new Linux box expressly for distributed computing (BOINC, SETI@home, etc.). All things being equal, what's better: more clock cycles or more cores?
I have 2 NVIDIA gfx cards. If I use the nvidia-settings tool to drive 2 monitors from the same gfx card, it works fine. When I use one gfx card for one monitor and the other gfx card for the other monitor, X doesn't start and just cycles through monitor flashes.
I was running 10.04 until yesterday, when it occurred to me that I could upgrade to 10.10. So I went to Software Center, set it to get normal releases, and left it to do its job. The upgrade appeared to go without a hitch and I rebooted. The login screen appeared. But just before I could click on my username and enter my password, the screen went blank, and a second later the login screen was back. But then just before I could click... Undeterred, after half a minute of frantic clicking I did manage to click on my username and get the password prompt. This time, the login screen didn't go anywhere. Yay. To cut a long story short, this is now my standard logon procedure. However, the plot thickens. It appears that instead of 10.10, I ended up with 11.04, Natty Narwhal, which 'was released in April 2011'. If I download an .iso of Maverick and install it over my current version, will it leave my data unharmed AND reset everything so that it works again?
I am having an issue with HTTPS certificate verification using curl. My curl is configured with OpenSSL. If certificate verification fails, I don't want to terminate the operation; instead I want to continue and just write a log message. For this I have used the OpenSSL SSL_CTX_set_verify() function to set my static C callback function. During the HTTPS transaction my callback does get called, with the first parameter 0 or 1 (depending on whether certificate verification succeeded or failed). But even if certificate verification fails, I want to continue, so I have hard-coded the callback to always return 1. Yet I still see the certificate error and I don't get the page.
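Worth noting: the command-line curl tool skips this check with -k/--insecure, and libcurl exposes the same switch through CURLOPT_SSL_VERIFYPEER (and CURLOPT_SSL_VERIFYHOST), which is usually simpler than overriding the OpenSSL verify callback. A hedged shell sketch with a placeholder URL:
Code:
# Fetch the page even if the certificate cannot be verified (verification is skipped)
curl --insecure https://example.com/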
Just trying to figure out some stuff about a broken process. A Java app sometimes seems to get stuck in a loop or something, and I'm trying to find out what's causing it using just the sysadmin tools at my disposal.
Things like htop, to find the PID that's causing the high CPU usage. I'd then want to use /proc/[PID], lsof -p [PID], strace -p [PID], etc. But the PID doesn't exist in the 'ps -ef' output, so I think htop must be showing me kernel-level thread PIDs?
I'm not sure about the PIDs htop is actually giving me. I know that some of them are real PIDs that can be accessed through /proc/[pid] etc., but others are not; I assume they are child processes or, more likely, threads, since child processes normally show up in default ps output anyway.
Can someone help me understand what exactly all these other PIDs are that I can't find or manipulate except from within htop?
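Those extra IDs are almost certainly thread IDs (LWPs): default ps prints one line per process, but it can be asked to show threads, and each thread also appears under /proc/<pid>/task. A short sketch (PIDs are placeholders):
Code:
# Show every thread of every process (the LWP column holds the thread ID htop displays)
ps -eLf
# Threads of one particular process
ps -Lp <main_pid>
ls /proc/<main_pid>/task/
# In htop itself, the H key toggles whether userland threads are listed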
I have a problem with the sshd server: it authenticates the user and then terminates the session. Here is the debug log:
Jan 1 04:26:41 server sshd[29677]: debug1: userauth-request for user root service ssh-connection method none
Jan 1 04:26:41 server sshd[29677]: debug1: attempt 0 failures 0
Jan 1 04:26:43 server sshd[29677]: debug1: userauth-request for user root service ssh-connection method password
Jan 1 04:26:43 server sshd[29677]: debug1: attempt 1 failures 0
Jan 1 04:26:43 server sshd[29676]: Accepted password for root from xx.xx.xx.xxx port 50971 ssh2
Jan 1 04:26:43 server sshd[29676]: debug1: monitor_child_preauth: root has been authenticated by privileged process .....
I changed the terminal into raw mode with cfmakeraw(&termios); after that, the terminal no longer captures Ctrl+C. Is there a way to enable Ctrl+C (to terminate the program) while still keeping raw mode?
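cfmakeraw() clears ISIG, so the usual trick is to set it again (termios.c_lflag |= ISIG;) before calling tcsetattr(); the signal keys then work while everything else stays raw. The shell shows the same idea:
Code:
# Raw mode, but turn the signal keys back on so Ctrl+C still sends SIGINT
stty raw isig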
x is a variable read from a very big text file (> 64MB).
The first line of my code is cout << " Wait Running...";
My code takes a text file as input, reads its data, and generates an output text file.
The code runs fine for small data; I've tried up to x = 10.
But when trying to run with large data, i.e. x = 5000000 approx., it gives an error, and even the first line of output is not displayed. NOTE: the variable is declared global but its size is defined in main.
The error that I am getting after approx. 2-3 minutes is:
terminate called after throwing an instance of 'std::bad_alloc' what(): std::bad_alloc Wait Running...Aborted (core dumped)
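std::bad_alloc just means a memory allocation failed, so it is worth checking how much memory the large run needs versus what the machine (and any ulimit) allows; also, the reason "Wait Running..." only shows up mixed into the abort message is stdout buffering, so flushing (std::endl or std::flush) right after the cout would print it immediately. A hedged check from the shell:
Code:
# Free RAM and swap on the box
free -m
# Per-process virtual-memory limit for this shell (either 'unlimited' or a size in kB)
ulimit -v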
I am running KDE 4.4.4 Stable from the openSUSE repository for KDE 4.4.4 stable, as included with 11.3. At first, after the initial installation, Kontact worked and I could access Email, Address Book, Calendar, etc. from the icon bar. Now Kontact starts and opens the Summary page, but no selections respond; only KWin responds, to terminate the application. It just seems to hang and has to be killed. Anyone have a similar issue or ideas on what may be causing it? I can start KMail and KAddressBook as separate applications and they all respond as normal.
I've been struggling with suspend to disk (hibernate if you prefer) for a while, it works after a fresh boot and for several days' worth of overnight hibernation as I go about my work, but eventually it stops working - it gets to the splash screen but the bar only makes it a little way to the left before stopping, and then after a timeout the system just returns to the "session locked" screen - no real error messages.
I've done my best to try to find out what's causing it to break but I'm really struggling, the suspend process doesn't appear to write anything helpful to the dmesg log or the /var/log/pm-suspend.log - the only thing that I've seen at about the right point in time is cifsd, but I can't be sure that it's a problem with cifs as hibernate continues to work immediately after mounting windows shares with cifs.
When I type the command ssh node2@ip, the terminal hangs a bit and then an error message appears stating that the connection timed out. But here is the thing: I can't ping node2, yet I can terminate the ping manually using Ctrl+C, and when I do, the usual message appears stating 10 packets transmitted, 100% loss. PS: when I go to node3 and ssh to node2 it works fine, and I can also ping from node3 to node2 just fine. And the firewalls are down on all nodes.
I have a problem in Ubuntu 10.04. The bug is well known: it makes the hard disk head park too often (2-3 times per minute), which is dangerous for the drive in the long term and annoying for me (click-click-click). I found the "ugly fix" for older Ubuntu versions, which was hdparm -B 254 /dev/sda instead of 128. It works. The problem is that it doesn't persist (across restart/standby/AC connection-disconnection). I also found a well-known script:
1) Make a file named "99-hdd-spin-fix.sh". The important thing is starting with "99".
2) Make sure the file contains the following 2 lines (adjust it if you have a PATA HDD):
Code:
#!/bin/sh
hdparm -B 255 /dev/sda
3) Copy this file to 3 locations:
I have a BASH script which at one point asks the user a yes/no question. I want to make it so that if the user types in an invalid input 3 times consecutively then the BASH script will echo an error and terminate with exit status 1.
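A minimal sketch of that pattern, assuming a simple y/n prompt:
Code:
#!/bin/bash
# Ask a yes/no question; after 3 consecutive invalid answers, report an error and exit 1
attempts=0
while [ "$attempts" -lt 3 ]; do
    read -r -p "Proceed? (y/n) " answer
    case "$answer" in
        [Yy]) echo "Proceeding...";    exit 0 ;;
        [Nn]) echo "Aborted by user."; exit 0 ;;
        *)    attempts=$((attempts + 1)); echo "Invalid input ($attempts/3)." ;;
    esac
done
echo "Error: three invalid responses in a row." >&2
exit 1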
Every account, every option I try: when I log in, it just cycles back to the login screen. I have attempted to do a repair install, but to no avail. It happens whether I try to boot normally or boot into failsafe.
I've got Ubuntu One syncing a single 25MB folder on 4 computers. On one of these computers, the ubuntuone-syncdaemon process constantly pegs the CPU, using from 50-80% long after any sync-able files have been modified and successfully synced. The process is only using 8.9MB of RAM.
Specs: Ubuntu 10.04 (Lucid), kernel 2.6.32-24-generic, 1000.8 MB RAM, Pentium 4 2.53GHz, free disk space: 280.9 GB. System Monitor shows 56.8% total RAM usage, 15.4% swap file usage.
I just switched from Ubuntu to Fedora 13 because I was unable to get Ubuntu to connect to wireless networks. I tried everything suggested in help and forums, and kept getting "Bad Password" with WICD and Network Manager. Now, with Fedora...I still can't connect.
Problem #1: The guide says to "...make sure that the relevant wireless interface (usually eth0 or eth1) is controlled by NetworkManager," and that I do this via: System>Administration>Network
However, there is no Network option under System>Administration.
Problem #2: I open Network Manager, which displays a list of networks. I click on mine, configure it with WPA and the right password, and it fails to connect: "The network connection has been disconnected."