I configured it to use IMAP to access our Exchange 2010 front-end server over a LAN connection. Our webmail connection is segregated behind Forefront, so it was not connecting/authenticating that way, even though smartphones have no problem. (Side note: is there an ActiveSync mail client for Linux?)
I have many root folders and several folders underneath my inbox. The total mail size in my inbox is 3.5 GB without subfolders. The Sent folder is just as large, and likewise shows up empty.
Things I checked already:
- View is set to All
- Folder subscription is on
- Local copy is on
More info: Thunderbird works fine, but Thunderbird is missing a calendar. I tried adding Lightning, but it won't install into Thunderbird. I will try finding a different add-on, but if anyone knows how to get Lightning into Thunderbird 3.1.8 on Ubuntu 10, that would be great as well.
I had this error when installing and running a VNC server before, which I have since removed. However, the xterms seem to remain on the system and are regenerating themselves. Should the PIDs stay the same each time I run this?
I need to create a small list of processes in a monitor.conf file. A shell script needs to check the status of these processes and restart them if they are down. This shell script needs to be run every couple of minutes.
The output of the shell script needs to be recorded in a log file.
So far I have created a blank monitor.conf file, and I have gotten the shell script to run automatically every couple of minutes. The shell script also sends some default test information to the log file.
How do I go about doing this part: "A shell script needs to check the status of these processes and restart them if they are down"?
I have put the commands below in the conf file, but I am not sure if this is right.
Code:
ps ax | grep httpd
ps ax | grep apache
I also don't know whether the shell script should read from the conf file, or whether the conf file should send information to the shell script.
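The usual arrangement is the first one: the conf file stays a passive list, and the script reads it. Here is a minimal sketch, under my own assumptions that each monitor.conf line holds a process name followed by its restart command, and with hypothetical file paths:

Code:
#!/bin/bash
# Minimal sketch. Assumes each monitor.conf line looks like:
#   httpd /etc/init.d/httpd start      (name, then restart command)
CONF=/etc/monitor.conf     # hypothetical path
LOG=/var/log/monitor.log   # hypothetical path

while read -r name restart; do
    [ -z "$name" ] && continue               # skip blank lines
    if pgrep -x "$name" > /dev/null; then
        echo "$(date): $name is running" >> "$LOG"
    else
        echo "$(date): $name is down, restarting" >> "$LOG"
        $restart >> "$LOG" 2>&1
    fi
done < "$CONF"

A cron entry such as */2 * * * * /path/to/monitor.sh would then run it every couple of minutes.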
I know that you can modify the nice value of a particular process as follows:

Code:
renice 19 -p 4567

However, now I would like to set the nice value of ALL active processes. I am coming from the Windows world, so what I tried was:

Code:
renice 19 -p *

Of course that does not work. Does anyone have a quick solution for how to do this in Linux?
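A sketch of one way to do it: renice accepts a list of PIDs, so you can feed it every PID that ps reports. Note that a regular user can only raise the niceness of their own processes; renice will print errors for the rest and carry on.

Code:
# feed renice every PID on the system; "pid=" suppresses the header
renice 19 -p $(ps -e -o pid=)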
I would like to get a log of all processes that are launched, with the time they were launched and the arguments they were launched with. Is this possible in Linux?
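Yes; one built-in route is the kernel audit subsystem. A sketch, assuming auditd is installed and you have root: each execve is logged with a timestamp and the full argument vector.

Code:
# log every execve system call under the key "exec-log"
auditctl -a always,exit -F arch=b64 -S execve -k exec-log
# later, review the recorded launches with timestamps and arguments
ausearch -k exec-log --interpret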
I am writing code that communicates between two processes created by a fork() call. The parent reads a file, writes the data into shared memory, and sends a signal to the child. The child then receives the signal from the parent and starts reading. After finishing the read operation, the child sends a signal to the parent asking it to resume. Some things are going wrong in my code:
1. Segmentation fault in the memcpy() call.
2. The terminal hangs after running the code.
3. Synchronization problems between the processes.
I have a question. I want to monitor:
- CPU usage, daily
- RAM usage, daily
- hard disk space
- top processes
- hardware failures
What commands do I need to run to output the results to a log file? I know there are solutions, both paid and free, but my company does not allow them; they want it done with built-in Linux commands or methods. I do not know bash scripting. I know some commands, like "df -h" to monitor hard disk space, but I am not sure about the other items.
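As a starting point, here is a minimal sketch using only standard commands; the log path and schedule are my assumptions, and dmesg serves as a rough stand-in for hardware-failure checks, since the kernel logs many hardware errors there.

Code:
#!/bin/bash
# daily snapshot of CPU, RAM, disk and top processes; run from cron:
#   0 8 * * * /usr/local/bin/daily_monitor.sh
LOG=/var/log/daily_monitor.log    # hypothetical path
{
    echo "===== $(date) ====="
    echo "--- load and top processes ---"
    top -bn1 | head -n 15          # batch-mode snapshot, no terminal needed
    echo "--- RAM ---"
    free -m
    echo "--- disk space ---"
    df -h
    echo "--- recent kernel messages (possible hardware errors) ---"
    dmesg | tail -n 20
} >> "$LOG"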
The following code is for monitoring the memory used by Apache processes. But the figure I get from this script is much larger than the physical memory. I was told that some libraries are used simultaneously by many processes, so my figure double-counts those shared pages, because Apache runs many httpd processes.
Does anyone have an idea how to correctly measure the memory used across multiple processes?
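One approach that avoids the double counting, sketched on the assumption of a kernel new enough (2.6.25+) to expose Pss in /proc/<pid>/smaps: Pss, the proportional set size, charges each shared page 1/N to the N processes sharing it, so the values can be summed safely. Run it as root so every httpd's smaps is readable.

Code:
# sum the proportional set size (Pss) of every httpd process
for pid in $(pgrep httpd); do
    awk '/^Pss:/ {kb += $2} END {print kb}' "/proc/$pid/smaps"
done | awk '{total += $1} END {print total, "kB used by httpd"}'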
This script prints a natural number 5 times a second.
3. Then in the second bash window I type (as root):
Code:
The script test2 looks as follows:
Code:
while true; do true; done
During the following 15 seconds, test2 is the process with the highest real-time priority. As far as I know, the script doesn't perform any system calls, so it shouldn't be suspended even for a minimal timeslice. My question is: why does the process test1 manage to print a few numbers on the screen before test2 stops? I thought test2 would exclusively own the processor for 15 seconds.
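A likely explanation, assuming a reasonably recent kernel with default settings: the kernel's real-time throttling reserves a small slice of CPU time for non-realtime tasks, so even a SCHED_FIFO busy loop is briefly preempted. The relevant sysctls:

Code:
# by default realtime tasks may use 950 ms out of every 1000 ms;
# the remaining ~5% is left for ordinary tasks like test1
sysctl kernel.sched_rt_runtime_us    # typically 950000
sysctl kernel.sched_rt_period_us     # typically 1000000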
That would show me at least any active FTP connections started with the ftp command, right? Is there then a way to use that to somehow kill any stuck sessions that are older than an hour?
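Yes, along these lines; this sketch assumes a procps version with the etimes field, which reports elapsed time in plain seconds (older ps only has the harder-to-parse etime):

Code:
# kill every "ftp" process that has been alive for more than an hour
ps -eo pid=,etimes=,comm= |
    awk '$3 == "ftp" && $2 > 3600 {print $1}' |
    xargs -r kill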
I list all the instances of a running process by doing:

Code:
ps -ef | grep myprogram

This lists all of them. How can I simply output a count of how many are running?
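Two common ways: count the grep matches (the [m] bracket trick keeps grep from matching its own command line), or let pgrep count directly.

Code:
ps -ef | grep -c '[m]yprogram'
# or, more simply:
pgrep -c myprogram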
I am studying for the LPIC-1 exam, and reading a book that they recommend: "Introduction to Linux: A Hands-on Guide", by Machtelt Garrels. There's one question in the 4th chapter (Processes) that I found confusing: Question: Based on process entries in /proc, owned by your UID, how would you work to find out which processes these actually represent?
What does he mean? If I run the following command (considering that my username is sl33p):

Code:
ps -u sl33p

...does it give me the right answer?
The ps man page says:

Code:
-u userlist
    Select by effective user ID (EUID) or name.
    This selects the processes whose effective user name or ID is in
    userlist. The effective user ID describes the user whose file
    access permissions are used by the process (see geteuid(2)).
    Identical to U and --user.
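The question is probably after the /proc entries themselves rather than ps output: each numeric directory under /proc is a PID, and its cmdline and exe entries tell you what the process really is. A sketch:

Code:
# for each of your processes, show the command line and the binary
for pid in $(ps -u "$USER" -o pid=); do
    printf '%s: ' "$pid"
    tr '\0' ' ' < "/proc/$pid/cmdline"; echo
    ls -l "/proc/$pid/exe" 2>/dev/null   # symlink to the actual binary
done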
What is the default nice value for processes?

The setpriority() function sets the nice value of a process, all processes in a process group, or all processes for a specified user to the specified value. If the process is multi-threaded, the nice value affects all threads in the process. The default nice value is 0; lower nice values cause more favorable scheduling.
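So the default is 0, as the quoted text says. You can confirm it from a shell: nice with no arguments prints the current niceness, and nice -n starts a command at another value.

Code:
$ nice          # prints the niceness of the current shell
0
$ nice -n 10 sleep 60 &    # starts sleep at niceness 10 instead of 0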
Can anyone explain to me why there are sometimes 10 or 15 processes with the same title and "stats" listed in htop? I'm guessing there are multiple threads running, but that many of them obviously couldn't be running concurrently.
Is there any sort of performance hit if a process uses, say, 15 non-concurrent threads vs. 10 non-concurrent threads?
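To check whether those rows really are threads of one process: in htop, pressing H toggles the display of userland threads, and ps can show the thread count (NLWP) directly. The PID below is hypothetical:

Code:
# NLWP = number of threads in the process
ps -o pid,nlwp,comm -p 1234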
Sometimes you have a process that's been stuck for a while, and as soon as you go to poke at it with strace/truss just to see what's going on, it gets magically unstuck and continues to run! So merely 'observing' these programs has some impact on their running. What's happening here? Did strace (via ptrace(2), I guess?) send a signal, causing the program to cease blocking, or something like that?
I've seen this several times, most recently on RHEL 4 (with a Perl script mucking with processes and doing some network IO in that case), but in a few other contexts as well. Unfortunately, I can't reproduce it, as it tends to happen only in times of crisis. But my curiosity remains.
Code:
user@host$ killall -9 -u user

Will it definitely kill all processes owned by user (including forkbombs)?
Assume the following:
- No new processes are spawned for user by other users.
- None of the user's processes are in D-sleep and unkillable.
- No processes try to detect and ptrace or terminate the started killall (though they may ptrace or do other things to each other).
- A ulimit prevents too many processes (but killall is already started and has allocated its memory).
E.g., if killall finishes untampered and successfully, is it 100% certain that no processes are left with this UID? If not, how do I do it properly (with standard commands and no root access)? Will SysRq+I definitely kill everything (even self-replicating processes)?
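For the forkbomb case, a sketch of the usual non-root approach: as that user, aim a single kill() at PID -1, which signals every process the user is allowed to signal in one call (on Linux it spares init and the calling shell), so nothing can fork its way out between individual kills the way it can with killall's PID list.

Code:
# as the target user: SIGKILL everything this user may signal
kill -9 -1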
I am developing a daemon that is acting up, and I am now unable to create any new processes (i.e., I cannot start a new process to kill the rogue ones). So I need to be able to kill the processes from a remote machine. How do I "kill" remotely without admin privileges? If I cannot kill my own process from a remote machine as a normal user, then tell me so, and I can mark that as the correct answer.
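One hedged possibility, assuming SSH access as that user still works (it may not, if a process limit is what's blocking you, since the login shell itself needs a fork): once you have a shell, kill is a bash builtin, so sending the signal requires no new process.

Code:
ssh user@host        # hypothetical host; opens a shell as the user
$ kill -9 -1         # builtin: signals all of this user's processes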
I have something like:

Code:
cd project && python manage.py runserver &
cd utilities && ./coffee_auto_compiler.py

And I want both of them to close on Ctrl-C (or some other command). How can I accomplish that?

EDIT: I tried using jobs -x kill and kill `jobs -p`, but it doesn't seem to kill what I need. Here is what I mean:

Code:
moon 8119 0.0 0.0  7556  3008 pts/0 S 13:17 0:00 /bin/bash
moon 8120 6.8 0.4 24568 18928 pts/0 S 13:17 0:00 python manage.py runserver

jobs -p gives me just process 8119, but I also need to close 8120, since it's what the first command started. If it helps, the commands are actually in a Makefile, and I want it to run two daemons at the same time (and somehow close them at the same time).
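A sketch of one way to get that behavior: run both commands as background jobs of a small wrapper script, trap INT/TERM, and signal the script's whole process group, so children of the subshells (like the python server at 8120) die too. kill 0 targets the current process group; the command names are taken from your example.

Code:
#!/bin/bash
# on Ctrl-C: clear the trap, then signal the whole process group
trap 'trap - INT TERM; kill 0' INT TERM
(cd project && python manage.py runserver) &
(cd utilities && ./coffee_auto_compiler.py) &
wait

From a Makefile, put these lines in a wrapper script and invoke that from the target, so make runs a single foreground process.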
Using h allows me to hide the table header. Is there a way to tell ps not to print the "pts/13 S+ 0:10 cmd" part, in order to get a list of child process IDs, one per line?
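Yes: "pid=" prints just the PID column with an empty header (one PID per line), and --ppid selects the children of a given parent. The 1234 below is a hypothetical parent PID:

Code:
ps -o pid= --ppid 1234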
I have a Linux server where top reports about 9 GB of swap used. But I cannot figure out where the swap is being used. Some Google results said that top's O command, followed by p, will show swap usage by process. But as shown in the image above, a rough sum of the SWAP column comes to more than 10 GB, so where does the 9 GB figure for swap usage come from? top also reports that about 96492 kB of RAM is used by buffers. Is there anything I can do to utilize that, instead of using swap?
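Part of the discrepancy is that top's per-process SWAP column is computed as VIRT minus RES, which overstates real swap use. On kernels since 2.6.34 you can read actual per-process swap from /proc instead; a sketch:

Code:
# biggest real swap users, from the VmSwap field in /proc/<pid>/status
for f in /proc/[0-9]*/status; do
    awk -v f="$f" '/^VmSwap/ && $2 > 0 {print $2, f}' "$f"
done | sort -rn | head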
All of a sudden, my computer feels sluggish. The mouse moves, but windows take ages to open, etc. uptime says the load is 7.69 and rising. What is the fastest way to find out which process(es) are the cause of the load? Now, "top" and similar tools aren't the answer, because they show either CPU or memory usage, but not both at the same time. What I need is a single command which I might be able to type as it happens, something that will figure out any of: "The system is trying to swap 8 GB of RAM to disk because process X ...", or "process X seeks all over the disk", or "process X uses 400% CPU".
So what I'm looking for is iostat, htop/atop and similar tools rolled into one, with output like this:

Code:
1235 cp        - disk thrashing
  87 chrome    - uses 2 GB of RAM
 137 nfs_bench - uses 95% of the network bandwidth
I don't want a tool that gives me some numbers to analyze, but a tool that tells me exactly which process is causing the current load. Assume that the user in front of the keyboard barely knows how to spell "process" and is quickly overwhelmed by terms like "resident size", "virtual memory" or "process life cycle".
My argument goes like this: the user notices a problem, and there can be thousands of reasons... well, almost. The user wants to know the source of the problem. The current solutions give me lots of numbers, and I need to know what those numbers mean. What I'm looking for is a meta tool. 99% of the data is irrelevant to the problem. What the tool should do is look for processes that hog some resource and list only those, along with: "this process needs a lot of CPU", "this one produces many IRQs", "this process allocates a lot of RAM (and it's still growing)".
That will be a relatively short list. It will be much simpler for a newbie to locate the culprit from this list than from the output of, say, htop, which gives me about 5000 numbers but requires me to fold multi-threaded processes myself (I have 50 lines which say VIRT 2750M but only 16 GB of RAM; the machine ought to swap itself to death, but of course that is a misinterpretation of the data that is easy to make).
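No single built-in tool does exactly this, but as a stopgap while such a meta tool doesn't exist, a sketch of two sorted ps listings narrows the culprit down quickly: the top consumers by CPU, then by resident memory.

Code:
ps -eo pid,pcpu,pmem,stat,comm --sort=-pcpu | head -n 6
ps -eo pid,pcpu,pmem,stat,comm --sort=-pmem | head -n 6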