I did a fresh install of Ubuntu 9.10 and installed some software after that. Since then, some process has been eating half of my memory. I have checked the processes running in System Monitor, but everything looks normal; the biggest consumer is compiz at about 26 MB, which seems very normal. I restarted my computer several times, and for the first 5 minutes everything is fine, but after that my CPU fan runs at very high speed again and one of my CPUs is 95% used (I have a dual core). Please help me out; this invisible thing is driving me crazy. I am attaching my htop screenshot (sorted by CPU %); right now the CPU is not fully used, but the fan is still struggling hard and fast.
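A couple of generic checks that might narrow down what is actually busy, using nothing more exotic than top and ps:

top                                              # press 1 to show per-core usage; note whether the load is %us, %sy or %wa
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head    # top CPU consumers; kernel threads appear in [brackets]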
I'm using SUSE and have 31 GB of memory: Mem: 31908592k total, 31429632k used, 478960k free, 12176k buffers. How do I find out which processes are eating up all my memory?
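A rough way to check this, assuming the standard procps tools, is to sort processes by resident memory and compare against the memory that is actually unavailable once buffers/cache are excluded:

ps aux --sort=-rss | head -n 15    # largest resident-set sizes first
free -m                            # the '-/+ buffers/cache' line is the memory really held by applications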
If I have a CentOS Linux server, how can I stop a user on the server from eating all the memory and swap space, perhaps because of a poorly written script, an infinite loop, etc.?
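One common approach, assuming pam_limits is active (the CentOS default), is to cap that user's per-process address space and process count in /etc/security/limits.conf; the user name and numbers here are only placeholders:

# /etc/security/limits.conf
baduser  hard  as     1048576   # max address space per process, in KB (about 1 GB)
baduser  hard  nproc  100       # max number of processes
# for a single shell session, ulimit -v 1048576 has a similar effect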
I installed Debian Lenny today from the netinst CD. It all went well, then I installed some basic utilities, daemons and libraries such as dbus, hal, libgtk2.0, libasound2, alsa-utils and htop (for checking system load). Then I installed X.org with the command aptitude install -R xserver-xorg xserver-xorg-input-mouse xserver-xorg-input-kbd xserver-xorg-video-intel xinit xterm twm. I used -R to pull in fewer dependencies. When I ran startx, the X server started and I got twm. But when I checked in htop, my memory usage was 175 MB, whereas before starting the X server it was only 25 MB. Why is the X server using so much memory on Lenny? I also have Debian Squeeze on a different partition, and it uses about that much memory with all the GNOME services running plus Iceweasel and aMSN.
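A quick way to check how much of that is really held by the X process itself, rather than by the kernel's page cache, might be something like (assuming procps is installed):

pmap -x $(pidof Xorg) | tail -n 1   # total mapped vs resident memory of the X server process
free -m                             # compare 'used' with the '-/+ buffers/cache' line; cached pages are reclaimable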
I'm running several SHOUTcast server instances and a WowzaMediaServer instance on a CentOS machine. I'm experiencing a memory leak problem, but I can't figure out which processes are eating memory.
The top command reports the following:
[Code]...
Something mysterious to me (I'm still a Linux newbie) is that top reports a total of 7.5 GB of RAM used, but only a very small percentage per process (0-1%). Memory consumption starts at 1 GB/8 GB after a reboot and over three days of running it gradually increases up to 8 GB. I'm practising with Linux, but I still have a lot to learn about what's happening on my system. For instance, are there Linux kernel logs saved somewhere that I can look at?
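On CentOS the kernel ring buffer and syslog are the usual places to look, and free can show whether the 'used' RAM is held by processes or just by the page cache; a minimal check might be:

dmesg | tail -n 50        # recent kernel messages (OOM killer, driver errors, ...)
less /var/log/messages    # kernel and general system log written by syslog
free -m                   # the '-/+ buffers/cache' line excludes reclaimable cache from 'used'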
Xorg takes 700+ MB of RAM, then in a matter of hours it fills the swap, and then the system basically stops responding. Because it is constantly allocating, performance degrades horribly. The interesting thing is that I never had this problem before; recently one of my RAM modules broke (2+2 GB) and now I have only one, but that still doesn't explain the memory overuse. Windows 7 works perfectly fine.
My server keeps hanging, so I have rebooted it several times in the last couple of weeks. The system keeps eating more memory, usage keeps increasing, and at some point it becomes saturated and my server hangs. I could not find which process is eating the memory. I have used the commands below to check whether any process is using too much memory, but with no luck; no process shows high memory usage.
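If no single process accounts for the usage, it may be worth looking at kernel-side memory such as the slab caches; a rough check on a stock kernel could be:

egrep 'Slab|Cached|Committed_AS' /proc/meminfo   # kernel slab caches and page cache
slabtop -o                                       # one-shot list of the largest slab caches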
We had servers that worked fine for years. After updating them to the latest version of CentOS (5.2 with latest updates), they keep hanging when being scanned by PCI vendors (a credit card security standard). Basically, the scan causes the httpd processes to eat up all the memory, and the server becomes unresponsive. Normal operation resumes 5 to 10 minutes after the scan stops. Output from top looks like the following:
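A common mitigation, assuming the default prefork MPM, is to lower MaxClients so Apache cannot fork more children than the RAM can hold, and to recycle children regularly; the values below are placeholders to tune against your average httpd process size:

# /etc/httpd/conf/httpd.conf (prefork section)
MaxClients           60      # roughly: available RAM / average httpd RSS
MaxRequestsPerChild  2000    # recycle children so any leaked memory is returned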
I am running SuSE 11.3 (2.6.34.7-0.7-desktop) on a Dell laptop. I am using an external NAS (QNAP-809pro) that connects to the laptop via iSCSI. When my laptop boots, I get an error that stops the boot process and drops me into the filesystem repair terminal; there I have to comment out the iSCSI lines in /etc/fstab and reboot normally. This is my fstab with the iSCSI mount lines commented out:
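One way to avoid the boot-time failure, instead of commenting the lines out, is to mark the iSCSI mounts as network filesystems so they are only mounted after the network and the iSCSI service are up; a sketch of such an fstab line (the device path and mount point are placeholders):

/dev/disk/by-id/scsi-<lun-id>-part1  /mnt/qnap  ext3  _netdev,nofail,defaults  0 0
# _netdev defers the mount until network storage is available; nofail lets the boot continue if the NAS is unreachable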
I have a computer with 16 GB of RAM. At the moment, top shows that all the RAM is taken (NOT by cache), but the RAM used by the various processes adds up to far less than 16 GB. I have seen this problem several times, but I don't understand what is happening. My only remedy so far has been to reboot the machine.
The first server installed fine. The second server, installed with the same config, went to a "kernel panic - not syncing: no init found, try passing init= option to kernel" error. I tried reinstalling, but it keeps hitting that error on the post-install reboot. The storage is iSCSI connected via an Intel Server Adapter, which allows it to boot from iSCSI. I'm not sure whether that is the cause of the problem, but the first server is connected to the same iSCSI storage and installed just fine.
Is there a way that I can make sure the iSCSI module is installed during installation? I think it is installed, since the installer is able to copy the files and set up /dev/sda; I just want to make sure it is included during setup.
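If the installed system is missing the iSCSI modules from its initrd, one possible fix (paths and options are for CentOS 5; use the installed kernel version if you are working from a rescue environment) is to rebuild the initrd with those modules included:

mkinitrd --with=iscsi_tcp --with=scsi_transport_iscsi -f /boot/initrd-$(uname -r).img $(uname -r)
zcat /boot/initrd-$(uname -r).img | cpio -t | grep -i iscsi   # verify the modules made it into the image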
I have never done this before, but I stood up a new Debian (testing) x64 system. It only has 146 GB available for RAID1, so I created a 500 GB iSCSI LUN on my NAS device on the network, and I'm really confused about how to attach my Debian system to the iSCSI LUN I created. Right now the OS is installed entirely locally on the machine, but I would like the iSCSI LUN to be the /home directory for mail storage. Is this possible, or do I need to mount the LUN on a newly created, locally attached folder/mount point?
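Yes, this should be possible with open-iscsi; a rough outline (the target IP, IQN and device name below are placeholders, not your NAS's real values) is to discover and log in to the LUN, put a filesystem on it, and mount it over /home:

apt-get install open-iscsi
iscsiadm -m discovery -t sendtargets -p 192.168.1.50                        # list the target IQNs the NAS offers
iscsiadm -m node -T iqn.2011-01.com.example:nas.lun0 -p 192.168.1.50 --login
mkfs.ext3 /dev/sdb                                                          # the new LUN; check dmesg for its real name
# /etc/fstab entry so it mounts after the network is up:
# /dev/sdb  /home  ext3  _netdev,defaults  0  2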
I seem to get logged into the NetApp filer fine - it shows the session - but I don't get a device. And don't panic: the system is CentOS 5.4, but it is named OpenVMS (don't ask . . . ). iscsiadm shows me logged into the NetApp and the session is up.
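A couple of things that may be worth checking are whether the session actually reports any attached SCSI devices and whether the LUN is mapped to this initiator's IQN in the NetApp igroup; for example:

iscsiadm -m session -P 3 | grep -i 'attached scsi'   # should list sdX devices for the session
dmesg | tail -n 30                                   # look for SCSI/LUN errors logged after login
iscsiadm -m session -R                               # rescan the sessions for LUNs (supported by recent open-iscsi)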
I've been trying to figure out how to set up a partition on my CentOS server as an iSCSI target, so I can access it from another CentOS client. I've been reading the manuals and various pages on the web, and nothing is very clear. I just want to be able to create a partition on my server, define it as a target, and then have my client initiator mount it.
I have installed CentOS 5. I want to configure an iSCSI target. I know that iSCSI target support is built into CentOS 5. How can I use the iscsiadm utility to configure an iSCSI target? Please describe it in detail.
1.) I want to make 4 drives (4 drives or 4 LUNs) in the iSCSI target, each with 5 GB of space. 2.) Please describe and highlight where we specify the disk space (5 GB).
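As far as I know, iscsiadm is the initiator-side tool; the target side on CentOS 5 is usually provided by scsi-target-utils (tgtd/tgtadm). A sketch of creating one target with file-backed 5 GB LUNs follows; the IQN and paths are placeholders, and the 5 GB is defined by the size of the backing file created with dd:

yum install scsi-target-utils && service tgtd start
dd if=/dev/zero of=/srv/iscsi/lun1.img bs=1M count=5120        # <-- this is where the 5 GB is set
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2011-01.com.example:storage.disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /srv/iscsi/lun1.img
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL      # allow all initiators; restrict this in production
# repeat the dd line and the logicalunit line with --lun 2, 3, 4 for the other three 5 GB LUNs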
I have an iSCSI device that is large, ~17 TB usable, and I've created an XFS filesystem at the device level. I can mount the device without a problem and am able to touch and remove files without issue. I start my application, and within about 5-10 minutes I start seeing the following entries appear in /var/log/messages:
I built a CentOS 5 Linux cluster with GFS storage using a local RAID volume and share it with gnbd_export/import on two web servers. Now I need to expand that storage onto another server's local volume. I saw the picture in the manual, but I don't know how to create that scheme.
I can use gnbd_export on the second server and gnbd_import on the first. In that case I will have two volumes on the first storage node and I can expand the volume group, logical volume, etc. on it.
I have a server which is connected to iSCSI storage and gets hard disks from this storage. Sometimes I have to add new disks to this server. Every time I add a disk and run /etc/init.d/iscsi restart on the server, the disks no longer have the same device names as before the iSCSI restart.
It should be possible to give the disks persistent names using udev rules. So I tried creating different rules in "/etc/udev/rules.d/99-static-iscsi-names.rules", e.g.
# /dev/sdc
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s %p", RESULT=="360a98000503355344c4a576864467877" NAME=="sdc%n"

In "/etc/rc.local" I added "/sbin/start_udev", and in "/etc/scsi_id.config" I added the line "vendor="NETAPP",model="LUN",options=-g".
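One detail that may be tripping this up: in udev rules "==" is a comparison while "=" (or "+=") is an assignment, and creating a stable symlink is usually safer than renaming the kernel device. A hedged rewrite of the rule above (same scsi_id, placeholder link name) might look like:

# /etc/udev/rules.d/99-static-iscsi-names.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -s %p", RESULT=="360a98000503355344c4a576864467877", SYMLINK+="iscsi/netapp-lun0%n"

after which the disk should always be reachable as /dev/iscsi/netapp-lun0 no matter which sdX name it lands on.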
I have a fairly simple iSCSI setup using two devices, but they have swapped names on different machines. I'm running CentOS 5.3 ia64 and using iscsi-initiator-utils-6.2.0.868-0.18.el5.
vm1:
[root@vm1 ~]# fdisk -l
Disk /dev/xvda: 4194 MB, 4194304000 bytes
255 heads, 63 sectors/track, 509 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
[Code].....
Is there any way to get iSCSI to present the devices under consistent device names?
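Rather than relying on the sdX order, it is often easier to mount by a persistent identifier that udev already creates for iSCSI LUNs; for example (the UUID below is a placeholder for your own blkid output):

ls -l /dev/disk/by-id/ /dev/disk/by-path/   # persistent names for the same LUN on every machine
blkid /dev/sdb1                             # filesystem UUID
# fstab entry that is identical on both machines:
# UUID=1234-abcd   /data   ext3   _netdev,defaults   0  2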
I am trying to install iscsi-target on CentOS 5.4 i386. When I go into the iscsi-target directory and type make, I get the following error:
[root@cluster-storage iscsitarget-1.4.19]# make
cc: /lib/modules/2.6.18-164.el5/build/include/linux/version.h: No such file or directory
cc: no input files
/bin/sh: line 0: [: -le: unary operator expected
/bin/sh: line 0: [: -le: unary operator expected
/bin/sh: line 0: [: -le: unary operator expected
/bin/sh: line 0: [: -le: unary operator expected
/bin/sh: line 0: [: -le: unary operator expected
/bin/sh: line 0: [: -le: unary operator expected
/bin/sh: line 0: [: -le: unary operator expected
/bin/sh: line 0: [: -le: unary operator expected
/bin/sh: line 0: [: -lt: unary operator expected
make -C usr
make[1]: Entering directory `/opt/iscsitarget-1.4.19/usr'
.....
make: *** [kernel] Error 2
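The "version.h: No such file or directory" error usually means the kernel headers for the running kernel are not installed, so the kernel-module part of iscsitarget cannot build; on CentOS something along these lines typically fixes it:

yum install kernel-devel-$(uname -r) gcc make openssl-devel   # headers matching the running kernel, plus common build deps
make && make install                                          # re-run the build once /lib/modules/$(uname -r)/build exists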
I'm trying to configure an iSCSI/DRBD high-availability cluster, and I'd like to know which is the better option between OpenAIS and Heartbeat. I've seen that both are included in the CentOS repos, yet OpenAIS requires adding two additional repos to install Pacemaker (EPEL and the Clusterlabs repo).
We have a CentOS 5.6 server mounting two iSCSI volumes from an HP P2000 storage array. multipathd is also running, and this had been working well for the few months we have been using it. The two volumes presented to the server were created in LVM and worked without problems. We had a requirement to reboot the server, and now the iSCSI volumes will no longer mount. From what I can tell, the iSCSI connection is working OK, as I can see the correct sessions, and if I run 'fdisk -l' I can see the iSCSI block devices, but the OS isn't seeing the filesystems. No LVM command shows the volumes at all, and 'vgchange -a y' only lists the local boot volume, not the iSCSI volumes. My concern is that the output of 'fdisk -l' says 'Disk /dev/xxx doesn't contain a valid partition table' for all the iSCSI devices. Research suggests that running 'vgchange -a y' should automatically activate any VGs that aren't showing, but it doesn't work.
There's a lot of data on these iSCSI volumes, and I'm no LVM expert. I've read that some people have had problems where LVM starts before iSCSI and things get a bit messed up, but I can't tell whether that is the case here; if there is a way of switching that ordering around that might help, I'm prepared to give it a go. There was absolutely no indication of any problems with these volumes before the reboot, so corruption is highly unlikely.
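One low-risk thing to try, once the iSCSI sessions are logged in, is simply making LVM rescan the now-present block devices; for example:

iscsiadm -m session        # confirm both sessions are logged in
pvscan                     # should now list the physical volumes on the iSCSI disks
vgscan && vgchange -ay     # re-detect and activate the volume groups
lvs                        # check that the logical volumes are visible again
# Note: fdisk reporting "doesn't contain a valid partition table" is normal when a PV was created
# on the whole disk rather than on a partition, so by itself it does not indicate corruption.
# If the VGs only appear after a manual pvscan, the boot ordering (iscsi before netfs/LVM activation,
# or _netdev on the mounts) is the likely culprit.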
I have been assigned a school project on detecting memory leaks in Linux processes. I am reading up on it, but I have found it hard and inefficient to work through the very vast documentation without knowing what to really look for. Could you please give me some guidelines on this subject?
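For userspace processes the usual starting point is valgrind's memcheck tool, plus simply watching a process's resident size over time; a minimal sketch (./your_program and <PID> are placeholders):

valgrind --leak-check=full ./your_program            # reports 'definitely lost' blocks with the allocating stack trace
while true; do ps -o rss= -p <PID>; sleep 60; done   # crude growth check for a process you cannot run under valgrind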
Is there any command to get the memory utilization of a particular process in Linux? I tried top and /proc/<pid>/status, but the results don't seem right; the reported memory keeps increasing. Can anyone suggest something other than top and /proc/<pid>/status?
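One thing that may help is breaking the number down, so you can see whether the growth is resident memory or just address space; assuming procps is installed (<PID> is a placeholder):

pmap -x <PID> | tail -n 1          # total mapped (Kbytes) vs resident (RSS) for the process
ps -o pid,rss,vsz,cmd -p <PID>     # a growing VSZ with a flat RSS is usually not a real leak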
I need to know which process is using memory in my machine, and for what purpose. The ps utility with various options doesn't give quite what I want: if I sum values like RSS, VSZ or other memory-related columns, the total is not equal (even approximately) to what I get from free | grep "buffers/cache". How can I get this information? Even better, I would like to see the contribution of every process, ramdisk, etc. to memory usage.
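The RSS columns double-count shared pages, which is why they never add up to the figure from free. If the kernel is new enough to expose Pss (proportional set size) in /proc/<pid>/smaps, summing that gets much closer to the per-process share of used memory; a rough sketch:

for f in /proc/[0-9]*/smaps; do awk '/^Pss:/ {s += $2} END {print s+0}' "$f" 2>/dev/null; done | awk '{t += $1} END {printf "%.0f MB total PSS\n", t/1024}'
free -m   # whatever remains of 'used' minus buffers/cache is kernel memory (slab, page tables, ...)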