CentOS 5 :: Memory Leak Bug On Production Server
Mar 20, 2009
My server [URL] goes down in some cases.
Under normal conditions the load average is 0.15-0.2, and only httpd (with PHP) and MySQL are running on it.
Here are some screenshots from when the server goes down: [URL]
Here is a report of the same bug as mine: [URL]
View 5 Replies
Jan 24, 2011
root@XXXXX:~# uname -a
Linux myserver 2.6.18-194.11.4.el5 #1 SMP Tue Sep 21 05:40:24 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
I'm having serious memory leak issues on a server running CentOS. Running 'top' I can't find any process with unusual memory usage. Is there any other way to check what is using this memory? Right now it shows that 4.8GB of RAM is used, but top shows only a few processes, one at 4% and lots at 0.0%.
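A few standard places to look when top does not account for the usage (a sketch, not from the thread; the paths and commands are the stock ones on CentOS 5):
Code:
# How the kernel itself accounts for the memory: free, buffers, cache, slab
egrep 'MemTotal|MemFree|Buffers|^Cached|Slab' /proc/meminfo

# Per-process resident memory, biggest first (catches what top scrolls past)
ps aux --sort=-rss | head -20

# Kernel slab caches - growth here never shows up against any one process
slabtop -o | head -25
If the big number turns out to be Cached, that memory is page cache and is reclaimed automatically; if it is Slab, the kernel (or a driver) is holding it.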
View 9 Replies
View Related
Jan 4, 2010
I updated several packages on one of my servers on Dec 21st and have been seeing excessive swapfile usage since then. The problem process seems to be httpd which in our environment runs a subversion server as well as serving a number of php pages over https. At present I am having to bounce apache approximately every 5 days as it has used all 8GB swap in that time.
[Code]..
Of the updates listed as installed, the only one that looks likely to affect apache is glibc. Looking at the stats from sar -r I can see swap usage increasing by approx 3% (of 8GB) every hour.
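One way to tell whether individual httpd children are growing (as opposed to there simply being too many of them) is to log their resident sizes over time; a rough sketch, assuming the stock CentOS httpd process name:
Code:
# Append a timestamped snapshot of the fattest httpd children every 10 minutes
while true; do
    date
    ps -C httpd -o pid,rss,vsz,etime --sort=-rss | head -6
    sleep 600
done >> /var/log/httpd-rss.log
If the children do grow steadily, a lowish MaxRequestsPerChild in the prefork configuration recycles them before they get too fat, which contains the damage while the real leak (often a PHP extension or, here, possibly mod_dav_svn) is tracked down.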
View 3 Replies
View Related
Jun 3, 2010
I am facing a memory leak problem whenever I try to connect to the server using the (). How do I get rid of this problem?
View 1 Replies
View Related
Mar 2, 2011
I'm afraid I have a huge issue with my newest Fedora 14 server. I recently migrated to Fedora 14 from CentOS 5, which was very stable but had ancient packages and libraries, and my users were revolting... The machine is an HP ProLiant 360 G7 with 12 GB RAM and 6 SAS drives in RAID 5. After I migrated to Fedora 14, I noticed that for some reason, over the course of about 24 hours, all usable RAM "disappears" and applications are forced down to swap space. Needless to say I didn't have this issue on CentOS.
The server does heavy IO as part of its function (it's a heavily loaded file processing server and user simulation computing station among other things, which causes lots of random IO), so I thought it might be the cache, but then I realized it cannot be - Linux only uses "unused" RAM for caching and frees it as soon as an application needs it. Then I thought to check slabtop to see what's going on in kernel memory. Unfortunately I don't have the screenshot from just before the latest crash, but there's a certain value displayed by slabtop which slowly, byte by byte, creeps over all available RAM, eventually forcing applications down to swap. This is malloc-64, and as you can see from the copy-paste below, it's building up again even now...
Code:
Active / Total Objects (% used) : 9118075 / 9153600 (99.6%)
Active / Total Slabs (% used) : 152157 / 152157 (100.0%)
[code]...
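For what it's worth, a quick check (not from the thread) of whether that slab growth is reclaimable cache or a genuine kernel-side leak:
Code:
# Reclaimable vs. unreclaimable slab - only SUnreclaim is truly stuck
grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo

# Ask the kernel to drop clean page cache and reclaimable slabs, then compare
sync
echo 3 > /proc/sys/vm/drop_caches
grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
If the number keeps climbing even after dropping caches, that points at a kernel or driver leak rather than normal caching.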
View 4 Replies
View Related
Jun 3, 2010
I am facing a memory leak problem whenever I try to connect to the server using the ().
View 2 Replies
View Related
Jan 29, 2011
I have done a fresh install of Ubuntu 9.10 and installed some software after that. Since then, some process has been eating half of my memory. I have checked the processes running in System Monitor but everything looks normal; the maximum is consumed by compiz, at about 26 MB, which seems very normal. I restarted my computer several times, and for the first 5 minutes it's fine, but after that my CPU fan runs at very high speed again and one CPU core is at 95% (I have a dual core). Please help me out, this invisible thing is driving me crazy. I am attaching my htop screenshot (sorted by CPU %); right now the CPU is not fully used, but the fan is still struggling hard.
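A couple of stock commands that usually flush out an "invisible" hog (a sketch, not from the thread; pidstat comes from the sysstat package):
Code:
# Biggest CPU and memory consumers right now
ps aux --sort=-%cpu | head -15
ps aux --sort=-rss  | head -15

# Per-process CPU sampled every 5 seconds for a minute, which catches
# things that spike between top refreshes
pidstat 5 12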
View 9 Replies
View Related
Mar 3, 2011
I'm currently developing a program in C++, using Qt, for an embedded board (SBC9261). It works well but crashes after some time due to a system memory overload (my program uses more and more memory until it reaches 100%, at which point it crashes). I've been able to figure out the source of the memory leak: the function f is called by my program every second. f instantiates a new object (a QImage from the Qt library), does a bunch of processing on it, and returns it to the calling function:
Code:
QImage *MyClass::f(QString filename)
{
    // Open image. The caller receives ownership of this heap-allocated QImage
    // and must delete it when finished, otherwise every call leaks one image.
    QImage *image = new QImage(filename);
    [code]....
View 5 Replies
View Related
Dec 16, 2010
I need to write a small program that eats away at available memory. I need to create a memory leak to test how other programs cope. I need to run this program on Linux and see if the available memory is decreasing.
So I have done:
Code:
#include <stdlib.h>
#include <string.h>
int main(void)
{
    int *buffer;
    while (1) {                                     /* allocate forever, never free */
        buffer = malloc(1024 * 1024);               /* leak 1 MB per pass */
        if (buffer) memset(buffer, 1, 1024 * 1024); /* touch the pages so they are really committed */
    }
}
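To watch the effect while it runs (free memory should drop steadily until the allocations start failing or the OOM killer steps in):
Code:
watch -n 1 free -m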
View 10 Replies
View Related
Feb 3, 2009
I have 4 gigs of RAM and Xorg gobbles up 70+% of it in a matter of an hour or so. I'm running a Java app with lots of 2D graphics, Firefox, Pidgin, and one other Java app. I recently upgraded to KDE 4.2 and got dual ATI FirePro 3700 cards running three 22" widescreens. The memory gets eaten until my Java app crashes.
I'm using the fglrx driver, as it seems the radeonhd one doesn't recognize the FirePro 3700. Can one only have a single KDE4 version on the machine? I would like to be able to switch back and forth between 4.1 and 4.2 in order to troubleshoot without all the downloading and installing in YaST.
View 2 Replies
View Related
Mar 5, 2010
Is there any link where I can get information about the items below?
Dirty memory
RSS
PSS
One more?
One more question: if a set of processes gets executed in a use case, say, 50 times, how does one know the memory leak for a particular process?
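Documentation for these fields lives in the kernel source (Documentation/filesystems/proc.txt). For measuring a single process across runs, a rough sketch using /proc/<pid>/smaps (the PID is a placeholder; Pss only appears on reasonably recent kernels):
Code:
PID=1234   # the process under test
awk '/^Rss:/   {rss += $2}
     /^Pss:/   {pss += $2}
     /_Dirty:/ {dirty += $2}
     END {printf "RSS %d kB  PSS %d kB  Dirty %d kB\n", rss, pss, dirty}' \
    /proc/$PID/smaps
Snapshot it before and after each of the 50 iterations; if RSS/PSS keeps ratcheting up between iterations, that process is accumulating memory on its own account.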
View 1 Replies
View Related
Oct 26, 2010
I have encountered a problem where memory usage keeps increasing while my program runs, until only 1 MB is left free, and then it stays at that level. A part of the program is this:
Code:
#define WRITE_BUF_SIZE (1024*1024)
void post_data(const void *data, unsigned long size)
[code]....
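Before treating this as a leak, it may be worth checking whether the "used" memory is just the page cache filling up behind those writes, which is normal kernel behaviour; a quick way to tell (standard commands, the program name is a placeholder):
Code:
# The "-/+ buffers/cache" line is what applications actually occupy;
# the kernel hands cache back automatically when programs need it
free -m

# RSS of the program itself - if this stays flat, the program is not leaking
ps -o pid,rss,vsz,cmd -C your_program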
View 3 Replies
View Related
Mar 12, 2009
Is it possible to set up software RAID on a server that is already in production?
If so, how would I go about doing so?
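It is possible, provided there are spare disks to mirror onto and some downtime is acceptable; a minimal mdadm sketch for a new RAID 1 on two unused disks (device names are examples only, double-check them before running anything):
Code:
# /dev/sdb1 and /dev/sdc1 are assumed to be empty, identically sized partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf   # persist the array definition
mount /dev/md0 /mnt/newraid
Migrating the existing system disk onto RAID 1 after the fact is also doable (build a degraded mirror, copy the system over, then add the old disk), but that is a much longer procedure and wants a maintenance window.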
View 2 Replies
View Related
May 15, 2010
I can confirm my machine's GLX version is 1.4, and I definitely get the slowdown problem. However, this is what I get after running the following in a terminal:
Code:
grep "object bytes" /sys/kernel/debug/dri/0/gem_objects
grep: /sys/kernel/debug/dri/0/gem_objects: No such file or directory
What am I doing wrong?
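A likely explanation (an assumption, since the thread doesn't confirm it): that file only exists once debugfs is mounted, and only for drivers that expose GEM statistics there. Worth trying:
Code:
# debugfs is not mounted by default on many setups
sudo mount -t debugfs none /sys/kernel/debug
ls /sys/kernel/debug/dri/0/        # see what this driver actually exposes
grep "object bytes" /sys/kernel/debug/dri/0/gem_objects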
View 1 Replies
View Related
May 21, 2010
I don't know if this issue has already been solved, or if there are other threads dealing with it, but I'm quite desperate about this bug that has been annoying me since I upgraded from 9.10 to 10.04.
First I thought it would be GEM related, but I'm using a proprietary graphics card, so that's not the way to solve it. I experience the problems a memory leak provokes, mainly that, as time goes by with the laptop turned on, the system needs more resources to execute simple actions, and it freezes too many times. Plus, IBus will suddenly stop working (I still don't know if it is related or not), and most of the times I log in, Notification Area 2.30.0 loads with errors.
View 3 Replies
View Related
Jun 12, 2011
I'm having some strange issues with Apache. From time to time it segfaults, eats all available memory (including swap) and makes the server unresponsive. Ubuntu Server 10.04.2 LTS.
Some strange logs:
Jun 12 12:00:18 *** kernel: [40767.969443] apache2[7635]: segfault at 726f7272 ip 00007f13a31f3f16 sp 00007fff6f740ea0 error 4 in libapr-1.so.0.3.8[7f13a31d7000+35000]
[code]....
View 3 Replies
View Related
Nov 18, 2010
I installed Ubuntu 10.4 and everything works great "out of the box"! I didn't install any drivers. But there is a problem, a major one... When I type "top" right after a reboot, around 1GB of RAM is in use. After a few minutes it grows to 2GB, even 2.5GB, for no reason. Luckily my machine has 6GB of RAM, but it's still a major issue for me. I read on another forum about a user with the same problem on the same Lenovo machine, and he solved it by installing the newest driver for his ATI device. I don't have ATI, but I have the impression that I can solve my problem the same way. Do you have any idea what driver may cause that? I checked the Intel site for a new Intel HD Graphics driver, but they don't have one for Linux.
View 2 Replies
View Related
May 27, 2011
I'm using Fedora 15 with Gnome 3 on a 32 bit laptop. I noticed that there seems to be a huge memory leak issue with Gnome shell. When I restart it, it is around 20 MB. But it keeps rising, and after around eight hours, I noticed it was around 250 MB! I found a solution online that said to simply restart the shell if the gnome-shell memory consumption becomes too large. While this is fine as a temporary solution, I am looking for a permanent one. Is there a way to minimize/prevent the memory leak other than waiting for the next version of Gnome 3?
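For the record, the usual in-place restart (no logout required) is pressing Alt+F2 and entering r, or from a terminal something like:
Code:
# Replaces the running shell without ending the session
gnome-shell --replace &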
View 3 Replies
View Related
Mar 26, 2010
Brand new to Linux. Sort of got thrown in front of the bus, if you know what I mean. The company I work for has a Linux server running CentOS 5.4; the company uses Linux for their email, FTP and web server. I have been here a few years, dabbling in and out of Linux, and now that the old admin has left the company... I need to learn it ASAP. The server has run pretty solid until today.
The email server runs Sendmail and SpamAssassin. We received lots of complaints today regarding extra spam, and I noticed that SpamAssassin was not running. I tried to restart it through the Webmin tools and got the following error: Starting spamd: child process [3956] exited or timed out without signaling production of a PID file: exit 255 at /usr/bin/spamd line 2588.
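A hedged suggestion (not from the thread): the exit-255 message from Webmin usually hides the real error, so running spamd by hand with debugging, and lint-checking the rules, tends to expose it:
Code:
# Check the SpamAssassin configuration and rules for problems
spamassassin --lint

# Run spamd in the foreground with debug output instead of via the init script
spamd -D

# And see what the mail log recorded when the init script failed
tail -50 /var/log/maillog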
View 1 Replies
View Related
Aug 27, 2010
I just ran into a weird problem with a CentOS 5.5 64-bit server running VirtualBox 3.2.8 (I would run Vmware Server 2.0.2 if not for the well known fact that Vmware doesn't care about its Server line anymore and it doesn't run on CentOS > 5.3 without major splits). I currently have two guests in that VirtualBox setup, a CentOS 5.5 64-bit and a Fedora 13 64-bit. The CentOS 5.5 guest shows less memory available than configured. If, for example, I give the virtual machine 512MB of memory the guest OS only recognizes 380MB. If I give it 768MB it only recognizes 637MB, and so on. I don't have that problem on the Fedora guest - 1024MB configured, 1024MB available.
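A diagnostic sketch (an assumption, not from the thread) for finding where the "missing" guest memory went; one common culprit on CentOS guests is memory reserved at boot, e.g. for a crashkernel:
Code:
# What the guest kernel believes it was given, vs. what it reserved at boot
grep MemTotal /proc/meminfo
dmesg | grep -i -e "Memory:" -e crashkernel -e reserv
cat /proc/cmdline        # look for a crashkernel=... entry here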
View 13 Replies
View Related
Jun 23, 2009
I have problems with my failover machines and can't locate the cause. The last update was quite some time ago, so will it hurt my configuration if I do a yum update? I have 5 Xen VMs on each machine, and they replicate with DRBD and Heartbeat. Can I just update the dom0 and leave the VMs inside untouched?
View 5 Replies
View Related
Sep 11, 2009
My server keeps freezing up requiring a hard reboot.
CentOS release 5.3 (Final)
httpd-2.2.3-22.el5.centos.2
Here is the error in /var/log/message
Sep 11 00:16:20 localhost kernel: httpd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
Sep 11 00:19:14 localhost kernel: [<c0459e7d>] out_of_memory+0x72/0x1a5
Sep 11 00:19:14 localhost kernel: [<c045b352>] __alloc_pages+0x216/0x297
Sep 11 00:19:14 localhost kernel: [<c045c5bf>] __do_page_cache_readahead+0xc4/0x1c6
Sep 11 00:19:14 localhost kernel: [<c0436d9a>] ktime_get_ts+0x16/0x44
[Code]...
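Not from the thread, but the usual arithmetic for an httpd-triggered oom-killer: the box has to be able to hold MaxClients children at their worst-case size without dipping into swap. A rough way to measure that:
Code:
# Average and peak resident size of the current httpd children, in MB
ps -C httpd -o rss= | awk '{n++; sum+=$1; if ($1>max) max=$1}
    END {printf "children %d  avg %.0f MB  max %.0f MB\n", n, sum/1024, max/1024}'
MaxClients (in the prefork section of httpd.conf) should then be no more than the RAM you can spare for Apache divided by that per-child figure.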
View 5 Replies
View Related
Mar 18, 2010
I'm looking for a way to put a MySQL database in a server's memory. The disks aren't fast enough to keep up with the usage, and I don't feel like going to a split web and DB server setup yet because of the costs.
Because this involves risks (unless there's a way to read from memory but write to both memory AND disk?), I'd prefer that the DB gets copied automatically to the local disks every hour or so.
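One crude way to get exactly that, the data files in RAM with periodic copies back to disk, is a tmpfs datadir plus an hourly rsync; a sketch only (paths and sizes are examples), and it really does mean losing up to an hour of writes if the box dies:
Code:
# Put the MySQL datadir on a RAM-backed filesystem, seeded from the disk copy
mount -t tmpfs -o size=2g tmpfs /var/lib/mysql
rsync -a /var/lib/mysql.disk/ /var/lib/mysql/
service mysqld start

# Hourly copy back to disk (crontab entry); for a consistent copy pair this
# with FLUSH TABLES WITH READ LOCK or a mysqldump instead
0 * * * * rsync -a --delete /var/lib/mysql/ /var/lib/mysql.disk/
The less hair-raising options are the MEMORY storage engine for specific tables, or simply a much larger InnoDB buffer pool, which gives most of the read benefit without the data-loss risk.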
View 1 Replies
View Related
Jul 28, 2010
I have the Xen kernel on a 5.4_64 box, though it seems to have always done this regardless of version. When I add a virt, the control set shows the total machine memory minus what I just allocated to the new virt. This also shows up in the system monitor as the total available RAM. The problem arises in deleting and making new virts: the memory never reappears as usable after deleting virts. So now, after testing several different setups, I'm down to 1.4 GB showing available on the dom0. What can I do to recover this lost memory? I've searched, read, and looked everywhere I could think of, and there just doesn't seem to be any information about deleting virts, only adding them.
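An assumption, since the post doesn't show the configuration: by default dom0 balloons down to free memory for new guests and does not take it back when they are destroyed. It can usually be reclaimed by hand, or pinned at boot:
Code:
# Hand memory back to dom0 after destroying guests (value in MB)
xm mem-set Domain-0 4096

# Or pin dom0's memory permanently: add dom0_mem=4096M to the xen kernel
# line in grub.conf and set (dom0-min-mem 4096) in /etc/xen/xend-config.sxp,
# then reboot.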
View 5 Replies
View Related
Jul 30, 2011
I have a web server with the specs below, and my Apache server is being a hog, using 7 or 8 GB of my RAM. When there is a rush of traffic all at once, my whole server crashes and I have to restart Apache. The way my site is set up, I have a tube script that I use to host videos on my forum; there are 1000 videos on the tube script. I bought a bigger server with more RAM because of the downtime I have been having. I am really trying to figure out why it's crashing and using so much RAM. I installed eAccelerator, but it didn't seem to help with the Apache server.
Intel Quad Core Xeon X3430 (4 x 2.40 GHz, 8MB Cache)
> 2-bay Supermicro Chassis and Motherboard
> 8 GB REG ECC DDR3 (twice your current setup)
> 250 GB Enterprise Grade SATA II
> 10 TB Bandwidth 1gig Uplink Port
> CentOS 64 Bit (Latest Stable)
Screenshots of the top command and free -m are attached; this is with only 160 people online at once.
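Not an answer from the thread, but the usual back-of-envelope check for a crash-under-rush pattern: MaxClients multiplied by per-child memory has to fit in the RAM left after MySQL and the OS, otherwise a traffic spike pushes the box into swap and it dies. A sketch of the arithmetic (the MySQL/OS figures are examples, measure your own):
Code:
TOTAL=8192; MYSQL=1500; OS=500        # all in MB, example figures only
PER_CHILD=$(ps -C httpd -o rss= | awk '{s+=$1; n++} END {print int(s/n/1024)}')
echo "MaxClients ceiling: $(( (TOTAL - MYSQL - OS) / PER_CHILD ))"
Set MaxClients (and ServerLimit) in the prefork section of httpd.conf to at most that number, and keep KeepAliveTimeout short for video-heavy traffic.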
View 6 Replies
View Related
Jan 7, 2011
This is my first time setting up a production web server, and I have a few questions about our migration:
1. Our website at the web hosting company is already getting 5,000,000 hits/month and 35,000 unique visitors/month. The problem is we only have a 2x4 Mb dedicated line here in the office and one IBM x3650 M3 for our LAMP stack. Do you think that is enough to handle that kind of traffic if we move our web server into the office? (See the rough arithmetic after this list.)
2. If I register www.example.com with GoDaddy, for example, do I still need to set up a DNS (BIND) server on our side?
3. This is my current Apache config:
Apache/2.2.3 (CentOS) DAV/2 mod_fcgid/2.3.6 mod_auth_kerb/5.1 PHP/5.1.6 mod_python/3.2.8 Python/2.4.3 mod_ssl/2.2.3 OpenSSL/0.9.8e-fips-rhel5 mod_perl/2.0.4 Perl/v5.8.8 with PHP eAccelerator.
Anything to share to increase the performance of the web server?
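On question 1, the raw arithmetic is easy to sketch (the per-hit size is a guess; substitute the real average from the access logs):
Code:
# 5,000,000 hits spread over a 30-day month
echo "scale=2; 5000000 / (30*24*3600)" | bc                    # ~1.93 hits/s average
# Assuming ~50 KB transferred per hit:
echo "scale=2; 5000000 * 50 * 8 / (30*24*3600) / 1000" | bc    # ~0.77 Mbit/s average
So the 2x4 Mb line covers the average with room to spare, but peak-hour traffic is typically several times the average, and the same line also has to carry office and mail traffic; the peak figure is the one to size against.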
View 2 Replies
View Related
Feb 24, 2010
I administer several web hosting (combined with mail relays and other services) production servers under Debian GNU/Linux. I began offering these public services two years ago via three boxes: the first is a gateway which controls traffic via iptables (it's attached to a DSL modem) between a public subnet (the DMZ) and a local network which connects several workstations. In the DMZ subnet I maintain two Pentium III-era boxes; they've grown in services since I set them up. Actually, I think I should buy new ones, but, you know, I want to save money and lengthen their lives.
So, they've grown in data hosted, but I've never implemented a resilient backup system. I've set up some rsync tasks scheduled via cron jobs to copy the entire UNIX file system on each of the DMZ boxes, but I'd like to be prepared before an unexpected "real" crash of some HDD, I mean, some problem that renders a disk unusable.
AFAIK, sysadmins keep entire synced HD backups which are capable of recovering a system by swapping the unusable unit with the backup unit. Maybe the best approach is to implement RAID, mirroring the unit, am I right? So, keeping my systems as they are, I mean, capable of using 4 parallel ATA units, what would you do? Use dump, rsync or some other way to keep an operational second unit with an exact copy on a bootable second drive, in order to quickly swap it in if the main unit fails?
It comes to mind to partition a second unit (making it bootable) and back up daily via rsync only those parts of the Unix file system hierarchy which are necessary to boot a system properly. What do you think about this workaround?
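That workaround is a fairly common one; a minimal sketch, assuming the second disk is already partitioned, formatted and mounted at /mnt/clone (device names and paths are examples):
Code:
# Mirror the live system onto the spare disk, without crossing into /proc, /sys etc.
rsync -aHx --delete --exclude='/proc/*' --exclude='/sys/*' --exclude='/mnt/*' / /mnt/clone/

# Make the clone bootable and able to mount its own root
grub-install --root-directory=/mnt/clone /dev/sdb   # /dev/sdb = the spare disk
vi /mnt/clone/etc/fstab                              # point / at the spare's own partition
RAID 1 via mdadm is the other obvious route and protects in real time, but it also faithfully mirrors an accidental rm -rf, so the two approaches guard against different failures and arguably both are worth having.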
View 6 Replies
View Related
Jan 24, 2011
I have a production server running RHEL 4.0 with 2x146 GB in a RAID 1 holding the OS, and another 2x300 GB RAID 1 holding the application; it hosts the database and the application.
No LVM was installed or configured before, and now the second array, the 300 GB mirror, is running out of disk space.
1. I have 2 new HDDs to build another 2x300 GB mirror.
How can I create an LVM setup so I can use vgextend any time I need to?
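A rough sketch of one way to do it with the two new disks (device names are examples, and this does not touch the existing arrays): make the new mirror a physical volume, so each future mirror can be added to the same volume group with vgextend.
Code:
# New RAID 1 from the two new 300 GB disks
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

# LVM on top of it
pvcreate /dev/md2
vgcreate appvg /dev/md2                  # "appvg" is just an example name
lvcreate -n applv -L 250G appvg
mkfs.ext3 /dev/appvg/applv

# Later, when another mirror (say /dev/md3) is built:
#   pvcreate /dev/md3 && vgextend appvg /dev/md3
#   lvextend -L +250G /dev/appvg/applv, then grow the filesystem
#   (ext2online on RHEL 4, resize2fs on newer releases)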
View 4 Replies
View Related
Jun 22, 2009
We had servers that worked fine for years. After updating them to the latest version of CentOS (5.2 with the latest updates), they keep hanging when being scanned by our PCI vendor (a credit card security standard). Basically, the scan causes the httpd processes to eat up all memory, and the server becomes unresponsive. Normal operation resumes 5 or 10 minutes after the scan stops. Output from top looks like the following:
Mem: 1018988k total, 1007168k used, 11820k free, 432k buffers
Swap: 2096440k total, 2096440k used, 0k free, 4528k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13601 apache 16 0 289m 102m 456 D 4.0 10.3 0:15.25 httpd
12836 apache 16 0 330m 101m 232 S 2.4 10.2 0:17.57 httpd
12834 apache 16 0 341m 100m 292 D 8.9 10.1 0:17.70 httpd
12837 apache 16 0 317m 99m 456 D 6.0 10.0 0:17.66 httpd
12839 apache 15 0 327m 97m 232 S 0.0 9.8 0:17.37 httpd
13590 apache 15 0 287m 96m 228 S 0.7 9.7 0:15.18 httpd
12833 apache 15 0 333m 96m 232 S 2.2 9.7 0:17.58 httpd
12835 apache 15 0 322m 95m 232 S 0.4 9.6 0:17.50 httpd
12840 apache 15 0 310m 88m 232 S 6.9 8.9 0:17.34 httpd
12838 apache 15 0 297m 85m 232 S 1.3 8.6 0:16.52 httpd
12831 root 18 0 20644 1360 248 S 1.3 0.1 0:00.27 httpd
View 5 Replies
View Related
Jul 9, 2010
I assume that the maturity of UFW is irrelevant because in the end it is just a front end for iptables...
But just in case, is UFW mature enough for production use on a web server?
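The "just a front end" part is easy to verify on any test box, which also doubles as a sanity check before trusting it in production:
Code:
ufw enable
ufw allow 80/tcp
ufw status verbose       # what ufw thinks it is doing
iptables -L -n           # the actual netfilter rules it generated (the ufw-* chains)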
View 1 Replies
View Related