Programming :: Why Are Different Signals Sent To Programs That Exceed The Memory Limit Set By setrlimit()?
Sep 2, 2010
I'm having a problem with setrlimit() under Linux.
If I use setrlimit(RLIMIT_AS) to set a hard ceiling on virtual memory usage first, then request more memory than that, shouldn't I receive a signal like SIGSEGV?
First I tried the ulimit command in bash, which should act as if I had called setrlimit(). I tried three programs that overflowed the memory limit I set. I thought all of them would be terminated by a SIGSEGV, but the Pascal program received SIGKILL; the C++ program got SIGABRT; the C program, SIGSEGV.
I thought there might be some difference between setrlimit() and ulimit, so I wrote a program in C++ that calls fork() first, then setrlimit() and execv() in the child, with wait(&status) in the parent. But I got the same result.
I was wondering why this happens, and how I can deal with it. I mean, how can I tell that a program exited abnormally because it exceeded the memory limit?
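For reference, here is a minimal sketch of the setup described above (fork(), then setrlimit() and execv() in the child, wait(&status) in the parent), with the parent inspecting the status word to see which signal, if any, terminated the child. The 64MB cap and the ./victim path are placeholders, not values from the original post:
Code:
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/resource.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* child: cap the address space, then run the program under test */
        struct rlimit rl = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            _exit(1);
        }
        char *args[] = { "./victim", NULL };  /* placeholder program */
        execv(args[0], args);
        perror("execv");
        _exit(1);
    }
    int status;
    wait(&status);
    if (WIFSIGNALED(status))
        printf("killed by signal %d (%s)\n",
               WTERMSIG(status), strsignal(WTERMSIG(status)));
    else if (WIFEXITED(status))
        printf("exited normally with status %d\n", WEXITSTATUS(status));
    return 0;
}

As for why the signals differ: the signal reflects how each language runtime reacts to allocation failure rather than the limit itself. C code that dereferences the NULL returned by a failed malloc() gets SIGSEGV, while a failed C++ new throws std::bad_alloc, which, if uncaught, calls abort() and raises SIGABRT.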
View 2 Replies
May 7, 2010
I'm trying to copy a 7.8GB tar.gz file to an external hard drive via the command line. It gets to an even 4GB and stops, giving an error that says "file size limit exceeded." I edited /etc/security/limits.conf to look like "root hard fsize 10024000", but that didn't do anything at all. Yes, I am copying this as root.
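One way to see whether a per-process file size limit is actually in effect (the shell equivalent is ulimit -f) is to print RLIMIT_FSIZE from a small C program; a minimal sketch, nothing here taken from the original post:
Code:
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_FSIZE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* RLIM_INFINITY means no per-process cap on file size */
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("soft fsize limit: unlimited\n");
    else
        printf("soft fsize limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
    return 0;
}

A process that does exceed RLIMIT_FSIZE is sent SIGXFSZ, which bash reports as "File size limit exceeded", so the message is at least consistent with a limit of this kind. Note also that a stop at exactly 4GB can come from the destination filesystem itself (FAT32 caps individual files at 4GB) rather than from any process limit.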
View 9 Replies
View Related
Apr 12, 2010
I downloaded pdftk 1.41 from [URL] and installed it on Red Hat Enterprise Linux 4, 32-bit. I am primarily using this utility to uncompress PDF files to remove the 'Flate' compression. It works well with small PDFs. However, when I use it to uncompress PDF files of 35MB or more, the uncompressed output file grows to 2GB and then the uncompression fails with the error: "File size limit exceeded". I can concatenate two files with an output file of up to 3GB, so 2GB is not the limitation at the Linux level.
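A hard stop at 2GB on a 32-bit system is the classic signature of a program built without large-file support: without the LFS flags, off_t is 32 bits and writes fail past 2^31-1 bytes. This is an assumption about how that pdftk binary was built, not a confirmed diagnosis; a quick way to see what the flag does is:
Code:
/* Large-file support makes off_t 64-bit on 32-bit glibc systems.
 * The define must appear before any #include (or pass
 * -D_FILE_OFFSET_BITS=64 on the compiler command line). */
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    /* prints 8 with LFS enabled, 4 on a plain 32-bit build */
    printf("sizeof(off_t) = %zu\n", sizeof(off_t));
    return 0;
}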
View 1 Replies
View Related
Nov 2, 2015
After receiving no response either here or on IRC, I copied 80 package files to a temporary directory and ran dpkg-scanpackages . /dev/null > Packages, with the expected result. The curious part is the delay when output redirection is not used: nothing appears until the script completes, at which point the result is dumped to the screen. It therefore appears that there is an upper limit to the number of packages the script can handle, somewhere between 80 and 42,474.
Is this an undocumented feature, or just a peculiarity of my system?
I'm new to Debian and want to set up a local repository on my work drive. After following instructions online and copying all packages (~43,000) from the DVD set into /work/Debian/8.2/packages/ I ran dpkg-scanpackages as instructed:
Code:
# dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
This produced an empty file. I then ran dpkg-scanpackages with no output redirection, expecting to see a flood of text on the screen, but all I got were error messages suggesting that it can see the .deb packages but is not parsing them:
Code:
root@qbx:~# dpkg-scanpackages /work/Debian/8.2/packages
dpkg-deb: error: invalid character ' ' in archive '/work/Debian/8.2/packages/libshhopt1_1.1.7-3_i386.deb' member 'debian-binary' size
dpkg-scanpackages: error: couldn't parse control information from /work/Debian/8.2/packages/libshhopt1_1.1.7-3_i386.deb
[Code] ....
This all seems in accord with the man page, and it's so simple I'm wondering what I'm missing.
View 1 Replies
View Related
May 12, 2010
I have a VPS server with 512 MB of memory. php.ini sets the script memory limit to 16 MB. However, I have noticed instances like the following in my top report:
Quote:
5484 coldclim 25 0 46476 32m 5920 R 0.0 6.4 0:00.93 php
The number 6.4 (bold in the original top output) is the % of server memory this process is using. 6.4% of 512 MB is about 32 MB, so it appears this isn't being limited by php.ini. Am I correct? This leads to the next question: is there some way to limit the amount of memory a single suPHP process can use? (Basically, something like the php.ini setting, which limits PHP scripts in the same way.)
View 2 Replies
View Related
Mar 10, 2010
We all know how to kill multiple processes, but if we want to send different signals to different processes simultaneously (stop one, -9 another, hang up a third), how do we do it? Is there a particular command for that?
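As far as I know there is no single standard command that takes per-process signals, but kill(2) makes it a few lines of C; a minimal sketch with placeholder PIDs:
Code:
#include <signal.h>
#include <stdio.h>

int main(void)
{
    /* placeholder PIDs; in practice these would come from argv or pgrep */
    pid_t stop_me = 1234, kill_me = 5678, hup_me = 9012;

    if (kill(stop_me, SIGSTOP) != 0) perror("SIGSTOP");
    if (kill(kill_me, SIGKILL) != 0) perror("SIGKILL");
    if (kill(hup_me, SIGHUP) != 0)  perror("SIGHUP");
    return 0;
}

From a shell, the same effect is just several kill commands joined with ';', since each kill returns near-instantly.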
View 1 Replies
View Related
Feb 22, 2010
I have a desktop hooked up to a router and plugged into my TV. I like to run Totem and have videos running full screen on my TV. I would like to control that Totem using my laptop (connected to the same router over wifi). How would I go about doing that? I connected to my desktop using SSH and tried to send signals to Totem, but nothing worked. I tried using VNC, and that was ineffective as well.
View 1 Replies
View Related
Apr 23, 2010
I am writing a kernel module which needs to do something at some interval. This problem could be solved with a user process that sends a signal to the kernel, and the kernel would act accordingly. But it would be nicer if I could do it within the kernel module itself. Is there any way to use signals inside a kernel module?
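Signals are a userspace mechanism; inside the kernel the usual tool for "do something at an interval" is a kernel timer, so what follows is a swapped-in technique rather than signals proper. A minimal sketch using the modern timer API (2.6-era kernels spelled this init_timer/setup_timer, but the idea is the same):
Code:
#include <linux/module.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list my_timer;

static void my_timer_fn(struct timer_list *t)
{
    pr_info("periodic work runs here\n");
    /* re-arm so it fires again one second from now */
    mod_timer(&my_timer, jiffies + HZ);
}

static int __init my_init(void)
{
    timer_setup(&my_timer, my_timer_fn, 0);
    mod_timer(&my_timer, jiffies + HZ);
    return 0;
}

static void __exit my_exit(void)
{
    del_timer_sync(&my_timer);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");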
View 3 Replies
View Related
May 19, 2009
I'm looking for a solution for sendmail to limit the number of emails sent per minute per IP. For example, all my local computers with IPs 192.x.x.x need to be able to send 10 emails/minute (emails, not connections!). The rest of the world can send, for example, 200 emails/minute to the mail server. If the number of emails per minute is exceeded, sendmail needs to block receiving email from the specific IP. I want to do this to stop spamming from my local network. Is it possible?
View 1 Replies
View Related
Apr 26, 2011
Is it possible to handle multiple signals in just one handler? I only know that signal() catches one signal at a time.
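A single handler function can be registered for several signals, since the handler receives the signal number as its argument. A minimal sketch with sigaction() (the choice of SIGINT/SIGTERM/SIGUSR1 is just for illustration):
Code:
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void handler(int signum)
{
    /* one handler, many signals: dispatch on signum */
    /* (printf is not async-signal-safe; fine for a demo only) */
    printf("caught signal %d\n", signum);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);

    sigaction(SIGINT, &sa, NULL);
    sigaction(SIGTERM, &sa, NULL);
    sigaction(SIGUSR1, &sa, NULL);

    for (;;)
        pause();  /* wait for signals */
}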
View 2 Replies
View Related
Jul 23, 2011
My program creates 4 threads per transaction. The threads do nothing but sleep.
When the transaction ends, I want to wake all the threads up. For this I am using pthread_kill() to wake the threads with SIGUSR2.
The problem arises when I run more transactions (e.g. 100): my process hangs.
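A common alternative to waking sleepers with pthread_kill() is a condition variable, which avoids signal delivery entirely; this is a swapped-in technique, not the original poster's code. A minimal sketch (compile with -pthread):
Code:
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int transaction_done = 0;

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!transaction_done)          /* guards against spurious wakeups */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
    printf("thread woke up\n");
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);

    /* ... the transaction would run here ... */

    pthread_mutex_lock(&lock);
    transaction_done = 1;
    pthread_cond_broadcast(&cond);     /* wakes all four at once */
    pthread_mutex_unlock(&lock);

    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}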
View 3 Replies
View Related
Apr 24, 2010
I've written a program for a class that my professor will be testing in various low-memory environments to see how it behaves when it runs out of memory. Is there a way I can simulate execution in a low-memory environment without creating a virtual machine?
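One lightweight approach, offered as a sketch of the idea rather than anything from the post, is to have a test build lower its own address-space limit at startup with setrlimit() (the same effect as launching under bash's ulimit -v), so allocations start failing early. The 16MB figure is a placeholder:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* pretend the machine is tiny: 16 MB of address space */
    struct rlimit rl = { 16 * 1024 * 1024, 16 * 1024 * 1024 };
    setrlimit(RLIMIT_AS, &rl);

    /* the real program would run here; this loop just proves the cap */
    size_t total = 0;
    while (malloc(1 << 20))
        total += 1;
    printf("malloc failed after ~%zu MB\n", total);
    return 0;
}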
View 1 Replies
View Related
Jun 26, 2010
I have 32-bit Ubuntu installed and my laptop has 4GB of RAM, but only 3GB is seen by Linux. My question is: what is the reason for this upper limit on physical memory?
Code:
dmesg | grep Memory
[0.000000] Memory: 3052428k/3112960k available (4673k kernel code, 56364k reserved, 2121k data, 656k init, 2200904k highmem)

I am familiar with the virtual memory concept, where Linux splits the upper 1GB for the kernel and the lower 3GB for user processes, so in total 32-bit Linux can address 4GB of virtual addresses. Does this mean that 1GB of physical memory is already mapped to the 1GB of kernel space, and that Linux only shows the remaining 3GB of physical memory left for the user in the command above?
I did some searching on the internet and found some related articles, but they only confused me further, since some suggest 4GB is the upper limit without mentioning whether that is virtual or physical memory, some bring in the concept of PAE, etc. I'm relatively new to Linux memory management, so it'd be really helpful if someone could explain this.
View 4 Replies
View Related
May 9, 2011
When I run Ardour sound editing I get this message, but it starts OK: "Your system has a limit for maximum amount of locked memory! This might cause Ardour to run out of memory before your system runs out of memory. You can view the memory limit with 'ulimit -l', and it is normally controlled by /etc/security/limits.conf."
bash-4.1$ ulimit -l
64
My limits.conf looks like this:
audio - rtprio 99
@audio - memlock 250000
[Code]...
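For what it's worth, the warning is usually silenced by raising the locked-memory limit for the audio group in /etc/security/limits.conf and logging in again; a commonly suggested line (a sketch, adjust the value to taste):
Code:
@audio - memlock unlimited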
View 2 Replies
View Related
Aug 13, 2011
I was looking into using control groups to limit the memory usage of each user on my CentOS system. I was told this requires recompiling the kernel for cgroup support. Is this true? Or is there a kernel module that lets cgroups work for users and groups without a kernel recompile? Or is there another way to limit users' memory usage? I have tried ulimit and it doesn't seem to work right.
I ask because this setup will be on a VPS, which means that to recompile the kernel I would need to use Xen instead of OpenVZ. Plus I have never in my life recompiled a kernel, least of all with different modules, ha ha, so I would have to pay my NOC to do it. So I am hoping I don't HAVE to recompile the kernel to get cgroup support.
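If the running kernel already has the memory controller compiled in (many distro kernels do), cgroups are driven entirely through a filesystem interface, no recompile needed. A minimal sketch of the cgroup-v1 flow, with the mount point and group name as placeholders:
Code:
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* assumes the memory controller is mounted at this (placeholder) path */
#define CG "/sys/fs/cgroup/memory/limited"

static int write_str(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fputs(val, f);
    return fclose(f);
}

int main(void)
{
    char buf[64];

    mkdir(CG, 0755);                                       /* create group */
    write_str(CG "/memory.limit_in_bytes", "536870912");   /* 512 MB cap  */

    snprintf(buf, sizeof buf, "%d", getpid());
    write_str(CG "/tasks", buf);            /* move this process into it */

    /* everything exec'd from here on inherits the limit */
    execl("/bin/sh", "sh", (char *)NULL);
    perror("execl");
    return 1;
}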
View 2 Replies
View Related
May 24, 2010
I've got this simple example of code, basically Anjuta's empty Gtkmm project [URL], and I'm trying to connect a button signal to a function at line 67. However, I receive errors during compilation and I don't know what's wrong. Error output: [URL]
View 1 Replies
View Related
Dec 22, 2014
How can I limit access to flash memory (read/write) for users on Debian?
View 2 Replies
View Related
Jun 25, 2010
I presently have Fedora 12 running on a Dell Optiplex with a Pentium 4 CPU. I want to buy a new computer, probably from Dell, but I don't completely understand the options available on which Fedora 13 will run. I more or less understand what 64-bit means, since I also have an HP with a 64-bit AMD processor. But are there other options which will allow me to exceed the i386 memory limit? How about dual processors?
I used to understand CPU architecture pretty well, but I've lost track of more recent developments. Can anyone recommend a primer on CPU types, including associated memory limits? Similarly, I would like to be sure that Fedora 13 will run on whatever machine I decide to get, so a list of available processors, which are equivalent to which, and which Fedora 13 has been tested on, would help.
View 7 Replies
View Related
May 28, 2010
I have an Ubuntu server running in our small office. Among its many duties is report generation. It uses PHP and DOMPDF (a PHP library for converting HTML/CSS to PDFs for printing). PHP's default memory limit of 32MB is not even close to being enough to pull large amounts of data from the database and generate images/tables/PDFs with that data.
I increased the memory limit to 64MB and that is adequate for reports under 3 pages or so (it varies based on table complexity, images, etc). If any user tries to generate a report longer than that, PHP just throws an "out of memory" error and doesn't produce the report.
My question is: what are the possible consequences of increasing the memory limit yet again to 128MB or maybe even higher? The server isn't terribly powerful. It has 2GB RAM and 4GB swap space. I know that isn't much but this is a small office and at most I can only see two or three people trying to run reports at the same time. As for security, apache is currently only serving pages in the local network, but sometime within the next year I'll probably have it hosting a public website (currently using a hosting service). Is a high memory limit a potential security risk when exposed to the internet?
EDIT: Sorry, PHP's default memory limit is 16MB, not 32 as I said. The question still stands, however.
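For reference, the limit in question is the memory_limit directive in php.ini (it can also be raised per-script with ini_set('memory_limit', '128M'), assuming the hosting configuration allows it):
Code:
; php.ini
memory_limit = 128M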
View 5 Replies
View Related
Nov 8, 2010
I would like to be able to monitor which programs are allowed to access the internet, but a search for programs to do this has turned up nothing. Preferably, I would like a notification to come up every time an application uses the internet. Is there any (n00b friendly) software available to do that?
View 2 Replies
View Related
Apr 26, 2011
I noticed that Firefox sometimes uses a lot of memory. Can something like setrlimit() be used to control that? I tried to use it from the command line, but it didn't work.
View 1 Replies
View Related
Mar 21, 2011
I am new to C and Linux. My code below does arbitrary writes somewhere, but I can't figure out where or how.
I am calling the insertNode() function with seq = 'MISSISSPPI$' and alphabets = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ$'
Code:
Weird behaviour I should mention: when I check for a NULL pointer in node->child[index], the unassigned values are not NULL any more; they point to arbitrary memory.
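Without the code shown this is only a guess, but a classic cause of exactly this symptom is allocating nodes with malloc(), which does not zero memory, so child[] starts out full of garbage that can look like valid pointers; calloc() guarantees NULLs. A sketch of the fix, with hypothetical type names:
Code:
#include <stdlib.h>

#define ALPHABET 27   /* A-Z plus '$' (hypothetical) */

struct node {
    struct node *child[ALPHABET];
};

/* malloc() would leave child[] uninitialized;
 * calloc() zeroes it, so every child starts out NULL */
struct node *new_node(void)
{
    return calloc(1, sizeof(struct node));
}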
View 12 Replies
View Related
Apr 13, 2010
How do I write a script for my Linux box that shows total memory vs. used memory, and emails me the results if usage is over 70 percent?
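This would usually be a few lines of shell around free(1) and mail(1); purely as an illustration of the logic, here is a sketch in C that reads /proc/meminfo and pipes a warning to mail(1). The recipient address and the 70% threshold are placeholders:
Code:
#include <stdio.h>
#include <string.h>

static long meminfo_kb(const char *key)
{
    char line[256];
    long val = -1;
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) return -1;
    while (fgets(line, sizeof line, f)) {
        char name[64];
        long kb;
        if (sscanf(line, "%63[^:]: %ld", name, &kb) == 2 &&
            strcmp(name, key) == 0) { val = kb; break; }
    }
    fclose(f);
    return val;
}

int main(void)
{
    long total   = meminfo_kb("MemTotal");
    long free_kb = meminfo_kb("MemFree");
    long buffers = meminfo_kb("Buffers");
    long cached  = meminfo_kb("Cached");
    if (total <= 0) return 1;

    long used = total - free_kb - buffers - cached;
    double pct = 100.0 * used / total;
    printf("used %ld kB of %ld kB (%.1f%%)\n", used, total, pct);

    if (pct > 70.0) {  /* placeholder threshold */
        FILE *m = popen("mail -s 'memory alert' admin@example.com", "w");
        if (m) {
            fprintf(m, "memory usage at %.1f%%\n", pct);
            pclose(m);
        }
    }
    return 0;
}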
View 2 Replies
View Related
Aug 25, 2010
Is it possible that SHM shared memory is counted as cache memory on Linux with kernel 2.6.18? I find it really odd, since this memory is not file-backed, but I have a piece of code that loads data using shm_open+mmap, and it generates an amount of cache memory in /proc/meminfo that corresponds exactly to the amount of shared memory. (I load that data from a file, but I am using posix_fadvise(fd,0,0,POSIX_FADV_DONTNEED) to ensure the file is not cached, and I made sure that this works as expected.) As far as I know, SHM memory was not tagged as cache memory with kernel 2.6.9. If this is the case it is really unfortunate, since cache memory can normally be considered part of the "available" memory, as it can be flushed promptly, but this is clearly not the case with SHM memory. Is there an easy way to get the total amount of SHM memory used on a system?
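POSIX shared memory from shm_open() lives in the tmpfs normally mounted at /dev/shm, so one way to total it up is to ask that filesystem how full it is (the same numbers df /dev/shm would print); a minimal sketch, assuming the standard mount point:
Code:
#include <stdio.h>
#include <sys/statfs.h>

int main(void)
{
    struct statfs s;
    if (statfs("/dev/shm", &s) != 0) {
        perror("statfs");
        return 1;
    }
    unsigned long long used =
        (unsigned long long)(s.f_blocks - s.f_bfree) * s.f_bsize;
    printf("POSIX shm in use: %llu bytes\n", used);
    return 0;
}

SysV segments (from shmget) are accounted separately and show up in ipcs -m.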
View 4 Replies
View Related
Apr 30, 2010
Clean install of Ubuntu 10.04 on a SONY VAIO VGN-NS105N.
I can't get either Evolution or Thunderbird to send or receive mail.
I have Ubuntu 9.04 installed in another partition and everything works there. I carefully copied the setup from 9.04 to 10.04.
Of course the Internet is connected, and "Network Tools" works as advertised.
I am wondering if I am alone with this problem, or are there others?
View 2 Replies
View Related
Jul 24, 2009
I have a few multi-user servers in an academic laboratory. I am having a problem with some users maxing out the available RAM, causing such severe slowdowns that the machine essentially crashes. My servers are Dell PowerEdges running Ubuntu 8.10 Server Edition (not my choice). I would like to set a maximum limit on the amount of RAM a user can use. This morning I experimented with setting limits via /etc/security/limits.conf and using ulimit. Neither of them prevented my test program, a simple infinite loop of mallocs, from crashing the server.
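One detail that often trips this up (a hedged note, not something from the post): on 2.6 kernels the rss item in limits.conf is silently ignored, while the as item (address space, in KB) is the one that actually makes malloc() fail. For example:
Code:
# /etc/security/limits.conf - cap each user's address space at 2GB (KB units)
*    hard    as    2097152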
View 7 Replies
View Related
Oct 23, 2009
What programs can be used to test and find faulty memory (RAM, Video adapters, cache)?
View 2 Replies
View Related
May 1, 2011
Why does Linux set a default stack size limit? And why is it so small? (My system is set to 8192 kbytes.) And why is there a default limit on the stack size when the max memory size and virtual memory size are, by default, unlimited? (Aren't they both fed from the same place ultimately?)
Reason I ask: I want to use recursive functions a lot more in my programming. The problem is, if the language (or implementation) doesn't happen to support tail-call optimization, then I can be pretty certain that the first huge problem thrown at my function will kill my program, because the stack size limit will be reached quickly. Obviously, I can change the stack size limit on my own computers, but it doesn't feel great knowing that most of the people who copy and execute my code will probably have overlooked this. Anyway, does anyone know: is this small default stack size limit just one of those historical artifacts, or is there some technical reason for it?
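A program that knows it recurses deeply can raise its own soft limit at startup, up to the hard limit, before the deep calls begin; a minimal sketch (running the recursion in a new thread with pthread_attr_setstacksize() is a variation on the same idea):
Code:
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;   /* raise soft limit to the hard limit */
        if (setrlimit(RLIMIT_STACK, &rl) != 0)
            perror("setrlimit");
    }
    /* the main thread's stack grows on demand up to this limit;
     * pthread stacks are fixed at creation (pthread_attr_setstacksize) */
    return 0;
}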
View 5 Replies
View Related
Mar 2, 2011
I have a file with 200,000 lines and I want to append the fields of each line based on a matching first field. The resulting file should have 70,000 columns but has "only" 18,000. The command I'm using works perfectly with a smaller file, which resulted in 14,000 columns. Could there be a limit to the number of fields awk can handle? Here's my awk command:
Code:
awk -F, 'END { for (k in _) print _[k] } { _[$1] = $1 in _ ? _[$1] FS $4 : $1","$4 } ' file > out
Also, this command writes ^M (a Windows line break) after each column. Removing them is easy, but where do they come from? Working on Ubuntu 10.10.
View 4 Replies
View Related
Dec 28, 2010
My secure log is flooded with these messages:
sudo: pam_limits(sudo:session): wrong limit value 'unlimited' for limit type 'hard'
Dec 28 22:42:29 yn54 sudo: pam_limits(sudo:session): wrong limit value 'unlimited' for limit type 'soft'
Dec 28 22:42:29 yn54 sudo: pam_limits(sudo:session): wrong limit value 'unlimited' for limit type 'hard'
View 3 Replies
View Related