Programming :: Why The Stack Size Limit
May 1, 2011
Why is it in Linux that there is a stack size set by default? And why is it so small? (My system is set to 8192 kbytes.) And why is there a default limit on the stack size when the max memory and virtual memory size are, by default, unlimited? (Aren't they both fed from the same place ultimately?)
Reason I ask: I want to use recursive functions in my programming a lot more. Problem is, if the language (or implementation) doesn't happen to support tail-call optimization, then I can be pretty well certain that the first huge problem that gets thrown at my function is going to kill my program, because the stack size limit will be quickly reached. Obviously, I can change the stack size limit on my own computers, but it doesn't feel so great knowing that most of the people who copy and execute my code will probably have overlooked this. Anyway, does anyone know: is this small default stack size limit just one of those historical artifacts, or is there some technical reason for it?
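For readers who want to check what limit a user's system actually imposes before shipping deeply recursive code, the soft and hard stack limits can be queried programmatically. A minimal sketch in Python, whose resource module wraps the same getrlimit(2) call a C program would make:

```python
import resource

# RLIMIT_STACK values are in bytes.  The soft limit is what the kernel
# enforces (commonly 8 MiB on Linux); the hard limit is the ceiling up
# to which an unprivileged process may raise the soft limit itself.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

fmt = lambda v: "unlimited" if v == resource.RLIM_INFINITY else v
print("soft stack limit:", fmt(soft))
print("hard stack limit:", fmt(hard))
```

On a stock system the soft value typically comes back as 8388608, matching the 8192 kbyte default mentioned above.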
View 5 Replies
Jan 23, 2011
I seem to only be able to set my stack size on my Linux server to 15000. If I increase it to 20000 I get a Segmentation Fault. How can I get the Linux OS to increase the stack size? Code: threadRet |= pthread_attr_setstacksize( &m_ThreadAttributes, 15000 );
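One plausible explanation, not confirmed from the code shown: on Linux, glibc rejects requests below PTHREAD_STACK_MIN (16384 bytes on most platforms), so pthread_attr_setstacksize(&attr, 15000) likely fails with EINVAL and the thread silently keeps the default 8 MB stack, which is why it appears to "work". A request of 20000 is accepted, but such a tiny stack overflows as soon as the thread does real work, producing the segfault. Checking the return value and rounding the request up is the usual cure. Python's threading module enforces a similar floor, which makes the behavior easy to demonstrate:

```python
import threading

def try_stack_size(nbytes):
    """Try to set the stack size used for newly created threads.

    Returns True if the size is accepted, False if the platform rejects
    it (CPython refuses anything below 32 KiB, mirroring the way
    pthread_attr_setstacksize rejects sizes under PTHREAD_STACK_MIN).
    """
    try:
        threading.stack_size(nbytes)
        return True
    except ValueError:
        return False

print(try_stack_size(15000))       # rejected: below the minimum
print(try_stack_size(256 * 1024))  # accepted: a sane explicit size
```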
View 8 Replies
Jan 5, 2011
Why can the thread stack size not be changed after calling pthread_attr_setstacksize & pthread_create in a dynamic library? Detail: I wrote a file thread_factory.c and plan to build it into a dynamic library (libthread_factory.so). In thread_factory.c there is a routine
[Code]....
And after this, there is an application that calls fct_thread_create(STACK_SIZE_256KB) and then pthread_attr_getstacksize(), but the returned stack size is always a fixed value, 0xa01000. (I tried this on Fedora 12.) But if I build the application source code with the file thread_factory.c directly, the returned stack size is what I expect. I checked the glibc source code for the routine pthread_create() as below:
[Code]....
View 7 Replies
Jul 26, 2010
I have an application that launches several pthreads. I know that the default stack size used by Linux is 8MB for each pthread. However, I would like to optimize my application's total memory usage by decreasing the default stack size of each pthread to the resources it actually needs. My questions:
- Are there any rules for setting the pthread stack size?
- How do I compute the memory needed by each thread?
- Are malloc calls made inside a thread counted against that pthread's stack size?
View 2 Replies
Mar 2, 2011
I have 2 directories in my home folder that I would like to set a size limit on. The directories are ~/backup and ~/temp. Is there an easy way to limit the size of a directory without having to make partitions?
View 4 Replies
Jun 22, 2011
Using setrlimit I set the core file size to RLIM_INFINITY, but the core file is still not being generated, although /var/log/messages says a core is being generated.
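Two things worth checking besides the soft limit. First, an unprivileged process can only raise the soft limit up to the hard limit, so RLIM_INFINITY may be silently out of reach. Second, on many distributions /proc/sys/kernel/core_pattern pipes cores to a crash handler (abrt, apport), so no file appears in the working directory even though the kernel logs that a core was produced. A sketch of inspecting both in Python (the resource module wraps getrlimit/setrlimit(2)):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
print("core limit soft/hard:", soft, hard)

# Raise the soft limit as far as this process is allowed to: up to hard.
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

# If this value starts with '|', cores are piped to a helper program
# instead of being written to disk.
with open("/proc/sys/kernel/core_pattern") as f:
    print("core_pattern:", f.read().strip())
```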
View 3 Replies
Jul 20, 2011
I need to increase the default stack size on Linux. As far as I know, there are usually two ways:
ulimit -s size
/etc/security/limits.conf
The ulimit method only works for the current login session.
limits.conf takes effect for new login sessions (it is applied by PAM at login), not only after a full restart.
Is there a possible way to increase the limit without restarting?
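There is a third way: a running process may raise its own soft limit up to the hard limit at any time with setrlimit(2), with no new login or reboot, and the change is inherited by children across fork/exec. (On newer kernels, prlimit(2) and the prlimit utility can even adjust another process's limits.) A sketch in Python, whose resource module wraps setrlimit:

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

# No restart needed: raising the soft limit up to the hard limit is
# always permitted, and children started after this inherit it.
resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))

new_soft = resource.getrlimit(resource.RLIMIT_STACK)[0]
print("soft stack limit now equals hard limit:", new_soft == hard)
```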
View 2 Replies
Apr 17, 2011
Can anyone tell me how to get information about the stack allocated by the kernel to a running process? Is there any API function or system call available for this in Ubuntu 8.04?
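There is no dedicated API for this, but the kernel publishes every process's memory layout through /proc/&lt;pid&gt;/maps, where the main thread's stack appears tagged [stack]; this mechanism exists on kernels as old as Ubuntu 8.04's. A sketch (the helper name is made up for illustration):

```python
def stack_region(pid="self"):
    """Return (start, end) addresses of the [stack] mapping, or None."""
    with open("/proc/%s/maps" % pid) as f:
        for line in f:
            if line.rstrip().endswith("[stack]"):
                start, end = line.split()[0].split("-")
                return int(start, 16), int(end, 16)
    return None

region = stack_region()   # pass a numeric pid to inspect another process
if region is not None:
    start, end = region
    # The stack vma grows on demand, up to RLIMIT_STACK.
    print("main stack currently spans", (end - start) // 1024, "KiB")
```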
View 2 Replies
Feb 5, 2010
Desperate to reduce RAM usage of my tiny VPS running Ubuntu 9.04 and Apache2.2.11, here I saw that:
On Linux, each child process will use 8MB of memory by default. This is probably unnecessary. You can decrease the overall memory used by Apache by setting ThreadStackSize to 1MB.
So I tried to give the suggestion a try. But when I append:
ThreadStackSize 1000000
in my /etc/apache2/httpd.conf <IfModule mpm_prefork_module> directive, and restarted apache, it failed with this message:
Invalid command 'ThreadStackSize', perhaps misspelled or defined by a module not included in the server configuration
So I figured out that the relevant modules are neither enabled nor available in Apache2. Now I am wondering whether there is a way to decrease the ThreadStackSize without compiling Apache from source. If not, what should I do?
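For what it's worth, ThreadStackSize is implemented by the threaded MPMs (worker and event); under mpm_prefork each child is a single-threaded process with nothing to size, which is why the directive is unrecognized rather than merely disabled. If switching MPMs is an option (the worker MPM shipped as a prebuilt Ubuntu package in that era, so no compiling should be needed), the setting would look something like this sketch:

```
<IfModule mpm_worker_module>
    # 1 MiB per thread instead of the 8 MB default
    ThreadStackSize 1048576
</IfModule>
```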
View 1 Replies
Dec 6, 2010
How do we find the size of a stack when a process is running?
View 1 Replies
Aug 26, 2010
I am trying to find the dynamic heap size and stack size of a running process on RHEL 5.5 and RHEL 6. I read that the 23rd field in the file /proc/pid/stat gives the heap size; can you elaborate on this? Also, is there any other way to do this?
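A caution about that recipe: per proc(5), the 23rd field of /proc/&lt;pid&gt;/stat is vsize, the total virtual memory size, not the heap. The brk heap and the main stack appear as the [heap] and [stack] lines in /proc/&lt;pid&gt;/maps, and /proc/&lt;pid&gt;/statm gives a cheap per-page summary whose sixth field counts data + stack pages. A sketch of the statm route:

```python
import os

def data_plus_stack_bytes(pid="self"):
    """proc(5): field 6 of /proc/<pid>/statm ('data') counts
    data + stack pages; multiply by the page size for bytes."""
    with open("/proc/%s/statm" % pid) as f:
        fields = f.read().split()
    return int(fields[5]) * os.sysconf("SC_PAGE_SIZE")

print("data + stack:", data_plus_stack_bytes(), "bytes")
```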
View 5 Replies
May 14, 2010
I'm getting messages like these in my bash console:
Code: STACK size: 98222 [0x7f665dbe4e00 0x7f665db25090]
I'm not quite sure what they mean. So far it looks like it's related to the shell stack limit set by ulimit; however, I've tried increasing that limit and the messages still persist.
View 9 Replies
Apr 2, 2010
We use VxVM and VxFS on HP-UX, and I've used them in the past on Solaris. I've found they are available as Storage Foundation v5.1 for 64-bit RHEL. (In fact, there's even a BASIC version, free on 2-processor systems.) Previously we'd run into a 2 TB limit for filesystems on the older versions we have on HP-UX. The data sheets at Symantec are pure marketing fluff. Does anyone know what the filesystem size limit is for 5.1 on Linux?
View 1 Replies
Jul 7, 2011
How do you put a limit in place for file caching in Suse 11.4?
My PC becomes unusable on a regular basis, with minimal CPU usage, because I can't open new applications. There are no error messages etc.; the new apps just don't open.
free -m shows the vast majority of my memory is used by cache
free -m
total used free shared buffers cached
Mem: 3185 1048 2137 0 38 503 (this becomes max)
-/+ buffers/cache: 506 2679
Swap: 2055 0 2055
I've done ...
echo 3 > /proc/sys/vm/drop_caches
Sometimes an app will open after this but the system becomes unstable, locking up regularly.
I'm not sure why the default max is 100% file cache, but I'd like to put a sane file cache limit in place, like 40% or something. I've put limits in place in the past using a percentage, and I've poked around, but I don't see the setting.
View 3 Replies
May 11, 2010
I have a large file (deflated size: 602191947)that is not saved in my Ubuntu One account. On sync'ing the file is being uploaded, and eventually reaches 602191947 - and then nothing more happens to this file - but sync'ing the following files in the queue goes on with success. I have tried manual upload with the same result. The file is still being marked as 'uploading' even after several tries and log ins/log outs, and reboots. So I was just wondering whether there is a file size limit - can't seem to find information regarding this.
View 5 Replies
Jun 17, 2010
Is there some way to limit the download size of updates for ubuntu? At the moment, update manager shows that I have some 300 MB worth of downloads. I can't find any way to deselect many updates at once either.
View 3 Replies
Jun 19, 2010
I really don't understand what's happening. I make a 3.5TB RAID array in Disk Utility, yet it ends up with one partition of 3TB and the other 500 gigs free! Why is that? I thought ext4 could handle huge partition sizes.
View 1 Replies
Oct 20, 2010
A possibly preposterous question. I am aware that you can designate a swap file or swap partition on your hard drive that Linux uses as "memory". Suggested sizes for the swap file that I've seen range up to about 1024MB. Is there a limit to the swap file size that you can set?

Basically, I am running a perl script that processes a massive file (DNA sequence data) and requires around 48 GB of memory to run, maybe a bit less. So, would it be possible to set a swap file to a massive, ridiculous size (~60GB or whatever) and successfully run such a script on a desktop? Yes, I am aware that it would massively slow down the process. The thing is, if the perl script normally completes in about half an hour, and I can get it working on a desktop, I don't mind if it takes days or weeks to complete. I really don't. That's because it takes days or weeks to get access to a computer with the required grunt to do it.

So, is this a stupid idea? Is it even possible? If so, given a perl script that normally completes in half an hour on a 48G system, would it take days? Weeks? Decades?
View 7 Replies
Oct 24, 2010
I have a 1TB external hard drive. I would like to create in it 10 folders:
Code:
I would then like to permanently mount each folder to its machine (I have 10 machines connected through a switch, so each machine will have a folder that is mounted to ONE of the 10 folders in the external hard drive).
My questions:
(1) Is this a good configuration? are there better ideas to give individual machines more space without replacing their hard drive?
(2) How do I limit each one of the folders ('folder1', 'folder2', ...., 'folder10') to a size of 100 [GB]? I don't want one folder (say, 'folder1') to grow in size and 'steal' the space designated to the other folders.
View 2 Replies
Nov 5, 2010
I've noticed that gedit has problems opening files longer than about 8000 lines. Was gedit not designed for long files, or is there another problem? The same thing also happens with complicated HTML files. I hope there is a way to fix this.
View 4 Replies
Mar 15, 2010
Does anyone know of a way of limiting a print-job size from samba?
I know how to limit a print job size from cups, and how to require x amount of free space before accepting a job. I've even dug up how to require x amount of free space for samba to accept a print job, but I can't see how to limit samba to only certain sized jobs.
Someone tried to print a >1G file to my print server this morning, causing me to have a less relaxed Monday than I had hoped. Because it ran out of space before spooling, it was never limited by cups. Because I had to get rid of it ASAP so people could get work done, I have no idea whose it was or where it came from. Scouring the logs didn't give me any good leads either.
View 2 Replies
Sep 1, 2010
I have been trying to increase the message_size_limit on my Debian 2.4.26 box with postfix 2.3.8. For example, I set message_size_limit and mailbox_size_limit to 104857600 (100m) and restart postfix. Running postconf -n confirms that it has changed. However when I send a test message it kicks it back saying the message size limit is 16777216 (16m, which is, incidentally, the default value of the berkeley_db_create_buffer_size parameter)
View 10 Replies
Jan 4, 2010
I have a self-made application running on a small embedded Linux device (which should not matter) using syslog to output error, warning, and debug logs. There is a "better" syslog daemon installed, called syslog-ng, which has some more features, but I miss a very important one: how to limit the size of the logfiles to some dedicated number of megabytes. I was able to create rotating logfiles with this configuration in syslog-ng.conf:
Code:
destination testlog {
file("/var/log/test/log-$S_WEEKDAY"
[code]...
View 2 Replies
Dec 16, 2010
I have a single 6.2GB file that needs to go on a FAT32-formatted hdd. Does anyone know of a way to split the file so it will fit?
View 2 Replies
Apr 21, 2010
Does Recordmydesktop have a file size limit? I'm considering using the Zero compression setting to keep CPU usage down, but I don't want to run up against a 2GB or 4GB file size limit. While I know some filesystems impose this limit, most screen recorders I've used have a 2GB or 4GB limit when recording, regardless of the filesystem. Is this an issue with Recordmydesktop?
View 1 Replies
Mar 3, 2011
I have a problem with a PHP/Apache program.
The website creates an RPG character through a traditional wizard. It calls itself with a hidden variable holding the page number, tests which page it is on, and returns the page data with the page number incremented.
Each page should be treated as a separate page and so would be unique. I am echoing the contents of POST at the top of the page so I can see the variables being returned. When I get data from an Ajax query on page three, it saves the data (23 POST fields of no more than 25 characters each). Page four does the same but with fewer fields - yet it is NOT returning the data - only four fields, those that were originally posted.
I cut/pasted the function from section three to section four and changed the displayed text and the variable names to test, so there are no code errors, since page three works and is saved to a database.
So the only option is that there is a PHP or Apache2 issue when POST variables are returned? I am completely out of ideas as to why this would even be an issue or how it could possibly appear.
Is the number of variables an issue? This page is less than the previous page.... And the form is POSTed...
PS: I am getting NOTICE errors from PHP for the POSTed variables that are not displayed/returned. I used:
error_reporting (E_ALL ^ E_NOTICE);
to stop these from being reported, but do I need to test each one? PPS: Using if (isset($_POST['xxx'])) does NOT allow that variable through...
PS: I have the default Ubuntu 10.04 Apache2 with all the ubuntu 10.04 updates...
View 3 Replies
Nov 26, 2010
I'm using Squid 2.7 Stable 9 and DansGuardian 2.10.1.1, both compiled from source. I have enabled follow_x_forwarded_for in squid to make clients' IPs visible to squid, and set x_forwarded_for=on in dansguardian; this is working fine and clients' IPs are visible to squid. Now I want to set a downloadable file size limit of up to 50 MB in squid using the acl reply_body_max_size 52428800 allow mynetwork for every user except a few users, but the above acl is not working properly. mynetwork is our private network, 192.168.0.0/16.
When I set the acl reply_body_max_size 52428800 allow localhost, it works fine, but only for localhost. I want to allow up to a 50 MB downloadable file size for every user in my network, except a few users who will have access up to a 500 MB downloadable file size.
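One approach that should work with the Squid 2.6/2.7 syntax: reply_body_max_size accepts an ACL list and the first matching line wins, so an ACL naming the exempt users can be given the larger cap before the general rule. Note also that when requests arrive via DansGuardian, src ACLs only see the forwarded client address if acl_uses_indirect_client is on (the 2.7 default once follow_x_forwarded_for permits the peer). A sketch, where the exempt IPs are hypothetical and mynetwork is assumed to be defined as in the existing config:

```
# Exempt users (example addresses - substitute the real ones)
acl bigdownloads src 192.168.1.10 192.168.1.11

# First match wins: 500 MB for the exceptions, 50 MB for everyone else
reply_body_max_size 524288000 allow bigdownloads
reply_body_max_size 52428800 allow mynetwork
```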
View 2 Replies
Sep 13, 2010
How can I limit each user's mailbox to a specific size?
View 2 Replies
May 7, 2010
I'm trying to copy a 7.8GB tar.gz file to an external hard drive via the command line. It gets to exactly 4GB and stops, giving an error that says "file size limit exceeded." I edited /etc/security/limits.conf to look like "root hard fsize 10024000", but that didn't do anything at all. Yes, I am copying this as root.
View 9 Replies
Apr 10, 2010
Using Kubuntu 9.04 on AMD64, working with the ISPConfig panel. I have postfix configured and have no problem getting mails with small attachments, but when they pass a certain size I don't get them. Where can I configure this?
View 3 Replies