Ubuntu Servers :: How To Decrease Thread Stack Size
Feb 5, 2010
Desperate to reduce the RAM usage of my tiny VPS running Ubuntu 9.04 and Apache 2.2.11, I came across this advice:
On Linux, each child process will use 8MB of memory by default. This is probably unnecessary. You can decrease the overall memory used by Apache by setting ThreadStackSize to 1MB.
So I decided to give the suggestion a try. But when I appended:
ThreadStackSize 1000000
inside the <IfModule mpm_prefork_module> section of my /etc/apache2/httpd.conf and restarted Apache, it failed with this message:
Invalid command 'ThreadStackSize', perhaps misspelled or defined by a module not included in the server configuration
So I figure the relevant module is either not enabled or not available in this Apache build. Is there a way to decrease the ThreadStackSize without having to compile Apache from source? If not, what should I do?
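One likely explanation, hedged: ThreadStackSize is only understood by the threaded MPMs (worker or event), while the prefork MPM serves each request from a separate process, so the directive is rejected there and the per-process stack comes from the ulimit Apache was started with instead. A minimal sketch under those assumptions, using the worker MPM (packaged as apache2-mpm-worker on Ubuntu 9.04) so the directive applies without rebuilding Apache:
Code:
# Switch to the threaded worker MPM so ThreadStackSize is recognised:
sudo apt-get install apache2-mpm-worker
# Then, in /etc/apache2/httpd.conf (or a file included from the Apache config):
#   <IfModule mpm_worker_module>
#       ThreadStackSize 1048576
#   </IfModule>
sudo /etc/init.d/apache2 restart
If staying on prefork is a requirement, a rough equivalent worth testing on a non-production box is lowering the stack ulimit in /etc/apache2/envvars (for example "ulimit -s 1024"), since the prefork children inherit it from the startup shell.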
I only seem to be able to set the thread stack size on my Linux server to 15000. If I increase it to 20000 I get a segmentation fault. How can I get Linux to accept a larger stack size? Code: threadRet |= pthread_attr_setstacksize( &m_ThreadAttributes, 15000 );
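A hedged reading of those symptoms: glibc rejects requests below PTHREAD_STACK_MIN (16384 on typical builds) with EINVAL, so the 15000 call probably fails silently and the thread keeps its default 8 MB stack, which is why it appears to work; at 20000 the call succeeds and the thread really does run on a ~20 KB stack, which promptly overflows and segfaults. A minimal sketch that checks the return codes and never goes below the minimum (the 64 KB figure is an assumed requirement, not taken from the question):
Code:
#include <pthread.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical thread body standing in for the original application code. */
static void *worker(void *arg)
{
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;
    size_t want = 64 * 1024;                        /* assumed requirement: 64 KB */
    size_t size = want < PTHREAD_STACK_MIN ? PTHREAD_STACK_MIN : want;
    int rc;

    pthread_attr_init(&attr);
    rc = pthread_attr_setstacksize(&attr, size);
    if (rc != 0) {                                  /* EINVAL if below PTHREAD_STACK_MIN */
        fprintf(stderr, "setstacksize: %s\n", strerror(rc));
        return 1;
    }
    rc = pthread_create(&tid, &attr, worker, NULL);
    if (rc != 0) {
        fprintf(stderr, "pthread_create: %s\n", strerror(rc));
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}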
Why can the thread stack size not be changed by calling pthread_attr_setstacksize before pthread_create in a dynamic library? Detail: I wrote a file thread_factory.c and plan to build it into a dynamic library (libthread_factory.so). In thread_factory.c there is a routine
[Code]....
After this there is an application which calls fct_thread_create(STACK_SIZE_256KB) and then pthread_attr_getstacksize(), but the returned stack size is always a fixed value, 0xa01000 (I tried this on Fedora 12). If I build the application source together with thread_factory.c directly, the returned stack size is what I expect. I have also checked the glibc source for the pthread_create() routine.
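A hedged guess at the cause: a value like 0xa01000 looks like the process default (roughly the 10 MB stack ulimit plus a guard page), which is what pthread_attr_getstacksize() reports when it is queried on a freshly initialised attribute object rather than on the one actually passed to pthread_create() — an easy mix-up when the attr lives inside the library and the application inspects its own copy. To see what stack the running thread really received, glibc's pthread_getattr_np() can be called from inside the thread. A sketch under those assumptions; fct_thread_create and STACK_SIZE_256KB are names from the question, but their internals are guessed:
Code:
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

#define STACK_SIZE_256KB (256 * 1024)

static void *thread_body(void *arg)
{
    pthread_attr_t actual;
    size_t size = 0;
    (void)arg;

    /* Query the attributes of the *running* thread, not a fresh attr object. */
    pthread_getattr_np(pthread_self(), &actual);
    pthread_attr_getstacksize(&actual, &size);
    printf("actual stack size: %zu\n", size);
    pthread_attr_destroy(&actual);
    return NULL;
}

/* Assumed shape of the factory routine from the question. */
int fct_thread_create(size_t stack_size, pthread_t *tid)
{
    pthread_attr_t attr;
    int rc;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, stack_size);
    rc = pthread_create(tid, &attr, thread_body, NULL);  /* pass &attr, not NULL */
    pthread_attr_destroy(&attr);
    return rc;
}

int main(void)
{
    pthread_t tid;
    if (fct_thread_create(STACK_SIZE_256KB, &tid) == 0)
        pthread_join(tid, NULL);
    return 0;
}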
I have an application that launches several pthreads. I know the default stack size on Linux is 8 MB per pthread, but I would like to reduce my application's total memory usage by shrinking each pthread's stack to what it actually needs. My questions (a small sketch follows below):
- Are there any rules for choosing a pthread stack size?
- How do I compute the memory needed by each thread?
- Is memory obtained with malloc() inside a thread counted against that thread's stack?
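Hedged answers: there is no hard rule; the stack only has to cover the deepest chain of call frames the thread will ever make (locals, call overhead, any alloca/VLA use) plus a safety margin, and glibc will not accept anything below PTHREAD_STACK_MIN. Memory from malloc() is not charged to the thread's stack at all; it comes from the process heap, which all threads share. A minimal sketch illustrating both points, with a 64 KB stack chosen as an assumed estimate:
Code:
#include <pthread.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* With a 64 KB stack (far less than 10 MB) the thread can still malloc 10 MB,
 * because malloc draws from the shared process heap, not from the thread stack. */
static void *worker(void *arg)
{
    char *buf;
    (void)arg;
    buf = malloc(10 * 1024 * 1024);
    if (buf) {
        memset(buf, 0, 10 * 1024 * 1024);
        puts("10 MB allocated on the heap from a thread with a 64 KB stack");
        free(buf);
    }
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;
    size_t size = 64 * 1024;                 /* assumed per-thread estimate */
    if (size < PTHREAD_STACK_MIN)
        size = PTHREAD_STACK_MIN;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, size);
    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}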
When I set the stack base address of a child thread using the POSIX function pthread_attr_setstackaddr(), I am unable to access the memory contents of its parent. Data structures created on the parent's heap with malloc() either get destroyed or become inaccessible when execution moves into the child thread, even though they are passed to the child as its argument. Making the variables global does not help either.
Code: pthread_attr_setstacksize(tattr, ...); stackbase = (void *) malloc(...); pthread_attr_setstackaddr(tattr, stackbase);
But when I create the child thread without setting its stack base address via pthread_attr_setstackaddr(), it can access the parent's memory just fine.
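A hedged suspicion: pthread_attr_setstackaddr() is deprecated precisely because implementations disagree on whether the address means the lowest byte or the top of the stack, and the buffer must be at least the configured stack size and suitably aligned; if the malloc'd block is smaller than the stack size in the attribute, the child's stack can grow straight over the parent's heap and corrupt those structures, which would look exactly like the symptoms described. The replacement, pthread_attr_setstack(), takes the base and size together. A sketch under those assumptions:
Code:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define STACK_SIZE (512 * 1024)              /* assumed size, well above PTHREAD_STACK_MIN */

struct shared {                              /* heap data passed in from the parent */
    int value;
};

static void *child(void *arg)
{
    struct shared *s = arg;                  /* the parent's heap is still reachable */
    printf("child sees value = %d\n", s->value);
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;
    void *stackbase = NULL;
    struct shared *s = malloc(sizeof *s);
    s->value = 42;

    pthread_attr_init(&attr);
    /* posix_memalign gives a page-aligned block big enough for the whole stack. */
    posix_memalign(&stackbase, (size_t)sysconf(_SC_PAGESIZE), STACK_SIZE);
    /* pthread_attr_setstack() replaces the deprecated setstackaddr/setstacksize pair. */
    pthread_attr_setstack(&attr, stackbase, STACK_SIZE);

    pthread_create(&tid, &attr, child, s);
    pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    free(stackbase);
    free(s);
    return 0;
}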
I am new to Fedora, and I noticed when I opened my home folder that I only had 54 GB. My hard drive is 120 GB, and when I used the Disk Utility I noticed that 64 GB are in ext4! What?! This was not a problem in Ubuntu, so why has it happened now? Is there any way I can decrease the size of that ext4 partition and give some of it back to Fedora?
I'm on F12 and definitely need to upgrade. Way back, when I first had a bad disk, the "system" disk was 80 GB and I only had a 200 GB drive lying around. The next time it happened, my other 200 GB "wasn't big enough", it said, so I put a 500 GB in there. Now I seem to have more bad blocks again, but I want to go down in size because I don't want to put a 1 TB HDD in there. My question from all of this: how do I shrink the image so I can put it all on a smaller HDD?
Used space on the system disk's partitions is about 30 GB, so an 80 GB disk should be sufficient. What I can think of is that I need to "move" all the data to the "beginning" of the HDD and then make an image of just the data, not the entire disk. I've tried that with no luck, since the image always seems to end up as big as the HDD, which is why I've had to keep increasing the drive size.
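One hedged approach that avoids the image-as-big-as-the-disk problem: shrink the filesystem itself first, then copy only that many bytes. The device names and sizes below are placeholders, and this assumes an ext3/ext4 system partition worked on from a live CD (i.e. unmounted):
Code:
e2fsck -f /dev/sdb1                  # the filesystem must be clean before resizing
resize2fs /dev/sdb1 70G              # shrink the filesystem to 70 GiB (well above the ~30 GB used)
# Copy only the first 70 GiB of the partition instead of imaging the whole disk:
dd if=/dev/sdb1 of=/mnt/backup/system.img bs=1M count=71680
# After writing the image onto a partition of the smaller disk, let the
# filesystem grow back out to fill whatever space that partition has:
resize2fs /dev/sdc1
The target partition on the smaller disk just has to be at least as large as the shrunken filesystem.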
Currently I have one internal HDD, 160 GB in size. 20 GB is the Windows XP partition and the rest is assigned to the Ubuntu partition(s?). I now want to make them more equal in size, but how can I do that? I'm using Ubuntu 11.04...
1 - Does the Linux operating system decrease the original size of the HDD? At first I was only using Windows 7 Ultimate on my Lenovo G550 4446 laptop, and the D: partition was 140 GB free of 190 GB. After I installed Ubuntu alongside Windows 7, I found the D: partition had become 68 GB free of 122 GB. Why on earth did the size decrease? Is the HDD damaged?
2 - How can I hack a wireless network of type WPA2/WPA or WEP using Linux? I have heard I can do that using a shell konsole, but I can't see a Shell Konsole in my Ubuntu 10.10. Does it have to be installed over the internet, or is it already included in the operating system, and if so, where can I find it?
How can I decrease the amount of video memory on a Debian Squeeze laptop? The amount of video memory is 1 GB and I want to reduce it to 512 MB or 256 MB. How can I do this?
Why is there a default stack size limit in Linux? And why is it so small? (My system is set to 8192 kB.) And why is there a default limit on the stack size when the max memory size and virtual memory size are, by default, unlimited? (Aren't they all ultimately fed from the same place?)
Reason I ask: I want to use recursive functions in my programming a lot more. The problem is that if the language (or implementation) doesn't happen to support tail-call recursion, I can be pretty certain that the first huge problem thrown at my function will kill my program because the stack size limit is quickly reached. Obviously I can change the stack size limit on my own computers, but it doesn't feel great knowing that most of the people who copy and execute my code will probably have overlooked this. Anyway, does anyone know: is this small default stack size limit just one of those historical artifacts, or is there some technical reason for it?
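Part of the answer, hedged: unlike heap allocations, each thread's stack must be one contiguous region of address space, and an unlimited default would let a runaway recursion eat memory silently before failing, so a modest default that makes such bugs fail fast is as much a deliberate choice as a historical artifact. A program that genuinely needs deep recursion can raise its own soft limit toward the hard limit at startup; a minimal C sketch:
Code:
#include <stdio.h>
#include <sys/resource.h>

/* Raise this process's soft stack limit toward the hard limit (often
 * "unlimited" for ordinary users), giving deep recursion more headroom. */
int main(void)
{
    struct rlimit rl;

    getrlimit(RLIMIT_STACK, &rl);
    printf("soft limit: %ld bytes\n", (long)rl.rlim_cur);

    rl.rlim_cur = rl.rlim_max;            /* cannot exceed the hard limit */
    if (setrlimit(RLIMIT_STACK, &rl) != 0)
        perror("setrlimit");
    else
        printf("raised to: %ld bytes\n", (long)rl.rlim_cur);
    return 0;
}
On Linux the main thread's stack grows on demand up to the current soft limit, so raising it early in main() is usually enough for deep recursion in the main thread.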
Can anyone tell me how to get information about the stack the kernel has allocated to a running process? Is there an API function or system call available for this in Ubuntu 8.04?
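There isn't a single dedicated syscall for this, but getrlimit(RLIMIT_STACK) gives the limit from inside the process, and the /proc filesystem exposes what the kernel has actually mapped. A quick sketch, with 1234 standing in for the PID of interest (the /proc/<pid>/limits file exists from kernel 2.6.24, which Ubuntu 8.04 ships):
Code:
grep '\[stack\]' /proc/1234/maps        # address range occupied by the process's main stack
grep -i 'stack' /proc/1234/limits       # the stack size limit applied to that process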
I am trying to find the dynamic heap size and stack size of a running process on RHEL 5.5 and RHEL 6. I read that the 23rd field in /proc/<pid>/stat gives the heap size. Can you elaborate on this? Also, is there any other way to do it?
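A correction worth hedging: as far as I can tell, field 23 of /proc/<pid>/stat is vsize, the process's total virtual memory, not the heap. /proc/<pid>/smaps (present on both RHEL 5.5 and 6) labels the [heap] and [stack] mappings individually, which is usually closer to what is wanted. A one-liner sketch, with 1234 as a placeholder PID:
Code:
# Prints the size of the [heap] and [stack] mappings of the process.
awk '/\[heap\]|\[stack\]/ { region = $NF; getline; print region, $2, $3 }' /proc/1234/smaps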
I'm getting messages like these in my bash console:
Code: STACK size: 98222 [0x7f665dbe4e00 0x7f665db25090]
I'm not quite sure what they mean. So far it looks like they're related to the shell stack limit set by ulimit; however, I've tried changing (increasing) it, and the message still persists.
We recently built some RAC servers (OS: RHEL 5.5), and after the Oracle guys installed their application, the / directory is somehow using almost all of its space. I contacted the Oracle team and they say their RAC installation doesn't create any files in /. This is the df output for the / filesystem:
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/datavg-vol2  498M  382M   62M  88% /
Also, when I checked the file sizes, I found that the following files were taking up the most space:
I don't know what these files are doing there. When I cat'ed them to check, I found them containing data like this:
nf_tre--
stem_dbusd_var_run_t... and some stuff like that. I'm unable to decide whether or not to remove these files. Also, is there any way to find out which files are taking up the most space and whether they can be deleted, in order to free up some space in the / directory? Since we've built 10 RACs, I need to fix this on all 10 servers.
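Those fragments look like pieces of SELinux file-context strings, which hints at leftover temporary or relabelling files, but rather than guess it is safer to let du and find point at whatever is actually big on the root filesystem. A sketch that stays on / so other mount points don't muddy the numbers:
Code:
du -xk --max-depth=1 / | sort -n                         # which top-level directory on / is largest
find / -xdev -type f -size +10240k -exec ls -lh {} \;    # individual files over ~10 MB, / filesystem only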
I am going to use pthread_setaffinity_np to bind a thread to a specific core. My application has two threads. I have used a mutex to assign a distinct id to each thread and then bind each thread to a different core, but it seems the OS assigns both threads to one core. What should I do to bind each thread to its own core?
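For what it's worth, a thread can pin itself without any mutex bookkeeping by building a cpu_set_t containing exactly one core and passing it to pthread_setaffinity_np; if both threads still land on the same core after this succeeds, the affinity call is probably not the culprit. A minimal sketch, assuming the machine has at least two cores:
Code:
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Each thread pins itself to the core whose number is passed as its argument. */
static void *worker(void *arg)
{
    long core = (long)arg;
    cpu_set_t set;
    int rc;

    CPU_ZERO(&set);
    CPU_SET(core, &set);
    rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (rc != 0)
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);

    printf("thread pinned to core %ld, currently on CPU %d\n",
           core, sched_getcpu());
    /* ... real work would go here ... */
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);  /* core 0 */
    pthread_create(&t1, NULL, worker, (void *)1L);  /* core 1 */
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}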
There seems to be quite a craze about reducing Ubuntu boot times to 10 seconds. I never really reached the 10-second mark; for me it was more like 30 seconds from when I selected Ubuntu in GRUB to when I was prompted for my password in GDM. First of all, is this what is meant by boot time? Secondly, I know this is already quite a good boot time, but just for the sake of tinkering, is it possible to reduce the time that elapses between the GRUB menu and GDM?
The track pad on a Macbook Pro 8.3 is hypersensitive and interprets the lightest touch as a click. Even brushing the track pad with the sleeve of my shirt counts as a click. How do I decrease the sensitivity of the track pad?
OK, I have the original install: a 170 GB partition mounted on /.
I need the following: the OS on /, 30 GB; data mounted on /data, 140 GB.
With plain partitions fdisk would have made this trivial, but the box was set up with LVM, so I am learning on the fly. I was able to shrink the logical volume using: lvreduce -L 30G /dev/mapper/server-root
vgdisplay shows: VG Size 169.76 GiB, Alloc PE / Size 9453 / 36.93 GiB, Free PE / Size 34005 / 132.83 GiB
Now, per my DBA, I need that ~130 GB on a separate "partition", and I am not 100% sure of the next step. I am reading up on vgcreate, lvcreate, etc.
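A hedged next step: the free extents already live in the existing volume group, so no vgcreate is needed; lvcreate can carve a new logical volume out of them, and it then just needs a filesystem and a mount point. The sketch below assumes the VG is named "server" (as /dev/mapper/server-root suggests) and ext4 as the filesystem; note also that if resize2fs was not run on the root filesystem before the lvreduce, that mismatch needs sorting out first, because reducing an LV below its filesystem destroys data.
Code:
lvcreate -n data -l 100%FREE server      # or: lvcreate -n data -L 130G server
mkfs -t ext4 /dev/server/data            # put a filesystem on the new LV
mkdir -p /data
mount /dev/server/data /data
echo '/dev/server/data  /data  ext4  defaults  0 2' >> /etc/fstab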
I really don't understand what's happening. I made a 3.5 TB RAID array in Disk Utility, yet it comes out as one 3 TB partition with the other 500 GB left free! Why is that? I thought ext4 could handle huge partition sizes.
I have a server running VirtualBox and several Windows guests. I'm running the box headless and start the Windows machines using
Code:
sudo vboxheadless -s "WinXP Pro SP3"&
via an SSH session from my MacBook Pro in Terminal. I then connect to the virtual machines with MS Remote Desktop from the MacBook Pro. All of this works perfectly, except that I've run out of space on the partition holding the virtual machines and need to create a few more. I have plenty of room on the HDD, but when first installing Ubuntu Server I only partitioned and formatted about 1/4 of the drive. Is it possible to run commands in my SSH session to partition the unused portion of the HDD, format it, and expand my current partition into that space? Or do I have to boot from something like a GParted Live CD and do the partitioning there?
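Partitioning and formatting can be done entirely over SSH; it's growing the existing partition in place that normally needs it unmounted (or LVM underneath), so the simpler route is usually a new partition in the free space, mounted wherever the VM images live. A sketch with placeholder names: /dev/sda for the disk, /dev/sda3 for the new partition, a 50GB start offset, and /srv/vms as the mount point, all of which need checking against the real layout first:
Code:
sudo parted /dev/sda unit GB print free              # see where the free space begins
sudo parted /dev/sda mkpart primary ext2 50GB 100%   # the fs-type here is only a hint; mkfs decides
sudo mkfs.ext4 /dev/sda3
sudo mkdir -p /srv/vms
sudo mount /dev/sda3 /srv/vms                        # then point VirtualBox's machine folder here
An /etc/fstab entry makes the new mount permanent across reboots.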
I tried to install Portable VirtualBox using Wine, and even though I installed it in my /host/ folder (with 19 GB free) it downloaded some massive file onto my Wubi installation (with 60 MB free). Now I am down to 3 MB left on Wubi and I can't find the massive file it downloaded. I tried the Disk Usage Analyzer but nothing came up. Windows is unbootable so I can't use it.
Before this, Ubuntu would constantly decrease the amount of free disk space I had for no apparent reason as well. It would jump from 120 MB one day to 50 MB. I moved my documents to my Windows folders, but the free space only stayed at 100 MB for another day or so before it went down again. apt-get autoclean, localepurge, and deborphan are completely useless; there's something else going on behind the scenes here that I don't know about. I'm using Ubuntu Jaunty.
I am running Ubuntu 9.10 (Karmic Koala) on an Acer Aspire netbook. Since everything in this netbook is underpowered, I can hardly watch movies with a lot going on in them (when a lot happens, the screen freezes for a while, so I miss the most interesting bits). I have tried pretty much every player. The only one that works for me is MPlayer, since it does not require high specs, but it has its own costs: with MPlayer I get an audio sync problem (e.g. I can hear things before they happen). I am now wondering if there is a way to decrease the playback quality. I really don't care about having great quality as long as I can see what's happening (at least on this netbook, anyway).
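MPlayer does have knobs for trading quality for speed; the flags below are standard MPlayer options, though how much they help on this particular netbook is guesswork. -framedrop discards frames the CPU can't decode in time (which also helps keep audio and video in sync), and the lavdopts suboptions make the decoder cut corners:
Code:
mplayer -framedrop -lavdopts lowres=1:fast:skiploopfilter=all movie.avi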
I was wondering why the cached RAM doesn't decrease, when scanning is done and the scanning software is closed, at anywhere near the rate it increases during scanning. When the cached RAM reaches a certain point while scanning, the software crashes and I lose my scans.
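For context, hedged: the kernel keeps recently read data in the page cache and only releases it lazily, when something else asks for the memory, so "cached" staying high after the scanner exits is normal and shouldn't by itself crash anything; a crash at high cache levels more likely points at the scanning software or at genuine memory pressure. The cache can be inspected and, if really wanted, dropped by hand:
Code:
free -m                                              # the "cached" figure is reclaimable on demand
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # forcibly drop the page cache (rarely necessary)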
I've just installed Ubuntu Server 9.04 (after having installed 9.10, having problems with it, and uninstalling it). Mostly, 9.04 is working well so far, but for one nuisance: the font is huge.
Well, okay, not huge, but big. On my other machine, running Ubuntu 9.04 desktop, same size monitor, I have the resolution set to 1440x900 which gives me 46 lines on the CLI (with the window maximized, but not full-screen). On the server machine, however, I'm getting only 25 lines -- and there's not even a window title-bar, menu bar, or panels taking up any of the landscape.
So my question is this: not having a GUI or any of the associated display-management software, how can I set the screen resolution or otherwise make the display font smaller, using only the CLI?
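On a console-only server the effective text size is a product of the framebuffer resolution and the console font, and both can be changed from the CLI. A sketch for 9.04, where GRUB legacy is still the default; the vga mode number is the usual 1024x768 value and should be checked against the monitor:
Code:
sudo dpkg-reconfigure console-setup     # interactive: pick a smaller console font
# Or raise the console resolution by adding a vga= mode to the kernel line
# in /boot/grub/menu.lst (GRUB legacy on 9.04), for example:
#   kernel /vmlinuz-2.6.28-... root=... ro quiet vga=791   # 791 = 1024x768, 16-bit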
=> /boot is using 95.3% of its 129 MB size (129.12 MB total, 113 kB free). So why is it using so much? And how can I increase the size? Ubuntu 10.04 LTS server, 64-bit.
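The usual culprit, hedged: every kernel update leaves another vmlinuz/initrd pair in /boot, and on a 129 MB partition a handful of old kernels is enough to fill it. Removing the superseded kernel packages frees most of the space and is far easier than growing the partition. A sketch; the version string is a placeholder that must be replaced with an actually installed, non-running kernel:
Code:
uname -r                                  # the running kernel: never remove this one
dpkg -l 'linux-image*' | grep ^ii         # list the kernels currently installed
sudo apt-get remove linux-image-2.6.32-XX-server   # XX = an older version number (placeholder)
sudo apt-get autoremove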
Based on the reading I've done over the past 48 hours, I think I'm in serious trouble with my RAID 5 array. I got another 1 TB drive and added it to my other three to increase my space to 3 TB... no problem.
While the array was resyncing (it got to about 40%), I had a power failure, so I'm pretty sure it failed while it was growing the array, not the partition. The next time I booted, mdadm didn't even detect the array. I fiddled around trying to get mdadm to recognize it, but no luck.
I finally got desperate enough to just create the array again. I knew my original settings and had seen some people have success with this method. When creating it, mdadm asked me if I was sure because the disks appeared to belong to an array already, but I said yes. The problem is that it created a clean array, and this is what I'm left with:
Code:
/dev/md0:
        Version : 00.90
  Creation Time : Sun Sep 5 20:01:08 2010
     Raid Level : raid5
     Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
[Code]....
I tried looking for backup superblock locations using e2fsck and every other tool I could find, but nothing worked. I tried testdisk, which said it found my partition on /dev/md0, so I let it create the partition. Now I have a /dev/md0p1, which won't let me mount it either. What's interesting is that gparted reports /dev/md0p1 as the old partition size (1.82 TB), so the data has to still be there, right?
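Before anything else gets written to those disks, it's worth collecting what the superblocks and the filesystem still report; everything below is read-only, and the member device names are placeholders that need adjusting to the real ones:
Code:
mdadm --detail /dev/md0                             # geometry of the (re-created) array
mdadm --examine /dev/sd[bcde]1                      # per-disk superblocks: device order, chunk size, event counts
fsck.ext4 -n /dev/md0p1                             # read-only check of the filesystem testdisk exposed
dumpe2fs /dev/md0p1 | grep -i 'backup superblock'   # surviving backup superblocks, if any
If the re-created array's chunk size or device order differs from the original, the data will look scrambled even though it is still on the disks, which is why comparing the --examine output against the old settings matters.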