CentOS 5 Server :: Set Ulimit Open Files Permanently?
May 4, 2011
Newbie here! Our website CMS is a Tomcat webapp, which runs on a CentOS 5.6 (Final) release. The webapp needs a permanent increase of the max open files value. Currently, the site "crashes" frequently due to the continuous "Too many open files" exceptions that inevitably occur when traffic increases.
This is what I've done to try to increase the max open files value: code...
But still, when I log in (as any user, incl. root), ulimit -n shows 1024, not 16384. Am I missing something here? And, more importantly: will Tomcat be able to open more than 1024 files after the changes mentioned above?
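Since the exact changes aren't shown above, here is a rough sketch of the pieces that usually have to line up on CentOS 5 for a higher nofile limit to take effect; the user name and value are illustrative:
Code:
# /etc/security/limits.conf -- soft and hard open-file limits for the tomcat user
tomcat   soft   nofile   16384
tomcat   hard   nofile   16384

# /etc/pam.d/sshd (or system-auth) needs pam_limits for the limits to apply at login
session  required  pam_limits.so
Note that a daemon launched from an init script does not go through PAM, so the Tomcat startup script may also need its own ulimit -n call before starting the JVM.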
I'm writing this because I had been using Open MPI's mpirun on my 12-core workstation quite happily since day one, when I set up the system a few months ago. Yesterday, when I tried to run a big job under mpirun, the job crashed rather quickly; the error message was something like "mpirun process exited blah blah with signal 11 (Segmentation fault)". Interestingly (or annoyingly), a job that required less memory ran okay.
Since I had never had this problem before, I thought it was a hardware failure. I called my IT guy to explain the problem, and he was kind enough to suggest putting the line
ulimit -s 40960
in my .bashrc. And it works! But I have no clue why mpirun misbehaved all of a sudden, or why that ulimit setting solves the problem completely. I would like to learn from this incident.
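For context, the 40960 here is in kilobytes, i.e. a 40MB stack instead of the distribution default (typically 8192 or 10240 kB); a quick way to check and raise it for a shell and everything launched from it:
Code:
# show the current soft stack limit in kB
ulimit -s

# raise it for this shell; mpirun and the processes it forks inherit the new limit
ulimit -s 40960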
We are using a Hadoop server and we don't want the server memory to be overloaded. So I set the ulimit for max memory size, and for some time it was working fine, but memory was overloading again before the end of the day. I then came to know about soft and hard limits and set a hard limit for the maximum memory size in /etc/security/limits.conf. But the limits were not shown in the ulimit -a output, so I restarted the server; then the limit was shown in the ulimit output. But memory is still getting overloaded: the memory used is more than the limit that was set. Can anyone kindly advise me on this issue?
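For reference, this is the shape of the limits.conf entry being described; the user name and values are placeholders, and these rlimits are per login process, not a machine-wide cap:
Code:
# /etc/security/limits.conf -- illustrative hard limits, values in KB
# 'rss' corresponds to ulimit -m (max memory size); recent kernels do not
# actually enforce RLIMIT_RSS, which may explain why memory still grows
hadoop   hard   rss   4194304
# 'as' corresponds to ulimit -v (virtual address space) and is enforced
hadoop   hard   as    8388608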
I installed a new media drive that I will be using to share with a Windows 7 laptop using Samba. After days of frustration, I figured out that the sharing is not working because I have to set the permissions for the NTFS drive when it is mounted. Once it is mounted, using chmod, chown or right-clicking in Nautilus does not work. As a result, when I try to access the files from my Windows laptop, it keeps saying that it can't find the share (due to the permission issue). How do I change the fstab to automatically mount the NTFS drive and have completely open permissions (read/write/execute for everyone)?
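A rough sketch of the kind of fstab line usually used for this; the device name, mount point and uid/gid are assumptions. With ntfs-3g the permissions come entirely from the mount options, which is why chmod and chown have no effect afterwards:
Code:
# /etc/fstab -- mount the NTFS media drive readable/writable by everyone
/dev/sdb1   /media/media   ntfs-3g   defaults,uid=1000,gid=1000,umask=000   0   0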
I have been trying to install CentOS 5.3 on a remote HDD (via iSCSI with an Intel NIC and remote iSCSI boot). If I install CentOS with the default partitioning (LVM, with just /, swap, and /boot partitions), everything installs fine. However, if I install with custom partitions, particularly putting /var and /usr on their own partitions, I get all sorts of errors during reboots. It tells me that the system cannot touch a couple of files (in /var and /usr) because they do not exist. However, after boot, I can see these files and they do exist. The following are the files I have seen so far being reported as missing at boot:
I have just finished an install of Cent 5.5. I loaded up a trunk for a software application I want to compile on cent, but I cannot read the .cpp files. The issue seems to be with the file extension. If I change the extension to .txt, they open just fine. My .FOR, .FPP, and .DAT files display fine, but the .cpp and .h give me an error, "Couldn't display...". What is the problem here that won't let me open these files in gedit?
I have been trying to complete the following project:
1) Configure an FTP server where we can upload and download files.
2) The server must start at 9 PM and stop at 9 AM automatically.
Although the first task was easy, I have no idea how to accomplish the second task (not to mention I'm a new user).
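One common way to handle the start/stop schedule is a pair of root cron entries; a sketch, assuming vsftpd is the FTP daemon in use:
Code:
# /etc/crontab -- start the FTP service at 21:00 and stop it at 09:00 every day
0 21 * * * root /sbin/service vsftpd start
0 9  * * * root /sbin/service vsftpd stop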
I have virtual servers on a virtual hard disk. After a system crash, when I try to access the servers it gives the error: failed to start virtual machine fileserver, medium /rootfiles/vdi/fileserver.vdi is not accessible, VD: error VERR_MEDIA_NOT_RECOGNIZED opening image file /rootfiles/vdi/fileserver.vdi (VERR_MEDIA_NOT_RECOGNIZED), result code: NS_ERROR_FAILURE. Also, our company network has gone down and no internet is available, so if I want to install DHCP I need the DHCP RPM package for Fedora 12 (kernel 2.6.31.9-174.fc12.x86_64).
I can go to Places > Network in the GNOME menu and pick the share I have on my other machine (I have read-write access since the account has the same user name). I can browse the files on the server using Nautilus; however, when I try to open OpenOffice files it fails and won't open them. I have to manually mount the file system as CIFS before I can open OpenOffice files. Does anybody know why this is happening?
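For comparison, the manual CIFS mount being described looks roughly like this; the server, share, mount point and uid are placeholders:
Code:
# mount the share through the kernel CIFS client instead of the GNOME/gvfs layer,
# so OpenOffice sees an ordinary local path
mount -t cifs //server/share /mnt/share -o username=myuser,uid=1000,gid=1000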
Can syslog be used to "watch" log files from other software? I would like to get an entry in /var/log/messages if a squid log file is changed or something is added to it.
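As far as I know, plain sysklogd can't follow arbitrary files, but if rsyslog is installed its imfile module can re-inject new lines from a text file into syslog; a sketch, with the squid log path given as an assumption:
Code:
# /etc/rsyslog.conf -- follow squid's access log and log new lines via syslog
$ModLoad imfile
$InputFileName /var/log/squid/access.log
$InputFileTag squid-access:
$InputFileStateFile stat-squid-access
$InputFileFacility local5
$InputRunFileMonitor

# write those lines to /var/log/messages
local5.*    /var/log/messages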
I'm a Java developer who must use the official JDK distribution. We tried using the open version and it gave us problems. We run the same Java in DEV as we do in PRO.
OpenOffice INSISTS -- CANNOT LIVE WITHOUT -- the openjdk... EVERY TIME I try to update, it wants to install that package!
Is there a way that I can block the system from installing a package? Maybe I could just tell people to do --skip-broken with all their upgrade commands, because I disabled that package somehow? Anyone know how to do this?
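One common way to stop yum from ever pulling a package in is an exclude line in yum.conf; a minimal sketch (the glob assumes the packages in question are named java-*-openjdk*):
Code:
# /etc/yum.conf -- never install or update OpenJDK, even as a dependency
[main]
exclude=java-*-openjdk*
With that in place, an update that really does require the excluded package will fail with a dependency error, which is where --skip-broken would come in.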
Just installed a new CentOS server with the Interworx control panel. Everything is fine, BUT I can't add new IP addresses because there is no network device. When I do 'ifconfig' I get the following:
[root@cobra ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:1C:C0:A7:25:9F
          inet addr:  Bcast:  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
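The eth0 interface is present but shows no address, which on CentOS usually comes down to the interface config script; for reference, a sketch of what a static ifcfg-eth0 typically looks like (the addresses are placeholders):
Code:
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- bring eth0 up at boot with a static IP
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0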
In my office we are using a Red Hat server and 20 Windows client machines. Sometimes, while viewing the files stored on the server, we are not able to view them. Yesterday I saw a problem where all the files were showing but I was not able to open a single file; after restarting the computer I was able to open the files again.
I installed a distro based on CentOS 5.5 (the FreePBX distro, FYI). It used an automated kickstart script to create md RAID1 arrays out of all the hard drives connected to the machine. Well, I installed from a thumb drive, which the script interpreted as a hard drive and thus included in the arrays. So I ended up with three md arrays (boot, swap, data) that included the thumb drive. Even better, it used the thumb drive for the GRUB boot, so I couldn't start up without it. I was able to mark the USB drive as 'failed' and remove it from each array, and even change GRUB around to boot without the USB drive, but now each of the arrays is marked as degraded:
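A sketch of how a RAID1 set is usually shrunk back to its remaining members after a device has been failed and removed; the device name and member count here are assumptions about this setup:
Code:
# tell md the mirror now has only two members, so it stops counting one as missing
mdadm --grow /dev/md0 --raid-devices=2

# verify the array is no longer reported as degraded
mdadm --detail /dev/md0
cat /proc/mdstat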
So after compiling the CGI script, I put it in the folder /var/www/cgi-script/
I try to access it via my web browser, and in the Apache error log I get this: "/var/www/cgi-bin/mapserv": error while loading shared libraries: libclntsh.so.11.1: cannot open shared object file: No such file or directory
In my httpd.conf, I just added the following in order to provide the shared libraries for my CGI script:
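The httpd.conf snippet isn't reproduced above, but for reference the two usual ways to make an Oracle client library visible to an Apache-spawned CGI are the init-script environment or the dynamic linker configuration; a sketch, with the Oracle client path being an assumption:
Code:
# /etc/sysconfig/httpd -- sourced by the CentOS httpd init script at startup
export LD_LIBRARY_PATH=/usr/lib/oracle/11.1/client64/lib:$LD_LIBRARY_PATH

# or system-wide: put the library directory in /etc/ld.so.conf.d/oracle.conf
# and run ldconfig afterwards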
I have tftp-server running on CentOS 5. Clients that are on the same subnet as the server are able to get and put without problems. I have a client across the internet that is having trouble getting files from my tftp server. A tcpdump reveals that the client is requesting the same file over and over again. In /var/log/messages, I see the following error repeated over and over until the client finally gives up.
localhost in.tftpd[12727]: tftpd: read: No route to host
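"No route to host" on the data transfer usually points at a firewall dropping the reply traffic somewhere between the two ends; a sketch of the pieces typically checked on a CentOS 5 TFTP server (rule placement is illustrative):
Code:
# allow incoming TFTP requests on UDP port 69
iptables -A INPUT -p udp --dport 69 -j ACCEPT

# load the TFTP connection-tracking helper so replies sent from the server's
# ephemeral port are associated with the original request
modprobe ip_conntrack_tftp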
I have a weird performance issue with a CentOS 5 machine running an NFS server and a RH8 client. I think the fact that it is a RH8 client should be downplayed; it is just that with the RH8 client the performance degradation is clearer. See the test details below. The OS on the server is CentOS 5 x86_64, kernel 2.6.18-92.1.22.el5.
There is a 1Gb connection between the machines, and the file used to test over NFS is a 1GB file. First of all I wanted to measure how the network alone performs while using NFS, so on the server side I ran a "cat" of the 1GB file to /dev/null. Please note that the disk read speed is about 98MB/s. At this point the file system has the 1GB file cached in memory. On the client side a "cat" of the same file gives me a speed of about 113MB/s. It seems then that the bottleneck in this instance is the network, and it is very close to nominal speed, so the network performance is really good. (BTW, I know that the server served that file from cache because vmstat and iostat show no disk activity.)
The second test is reading from disk with no caching involved. On the server I flushed the 1GB file from memory, for instance by reading another 5GB file, and then I repeated the same thing as above on the client (a cat of the 1GB file). Now the server has to go to disk (vmstat and iostat show the disk activity). However, the performance now is about 20MB/s; I was expecting something closer to 90MB/s, since the read speed on the server in the first test was 98MB/s.
This second test was repeated for ext2, ext3, and xfs with no significant differences. A similar test using a RH8 NFS server and client gets me close to 60MB/s for a 1GB file not cached by the file system on the server. Since network speed and disk read speed are not the bottlenecks, what or where is the limiting factor?
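For what it's worth, this is roughly the test sequence described above (the paths are placeholders):
Code:
# on the server: read the file locally; the first run measures the disk (~98MB/s),
# afterwards the 1GB file sits in the page cache
cat /export/testfile > /dev/null

# on the client: read the same file over NFS while it is cached on the server
# (~113MB/s, i.e. close to wire speed on the 1Gb link)
time cat /mnt/nfs/testfile > /dev/null

# on the server: evict the file from the cache by reading a larger file, then
# repeat the client read; now the server has to go to disk (~20MB/s observed)
cat /export/bigfile5g > /dev/null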
I've set up a LAMP server for testing; it is up and running on CentOS 5.5.
I am now trying to set up a vsftpd server where local users can upload files to their home directory, so that Apache can serve web pages straight from the home directories of system users, giving users the ability to run their own web sites hosted off the main server [tutorial here: [url]
So far I have been able to serve/display index.html files from the users' home directories [url], but I can't upload files to any user home directory; every time I try to upload a file with FileZilla I get this error message: 553 Could not create file. Critical file transfer error
I have searched online for problems similar to mine and so far I've tried a lot of the solutions, but none seem to work. I'm confused and don't know where I went wrong. I put the users in a group called ftpusers, and here are the permissions on the home directories of the users (test, ftpuser & testftp). Have a look and tell me where I went wrong :(
Also, the root directory the web pages are served from is called public_html; here are the permissions.
Here is my vsftpd.conf file; can someone check it to see if I made any errors in there:
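The permission listings and the posted vsftpd.conf aren't reproduced above, but for reference these are the vsftpd.conf directives that usually decide whether a local user may upload at all (a sketch, not the poster's actual file):
Code:
# vsftpd.conf -- allow local users to log in and write to their home directories
local_enable=YES
write_enable=YES
local_umask=022
chroot_local_user=YES
# on CentOS, SELinux can also cause 553 errors on home-directory writes;
# the ftp_home_dir boolean (setsebool -P ftp_home_dir on) may need enabling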
When I try to delete a file in the host directory (and sub-directories), I see the prompt, 'Cannot move file to trash, do you want to delete immediately?'
I googled this issue and saw some solutions that require editing fstab, but I'm not sure if that's the right approach in Wubi, and not so sure what edits I would make in fstab anyway.
I would like to set both user and group permissions permanently to 'rwx' (read-write-execute). I would like these rwx settings for all future files and folders.
I tried umask 002, chmod, etc., but they don't set it for future files.
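For what it's worth, a umask can only remove bits from the mode an application asks for, and most programs create regular files as 666, which is why new files never come out with the execute bit no matter what umask is set:
Code:
# ~/.bashrc -- give the group the same rights as the owner on new files/dirs
umask 002

touch newfile    # created as 664: apps request 666, the umask removes write for others
mkdir newdir     # created as 775: directories are requested as 777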
I edited fstab so that my Windows disk partition is automatically mounted when I log on. However, when I delete a file from that partition, I am told that the item(s) cannot be moved to trash - I can only permanently delete files from the Windows partition. Here is how I configured it in fstab:
Code: /dev/sda1 /media/Vista ntfs nls=iso8859-1,umask=000 0 0
I suspect I misconfigured the options. Can anyone see an issue?
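For comparison, a commonly suggested variant is to mount through ntfs-3g and give ownership of the mount to the desktop user, which lets the desktop create its .Trash-<uid> folder at the top of the partition; the uid/gid of 1000 is an assumption about the first user account:
Code:
# /etc/fstab -- NTFS partition owned by the desktop user, readable/writable by all
/dev/sda1   /media/Vista   ntfs-3g   defaults,uid=1000,gid=1000,umask=000   0   0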
Now I want to disable my SSH server "permanently", meaning it won't run unless I start it manually after I log in; that is, it is disabled at boot time by default. I have asked a similar question before, but I still have some confusion. Say that right now the SSH server is running. My system is Ubuntu 10.04. code...
When I try, the shell gives me a warning along the lines of "... do not match LSB Default-Start values" and "the disable|enable API is not stable and might change in the future". What does this mean? And it still doesn't disable the server "permanently" either. What on earth should I do to solve this?
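On Ubuntu 10.04 the SSH daemon is started by an Upstart job rather than a classic SysV init script, which may be why the LSB-style warnings appear and the disable doesn't stick; a sketch of the usual workaround, assuming the job file is /etc/init/ssh.conf:
Code:
# edit /etc/init/ssh.conf and comment out the "start on ..." stanza, e.g.:
#   #start on filesystem
# the job will then never be started automatically at boot

# start and stop it by hand whenever needed
sudo start ssh
sudo stop ssh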