Server :: New Open File Size Limit Is Getting Reflected In The Specific User?
May 16, 2011
Last weekend I increased the open file limit (ulimit -n) for the application user ID. I updated the limits.conf file with the necessary entries and restarted both the service and the server. When I check the ulimit value for that user by switching to it from another user, it shows the new value (10240), but if I log in directly with the application ID, the ulimit value shows 1024, which is the default.
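A minimal sketch of the kind of setup being described, assuming the application ID is called appuser (a hypothetical name). pam_limits.so only applies the limits.conf values for PAM services that actually load it, so the su path and the direct-login path can end up with different limits:
Code:
# /etc/security/limits.conf -- assumed entries
appuser  soft  nofile  10240
appuser  hard  nofile  10240

# pam_limits.so must be loaded by the PAM service used to log in,
# e.g. /etc/pam.d/login (console) and /etc/pam.d/sshd (with UsePAM yes):
session  required  pam_limits.so

# verify from a fresh direct login
ulimit -n
If su shows the new value but a direct login does not, comparing the session lines in /etc/pam.d/su, /etc/pam.d/login and /etc/pam.d/sshd is usually the first thing to check.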
I was just testing specifying a file size limit for a user and added the following to /etc/security/limits.conf:
Code:
bob soft fsize 100
This basically should have said not to allow bob to create any file greater than 100 KB in size.
But the interesting thing is, if bob already has any file greater than 100 KB in size, it doesn't even allow him to log into the system, either from the console or over SSH. Also, nothing is logged. How do I configure it so that bob can log in to the system even though he has files greater than 100 KB (but is still not allowed to create files greater than 100 KB)?
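For reference, fsize in limits.conf is given in KB, and the effective values can be checked from a test session before relying on them; a hedged sketch:
Code:
# /etc/security/limits.conf -- the soft line alone lets bob raise the limit
# himself; adding the hard line makes it non-overridable:
bob  soft  fsize  100
bob  hard  fsize  100

# from a test session as bob, check what actually got applied
ulimit -f        # soft limit
ulimit -H -f     # hard limit
Whether the login failure comes from something in the login sequence being killed while writing past the limit (history files, session logs and the like are common suspects) is an assumption here; watching a test login with strace is one way to confirm which write actually trips it.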
I have a problem with the open file limit. The software I'm installing claims "Open file limit (ulimit -H -n) too low (1014), need at least 6311", but when I check the limit I get the following:
Code:
# uname -a
Linux server 2.6.32-5-amd64 #1 SMP Mon Mar 7 21:35:22 UTC 2011 x86_64 GNU/Linux
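A minimal sketch of how the hard limit is usually raised, assuming the software runs under a user named appuser (hypothetical) and that PAM applies limits.conf at login:
Code:
# /etc/security/limits.conf
appuser  soft  nofile  8192
appuser  hard  nofile  8192

# then open a new login session as that user and verify
ulimit -H -n
ulimit -n
For daemons started from init scripts rather than a login session, limits.conf may not apply at all; in that case a ulimit -n line in the start script is the usual workaround.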
I'd like to limit login attempts for a specific user. I've found information in the man pages ([URL]), but I'm not sure if this '@' is purposely there, so would that be correct?
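For what it's worth, in /etc/security/limits.conf the '@' prefix means the entry applies to a group rather than a single user, so whether to keep it depends on what the line should match. A hedged sketch using maxlogins and the hypothetical names bob and students:
Code:
# /etc/security/limits.conf
# a single user: no '@'
bob        hard  maxlogins  3
# a whole group: '@' prefix
@students  hard  maxlogins  3
Note that maxlogins limits concurrent logins; limiting failed login attempts is a different mechanism (pam_tally, for example), so which applies depends on what "login attempts" means here.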
I have a VPS server with 512 MB of memory. php.ini is set so the script memory limit is 16 MB. However, I have noticed in my top report instances like the following:
The bold number of 6.4 is the % of server memory this process is using. 6.4% of 512 MB is about 32 MB of memory, so it appears that this isn't being limited by php.ini. Am I correct? This leads to the next question: is there some way to limit the amount of memory a single suphp process can use? (Basically, something like the setting in php.ini which limits suphp processes in the same way.)
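The memory_limit in php.ini constrains the PHP script itself rather than the whole interpreter process, so the top numbers can legitimately be higher. One hedged option (an assumption, not something from the original post) is Apache's RLimitMEM directive, which puts a memory limit on processes forked by Apache children and may therefore cover suphp-spawned interpreters:
Code:
# in the Apache configuration (e.g. inside a <VirtualHost> block)
# soft limit 32 MB, hard limit 48 MB, values in bytes
RLimitMEM 33554432 50331648
An alternative is a per-user address-space limit in /etc/security/limits.conf, but whether PAM limits reach the web server's children depends on how Apache is started.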
I have 2 directories in my home folder that I would like to set a size limit on. The directories are ~/backup and ~/temp. Is there an easy way to limit the size of a directory without having to make partitions?
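One approach that avoids repartitioning is to create a fixed-size filesystem image and mount it on each directory; a hedged sketch (the paths and the 1 GB size are assumptions):
Code:
# create a 1 GB image file and put a filesystem on it
dd if=/dev/zero of=~/backup.img bs=1M count=1024
mkfs.ext3 -F ~/backup.img

# mount it over the directory (needs root, or an fstab entry with the user option)
sudo mount -o loop ~/backup.img ~/backup
Quotas are the other usual answer, but standard disk quotas are per-user per-filesystem rather than per-directory, so a loop-mounted image is the simpler fit for exactly two directories.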
I have a large file (deflated size: 602191947) that is not saved in my Ubuntu One account. On syncing, the file is uploaded and eventually reaches 602191947, and then nothing more happens to this file, but syncing the following files in the queue goes on successfully. I have tried a manual upload with the same result. The file is still marked as 'uploading' even after several tries, logins/logouts, and reboots. So I was just wondering whether there is a file size limit; I can't seem to find information regarding this.
A possibly preposterous question. I am aware that you can designate a swap file or swap partition on your hard drive that Linux uses as "memory". Suggested sizes for the swap file that I've seen range up to about 1024 MB. Is there a limit to the swap file size that you can set? Basically I am running a Perl script that processes a massive file (DNA sequence data) and requires around 48 GB of memory to run, maybe a bit less. So, would it be possible to set a swap file to a massive, ridiculous size (~60 GB or whatever) and successfully run such a script on a desktop? Yes, I am aware that it would massively slow down the process. The thing is, if the Perl script normally completes in about half an hour and I can get it working on a desktop, I don't mind if it takes days or weeks to complete. I really don't. That's because it takes days or weeks to get access to a computer with the required grunt to do it. So, is this a stupid idea? Is it even possible? If so, given a Perl script that normally completes in half an hour on a 48 GB system, would it take days? Weeks? Decades?
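For the mechanics, creating an oversized swap file is straightforward; a hedged sketch, with the 60 GB size and the /swapfile path as assumptions:
Code:
# create a 60 GB swap file (this step alone takes a while)
sudo dd if=/dev/zero of=/swapfile bs=1M count=61440
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# make it permanent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
Whether the run finishes in days or effectively never depends on the script's access pattern: mostly sequential passes over the data can survive heavy swapping, while random access across tens of gigabytes tends to thrash.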
I've noticed that for files longer than about 8000 lines, gedit has problems opening the file. Was gedit not designed for long files, or is there another problem? The same thing also happens with complicated HTML files. So I hope there is a way to fix this.
I have a self-made application running on a small embedded Linux device (which should not matter) that uses syslog to output some error, warning or debug logs. There is a "better" syslog daemon installed, called syslog-ng, which has some more features, but I miss a very important one: how to limit the size of the log files to some fixed number of megabytes. I was able to create rotating log files with the configuration in syslog-ng.conf:
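(The syslog-ng.conf snippet referred to above isn't included here.) For a size cap rather than time-based rotation, one commonly suggested route is to let logrotate handle the files syslog-ng writes, assuming logrotate is available on the device; a hedged sketch with the file name, path and sizes as assumptions:
Code:
# /etc/logrotate.d/myapp   (hypothetical file)
/var/log/myapp.log {
    size 2M          # rotate once the file exceeds 2 MB
    rotate 3         # keep at most 3 old copies
    compress
    missingok
    notifempty
    postrotate
        /etc/init.d/syslog-ng reload > /dev/null 2>&1 || true
    endscript
}
Since logrotate normally runs from a daily cron job, the cap is only enforced at that interval; running it more often from cron keeps the files closer to the limit.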
Does Recordmydesktop have a file size limit? I'm considering using the zero-compression setting to keep CPU usage down, but I don't want to run up against a 2 GB or 4 GB file size limit. While I know some filesystems impose this limit, most screen recorders I've used have a 2 GB or 4 GB limit when recording regardless of the filesystem. Is this an issue with Recordmydesktop?
I'm trying to copy a 7.8 GB tar.gz file to an external hard drive via the command line. It gets to an even 4 GB and stops, and gives an error that says "file size limit exceeded". I edited /etc/security/limits.conf to look like "root hard fsize 10024000", but that didn't do anything at all. Yes, I am copying this as root.
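One thing worth checking before touching limits.conf at all: external drives are very often formatted FAT32, which caps individual files at 4 GB regardless of any ulimit. A quick hedged check (the mount point is an assumption):
Code:
# show the filesystem type of the external drive
df -T /media/external

# or look at how it is mounted
mount | grep /media/external
If it reports vfat, the fix is to reformat the drive (e.g. as ext3 or NTFS) or split the archive into sub-4 GB pieces.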
Using setrlimit I am setting the core file size to RLIM_INFINITY, but still the core file is not being generated, although /var/log/messages says a core is being generated.
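A few things besides RLIMIT_CORE decide whether a core actually lands on disk; a hedged checklist of the usual suspects (the paths are standard, but which one applies here is an assumption):
Code:
# what the shell/session actually allows
ulimit -c

# where the kernel writes cores (a leading pipe sends them to a helper instead of a file)
cat /proc/sys/kernel/core_pattern

# set-uid programs and processes that changed credentials need this to dump
cat /proc/sys/fs/suid_dumpable

# the process also needs write permission in its current working directory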
I have a command line server that logs to stdout, which I start along the lines of ./server > log.txt
What I want to do is limit the size of log.txt, without modifying the server.
I am assuming there must already be some kind of tool that lets me do this, something where I can pass in my server, the output file and a size limit? If so, can anyone enlighten me?
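One hedged option, assuming Apache's rotatelogs utility (or an equivalent such as multilog, or logrotate with copytruncate) is available on the box: pipe the server's stdout through it and let it switch files at a size threshold:
Code:
# start a new log file each time the current one reaches 10 MB
./server 2>&1 | rotatelogs /var/log/myserver/log 10M
This caps each individual file rather than the total, so old files still need an occasional cleanup (e.g. a cron job that deletes all but the newest few).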
I downloaded pdftk 1.41 and installed it on Red Hat Enterprise Linux 4, 32-bit. I am primarily using this utility to uncompress PDF files to remove the 'Flate' compression. It works well with small PDFs. However, when I use it to uncompress PDF files of 35 MB or more, the uncompressed output file grows up to 2 GB and then the uncompression fails with the error "File size limit exceeded". I can concatenate two files with an output file of up to 3 GB in size, so 2 GB is not a limitation at the Linux level.
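Since files past 2 GB can be written elsewhere on the box, the 2 GB wall is plausibly the application's own large-file support: 32-bit binaries built without it hit an EFBIG/"File size limit exceeded" error at exactly 2 GiB. A hedged check that the per-process limit is not the culprit:
Code:
# confirm the per-process file size limit for the user running pdftk
ulimit -f
ulimit -H -f
If both report unlimited, the cap is likely pdftk's own 2 GiB limit on a 32-bit build rather than anything configurable at the OS level.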
Does anyone know of a way of limiting a print-job size from samba?
I know how to limit a print job size from CUPS, and how to require a certain amount of free space before accepting a job. I've even dug up how to require a certain amount of free space for Samba to accept a print job, but I can't see how to limit Samba to only certain-sized jobs.
Someone tried to print a >1 GB file to my print server this morning, causing me to have a less relaxed Monday than I had hoped. Because it ran out of space before spooling, it was never limited by CUPS. Because I had to get rid of it ASAP so people could get work done, I have no idea whose it was or where it came from. Scouring logs didn't give me any good leads either.
I have been trying to increase the message_size_limit on my Debian 2.4.26 box with Postfix 2.3.8. For example, I set message_size_limit and mailbox_size_limit to 104857600 (100 MB) and restart Postfix. Running postconf -n confirms that it has changed. However, when I send a test message it kicks it back saying the message size limit is 16777216 (16 MB, which is, incidentally, the default value of the berkeley_db_create_buffer_size parameter).
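For reference, the way the change is usually made and verified; if postconf -n already shows the new values, the 16 MB rejection may be coming from somewhere else in the chain (another Postfix instance, a content filter, or the remote server), which is an assumption worth testing rather than a diagnosis:
Code:
# set both limits to 100 MB
postconf -e 'message_size_limit = 104857600'
postconf -e 'mailbox_size_limit = 104857600'

# reload, then confirm what the running instance reports
postfix reload
postconf message_size_limit mailbox_size_limit
The exact wording of the bounce, and which host generated it, usually pins down which hop is enforcing the 16 MB limit.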
I'm trying to limit access to port 8443 on our server to 2 specific IP addresses. For some reason, access is still being allowed even though I drop all packets that aren't from the named IP addresses. The default policy is ACCEPT on the INPUT chain and this is how we want to keep it for various reasons I won't get into here. Here's the output from iptables -vnL:
[Code]...
Note the actual IP we are using is masked here with 123.123.123.123. Until I can get everything working properly, we're only allowing access from 1 IP instead of 2. We can add the other one once it all works right. I haven't worked with iptables very much. So I'm quite confused about why packets matching the DROP criteria are still being allowed.
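Without the actual ruleset it's guesswork, but the usual culprits are rule order (an earlier ACCEPT, often a broad or ESTABLISHED-state rule, matches before the DROP) or the rules living in a chain the traffic never traverses. A hedged sketch of one way to express the intent while keeping INPUT's default ACCEPT policy (the first address is the masked placeholder from the post, the second is hypothetical):
Code:
# accept the allowed sources first, then drop everything else to 8443
iptables -I INPUT 1 -p tcp --dport 8443 -s 123.123.123.123 -j ACCEPT
iptables -I INPUT 2 -p tcp --dport 8443 -s 123.123.123.124 -j ACCEPT
iptables -I INPUT 3 -p tcp --dport 8443 -j DROP

# check hit counters to see which rule test connections actually match
iptables -vnL INPUT --line-numbers
Connections already established before the rules were loaded can also keep matching a state rule, so retesting from a fresh connection is worth doing after any change.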
Fedora 12, gcc 4.4.1. I am doing some programming, and my program gave me a stack dump. However, there is no core file for me to examine.
So I did:
Code:
ulimit -c unlimited
and got this error message:
Code:
bash: ulimit: core file size: cannot modify limit: Operation not permitted
I also tried setting ulimit to 50000 and still got the same error. The results of ulimit -a:
Code:
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
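That error means the hard limit for core files is 0, and an unprivileged shell can only lower limits, not raise them past the hard limit. A hedged way to raise it, assuming the login user is devuser (hypothetical) and PAM applies limits.conf:
Code:
# /etc/security/limits.conf
devuser  soft  core  unlimited
devuser  hard  core  unlimited

# then open a brand-new login session and retry
ulimit -c unlimited
ulimit -c
Alternatively, running ulimit -c unlimited in a root shell and launching the program from there works for a one-off debugging session, since root may raise hard limits.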
I have some domains on a VPS server. Typical account memory usage for all domains runs at 50% of available, but I have a problem. One domain is causing me trouble because traffic will intermittently spike on that domain, causing so many requests within one minute that I exceed the memory allocation for my entire VPS package. Apache is then killed by the virtualization software and must then be restarted.
A sample snippet from top right before the server went down would look like this:
All of that memory usage adds up. I would like to "throttle" the number of processes that user/domain can run. I think this would be a quick and easy way to keep the domain from taking down my entire VPS. My understanding is that I could do this with the /etc/security/limits.conf file.
Is that correct?
I have never done this before. Do I want to set a hard or soft limit? I think if I wanted to limit the number of processes for "coldclim" to 15 I would add a line to limits.conf like this:
Code:
Assuming that is correct, can anyone tell me how the website would respond once it reached its limit? Would visitor queries become sluggish, or would the website not come up for them at all?
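The limits.conf line being described (its exact content was not included above) would normally look something like the sketch below, using the user name coldclim and the 15-process cap from the post; a hard limit is the kind the user cannot raise:
Code:
# /etc/security/limits.conf
coldclim  hard  nproc  15
As for behaviour at the limit: once 15 processes exist for that user, fork() simply fails, so no new handlers can be spawned for that site; visitors typically see errors or timeouts rather than a merely sluggish site, while the other domains keep working. (That is the general nproc behaviour, not something specific to this VPS.)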
I ran into a user today who indicated that their company only allows them to log in through a terminal session once (no multiple logins). On a second try, their login window terminates. They are using PuTTY. Is this being accomplished through PAM or sshd (or some other method)?
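One common way to get exactly that behaviour is pam_limits with maxlogins, which sshd applies when UsePAM is enabled; a hedged sketch with hypothetical names, not necessarily what that company is doing:
Code:
# /etc/security/limits.conf
# one concurrent login for a single user ...
someuser  hard  maxlogins  1
# ... or for everyone in a group
@remote   hard  maxlogins  1
sshd's own MaxSessions setting limits sessions per network connection rather than logins per user, so the once-per-user behaviour points more toward PAM.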
I'm researching symbolic links used with Samba/CIFS. I'd like a user on a MS Windows OS to be able to see my shared folder on CentOS 5 and the symbolic links inside that folder. Well, it works, but the user sees a file size bigger than the real file; apparently CIFS takes the size of the symbolic link (approx. 32 KB) and adds it to the size of the file.
Example 1: a 100 KB file in the shared folder; the MS Windows user sees 100 KB.
Example 2: a 100 KB file reached through a symbolic link inside the shared folder; the MS Windows user sees 132 KB (symlink + size of file).
Is there a way for the user to see only the size of the file, and not the file plus the symbolic link?
Recently I rented a Xen VPS intended to run a PPTPD VPN server for me and my friends, so we can bypass the Great Firewall in China and get back on ....., Facebook and stuff. I have already set up the server and I can connect to it without any problem, but I still want to do some further configuration on the server:
1. I want to limit the bandwidth to 400 KB/s per connection. 2. I also want to limit the max connections per user account.
I have some thoughts on the 2nd requirement. In the user configuration file /etc/ppp/chap-secrets you can specify the range of IPs a user can get; does that limit the max connections per user account? Or can they connect anyway, and just every now and then a box pops up saying there is a conflict in IP address?
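For the first requirement, the usual tool is tc on the per-client ppp interface, typically hooked into /etc/ppp/ip-up (or ip-up.local / ip-up.d/, depending on the distribution) so it runs as each connection comes up. A hedged sketch, with 400 KB/s approximated as 3200 kbit/s and the burst/latency numbers as assumptions:
Code:
#!/bin/sh
# hooked from the ip-up script -- pppd passes the interface name as $1
IFACE="$1"

# cap traffic going out this client's interface at ~400 KB/s
tc qdisc add dev "$IFACE" root tbf rate 3200kbit burst 16kb latency 50ms
This only shapes traffic heading toward the client over that interface; limiting the client's upload direction needs policing or shaping on the uplink side instead.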
I've found that changes configured in my hosts file are not being reflected in my web browsers, but they are in the shell. For example, if I put the following in my /etc/hosts file:
Code:
123.456.789.000 server server.dom.com
I get a successful response from running ping in the shell
[Code]...
I'm using Ubuntu 9.10. I have to make regular changes to my hosts file to test services, so this is quite a pain.
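A few hedged things to check, since ping resolving correctly shows /etc/hosts itself is fine and the name service switch is honouring it; browsers add their own layers on top:
Code:
# confirm what the name service switch returns (should match /etc/hosts)
getent hosts server.dom.com

# check that 'files' comes before 'dns' on the hosts line
grep ^hosts: /etc/nsswitch.conf
If those look right, the usual remaining suspects are the browser's internal DNS cache (restarting the browser clears it, and in Firefox lowering network.dnsCacheExpiration in about:config shortens it) and any proxy settings, since a proxy resolves names on its own host and never consults the local hosts file.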