Does anyone know of a way to limit print-job size from Samba?
I know how to limit print-job size from CUPS, and how to require a certain amount of free space before accepting a job. I've even dug up how to require a certain amount of free space before Samba will accept a print job, but I can't see how to limit Samba to jobs below a certain size.
Someone tried to print a >1 GB file to my print server this morning, causing me to have a less relaxed Monday than I had hoped. Because it ran out of space before spooling, it was never limited by CUPS. Because I had to get rid of it ASAP so people could get work done, I have no idea whose it was or where it came from. Scouring logs didn't give me any good leads either.
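For reference, the CUPS-side limit and the Samba free-space check I mentioned look roughly like this (directive names as I recall them, so treat this as a sketch rather than tested config):

# /etc/cups/cupsd.conf -- cap the size of any submitted job (bytes; 0 = unlimited)
MaxRequestSize 104857600

# /etc/samba/smb.conf, in the [printers] share -- refuse to spool unless this much
# free space (in kB) is available; it is a free-space check, not a per-job size cap
min print space 2097152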
I have 2 directories in my home folder that I would like to set a size limit on. The directories are ~/backup and ~/temp. Is there an easy way to limit the size of a directory without having to make partitions?
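One approach I'm considering, rather than repartitioning, is a fixed-size loopback image mounted on each directory; a rough sketch, with the size made up for illustration:

# create a 5 GB image file, put a filesystem on it, and mount it over ~/backup
dd if=/dev/zero of=~/backup.img bs=1M count=5120
mkfs.ext4 -F ~/backup.img       # -F because the target is a plain file, not a block device
sudo mount -o loop ~/backup.img ~/backup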
I have been trying to increase message_size_limit on my Debian 2.4.26 box with Postfix 2.3.8. For example, I set message_size_limit and mailbox_size_limit to 104857600 (100 MB) and restart Postfix. Running postconf -n confirms that the values have changed. However, when I send a test message it gets bounced with the complaint that the message size limit is 16777216 (16 MB, which is, incidentally, the default value of the berkeley_db_create_buffer_size parameter).
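For reference, this is roughly what I ran (values are the ones from above):

postconf -e message_size_limit=104857600
postconf -e mailbox_size_limit=104857600
postfix reload
postconf message_size_limit mailbox_size_limit   # confirm what the running config reports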
Last weekend I increased the open-file limit (ulimit -n) for the application user ID. I updated /etc/security/limits.conf with the necessary entries and restarted the service and the server as well. When I check the ulimit value for that user by switching to it from another user, it shows the new value (10240), but if I log in directly with the application ID, ulimit shows 1024, which is the default.
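For context, the limits.conf entries and the PAM check look something like this ('appuser' stands in for the real application ID):

# /etc/security/limits.conf
appuser  soft  nofile  10240
appuser  hard  nofile  10240

# limits.conf is only applied on login paths where pam_limits is loaded; check them:
grep pam_limits /etc/pam.d/sshd /etc/pam.d/login /etc/pam.d/common-session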
At present we are using a Windows print server, with user names authenticated against the domain server. I would like suggestions on configuring a Linux print server: how to share the printers with users, and how to limit how much each user can print.
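On the limiting side, I gather CUPS has per-printer quota options along these lines (the printer name and numbers are placeholders):

# allow each user at most 100 pages, or 10240 kB of data, per week on 'officeprinter'
lpadmin -p officeprinter -o job-quota-period=604800 -o job-page-limit=100
lpadmin -p officeprinter -o job-quota-period=604800 -o job-k-limit=10240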
I'm new to setting up Linux servers. I've set up an Ubuntu 10.10 server along with CUPS, and I'm using Webmin to talk to the server. I have an HP PSC 1315 multifunction printer connected via USB to the server. Using the CUPS web interface I was able to get the server to detect the connected printer, and it identified the HP PSC 1310 Series drivers.
When I print a test page from the server's own screen, the print job goes through fine and the job size is about 5 KB.
I then set up a Samba share to expose the printer to my Windows 7 machine. Windows 7 picks up the shared printer correctly, and I used the default HP 1310 Series drivers there. When I send a test page to the printer, that single page ends up being 3,887 KB, and a single-page Word document I tried came out at over 7 MB.
I'm trying to set up a quota limit in samba-3.0.33-3.15.el5_4.1 on CentOS 5.5 by means of the vfs objects mechanism. In the Samba HOWTO I found a very brief explanation, but it isn't working for me. The basic idea is to create a user called 'quota2g' (uid 499) and configure the [homes] share, as it comes by default, to enforce the quota on each user's share. The passwd entry is:
quota2g:x:499:499:User quota 2GB:/home/quota2g:/bin/bash
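As a fallback I'm also looking at plain filesystem quotas for that user instead of the VFS module; roughly the following, assuming /home is on its own filesystem with quotas enabled:

# 2 GB hard block limit (block counts are in 1 kB units here), no inode limit
setquota -u quota2g 0 2097152 0 0 /home
quota -u quota2g     # verify what the kernel now enforces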
I have FC11 with an HP printer attached, and my firewall is disabled. I am trying to print from my laptops after setting up Samba and sharing the printer. It was working fine when I had FC4 and FC5 installed, and I am not sure what is missing now. When I try to print from the XP box I get a "Test page failed to print" error. What I have noticed on both the XP and Vista boxes is that in the printer settings inside Control Panel, on the Ports tab, the port "\\samba-server\printer" I should be printing to is not created. This is the log
The bold number, 6.4, is the percentage of server memory this process is using. 6.4% of 512 MB is about 32 MB, so it appears this isn't being limited by php.ini. Am I correct? That leads to the next question: is there some way to limit the amount of memory a single suPHP process can use? (Basically, something equivalent to the memory_limit setting in php.ini, but one that actually constrains suPHP processes.)
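The only knob I've found so far is Apache's RLimitMEM, which (as I understand it) puts a per-process memory limit on processes Apache forks, so it may or may not catch suPHP's CGI children; a sketch for the vhost:

# soft limit 32 MB, hard limit 64 MB of memory per forked (CGI) process
RLimitMEM 33554432 67108864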
We use VxVM and VxFS on HP-UX, and I've used them in the past on Solaris. I've found they are available as Storage Foundation v5.1 for 64-bit RHEL (in fact there's even a Basic version that is free on systems with up to 2 processors). Previously we'd run into a 2 TB limit for filesystems on the older versions we have on HP-UX. The data sheets at Symantec are pure marketing fluff. Does anyone know what the filesystem size limit is for 5.1 on Linux?
Why is it in Linux that there is a stack size set by default? And why is it so small? (My system is set to 8192 kbytes.) And why is there a default limit on the stack size when the max memory and virtual memory size are, by default, unlimited? (Aren't they both fed from the same place ultimately?)
Reason I ask: I want to use recursive functions in my programming a lot more. The problem is, if the language (or implementation) doesn't happen to support tail-call optimization, then I can be pretty well certain that the first huge problem thrown at my function will kill my program, because the stack size limit will be reached quickly. Obviously, I can raise the stack size limit on my own computers, but it doesn't feel great knowing that most of the people who copy and run my code will probably have overlooked this. Anyway, does anyone know: is this small default stack size limit just a historical artifact, or is there a technical reason for it?
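For what it's worth, the per-shell workaround I use looks like this (the program name is just a placeholder):

ulimit -s            # show the current stack limit (8192 kB here)
ulimit -s unlimited  # lift it for this shell and its children
./my_recursive_prog  # hypothetical program run under the raised limit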
How do you put a limit in place for file caching in Suse 11.4?
My PC regularly becomes unusable, with minimal CPU usage, because I can't open new applications. There are no error messages; the new apps just don't open.
free -m shows that the vast majority of my memory is used by cache:
             total       used       free     shared    buffers     cached
Mem:          3185       1048       2137          0         38        503   (this becomes max)
-/+ buffers/cache:        506       2679
Swap:         2055          0       2055
I've done ... echo 3 > /proc/sys/vm/drop_caches
Sometimes an app will open after this but the system becomes unstable, locking up regularly.
I'm not sure why the default is effectively 100% of memory for the file cache, but I'd like to put a sane limit in place, like 40% or something. I've set limits as a percentage in the past, and I've poked around, but I don't see the setting.
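The closest knob I've found so far is vm.vfs_cache_pressure, which as far as I can tell only biases how aggressively dentry/inode caches are reclaimed rather than setting a hard percentage cap, so this is just what I'm experimenting with:

sysctl -w vm.vfs_cache_pressure=200                      # reclaim cache more aggressively (default 100)
echo "vm.vfs_cache_pressure = 200" >> /etc/sysctl.conf   # persist across reboots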
I have a large file (deflated size: 602191947 bytes) that is not getting saved to my Ubuntu One account. On syncing, the file is uploaded and eventually reaches 602191947 bytes, and then nothing more happens to it, although syncing of the following files in the queue continues successfully. I have tried a manual upload with the same result. The file is still marked as 'uploading' even after several tries, log-outs/log-ins, and reboots. So I was just wondering whether there is a file size limit; I can't seem to find any information about this.
Is there some way to limit the download size of updates for ubuntu? At the moment, update manager shows that I have some 300 MB worth of downloads. I can't find any way to deselect many updates at once either.
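The closest I've got from the command line is previewing the download and holding back the big packages one by one (the package name below is a made-up example):

apt-get upgrade                                          # shows "Need to get ... MB" and waits; answer n to abort
echo "foo-package hold" | sudo dpkg --set-selections     # put a hypothetical large package on hold
echo "foo-package install" | sudo dpkg --set-selections  # take it off hold later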
I really don't understand what's happening. I create a 3.5 TB RAID array in Disk Utility, yet it ends up with one 3 TB partition and 500 GB left free. Why is that? I thought ext4 could handle huge partition sizes.
A possibly preposterous question. I am aware that you can designate a swap file or swap partition on your hard drive that Linux uses as "memory". Suggested sizes for the swap file that I've seen range up to about 1024 MB. Is there a limit to the swap file size you can set? Basically, I am running a Perl script that processes a massive file (DNA sequence data) and requires around 48 GB of memory to run, maybe a bit less. So, would it be possible to set a swap file to a massive, ridiculous size (~60 GB or whatever) and successfully run such a script on a desktop? Yes, I am aware that it would massively slow down the process. The thing is, if the Perl script normally completes in about half an hour and I can get it working on a desktop, I don't mind if it takes days or weeks to complete. I really don't. That's because it takes days or weeks to get access to a computer with the required grunt to do it. So, is this a stupid idea? Is it even possible? If so, given a Perl script that normally completes in half an hour on a 48 GB system, would it take days? Weeks? Decades?
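In case it helps to see what I mean, creating and enabling an oversized swap file would look roughly like this (size and path made up):

# 60 GB swap file; dd is slow but works on any filesystem
sudo dd if=/dev/zero of=/swapfile2 bs=1M count=61440
sudo chmod 600 /swapfile2
sudo mkswap /swapfile2
sudo swapon /swapfile2
swapon -s          # confirm it is active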
I have a 1TB external hard drive. I would like to create in it 10 folders:
I would then like to permanently mount each folder to its machine (I have 10 machines connected through a switch, so each machine will have a folder that is mounted to ONE of the 10 folders in the external hard drive).
My questions: (1) Is this a good configuration? Are there better ways to give individual machines more space without replacing their hard drives? (2) How do I limit each of the folders ('folder1', 'folder2', ..., 'folder10') to a size of 100 GB? I don't want one folder (say, 'folder1') to grow and 'steal' the space designated to the other folders.
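The only scheme I've come up with so far that avoids repartitioning the drive is one fixed-size image file per folder, loop-mounted and then exported to its machine; a rough sketch for one of them (the /mnt/external mount point is just where my drive happens to sit):

# 100 GB sparse image backing folder1 (repeat for folder2 ... folder10)
truncate -s 100G /mnt/external/folder1.img
mkfs.ext4 -F /mnt/external/folder1.img              # -F because the target is a plain file
sudo mount -o loop /mnt/external/folder1.img /mnt/external/folder1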
I've noticed that for files longer than about 8000 lines, gedit has problems opening the file. Was gedit not designed for long files, or is there another problem? The same thing also happens with complicated HTML files. I hope there is a way to fix this.
I have a self-made application running on a small embedded Linux device (which should not matter) that uses syslog to output error, warning and debug logs. There is a "better" syslog daemon installed, called syslog-ng, which has some more features, but it is missing a very important one for me: limiting the size of the logfiles to a fixed number of megabytes. I was able to create rotating logfiles with the configuration in syslog-ng.conf, but that doesn't cap their size.
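Since syslog-ng itself doesn't seem to offer a hard size cap, the fallback I'm looking at is logrotate with a size trigger; a sketch, with the path and numbers as placeholders:

# /etc/logrotate.d/myapp
/var/log/myapp.log {
    size 10M        # rotate as soon as the file exceeds 10 MB
    rotate 5        # keep at most 5 old copies, so roughly 60 MB total
    compress
    missingok
    notifempty
}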
Does Recordmydesktop have a file size limit? I'm considering using the zero-compression setting to keep CPU usage down, but I don't want to run up against a 2 GB or 4 GB file size limit. While I know some filesystems impose such a limit, most screen recorders I've used have a 2 GB or 4 GB limit when recording regardless of the filesystem. Is this an issue with Recordmydesktop?
The website creates an RPG character through a traditional wizard. The page calls itself with a hidden variable holding the page number, tests which page it is on, and returns that page's data with the page number incremented.
Each page should be treated as a separate page, and so should be unique. I am echoing the contents of POST at the top of the page, so I can see which variables come back. When I get data from an Ajax query on page three, it saves the data (23 POST fields of no more than 25 characters each). Page four does the same but with fewer fields, yet it is NOT returning the data: only four fields come back, namely those that were originally posted.
I cut and pasted the function from section three into section four and changed only the displayed text and the variable names, so there should be no code errors, given that page three works and its data is saved to the database.
So the only remaining explanation is a PHP or Apache2 issue with how POST variables are returned? I am completely out of ideas as to why this would even be an issue or how it could arise.
Is the number of variables an issue? This page has fewer than the previous page... and the form is POSTed...
PS: I am getting NOTICE errors from PHP for the POSTed variables that are not displayed/returned... I used:
error_reporting (E_ALL ^ E_NOTICE);
to stop these from being reported, but do I need to test each one? PPS: Using if (isset($_POST['xxx'])) does NOT let that variable through...
PS: I have the default Ubuntu 10.04 Apache2 with all the ubuntu 10.04 updates...
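In case it is a size or variable-count limit, these are the settings I plan to check on the server (names as I understand them; the Suhosin ones exist only if the distro's Suhosin patch is in use, and max_input_vars only on newer PHP):

php -i | grep -i "post_max_size"
php -i | grep -i "max_input"
php -i | grep -i "suhosin.post.max_vars"
php -i | grep -i "suhosin.request.max_vars"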
I'm using Squid 2.7 STABLE9 and DansGuardian 184.108.40.206; I compiled both Squid and DansGuardian myself. I have enabled follow_x_forwarded_for in Squid to make clients' IPs visible to it, and I have also set x_forwarded_for = on in DansGuardian; this works fine and clients' IPs are visible to Squid. Now I want to set a downloadable file size limit of 50 MB in Squid for every user except a few, using the line 'reply_body_max_size 52428800 allow mynetwork', but it is not working properly. mynetwork is our private network, 192.168.0.0/16.
When I set 'reply_body_max_size 52428800 allow localhost' it works fine, but only for localhost. I want to allow downloads of up to 50 MB for every user in my network, except for a few users who should be allowed downloads of up to 500 MB.
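What I have in squid.conf at the moment is roughly the following (the addresses in 'bigusers' are placeholders for the exempted machines); my understanding is that the first matching reply_body_max_size line wins, so the larger limit has to come first:

acl mynetwork src 192.168.0.0/16
acl bigusers src 192.168.0.10 192.168.0.20      # hypothetical exempted clients

reply_body_max_size 524288000 allow bigusers    # 500 MB for the exceptions
reply_body_max_size 52428800 allow mynetwork    # 50 MB for everyone else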
I'm trying to copy a 7.8 GB tar.gz file to an external hard drive via the command line. It gets to exactly 4 GB and stops with an error that says "File size limit exceeded." I edited /etc/security/limits.conf to include "root hard fsize 10024000", but that didn't do anything at all. Yes, I am copying this as root.
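My current guess is the external drive's filesystem rather than a process limit, so I'm going to check it and, if necessary, split the archive (the mount point below is just where mine happens to be):

df -T /media/external                              # shows the filesystem type; vfat caps files at 4 GB
split -b 2000M backup.tar.gz backup.tar.gz.part.   # copy the parts instead of the whole file
cat backup.tar.gz.part.* > backup.tar.gz           # reassemble on the other side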
I'm using Kubuntu 9.04 on AMD64, working with the ISPConfig panel. I have Postfix configured and have no problem receiving mails with small attachments, but when they exceed a certain size I don't get them. Where can I configure this?
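If it turns out to be Postfix's own limit, I assume the change would be along these lines (the 50 MB value is just an example):

postconf message_size_limit                 # show the current limit (default 10240000)
postconf -e message_size_limit=52428800     # raise it to ~50 MB
postfix reload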