Debian :: Increase Apt-cache Limit?
Aug 21, 2011

There I was, trying to update the package lists in a fresh install of Knoppix, when:
Quote:
E: Dynamic MMap ran out of room. Please increase the size of APT::Cache-Limit. Current value: 25165824. (man 5 apt.conf)
E: Error occurred while processing libguile-ltdl-1 (NewFileVer1)
E: Problem with MergeList /var/lib/dpkg/status
E: The package lists or status file could not be parsed or opened.
I have tried to increase the cache by adding the line
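(Presumably the standard suggestion - something like this in /etc/apt/apt.conf; the exact line was not preserved in the post, and the value is illustrative:)

Code:
APT::Cache-Limit "50331648";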
It is apparently getting the value from elsewhere.
When adding repositories, I ran up past the default apt cache limit of 25165824. I found a couple of sources - one that treats the configuration as a single apt.conf file, and another that treats it as the apt.conf.d folder - which is how my system does it.
Following those guidelines, I went into /etc/apt/apt.conf.d/debconf70 to add a line of code that sets the default to 2x and 4x that limit, and... well, apt doesn't seem to recognize the difference.
Here's the code I added at the end:
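The exact lines were not preserved; going by the description ("2x and 4x that limit"), they would have looked something like:

Code:
APT::Cache-Limit "50331648";
// or, for 4x:
APT::Cache-Limit "100663296";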
So, is there somewhere else that I need to change things? Am I completely off on this? I found a year-old thread on this error in the Ubuntu forums, where a gentleman who is now a member of the staff simply suggested the OP "take out that debian repository you listed". It kinda negated the premise for me, you know? I'm hoping there's more that I can do.
I'm looking for recommendations on how to increase the nofile limit for a daemon running as other than root. Does anyone else do this? It'd be nice if I could employ /etc/security/limits.conf.
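One pattern that comes up for this (a sketch only; the script and user names are illustrative) is raising the limit as root in the daemon's init script before dropping privileges:

Code:
#!/bin/sh
# raise the fd limit while still root, then drop to the service user;
# whether PAM re-applies limits.conf on su depends on the PAM setup
ulimit -n 8192
exec su -s /bin/sh daemonuser -c '/usr/sbin/mydaemon'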
I need to increase the default stack size on Linux. As far as I know, there are usually two ways:
ulimit -s size
/etc/security/limits.conf
The ulimit method only works as long as I am logged in.
limits.conf will work after a restart.
Is there a possible way to increase the limit without restarting?
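For a process that is already running, a recent util-linux offers prlimit, which might do the trick without a restart (a sketch; the PID and size are illustrative):

Code:
# set the stack soft limit of a running process to 64 MB
prlimit --pid 1234 --stack=67108864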
This is happening on an Ubuntu 9.10 server. I'm trying to increase the number of open files allowed for a user. This is for an nginx webserver where the current limit of 1024 is not enough. According to the posts I've found so far, I should be able to put lines into /etc/security/limits.conf like this:
Code:
* soft nofile 4096
* hard nofile 4096
[code]...
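Two things worth checking here (a sketch, on the assumption that PAM is the missing link; the value is illustrative):

Code:
# 1) limits.conf only applies if pam_limits runs; on Ubuntu of this
#    era that often means adding to /etc/pam.d/common-session:
session required pam_limits.so
# 2) nginx can also raise its own limit directly in nginx.conf:
worker_rlimit_nofile 4096;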
I've written a script that needs to parse a lot of data and pass it to a webapp.
I got this error from my Ruby script, running through JRuby and firing into an Apache webapp:
java/util/Arrays.java:#### java.lang.OutOfMemoryError: GC overhead limit exceeded (NativeException)
(Where #### was the line number)
I've looked at:
[URL]
I think it suggests I do:
Code:
-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps
-XX:+CMSIncrementalPacing -XX:CMSIncrementalDutyCycleMin=0
-XX:CMSIncrementalDutyCycle=10
My java version:
Code:
java version "1.6.0_18"
OpenJDK Runtime Environment (IcedTea6 1.8) (6b18-1.8-0ubuntu1)
OpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode)
Q1: How do I implement that into the java/util/Arrays.java file?
Q2: Is there a way of temporarily making the Java VM/JDK handle the GC overhead better?
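On Q1: those -XX switches are JVM options rather than something that goes into java/util/Arrays.java; with JRuby they would normally be passed on the command line with the -J prefix (a sketch; the heap size and script name are illustrative):

Code:
jruby -J-XX:+UseConcMarkSweepGC -J-XX:+CMSIncrementalMode -J-Xmx1024m myscript.rb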
I'm working on a few servers running CentOS and using postfix. I don't know what the exact problem is, but we are having trouble with the disk space being maxed out at 100 gigs. What we think is happening is that postfix is either caching or logging all the emails we send out. We sent 250k emails (500kb apiece) over the weekend and had trouble with that quantity. It seems some of those emails were queued up for retrying, but we didn't have sufficient disk space for that. Something broke - I'm not sure what.
What I want to do is find and change the config file that governs postfix's retrying of email, and possibly limit it (not sure if this will fix my problem). Or, turn off or limit whatever logging/caching postfix does so that queued retries won't eat all the disk space. Again, I'm totally lost here (on both what's going on and how to fix it). I'm not sure what more information is needed to address this problem.
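For what it's worth, the retry window is controlled in main.cf; a sketch of shortening it (the defaults are around 5 days; the values here are illustrative):

Code:
# keep undeliverable mail in the queue for at most one day
postconf -e 'maximal_queue_lifetime = 1d'
postconf -e 'bounce_queue_lifetime = 1d'
postfix reload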
I don't understand this error nor do I know how to solve the issue that is causing the error. Anyone care to comment?
Quote:
Error: Caching enabled but no local cache of //var/cache/yum/updates-newkey/filelists.sqlite.bz2 from updates-newkey
I know JohnVV. "Install a supported version of Fedora, like Fedora 11". This is on a box that has all 11 releases of Fedora installed. It's a toy and I like to play around with it.
I was laughing about klackenfus's post with the ancient RH install, and then work had me dig up an old server that has been out of use for some time. It has some proprietary binaries installed that intentionally try to hide files to prevent copying (and we are no longer paying for support, nor do we have the install binaries), so a clean install is not preferable.
Basically it has been out of commission for so long, that the apt-get upgrade DL is larger than the /var partition (apt caches to /var/cache/apt/archives).
I can upgrade the bigger packages manually until I get under the threshold, but then I learn nothing new. So I'm curious if I can redirect the cache of apt to a specified folder either on the command line or via a config setting?
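Apt does take an option for this; a sketch (the path is illustrative - note apt expects a partial/ subdirectory inside the cache directory):

Code:
mkdir -p /mnt/bigdisk/apt-cache/partial
apt-get -o Dir::Cache::archives=/mnt/bigdisk/apt-cache upgrade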
I installed Squid cache on my Ubuntu server 10.10 and it works fine, but I want to know how to make it cache all file types, like .exe, .mp3, .avi, etc. The other thing I want to know is how to make my clients fetch files from the cache at full speed, since I am using a MikroTik system to run PPPoE for clients and I match it with my Ubuntu Squid box.
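A squid.conf sketch along these lines (sizes, paths and patterns illustrative - not a tested config):

Code:
# allow big objects into the cache
cache_dir ufs /var/spool/squid 20000 16 256
maximum_object_size 512 MB
# keep matching binary downloads cached for a long time
refresh_pattern -i \.(exe|mp3|avi)$ 10080 90% 43200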
My secure log is flooding with these messages:
sudo: pam_limits(sudo:session): wrong limit value 'unlimited' for limit type 'hard'
Dec 28 22:42:29 yn54 sudo: pam_limits(sudo:session): wrong limit value 'unlimited' for limit type 'soft'
Dec 28 22:42:29 yn54 sudo: pam_limits(sudo:session): wrong limit value 'unlimited' for limit type 'hard'
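The messages suggest some limits.conf entry uses the keyword 'unlimited' for an item where pam_limits wants a number (nofile is the usual suspect); a sketch of a valid entry (values illustrative):

Code:
*  soft  nofile  8192
*  hard  nofile  8192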
I am reading about the slab allocator. It defines a "slab cache" - I am quite confused: is it the same as a hardware cache?
I have a VPS server with 512 MB memory. php.ini is set so the script memory limit = 16 MB. However, I have noticed in my top report instances like the following:
Quote:
5484 coldclim 25 0 46476 32m 5920 R 0.0 6.4 0:00.93 php
The bold number of 6.4 is the % of server memory this process is using. 6.4% of 512 MB of memory is about 32 MB, so it appears that this isn't being limited by php.ini. Am I correct? This leads to the next question: is there some way to limit the amount of memory a single suphp process can use? (Basically, something like the php.ini setting, but one that actually constrains suphp processes.)
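One possibility sometimes suggested (a sketch - whether it catches suphp children depends on the setup) is Apache's RLimitMEM directive, which caps the address space of processes Apache forks (soft and max limits in bytes; values illustrative):

Code:
RLimitMEM 33554432 50331648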
I have a home storage of many drives which are seldom accessed, and extremely seldom written to. So, I made several RAID1 arrays with "write-mostly" drives in each one, so that all the drives spin down after a while, and if they are accessed, only one drive in each array has to spin up. This way, I hope to minimize the risk of losing anything due to mechanical shock or wear.
But it was not so easy. First, the write-mostly drives did spin up on every read request due to a bug in the kernel, so I managed to get the fix accepted into the newer kernels (which is my largest contribution to the Linux development so far. See [URL] .....) Now, if I read directly from the array, it works as expected (only the first drive spins up); but if I create a filesystem there (ext4) and read from it, the second drive still spins up anyway. (Does ext4 write something when it reads a file?.. I have it mounted with relatime, so AFAIK it shouldn't...)
Well, I thought, I'll just mount the filesystems read-only and remount them if I need to modify the data there, which is not very often, to say the least. Also, this way the drives don't have to spin up every system shutdown to unmount the filesystems; during a power outage, it used to take a lot of time for them to spin up one by one just to unmount, while my UPS had to survive long enough for them to finish. That's one more problem solved.
But I decided to improve even further. Sometimes I just want to see the contents of those arrays without even accessing some files (for example, when I accidentally click the wrong network drive button in Windows, or when I just want to see if I have something there or not). If I read the directory trees during startup and keep them cached, then those drives won't even have to spin up at those moments! The question is, is there a way to do it? I remember Linus boasting about our kernel being the best at caching the filesystem trees, and I know there is vmtouch [URL] .... which allows us to keep some files cached, but what about the directory tree itself?.. If I simply "find" all the files there, it seems to work, but they get evicted from cache sooner or later...
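A sketch of the pre-warming idea, combined with telling the kernel to hang on to dentry/inode caches (the sysctl value is illustrative; the default vfs_cache_pressure is 100):

Code:
# bias the kernel toward keeping directory/inode metadata cached
sysctl vm.vfs_cache_pressure=10
# walk the tree at boot (or periodically from cron) to populate the cache
find /mnt/array >/dev/null 2>&1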
A friend has a website. The site has been moved to another server. I am inside a private network.
Problem: when trying to reach his website, my browser keeps connecting to the old server which is on another continent
Windows machines have no problem with this. They connect to the new server without issues.
NOW what is hard to understand: even the PING utility is pinging the old server.
I tried these and none works:
- computer restart
- Clear private data in Iceweasel (has ntg to do with PING, but tried...)
- installed nscd and restarted the service
- modified /etc/resolv.conf to point to another nameserver, then modified it back
At the same computer, if I reboot and load Windows, the browser has no problem connecting to the new server.
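If nscd is involved, a couple of checks worth trying (a sketch; the hostname is illustrative):

Code:
# flush nscd's hosts cache
nscd -i hosts
# see what the resolver library actually returns
getent hosts example.org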
I am trying to build a live usb-hdd image on my Lenny box and get this problem:
Code:
# lh_build
..............................................................
[code]...
I'm using Debian Squeeze.
When I invoke apt-cache policy, for example apt-cache policy zlib1g, I get output like:
Code:
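[the original output was not preserved; reconstructed from the description - the mirror line is illustrative]
zlib1g:
  Installed: 1:1.2.3.4.dfsg-3
  Candidate: 1:1.2.3.4.dfsg-3
  Version table:
 *** 1:1.2.3.4.dfsg-3 0
        500 http://ftp.debian.org/debian/ squeeze/main amd64 Packages
        100 /var/lib/dpkg/status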
And below the line "Version table:" there is the installed package version. I assume 1:1.2.3.4.dfsg-3 is the version ("epoch" + "upstream version" + "debian revision"), but what does the "0" next to it mean?
I managed to auto-mount NTFS partitions in my server.
But I want to limit access to this partition to a select few.
What's the best and easiest way to do this?
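A sketch via mount options in /etc/fstab (device, mountpoint and group name illustrative) - ntfs-3g honours uid/gid/fmask/dmask, so only root and members of the named group would get in:

Code:
/dev/sda1  /mnt/windows  ntfs-3g  defaults,uid=0,gid=ntfsusers,dmask=007,fmask=117  0  0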
I am running KDE 4.6 in Debian Testing. Is there a way to increase the sound (i.e. more than the standard 100%)? The current settings with my speakers seem a bit too quiet in some cases.
I found a way to do it in PulseAudio, but I don't think Debian's KDE build is compatible.
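The PulseAudio way is presumably something like this (a sketch; the sink index and level are illustrative - newer pactl takes a percentage, older versions want a raw value where 65536 = 100%):

Code:
pactl set-sink-volume 0 150%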
I installed my Debian sid about one month ago (first Xfce, then GNOME) but noticed that it's really slow. Upgrades take ages, launching (and using) Firefox takes a lot of time... In comparison to my Ubuntu and Arch Linux installs (on the same computer) or previous installations of Debian, there is clearly a problem somewhere. Today I ran "top" sorted by memory usage: 3.5% xulrunner-stub, 2.1% dropbox, 1.4% aptitude (doing an upgrade), 1.4% clementine... nothing terrible, but still I have 2.7 GB of RAM used (more than 50%):
$ free -m
             total       used       free     shared    buffers     cached
Mem:          3967       2685       1282          0         79       1938
[code]....
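The truncated part of that output would include the buffers/cache line; reconstructed from the figures above, it would read roughly:

Code:
-/+ buffers/cache:        668       3299

In other words, only about 670 MB is held by applications; the 1938 MB of "cached" is page cache that the kernel gives back on demand, so the 2.7 GB "used" figure is not by itself the problem.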
I was looking for a way to stop my menus taking a few seconds to load icons when I first open them, and found a few guides suggesting the gtk-update-icon-cache command. But with the "any colour you like" (ACYL) icon theme I'm using (stored in my home folder's .icons directory), I kept getting a "gtk-update-icon-cache: The generated cache was invalid." fault. I used the built-in facility in the ACYL script to copy the icons to the /usr/share/icons directory and tried the command again, this time using sudo gtk-update-icon-cache --force --ignore-theme-index /usr/share/icons/ACYL_Icon_Theme_0.8.1/, but I still get the same error. I tried with several of the custom icon themes I've installed, and only 1 of the first 7 or 8 I tried successfully created the cache.
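For a theme kept under the home directory, the same command can be pointed there without sudo (the theme path is taken from the post):

Code:
gtk-update-icon-cache --force --ignore-theme-index ~/.icons/ACYL_Icon_Theme_0.8.1/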
Learning about the ulimit command, I came across something unexpected.
Checking the root account limits:
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 16382
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
[code]...
Below are example screenshots of Freemind from Ubuntu Lucid and Debian Squeeze
Lucid
Squeeze
You'll notice that the quality is slightly different; this also applies to PDF documents.
My question is: how do I increase the rendering quality in Debian Squeeze?
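Freemind is a Java application, so one thing to try (a sketch, not Squeeze-specific) is forcing font antialiasing in the JVM:

Code:
export _JAVA_OPTIONS="-Dawt.useSystemAAFontSettings=on -Dswing.aatext=true"
freemind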
I have a download cap therefore when I install Debian, I like to have a backup of all downloaded .deb files that I can install from rather than download all over again. From what I've read, it should be a simple case of copying the .deb files to /var/cache/apt/archives and then install using apt-get. But apt-get seems to 'ignore' them. Am I supposed to do something to get apt to use them? Bearing in mind, I need to do this from a console as there will be no DE.
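A sketch of the usual procedure (paths illustrative) - note that apt only uses a cached .deb when it matches the version in the current package lists:

Code:
cp /backup/debs/*.deb /var/cache/apt/archives/
apt-get update
apt-get install --no-download <package>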
I'm trying to set up an SSD as a cache for my external HDD (which is where my installation of Debian testing/stretch is installed). My installation uses LVM 2. I'm trying to have the SSD cache the entire external HDD, and not just one of the partitions (such as the root or home partitions).
Here are the relevant outputs.
uname -a: (Yes, I'm using the Debian stable kernel with Debian testing.)
Code:
3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u3 (2015-08-04) x86_64 GNU/Linux
lsblk:
Code:
NAME                              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                 8:0    0 149.1G  0 disk
sdb                                 8:16   0 111.8G  0 disk
sdc                                 8:32   0 298.1G  0 disk
├─sdc1                              8:33   0   243M  0 part  /boot
└─sdc5                              8:37   0 297.8G  0 part
  └─sdb5_crypt                    254:0    0 297.8G  0 crypt
    ├─mydebianhostname--vg-root   254:1    0  14.3G  0 lvm   /
    ├─mydebianhostname--vg-swap_1 254:2    0  11.5G  0 lvm   [SWAP]
    └─mydebianhostname--vg-home   254:3    0   267G  0 lvm   /home
sr0                                11:0    1    25M  0 rom
make-bcache -B /dev/sdc:
Code:
Can't open dev /dev/sdc: Device or resource busy
Must I "operate" on this drive via a live session or something?
My cache is full, and I've tried doing "apt-get clean" to no avail. I also can't find any apt.conf file on my system. Here is a screenshot of the error message that pops up when I open Synaptic Package Manager:

PS: I'm pretty new to Linux, especially Debian; most of my Unix-like OS experience is with Mac OS X. My Musix installation is in a VMWare machine.
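If the Synaptic error is the usual "Dynamic MMap ran out of room" one, the absence of /etc/apt/apt.conf is normal; apt also reads snippets from /etc/apt/apt.conf.d/, so one can be created there (a sketch; the file name and value are illustrative):

Code:
echo 'APT::Cache-Limit "50331648";' | sudo tee /etc/apt/apt.conf.d/99cache-limit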
What is the difference between the buff(ered) and cache portions of memory?
Code:
I have a webserver with a few users on it, and I wonder how I can limit the bandwidth usage for each user on my server?
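One sketch, if the webserver is Apache 2.4: mod_ratelimit caps the transfer rate per connection, which can approximate a per-user cap when each user has their own URL space (the path and rate in KiB/s are illustrative):

Code:
<Location "/~someuser">
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 400
</Location>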
I have a problem with the open file limit. The software I'm installing claims "Open file limit (ulimit -H -n) too low (1014), need at least 6311", but when I check the limit I get the following:
Code:
# uname -a
Linux server 2.6.32-5-amd64 #1 SMP Mon Mar 7 21:35:22 UTC 2011 x86_64 GNU/Linux
[code]...
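One thing worth remembering: changes to /etc/security/limits.conf only show up in a fresh login session. A quick way to check what a new session would get (the user name is illustrative):

Code:
su - someuser -c 'ulimit -H -n'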