I am currently working on a project at work to do caching across a Windows network (web traffic through squid and Windows updates with WSUS). It is proving to be an interesting learning project. This got me thinking about my home network. Currently I am running 8 Fedora computers (with the very real possibility of more in the foreseeable future).
Obviously running "yum update" on every computer and having each one download the updates directly eats a lot of bandwidth on the external link. I would like to know if there is a way to set up a proxy server (or something else for that matter) to cache yum downloads. This way, when I deploy a new machine or run updates, I can save time and bandwidth by downloading the packages to my LAN only once instead of nearly a dozen times.
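For reference, squid can be told to hold onto RPMs for a long time; a minimal sketch, where the proxy hostname, cache sizes and retention are placeholder assumptions:
Code:
# /etc/squid/squid.conf additions (values are examples)
maximum_object_size 512 MB                      # let large RPMs into the cache
cache_dir ufs /var/spool/squid 10000 16 256
refresh_pattern -i \.rpm$ 129600 100% 129600    # keep packages ~90 days

# /etc/yum.conf on each client, pointing yum at the proxy
proxy=http://proxyhost:3128
One caveat: yum normally picks a different mirror per run, so the same package arrives under different URLs and misses the cache; commenting out mirrorlist= and pinning a single baseurl= in the .repo files improves the hit rate considerably.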
I am looking into creating a web caching server for myself using Fedora 10. I believe I need squid for this, but it seems to have a lot of features. Basically, all I want for now is to cache the web pages that I and my network users visit the most, improving access times and lowering the load on my internet connection. Can squid do this, and can someone point me toward an article on how to configure such a thing?
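Squid does exactly this out of the box; a bare-minimum sketch, assuming a 192.168.1.0/24 LAN (adjust to taste):
Code:
# /etc/squid/squid.conf -- minimal caching proxy
http_port 3128
cache_mem 256 MB                            # in-RAM hot object cache
cache_dir ufs /var/spool/squid 5000 16 256  # ~5 GB on-disk cache
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
Run "squid -z" once to create the cache directories, start the service, and point the browsers at port 3128.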
I ran an internet cafe, and last week my Windows server got fried by a power surge. Now I have Fedora 14 running on another PC and I want to set it up as a full caching proxy server, so other computers can connect through it to the internet. I have 2 network cards inside. I'm really new to Linux and am learning my way around. I managed to install squid but don't know how to configure it for the purpose above.
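With two NICs, one common pattern is to intercept port 80 and NAT everything else; a sketch, assuming eth0 faces the internet and eth1 the LAN (swap to match the actual hardware):
Code:
# squid.conf: accept intercepted traffic (squid 2.6/3.0 spelling)
http_port 3128 transparent

# shell, run as root:
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
The REDIRECT rule sends web traffic through squid without touching client settings; the MASQUERADE rule lets all other traffic out directly.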
I want to configure a DNS server on Fedora 14. I tried to install caching-nameserver because it provides the template files, but I can't install caching-nameserver on my Fedora 14 with this command [though I can do it on Fedora 5].
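As far as I know, the standalone caching-nameserver package was folded into bind around Fedora 9, which would explain why the install works on Fedora 5 but fails on 14; the stock bind config already behaves as a caching-only resolver:
Code:
yum install bind bind-chroot   # caching-nameserver is no longer a separate package
service named start            # the default /etc/named.conf is a caching resolver
chkconfig named on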
I just upgraded to Fedora 13, with emacs 23.1. Now when I edit a .gpg (encrypted) file, emacs doesn't cache the passphrase, so when I save the file emacs demands the passphrase twice. Previously, the following line in .emacs made it cache the passphrase:
Code:
(setq epa-file-cache-passphrase-for-symmetric-encryption t)
This is supposed to work according to the documentation [URL], but in Fedora 13's emacs it seems to have stopped working.
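One possibility, assuming emacs 23's EasyPG is now delegating to gpg2: passphrase caching moves from emacs to gpg-agent, so the agent itself has to be told to cache:
Code:
# ~/.gnupg/gpg-agent.conf (TTLs in seconds; 8 hours is an example)
default-cache-ttl 28800
max-cache-ttl 28800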
I've set up a caching nameserver on my laptop running Fedora 11. The problem is that NetworkManager always overwrites the entry that points to the local nameserver. NetworkManager no longer respects /etc/dhclient.conf, or at least its scripts run after dhclient.conf, and it also doesn't respect the DNS{1,2} settings in /etc/sysconfig/network-scripts/ifcfg-*. The NetworkManager man page describes scripts in /etc/NetworkManager/dispatcher.d that run when interfaces are brought up and down, so I've written a script that puts back the entry needed for the local nameserver.
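A sketch of such a dispatcher script (the filename is arbitrary; it must be owned by root and executable):
Code:
#!/bin/bash
# /etc/NetworkManager/dispatcher.d/99-local-dns
# NetworkManager passes the interface as $1 and the action as $2
ACTION="$2"

case "$ACTION" in
    up|dhcp4-change)
        # re-insert the local caching nameserver at the top of resolv.conf
        grep -q '^nameserver 127\.0\.0\.1' /etc/resolv.conf || \
            sed -i '1inameserver 127.0.0.1' /etc/resolv.conf
        ;;
esac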
I just switched from Xubuntu to Fedora 10 and have to say I will most likely change back, unless someone can tell me how to speed up the internet connection in Fedora. I have tried wireless and wired, and no matter which I use the download rate is painfully slow. With wired I get a meg every minute; it really hurts to watch the monitor. It downloads for three seconds, takes a break for eight seconds, then downloads for three more, and this goes on and on. I have a very fast connection and am used to amazing unbroken streams of data at no less than 1 meg every 2 seconds. I hope this is a rectifiable situation.
I keep downloading tar.gz files into my Downloads folder and I can't do anything with them. What do I need to do to install such a file so I can use it? As an example, I am trying to install Frets on Fire, and am failing badly.
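The generic routine looks like the sketch below; exact steps vary per project, so always check the bundled README (the archive name here is a guess, and Frets on Fire may well be in the Fedora repositories already, which would be far easier):
Code:
cd ~/Downloads
tar xzvf fretsonfire-*.tar.gz   # unpack: x=extract, z=gunzip, v=verbose, f=file
cd fretsonfire*/                # enter the directory the archive created
less README*                    # project-specific instructions live here
# classic source tarballs build and install like this:
./configure && make && su -c 'make install'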
I was trying to install a program and then tried the mv command (which I probably used incorrectly), but to make it short, I am pretty sure I deleted the Downloads directory. I made another Downloads directory using mkdir, but whenever I download anything now I have no idea where it goes.
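One likely explanation: GTK applications don't look for a folder literally named Downloads; they read the location from the xdg-user-dirs config, which may still point at the deleted path. A sketch of the repair:
Code:
grep DOWNLOAD ~/.config/user-dirs.dirs            # where apps think downloads go
mkdir -p ~/Downloads
xdg-user-dirs-update --set DOWNLOAD "$HOME/Downloads"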
I have a fresh install of F14, using an Intel DP55WB mobo w/ integrated NIC. I have access to the network and to the internet. When downloading large files (the F14 iso or ..... vid), after ~35 secs the data stream drops to 0. Other computers on the network do not have the problem, and the F14 computer is wired into the router. I have tried a Biostar mobo w/ integrated NIC with the same results, changed the patch cable, and changed MTUs both higher and lower from the default of 1500, with no change.
I don't understand this error nor do I know how to solve the issue that is causing the error. Anyone care to comment?
Quote:
Error: Caching enabled but no local cache of //var/cache/yum/updates-newkey/filelists.sqlite.bz2 from updates-newkey
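That error usually means yum was invoked in cache-only mode (-C on the command line, or cacheonly=1 in yum.conf) without the local cache ever having been populated. A plausible fix:
Code:
yum clean all     # discard whatever partial cache exists
yum makecache     # rebuild the metadata cache (run without -C / cacheonly)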
I know, JohnVV: "Install a supported version of Fedora, like Fedora 11." This is on a box that has all 11 releases of Fedora installed. It's a toy and I like to play around with it.
Almost every other time I run yum update I'm getting mirror problems, and this is costing me all my confidence in Fedora, for several reasons: a) it's wasting time, since what should be 1-2 minute updates are taking 5-10 minutes and often (like tonight) not even completing; b) since it keeps hanging and switching mirrors during downloads, I wonder if the updates are coming through okay; c) the fact that this is not being seriously addressed makes me wonder about Fedora quality. Add to that the constant and really annoying kernel errors popping up about NetworkManager and Sierra wireless (even when my Sprint card is not plugged in).
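Two things that might help, assuming a stock yum setup (the package name and URL below are from memory, so verify them): the fastestmirror plugin, or hard-pinning a known-good mirror.
Code:
yum install yum-fastestmirror   # ranks mirrors by latency before downloading

# or pin one mirror: in /etc/yum.repos.d/fedora-updates.repo,
# comment out the mirrorlist= line and set, e.g.:
baseurl=http://download.fedoraproject.org/pub/fedora/linux/updates/11/x86_64/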
Just upgraded from FC11 to 13 a few days ago and came across a weird problem with Firefox. When downloading files through Firefox, it starts consuming more and more memory/CPU cycles until the system hangs. This only happens when downloading through Firefox, and it doesn't take too long either. Otherwise everything else works fine.
Quote:
Linux coffee.athome.net 2.6.33.4-95.fc13.x86_64 #1 SMP Thu May 13 05:16:23 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
I have already been waiting about 10 minutes for Fedora to finish downloading whatever-it-is-it-downloads after issuing a simple query. This is bearable when my broadband connection is performing. Unfortunately, like right now, it isn't (between 10-50 kB/s) and the wait is excruciating!
So why does Fedora seem to download the complete files database at least once every 24 hours whenever an update or query is issued, instead of just the differences? And how do I prevent it from doing so? Ubuntu arguably has access to many more files but never pulls this stuff. Fedora would appear to be completely unusable without a broadband connection! And in the time it has taken me to type this, Fedora continues to download its update at 10-30 kB/s...
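What's being re-fetched is the repository metadata, which yum downloads again whenever its local copy is older than metadata_expire. A sketch of stretching that window, plus forcing pure queries to use the cache:
Code:
# /etc/yum.conf, in the [main] section (7 days here is an example value)
metadata_expire=604800

# for queries, tell yum to work from cache and skip downloads entirely:
yum -C search something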
I'm trying to upgrade to F11, and I'm having trouble. I downloaded the x86_64 DVD .iso image by BitTorrent, and it seemed okay, but when I started the installation the DVD failed the initial integrity check. I tried a second time with another DVD and got the same result. I ran md5sum on the .iso image, but the output did not match what was in the CHECKSUM file that came with it. Should it? I downloaded a live image for comparison and found the same thing: the md5sum output did not match what was listed in the CHECKSUM file. Should these checksums match, or am I comparing apples and oranges? I thought the BitTorrent client was supposed to check the files, but I'm not sure about that.
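One likely explanation: as of Fedora 11 the CHECKSUM files contain SHA-256 sums, so md5sum output will never match them. Worth trying (filenames here are placeholders for whatever was downloaded):
Code:
sha256sum Fedora-11-x86_64-DVD.iso        # compare against the CHECKSUM file
# or verify directly; warnings about improperly formatted lines are just
# the GPG signature wrapper and can be ignored:
sha256sum -c Fedora-11-x86_64-DVD-CHECKSUM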
I am trying to set up a caching nameserver for my lab, but I am unable to locate named.caching-nameserver.conf under the /etc directory; I wanted to use that file as a template. I already checked /usr/share/doc/bind-* for samples but could not find it. I am using RHEL5 with the bind and bind-chroot packages installed.
Can someone tell me where I can find the named.caching-nameserver.conf file? Also, I notice there isn't an /etc/named symbolic link... do I have to create the symlink to /var/named/chroot/etc/bind.conf myself?
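On RHEL 5 that template is not shipped by bind itself but by the separate caching-nameserver package, and with bind-chroot installed the live files sit under the chroot. A quick check:
Code:
yum install caching-nameserver
rpm -ql caching-nameserver      # lists where the template landed
ls -l /var/named/chroot/etc/    # bind-chroot keeps the real configs here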
I'm currently copying a large number of files over a network. To monitor progress, I tried running watch du, but the output never changed (or not much, I'm not sure). find . -type f | wc -l always gives me the same number of files, as does ls -R. It seems these programs use caching, which is in general a good thing. Does anyone know, though, how cache usage can be controlled? I'm on an Arch Linux system working on an ext4 fs on an encrypted hd.
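Strictly speaking, du, find and ls don't cache results themselves; the kernel caches directory entries and file attributes (and if the files are actually landing on a different host, the local tree genuinely isn't changing). To rule the kernel caches out, they can be flushed (needs root; safe, but temporarily hurts performance):
Code:
sync                                # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches   # 1=pagecache, 2=dentries+inodes, 3=both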
I have configured master and slave DNS servers on Red Hat Enterprise Linux 4. I want to know what a caching nameserver is and in which situations it is used. If there are master and slave DNS servers, can we use a caching nameserver as well?
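In short: a caching (recursive) nameserver holds no zones of its own; it resolves queries on behalf of clients and remembers the answers, so it complements master/slave servers rather than replacing them and is typically placed close to clients to cut latency and upstream traffic. A minimal sketch (the subnet is an example):
Code:
# named.conf, caching-only: no zone statements needed
options {
    directory   "/var/named";
    recursion   yes;
    allow-query { localhost; 192.168.0.0/24; };
};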
I found in the following mail thread [URL] that we can add "netgroup: caching compat" along with a netgroup caching stanza in nscd.conf, but that thread is for FreeBSD. Unfortunately, I can't find any equivalent reference for RHEL 5.x, and I need to do exactly the same thing. The following line in my nscd.conf was enough to leave me disheartened: "# Currently supported cache names (services): passwd, group, hosts". The nsswitch.conf on my system works only with "netgroup: nis". Does anyone know if netgroup caching is supported by nscd?
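For what it's worth, this is what the FreeBSD-style stanza would look like in nscd.conf; on RHEL 5 the glibc nscd almost certainly ignores it, since netgroup caching only arrived in much later glibc releases:
Code:
# /etc/nscd.conf -- netgroup stanza (most likely unsupported on RHEL 5)
enable-cache            netgroup    yes
positive-time-to-live   netgroup    28800
negative-time-to-live   netgroup    20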
Prior to 10.10, if I entered my password to gain administrative permissions, for example to check for updates in Update Manager, the escalated permissions remained in effect for a period of time: if I closed Update Manager and invoked the Synaptic Package Manager within a couple of minutes, I was not asked for my password. Does anyone know if this was SUPPOSED to change in 10.10? I have reproduced the phenomenon in both the 32 and 64 bit versions of 10.10.
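If the prompt is coming via sudo/gksudo, the grace period is sudo's timestamp and is tunable; PolicyKit-driven prompts (which more of 10.10 uses) are governed separately, and that shift may itself be the change being observed. The sudo knob, for reference:
Code:
# /etc/sudoers -- edit with visudo; value in minutes, 0 = always ask
Defaults timestamp_timeout=15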
I was wondering if a standard Debian 8 system with Gnome desktop does any kind of local dns caching, and if so, what the command is for clearing it. (Assuming I haven't purposely installed any DNS server software.)
I found multiple posts on the Web about unix DNS caching, but with widely different answers across distributions and across time.
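As far as I know, a stock Debian 8/GNOME install does no local DNS caching at all; a cache only exists if something like nscd, dnsmasq, unbound or bind9 has been installed. A quick way to check, and to flush nscd if present:
Code:
dpkg -l nscd dnsmasq unbound bind9 2>/dev/null | grep '^ii'   # any cache installed?
sudo nscd -i hosts                                            # flush nscd's host cache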
I have Ubuntu 9.10 PC on my home network acting as a VPN gateway. It is using vpnc & iptables to provide access to the remote network - other computers on my local network have routing rules in place to go via the Ubuntu gateway if trying to reach an IP on the remote network. This works just fine, except DNS lookups for names on the remote network don't work.
I'm trying to solve this by running Bind9 on the gateway so it can act as DNS for the local network. I don't want to create excess VPN traffic or load on the remote DNS, so I want the gateway to forward lookups to my ISP's DNS first and only try the remote network's DNS if the name is not found. Is this possible, or is there another (better) way around it? The Bind9 configs seem to accept multiple DNS servers, but use them in a failover sense, only trying secondary servers when the first in the list is unreachable at all.
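BIND can split this per-domain rather than by failover, using a forward zone: only names under the remote domain cross the VPN, while everything else follows the global forwarders. A sketch (the domain and address are placeholders):
Code:
# /etc/bind/named.conf.local
zone "corp.example.com" {
    type forward;
    forward only;
    forwarders { 10.0.0.53; };   # the DNS server on the VPN side
};
# all other lookups keep using the ISP resolvers listed in the
# global "forwarders" block of named.conf.options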
How do I exclude certain URLs from proxy caching in squid.conf? I don't want to cache ....., mediacafe, etc. I know I can block sites by creating a file like bad-sites.acl and adding it to squid.conf, but that blocks them outright.
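Squid separates blocking from caching: an acl fed to "cache deny" is still fetched and served, just never stored (on squid older than 2.6 the directive is spelled "no_cache deny"). Domains below are examples:
Code:
# /etc/squid/squid.conf
acl nocache dstdomain .mediacafe.com .example.com
cache deny nocache     # fetch these normally, never cache them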
Is it possible to disable caching of thumbnails in Nautilus but still have thumbnails on? I tried linking .thumbnails to /dev/null, but that just disabled them completely.
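One workaround that I believe works on GNOME 2, though the key names should be verified in gconf-editor under /desktop/gnome/thumbnail_cache: keep thumbnailing on but make the cache purge itself almost immediately:
Code:
gconftool-2 --type int --set /desktop/gnome/thumbnail_cache/maximum_age 1    # days
gconftool-2 --type int --set /desktop/gnome/thumbnail_cache/maximum_size 1   # MB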
Basically, i have a clustered filesystem using GlusterFS. This is ultimately going to host a very large number of files.
It is mainly used as a storage destination for backups, and historical copies of files.
Remote servers sync using unison every few minutes. A local script will run over the whole filesystem once per hour looking for new files/folders, and files that have been updated based on their timestamp.
99% of filesystem access is browsing the directory structure, listing directory contents and checking the modification times of files. Access to the actual content of a file is minimal. Only a tiny fraction of the filesystem is actually modified from hour to hour.
GlusterFS alone is quite slow when browsing the directory structure (i.e. "ls -Rl /data"). The speed of actually transferring file content is sufficient for my requirements.
What I need is to vastly improve performance when running operations such as "ls -Rl /data". (/data is the mount point)
I believe the best way to do this is to implement caching. The cache options within GlusterFS are simply not sufficient here.
My first thought was to re-export the GlusterFS mount with NFS, then mount the NFS share and set the client-side cache to a very long expiry (like 86400 seconds = 24 hours). It is my understanding that any change made to a file through the mount point will invalidate the cache entry for that file, and since it is only mounted in one place, no changes are possible at the back end.
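For reference, that expiry is set with the NFS attribute-cache options at mount time; a sketch, where the server name and paths are placeholders:
Code:
# on the client: hold cached attributes/directory entries for 24 hours
mount -t nfs -o actimeo=86400,hard,intr gluster1:/data /data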
I did this using the kernel-based NFS server, but ran into major problems with "Stale NFS" errors, which from what I've read are due to a FUSE-related problem that doesn't sound like it will be fixed soon. Aside from the stale errors, this did provide a suitable boost in performance.
I tried the beta of GlusterFS that has the integrated NFS server (so presumably, no FUSE) but I could not get this to compile properly on our servers.
Finally, I tried the Gluster-patched version of unfs3 that uses booster to talk to Gluster instead of FUSE. This works, but for some reason the NFS client cache doesn't seem to cache anymore.
One last thing I was looking at is the possibility of running a simple cache layer in front of either GlusterFS or NFS. I believe CacheFS (FS-Cache) is the tool for the job, but I have been unable to get it to work; I believe it is disabled in my kernel or something (the mount command says cachefs is unknown).
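FS-Cache/CacheFiles only reached the mainline kernel around 2.6.30, which would explain it being missing on the 8.04 boxes; on the 10.04 machine something like this sketch should be possible (package and paths as Ubuntu ships them, server name a placeholder):
Code:
apt-get install cachefilesd                           # user-space cache manager
sed -i 's/#RUN=yes/RUN=yes/' /etc/default/cachefilesd
/etc/init.d/cachefilesd start
# "fsc" asks the NFS client to back its cache with local disk
mount -t nfs -o fsc,actimeo=86400 gluster1:/data /data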
I am running Ubuntu 8.04 on most servers, but have upgraded one to 10.04 to try to get around kernel limitations. My servers are all 32-bit (I know, not recommended for GlusterFS) and it's very difficult for me to change this (it's a live system).
I quite simply need to add a cache for the directory structure information, and then maybe export this with NFS so that it can be mounted on a *single* server. (the cache can be on the server where it is mounted if required, but due to the large size of the cache - it may be better to have a server dedicated for the cache)
I am running GlusterFS 3.0.5 in a replicate/distribute manner.
There is a server running a lot of websites that uses Varnish for caching to boost performance. I want to exclude certain URLs that change frequently from caching, without removing entire domains from the cache. Is there any way to exclude just those pages?
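In Varnish this is done in VCL by passing matching requests straight to the backend (URL patterns below are examples; on very old Varnish the statement is just "pass;"):
Code:
# default.vcl
sub vcl_recv {
    if (req.url ~ "^/news" || req.url ~ "^/ticker") {
        return (pass);   # never cache these, fetch fresh every time
    }
}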
Is there anything like a persistent caching proxy available on Linux for me to configure, i.e. not a public one? (Persistent meaning the cache remains on the hard disk between reboots.) Is it possible for it to NEVER look for updates to a page that is already in the cache?
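Squid appears to fit both requirements: its cache_dir lives on disk and survives reboots, and offline_mode tells it to serve from cache without revalidating upstream. A personal-use sketch:
Code:
# /etc/squid/squid.conf
http_port 3128
cache_dir ufs /var/spool/squid 2000 16 256   # persistent on-disk store
offline_mode on                              # never check upstream for updates
acl me src 127.0.0.1
http_access allow me
http_access deny all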
I want to compile squid from source into an RPM with caching disabled. I tried the enable-storeio=null option in the squid.spec file, but after installing the compiled RPM I can see that squid still creates its spool directories and caches objects; otherwise it works fine.
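From what I recall, --enable-storeio=null only builds the null store module; squid.conf still has to be told to use it, otherwise the default ufs store is created and populated. Something like:
Code:
# squid.conf -- run as a pure non-caching proxy
cache_dir null /tmp   # requires the null storeio module compiled in
cache deny all        # belt-and-braces: never store any object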
I'm running into a little trouble trying to configure BIND as a caching DNS server on CentOS 5.6. For debugging purposes I have iptables and SELinux turned off, but I still can't see the DNS service from my local network. On the server itself I can run nmap and see that port 53 is open, but if I try from another computer on the network the port is closed.
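The CentOS default named.conf binds to 127.0.0.1 and only allows localhost queries, which matches these symptoms exactly (open locally, closed from the LAN). The usual fix, with example addresses:
Code:
# /etc/named.conf
options {
    listen-on port 53 { 127.0.0.1; 192.168.1.10; };  # add the server's LAN IP
    allow-query     { localhost; 192.168.1.0/24; };  # permit LAN clients
    recursion yes;
};
# then: service named restart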