Server :: Disable Caching For Certain URLs In Varnish?
Mar 13, 2011
There is a server running a lot of websites, with Varnish in front for caching as a performance boost. I want to exclude certain frequently changing URLs from the cache, but without removing complete domains from caching, just particular URLs within the sites. Is there any way to keep those pages out of the cache?
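The usual way to do this is in VCL: match the URLs in vcl_recv and bypass the cache for them. A minimal sketch (the paths shown are placeholders, not from the post):

Code:
sub vcl_recv {
    # never cache these frequently changing pages
    if (req.url ~ "^/news/" || req.url ~ "^/live/") {
        return (pass);
    }
}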
Is it possible to disable the on-disk caching of thumbnails in Nautilus but still have thumbnails shown? I tried symlinking .thumbnails to /dev/null, but that just disabled them completely.
How can I make removable media (e.g. USB sticks) not use any write caching? I want to prevent data loss when a stick is removed after file copying appears done but before the write cache has been flushed. I'm using GNOME on Squeeze.
I've found suggestions of adding the 'sync' mount option to /system/storage/default_options/vfat/mount_options in the GNOME configuration. However, this doesn't seem to completely eliminate write buffering: the drive activity light continues for several seconds after file copying appears done, and unmounting a drive produces a dialog box which says to wait whilst data is written to disk.
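For reference, the same effect can be had from the command line; even on an ordinary async mount, an explicit flush before unplugging closes the data-loss window (the device and mount point below are hypothetical):

Code:
# mount a vfat stick with synchronous writes
sudo mount -o sync,uid=$(id -u) /dev/sdb1 /mnt/usb

# or, on a normal async mount, flush before removal:
sync            # blocks until all dirty buffers are written out
sudo umount /mnt/usb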
I will be relocating to a permanent residence sometime in the next year or two, and I've recently begun thinking about the best way to implement a home network. It occurred to me that the most elegant solution might be to use VM technology to eliminate as much hardware and wiring as possible. My thinking is this: install a multi-core system and configure it to run several VMs, one each for a firewall, a caching proxy server, a mail server, and a web server. Additionally, I would like to run 2-4 VMs as remote (RDP) workstations, using diskless workstations to boot the VMs over powerline Ethernet. The latest powerline technology (available later this year) will allow multiple devices on a residential circuit to operate at near gigabit speed, just like legacy wired networks.
In theory, the above would allow me to consolidate everything but the diskless workstations onto a single server and eliminate all wired (and wireless) connections except the broadband connection to the Internet and the cabling to the nearest power outlets. It appears technically possible, but I'm not sure about the various virtual connections among VMs. In theory, each VM should be able to communicate with the others as if they were on the same network via the server's data bus, but what about setting up firewall zones? Any internal I/O bandwidth bottlenecks? Any other potential "gotchas", caveats, or issues (other than the obvious requirement of having enough CPU and RAM)? Any thoughts or observations welcome, especially if they come from real-world experience in a VM environment. BTW, in case you're wondering why I'm posting here, it's because I run Debian on all my workstations/servers (running VirtualBox as a VM for Windows XP on one workstation).
I'm using wget to retrieve a long list of URLs, a small proportion of which fail, hence:
Code: wget --input-file=urls.txt Is there a way to log the URLs that have failed? Unfortunately wget does not output the current URL being processed (and then the status), so it is hard to see grepping the output helping.
Or should I use some alternative like curl, wmget?
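One workaround is to drive wget one URL at a time from a shell loop and record the failures yourself; a minimal sketch (failed.txt is an arbitrary name):

Code:
#!/bin/sh
# fetch each URL; append any that wget could not retrieve to failed.txt
while read -r url; do
    wget -q "$url" || echo "$url" >> failed.txt
done < urls.txt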
I have a fresh Fedora 13 install. I managed to set up my phpadmin and browse everything locally, but I cannot browse the web site from any other machine on my network. All my machines get their IPs from my DHCP server (192.168.1.0). I googled and read a thread in this forum, and understood it might be due to SELinux. I disabled it and rebooted, but still have the same behavior: I can browse my Apache locally but not from other machines. I did a telnet from one of my machines using the IP, as follows: telnet 192.168.1.11 80, and got the following: Connecting To 192.168.1.11...Could not open connection to the host, on port 80: Connect failed. I checked the error_log and access_log files and found no hint. I think it must be something in the Fedora system, firewall, or SELinux config that is not allowing access.
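Symptoms like these (local access works, a remote telnet to port 80 is refused) usually point at the host firewall rather than Apache or SELinux. A quick check, assuming iptables as shipped with Fedora 13:

Code:
# list the current rules and look for anything accepting port 80
su -c 'iptables -L -n --line-numbers'

# temporarily allow HTTP to confirm the diagnosis
su -c 'iptables -I INPUT -p tcp --dport 80 -j ACCEPT'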
I am trying to set up a caching nameserver for my lab, but I am unable to locate named.caching-nameserver.conf under the /etc directory. I am trying to use this file as a template. I already checked /usr/share/doc/bind-* for samples but was unable to find it. I am using RHEL5 with the bind and bind-chroot packages installed.
Can someone tell me where I can find the named.caching-nameserver.conf file? Also, I notice that there isn't an /etc/named symbolic link... do I have to create the symlink to /var/named/chroot/etc/bind.conf?
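If memory serves, on RHEL5 that template is shipped by the separate caching-nameserver package rather than by bind itself, so something like the following should produce the file (the path shown assumes bind-chroot is installed):

Code:
yum install caching-nameserver
ls /var/named/chroot/etc/named.caching-nameserver.conf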
My web host runs a Linux server, and when I try to load a file in my browser (which I have uploaded to my web space) with non-Latin words in its name, it gives a 404 error (file not found). For example, I have uploaded mydomain.com/νεο.html; the word "νεο" is non-Latin. When I try to reach this document from my browser I get the error.
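One thing worth testing is whether the page loads when the filename is sent as percent-encoded UTF-8, since that is how a non-Latin name travels in an HTTP request (this assumes the server stored the name as UTF-8):

Code:
# %CE%BD%CE%B5%CE%BF is the UTF-8 percent-encoding of "νεο"
curl -v "http://mydomain.com/%CE%BD%CE%B5%CE%BF.html"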
I have configured master and slave DNS servers on Red Hat Enterprise Linux 4. I want to know what a caching nameserver is and in which situations it is used. If there are master and slave DNS servers, can we use a caching nameserver as well?
I found in the following mail thread: [URL] that we can add "netgroup: caching compat", and also a netgroup caching rules stanza in nscd.conf. But that mail thread is for FreeBSD, and unfortunately I can't find any reference for RHEL 5.x; I need to do exactly the same thing. The following line in my nscd.conf was enough to leave me disheartened: "# Currently supported cache names (services): passwd, group, hosts". The nsswitch.conf on my system works only with the following: "netgroup: nis". Does anyone know if netgroup caching is supported by nscd?
I have a BIND caching-only server set up and working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD related. So my goal is to leave the caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leaves me to think that my caching server is not forwarding properly.
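A forward zone of the kind described normally looks something like this in named.conf (the domain controller address 192.168.1.10 is hypothetical, not taken from the post):

Code:
zone "mydomain.local" {
    type forward;
    forward only;              // never try to resolve this zone ourselves
    forwarders { 192.168.1.10; };
};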
For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but for none of the objects in that domain). Here is my named.conf file:
I am looking into creating a web caching server for myself using Fedora 10. I believe I need to use Squid for this, but it seems to have a lot of features. Basically, all I want for now is to cache the web pages that I and my network users use the most, improving access times and lowering the load on my Internet connection. Can Squid do this, and can someone point me in the right direction to an article on how to configure such a thing?
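Squid does exactly this out of the box; a minimal sketch of a LAN-only squid.conf (the address range is an assumption):

Code:
http_port 3128
cache_dir ufs /var/spool/squid 1024 16 256   # 1 GB on-disk cache
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all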
Basically, I have a clustered filesystem using GlusterFS. This is ultimately going to host a very large number of files.
It is mainly used as a storage destination for backups, and historical copies of files.
Remote servers sync using unison every few minutes. A local script will run over the whole filesystem once per hour looking for new files/folders, and files that have been updated based on their timestamp.
99% of filesystem access is browsing the directory structure, listing directory contents and checking the modification times of files. Access to the actual content of a file is minimal. Only a tiny fraction of the filesystem is actually modified from hour to hour.
GlusterFS alone is quite slow when browsing the directory structure (i.e. "ls -Rl /data"). The speed of actually transferring file content is sufficient for my requirements.
What I need is to vastly improve performance when running operations such as "ls -Rl /data". (/data is the mount point)
I believe the best way to do this is to implement caching. The cache options within GlusterFS are simply not sufficient here.
My first thought was to re-export the GlusterFS mount with NFS, then mount the NFS share and set the client-side cache to a very long expiry (like 86400 seconds = 24 hours). It is my understanding that any change made to a file through the mount point will invalidate the cache entry for that file. (It is only mounted in one place, so no changes are possible at the back end.)
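In NFS terms that long expiry is the client's attribute-cache timeout; a sketch of such a mount (the host name and paths are hypothetical):

Code:
# actimeo sets all four attribute-cache timers (acregmin/max, acdirmin/max) at once
mount -t nfs -o actimeo=86400 server:/data /mnt/data-cached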
I did this using the kernel-based NFS server, but ran into major problems with "Stale NFS file handle" errors, which from my reading is due to a problem related to FUSE that doesn't sound like it's going to be fixed soon. Aside from the stale-handle errors, this did provide a suitable boost in performance.
I tried the beta of GlusterFS that has the integrated NFS server (so, presumably, no FUSE), but I could not get it to compile properly on our servers.
Finally, I tried the Gluster-patched version of unfs3 that uses the booster library to talk to Gluster instead of FUSE. This works, but for some reason the NFS client cache doesn't seem to cache anymore.
One last thing I was looking at is the possibility of running a simple cache layer in front of either GlusterFS or NFS. I believe CacheFS is the tool for the job, but I have been unable to get it to work; I believe it is disabled in my kernel or something (the mount command says cachefs is an unknown filesystem type).
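On Linux this facility is FS-Cache backed by the cachefilesd daemon rather than a mountable "cachefs" filesystem type, which would explain the unknown-filesystem error. A sketch of the usual setup on Ubuntu, assuming a kernel built with FS-Cache support (mainline gained it around 2.6.30):

Code:
apt-get install cachefilesd
/etc/init.d/cachefilesd start
# mount NFS with the fsc option so reads are cached on local disk
mount -t nfs -o fsc server:/data /mnt/data-cached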
I am running Ubuntu 8.04 on most servers, but have upgraded one to 10.04 to try to get around kernel limitations. My servers are all 32-bit (I know, not recommended for GlusterFS) and it's very difficult for me to change this (it's a live system).
I quite simply need to add a cache for the directory-structure information, and then maybe export it with NFS so that it can be mounted on a *single* server. (The cache can live on the server where it is mounted if required, but given the large size of the cache, it may be better to dedicate a server to it.)
I am running GlusterFS 3.0.5 in a replicate/distribute manner.
Is there anything like a persistent caching proxy available on Linux that I can configure for private use, i.e. not public? (Persistent meaning the cache remains on hard disk between reboots.) Is it possible that it NEVER looks for any update to a page that is available in the cache?
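Squid can be bent to both requirements: its disk cache survives reboots, and offline_mode makes it serve hits without ever revalidating them upstream. A minimal private-use sketch:

Code:
http_port 127.0.0.1:3128        # listen on localhost only, so not public
cache_dir ufs /var/spool/squid 2048 16 256
offline_mode on                 # never check for updates to cached objects
http_access allow localhost
http_access deny all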
I'm running into a little trouble trying to configure BIND as a caching DNS server on CentOS 5.6. For debugging purposes I've got iptables and SELinux turned off, but I can't see the DNS service from my local network. On the server itself I can run nmap against it and see that port 53 is open, but if I try it from another computer on my network the port is closed.
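With the firewall out of the picture, the usual culprit is that named is bound to the loopback interface only, which is the CentOS default. A sketch of an options block opened up to the LAN (the addresses are assumptions):

Code:
options {
    listen-on port 53 { 127.0.0.1; 192.168.1.0/24; };
    allow-query { localhost; 192.168.1.0/24; };
    recursion yes;
};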
I have restarted the Apache and Varnish services. I have also rebooted the server, but Varnish will not listen on port 80 (or other non-default ports). On port 6081 the application works fine. How can I fix this?
I can start the application with the following command:
Code: varnishd -f /etc/varnish/default.vcl -a 0.0.0.0:80
But why is this not working through the normal config file, so that Varnish listens on port 80 by itself? As it is, I need to run this command after every server reboot, so I would like to use the config file.
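The -a flag is normally supplied by the init script from a defaults file, so the listen address has to be changed there rather than in default.vcl; a sketch, depending on distribution (variable names as shipped by the packages):

Code:
# RHEL/CentOS: /etc/sysconfig/varnish
VARNISH_LISTEN_ADDRESS=0.0.0.0
VARNISH_LISTEN_PORT=80

# Debian/Ubuntu: /etc/default/varnish
DAEMON_OPTS="-a :80 -T localhost:6082 -f /etc/varnish/default.vcl"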
Here is what I need: I've got two servers for about 4000 users and 300 servers, and the guy before me never set up DNS caching right, so I'm redoing it. My goals:
1) DNS cache 2) Transparent Squid cache only 3) Load balancing at switch level
Hardware: drives upgraded to SSD (2x32 GB per server), 4 GB of RAM, two Dell PowerEdge 850s (P4 2.8 GHz, single core). Any advice, pointers, experiences, and best ways to do this, given that both servers will do both DNS caching and Squid? Also, is BIND9 the best for this? I've seen stuff about dnsmasq; which performs better? (I don't need DHCP.)
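For the transparent-Squid part, the usual pattern is an iptables REDIRECT plus the matching Squid listen option; a sketch (the interface and port are assumptions, and the http_port keyword is "transparent" on Squid 2.x, "intercept" on 3.1+):

Code:
# redirect LAN web traffic arriving on eth0 to the local Squid
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

# squid.conf
http_port 3128 transparent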
I've set up a caching nameserver on my laptop running Fedora 11. The problem is that NetworkManager always overwrites the entry that points to the local nameserver. NetworkManager no longer respects /etc/dhclient.conf, or at least its scripts run after dhclient.conf. It also doesn't respect the DNS1/DNS2 settings in /etc/sysconfig/network-scripts/ifcfg-*. The NetworkManager man page describes scripts in /etc/NetworkManager/dispatcher.d that run when interfaces are brought up and down, so I've written a script that will put in the entry needed for the local nameserver.
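For comparison, a dispatcher script of that kind typically looks something like this (a sketch, not the poster's script; NetworkManager passes the interface and the action as the two arguments, and the file must be executable and owned by root):

Code:
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/99-local-dns
IFACE="$1"
ACTION="$2"
case "$ACTION" in
    up)
        # point the resolver back at the local caching nameserver
        echo "nameserver 127.0.0.1" > /etc/resolv.conf
        ;;
esac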
I have installed BIND 9.3.6 on a CentOS 5.4 virtual machine. The installation itself went fine, and I just want to run it as a caching server, but it is not working: when I dig any URL I get connection-timed-out messages.
I checked /var/log/messages on my machine and there were two kinds of messages:
There are now a lot of errors, but of the above two kinds only; the only thing that differs between them is the IP address.
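When dig times out against a caching server, it helps to separate a local named failure from an upstream-reachability problem; a quick test pair (8.8.8.8 is used purely as an example of an outside resolver):

Code:
dig @127.0.0.1 example.com          # does named answer at all?
dig @8.8.8.8 example.com +time=3    # can the VM reach the outside world on port 53?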
Over the weekend I upgraded my home PC from Fedora 9 to Fedora 12, and now I'm having problems connecting to the Internet. Basically, I am able to connect to some URLs but not others, and it happens in both Firefox and Konqueror. I am able to connect to url, url, url and url with no problems. However, when I try to connect to slashdot.org, url, fedoraforum.org and rpmfusion.org I cannot. All the other Windows PCs in my home, using the same 2Wire home portal, are able to get to those sites using IE with no problem.
I first suspected a DNS issue, but the "host" command returns a valid IP address for all the URLs that I cannot reach. Another symptom is that the following command
Code: su -c 'rpm -Uvh url. url (from rpmfusion.org/Configuration/) also doesn't work when entered at the command prompt. However, when I did "host download1.rpmfusion.org" and edited the command to use the IP address returned instead of "download1.rpmfusion.org", it worked. But then the next time I ran "yum" it failed, because it couldn't find the rpmfusion.org URL in the installed repository entries.
After reading some other threads, I tried disabling avahi-daemon, but that had no effect. I also tried examining /var/lib/dhcpd and /var/lib/dhclient, but neither file existed on my system.
I have a PHP script that checks a string to see if it contains a link (i.e. a URL). I have the following if statement, which uses three possible regular expressions to determine whether there is a link or not.
Code: // check if we found a link // links are denoted by strings that: // - contain http:// // - contain www.*.*
[Code]....
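The posted code is abbreviated, but a check along the lines the comments describe might look like this (a sketch; the patterns are illustrative, not the poster's three regexes):

Code:
<?php
// true if the string contains an http(s):// link or a www.host.tld form
function contains_link($text) {
    return preg_match('#https?://\S+#i', $text) === 1
        || preg_match('#\bwww\.[^\s.]+\.\S+#i', $text) === 1;
}

var_dump(contains_link("see www.example.com for details"));  // bool(true)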
I'm not convinced yet that writing a shell script to do this is the best course of action. If someone can do it with a Perl or Python script, that's fine too. And if you want to make it super high performance and write it in assembly...
Is there any way to embed a URL to an external web page in the text of a Tomboy note, in the same way that other notes are linked to? I know I can just paste the URL into the note and have it link out, but when the link is over a hundred characters long (not kidding), that stops being an option.
I am currently in a more mixed environment than I would like, and I need to mount Samba shares because I need to work with the data. I noticed that Nautilus does not really mount the shares, and some applications cannot deal with smb:// URLs. I searched and found this old thread: [URL]. Is it possible that after all these years this is still unchanged? Permanently mounting at boot time is not an option for me, as the drive will not always be available; that already changes when I move within the office from wired to WLAN (e.g. when going to a meeting, and vice versa).
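A common middle ground is a CIFS mount done on demand rather than from fstab, which gives every application an ordinary path while still letting you unmount before roaming (the server, share, and mount point below are hypothetical):

Code:
# mount while the share is reachable...
sudo mount -t cifs //fileserver/projects /mnt/projects -o username=me,iocharset=utf8
# ...and unmount before leaving the wired network
sudo umount /mnt/projects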