Prior to 10.10, if I enter my password to gain administrative permissions, for example to check for updates in Update Manager, the escalated permissions remain in effect for a period of time. If I close out of Update Manager and invoke the Synaptic Package Manager within a couple of minutes, I am not asked for my password. Does anyone know if this was SUPPOSED to change in 10.10? I have reproduced the phenomenon in both the 32-bit and 64-bit versions of 10.10.
My GUI for BackTrack is not working, and I want to connect to the internet. I gave the required static IP to my machine, but the problem is that I have to give a username and password through the browser before accessing the internet. Here I don't have any browser. How do I give the login credentials through the command line on the BackTrack machine?
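If the gateway uses a simple web login form, one possible approach is to submit that form from the command line with curl (or wget). This is only a sketch: the portal address and the form field names below are assumptions, so the real ones have to be read out of the login page first.
Code:
# Fetch the login page and look for the form action and input names (portal IP is a placeholder):
curl -s http://192.168.1.1/login | grep -i -E '<form|<input'
# Then POST the credentials using whatever field names the form actually uses:
curl -d 'username=myuser&password=mypass' http://192.168.1.1/login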
I am trying to set up a caching nameserver for my lab, but I am unable to locate named.caching-nameserver.conf under the /etc directory. I am trying to use this file as a template. I already checked /usr/share/doc/bind-* for samples but was unable to find it. I am using RHEL 5 with the bind and bind-chroot packages installed.
Can someone tell me where I can find the named.caching-nameserver.conf file? Also, I notice that there isn't an /etc/named symbolic link... do I have to create the symlink to /var/named/chroot/etc/bind.conf?
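If I remember correctly, on RHEL 5 that file is shipped by the separate caching-nameserver package, so it may simply be absent when only bind and bind-chroot are installed. Failing that, a caching-only config is small enough to write by hand; the sketch below is not the Red Hat template and assumes the usual bind-chroot layout (/var/named/chroot/etc/named.conf).
Code:
// Minimal caching-only named.conf (sketch)
options {
        directory   "/var/named";
        listen-on   { 127.0.0.1; };
        allow-query { localhost; };
        recursion   yes;
};

zone "." IN {
        type hint;
        file "named.ca";
};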
I have an Ubuntu 9.10 PC on my home network acting as a VPN gateway. It uses vpnc and iptables to provide access to the remote network; other computers on my local network have routing rules in place to go via the Ubuntu gateway when trying to reach an IP on the remote network. This works just fine, except that DNS lookups for names on the remote network don't work.
I'm trying to solve this by using Bind9 on the gateway so it can act as the DNS server for the local network. I don't want to create excess VPN traffic or load on the remote DNS, so I want the gateway to forward the lookup to my ISP's DNS first and, if the name is not found, then try the remote network's DNS. Is this possible, or is there another (better) way around this? The Bind9 configs seem to accept multiple DNS servers, but use them in a failover sense, only moving to a secondary DNS when the first one in the list is not reachable at all.
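As far as I know, Bind won't retry a different forwarder on NXDOMAIN, but per-zone forwarding usually achieves the same goal: forward everything to the ISP by default and forward only the remote network's domain (and its reverse zone, if needed) over the VPN. A sketch, with placeholder domain names and addresses:
Code:
// /etc/bind/named.conf.options - default forwarder: the ISP's resolver (placeholder IP)
options {
        forwarders { 203.0.113.53; };
        forward only;
};

// /etc/bind/named.conf.local - only the remote domain goes to the VPN-side DNS
zone "corp.example.com" {
        type forward;
        forwarders { 10.8.0.1; };   // remote network's DNS (placeholder)
        forward only;
};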
How do I exclude some URLs from proxy caching in squid.conf? I don't want to cache ....., mediacafe, etc. I know I can block a site by creating a file like bad-sites.acl and adding it to squid.conf, but that blocks the site entirely.
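Instead of an http_access rule, an ACL combined with the cache directive tells Squid to fetch matching requests but never store them. A sketch (the domain names are placeholders; on very old Squid releases the directive was no_cache deny rather than cache deny):
Code:
# squid.conf - fetch these sites normally but never cache them
acl nocache_sites dstdomain .mediacafe.example .example-video.com
cache deny nocache_sites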
Is it possible to disable caching of thumbnails in nautilus but still have them on? I tried linking .thumbnails to /dev/null but that just disabled them completely.
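I don't know of a Nautilus setting for this, but one workaround is to mount a tmpfs over the thumbnail directory: thumbnails keep working, they just live in RAM and vanish on reboot instead of being cached on disk. The fstab line below is a sketch; the home path, uid and size are assumptions.
Code:
# /etc/fstab - keep Nautilus thumbnails in RAM only (adjust user/uid/size)
tmpfs  /home/youruser/.thumbnails  tmpfs  size=64m,uid=1000,gid=1000,mode=0700  0  0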
I am currently working on a project at work to do caching across a Windows network (web traffic through Squid and Windows updates with WSUS). It is proving to be an interesting learning project. This got me thinking about my home network. Currently I am running 8 Fedora computers (with the very real possibility of more in the foreseeable future).
Obviously, running "yum update" on every computer and having each one download the updates directly eats a lot of bandwidth on the external link. I would like to know if there is a way to set up a proxy server (or something else for that matter) to cache yum downloads. This way, when I deploy a new machine or run updates, I can save time and bandwidth by only downloading the packages to my LAN once, instead of nearly a dozen times.
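One way that fits the Squid theme of the work project is to run Squid on one box and point every machine's yum at it; Squid then just needs to be told to keep large RPMs around for a long time. This is a sketch, not a tuned config, and the proxy hostname is a placeholder (a local mirror or a dedicated caching tool would work too):
Code:
# /etc/squid/squid.conf on the proxy box - allow big objects and keep .rpm files a long time
maximum_object_size 512 MB
refresh_pattern -i \.rpm$ 129600 100% 129600 override-expire

# /etc/yum.conf on each Fedora client - send yum traffic through the proxy (placeholder host)
proxy=http://proxybox.lan:3128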
I'm currently copying a large number of files over a network. To monitor progress, I tried running watch du. However, the output never changed (or not by much, I'm not sure). find . -type f | wc -l always gives me the same number of files, as does ls -R. It seems these programs use caching, which is, in general, a good thing. Does anyone know, though, how cache usage could be controlled? I'm on an Arch Linux system and I'm working on an ext4 filesystem on an encrypted hard disk.
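For what it's worth, du, find and ls don't keep caches of their own; if anything is stale it would be the kernel's page/dentry caches or, when the destination is a network mount, attribute caching on that mount. The kernel caches can be dropped as shown below, but if the numbers still don't move, the copy is probably landing somewhere other than the directory being watched.
Code:
# As root: flush pagecache, dentries and inodes (harmless, only affects performance)
sync
echo 3 > /proc/sys/vm/drop_caches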
I have configured master and slave DNS servers on Red Hat Enterprise Linux 4. I want to know what a caching nameserver is and in which situations it is used. If there are already master and slave DNS servers, can a caching nameserver be used as well?
I found the following mail thread: [URL] It says we can add "netgroup: caching compat" and also a netgroup caching stanza in nscd.conf. But that mail thread is for FreeBSD. Unfortunately, I can't find any reference for RHEL 5.x, and I need to do exactly the same thing. The following line in my nscd.conf was enough to leave me disheartened: "# Currently supported cache names (services): passwd, group, hosts". The nsswitch.conf on my system works only with the following: netgroup: nis. Does anyone know whether netgroup caching is supported by nscd?
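For reference, on glibc versions whose nscd does support a netgroup database, the stanza in /etc/nscd.conf would look roughly like the sketch below. Whether RHEL 5's glibc accepts it is exactly the open question; the comment quoted above suggests it does not, so treat this purely as what the syntax would be, not as confirmation that it works there.
Code:
# /etc/nscd.conf - netgroup caching stanza (only honoured if this nscd supports it)
enable-cache            netgroup        yes
positive-time-to-live   netgroup        28800
negative-time-to-live   netgroup        20
persistent              netgroup        no
shared                  netgroup        yes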
My router is crap. If I use DHCP it sets all the computers' DNS to itself and all DNS requests get cached in the router. It even starts to lose some DNS requests if too many are made at once. On my Windows PCs this isn't a problem; I just set DNS to Google's public DNS servers (8.8.8.8 & 8.8.4.4) and bypass my router and ISP altogether. But when I go to pref>network_connections I have to set either DHCP or manual; there is no option to set DHCP with custom DNS. I'm sure there must be a way to do this in the terminal, can someone tell me how? I'm using Ubuntu 10.10.
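If the interface is handled by dhclient, the usual trick is a supersede line in dhclient.conf so the address still comes from DHCP but the nameservers are forced. On 10.10 the file should be under /etc/dhcp3/ (the path differs on newer releases), and note that if NetworkManager is managing the connection it may keep rewriting resolv.conf anyway. A sketch:
Code:
# /etc/dhcp3/dhclient.conf - keep DHCP for the address, override the DNS servers
supersede domain-name-servers 8.8.8.8, 8.8.4.4;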
I was wondering whether a standard Debian 8 system with the GNOME desktop does any kind of local DNS caching, and if so, what the command is for clearing it. (Assuming I haven't purposely installed any DNS server software.)
I found multiple posts on the Web about unix DNS caching, but with widely different answers across distributions and across time.
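To the best of my knowledge a stock Debian 8 GNOME install runs no system-wide DNS cache (unlike Ubuntu, it does not enable NetworkManager's dnsmasq plugin by default), so applications just query whatever is in /etc/resolv.conf and any caching is per-application, e.g. inside the browser. The quick way to confirm on a given box, plus the flush command for the one common optional cache (nscd), is below.
Code:
# Is any local caching resolver running? (nothing here on a default install)
ps -e | grep -E 'nscd|dnsmasq|unbound|pdnsd|named'
cat /etc/resolv.conf
# If nscd happens to be installed, its host cache can be flushed with:
sudo nscd -i hosts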
I am looking into creating a web caching server for myself using Fedora 10. I believe I need to use Squid for this, but it seems to have a lot of features. Basically, all I want for now is to be able to cache the web pages that I and my network users use the most, improving access times and lowering the load on my internet connection. Can Squid do this, and can someone point me in the right direction to an article on how to configure such a thing?
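Squid does exactly this out of the box; for a plain (non-transparent) cache only a handful of lines matter. A minimal sketch, with the LAN subnet and cache size as placeholders; clients then point their browser's proxy setting at the server on port 3128:
Code:
# /etc/squid/squid.conf - minimal caching proxy
http_port 3128
cache_dir ufs /var/spool/squid 2048 16 256      # 2 GB on-disk cache
acl localnet src 192.168.1.0/24                 # your LAN (placeholder)
http_access allow localnet
http_access allow localhost
http_access deny all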
I ran an internet cafe, and last week my Windows server got fried because of a power surge. Now I have Fedora 14 running on another PC and I want to set it up as a full caching proxy server so the other computers can connect through it to the internet. I have 2 network cards inside. I'm really new to Linux and am now learning my way around. I managed to install Squid but don't know how to configure it to suit the purpose above.
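With two NICs the usual setup is NAT on the box plus an iptables rule that silently redirects port 80 into Squid (a "transparent"/intercepting proxy), so the cafe machines need no browser configuration. The sketch below assumes eth0 faces the internet and eth1 faces the LAN, and that the Squid shipped with Fedora 14 is a 3.1.x release (older 2.x versions used the keyword transparent instead of intercept):
Code:
# /etc/squid/squid.conf - accept redirected traffic
http_port 3128 intercept

# Forwarding/NAT and the redirect rule (interface names are assumptions)
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128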
I want to configure a DNS server on Fedora 14, so I want to install caching-nameserver for its template files. I can't install caching-nameserver on my Fedora 14 with this command [but I can do it on Fedora 5].
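If I remember right, the separate caching-nameserver package was dropped on recent Fedora releases and its template config was folded into the bind package itself, which would explain why the install works on Fedora 5 but not on 14. Under that assumption, installing bind should be enough, since the stock /etc/named.conf already behaves as a localhost-only caching nameserver:
Code:
yum install bind bind-utils
service named start
# quick check that the caching resolver answers locally:
dig @127.0.0.1 example.com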
Basically, I have a clustered filesystem using GlusterFS. This is ultimately going to host a very large number of files.
It is mainly used as a storage destination for backups, and historical copies of files.
Remote servers sync using unison every few minutes. A local script will run over the whole filesystem once per hour looking for new files/folders, and files that have been updated based on their timestamp.
99% of filesystem access is browsing the directory structure, listing directory contents and checking the modification times of files. Access to the actual content of a file is minimal. Only a tiny fraction of the filesystem is actually modified from hour to hour.
GlusterFS alone is quite slow when browsing the directory structure (e.g. "ls -Rl /data"). The speed of actually transferring file content is sufficient for my requirements.
What I need is to vastly improve performance when running operations such as "ls -Rl /data". (/data is the mount point)
I believe the best way to do this is to implement caching. The cache options within GlusterFS are simply not sufficient here.
My first thought was to re-export the GlusterFS mount with NFS, and then mount the NFS share and set the cache on the client to a very long expiry. (like 86400 = 24 hours) It is my understanding that any change made to a file using the mount point will invalidate the cache entry for that file. (it is only mounted in one place, so no changes possible at the back end.)
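For reference, the knob for that on the NFS client is the attribute-cache timeout; something like the mount below keeps cached directory listings and file attributes for a day (nocto additionally relaxes the close-to-open revalidation and is optional). The hostname and paths are placeholders.
Code:
# NFS client mount with a 24-hour attribute cache (sketch)
mount -t nfs -o actimeo=86400,nocto gatewayhost:/data /mnt/data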
I did this using the kernel-based NFS server, but ran into major problems with "Stale NFS" errors, which from what I've read is due to a problem related to FUSE that doesn't sound like it's going to be fixed soon. Aside from the stale errors, this did provide a suitable boost in performance.
I tried the beta of GlusterFS that has the integrated NFS server (so presumably, no FUSE) but I could not get this to compile properly on our servers.
Finally, I tried using the Gluster-patched version of unfs3 that uses boost to talk to Gluster instead of FUSE. Now this works, but for some reason the NFS client cache doesn't seem to cache anymore.
One last thing that I was looking at is the possibility of running a simple cache layer in front of either GlusterFS or NFS. I believe Cache-FS is the tool for the job but I have been unable to get that to work - I believe it is disabled in my kernel or something. (mount command says cachefs is unknown)
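If "Cache-FS" here means the kernel's FS-Cache facility with the cachefiles backend, it needs CONFIG_FSCACHE/CONFIG_CACHEFILES in the kernel (merged around 2.6.30, so the 8.04 boxes are too old but 10.04 should have it), the cachefilesd userspace daemon, and the fsc mount option on the NFS client. One caveat worth knowing: FS-Cache caches file data, not directory listings or attributes, so on its own it may not speed up "ls -Rl" much. A sketch for the 10.04 machine, assuming the cachefilesd package is available there:
Code:
apt-get install cachefilesd
# set RUN=yes in /etc/default/cachefilesd, then:
service cachefilesd start
mount -t nfs -o fsc gatewayhost:/data /mnt/data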
I am running Ubuntu 8.04 on most servers, but have upgraded one to 10.04 to try to get around kernel limitations. My servers are all 32-bit (I know, not recommended for GlusterFS) and it's very difficult for me to change this (it's a live system).
I quite simply need to add a cache for the directory structure information, and then maybe export this with NFS so that it can be mounted on a *single* server. (the cache can be on the server where it is mounted if required, but due to the large size of the cache - it may be better to have a server dedicated for the cache)
I am running GlusterFS 3.0.5 in a replicate/distribute manner.
There is a server that runs a lot of websites and uses Varnish for caching to boost performance. I want to exclude certain frequently changing URLs from caching, but I do not want to exclude complete domains, just certain URLs on those sites. Is there any way to keep those pages out of the cache?
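Yes, this is normally done in the VCL: a test in vcl_recv that matches the host and URL and returns pass sends those requests straight to the backend without caching, while everything else is unaffected. The host name and URL pattern below are placeholders, and the exact return syntax varies a little between Varnish versions:
Code:
# default.vcl - bypass the cache for specific URLs on one site only
sub vcl_recv {
    if (req.http.host == "www.example.com" && req.url ~ "^/news/") {
        return (pass);
    }
}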
Is there anything like a persistent caching proxy available in Linux for me to configure, i.e. not public? (Persistent meaning the cache remains on the hard disk between reboots.) Is it possible for it to NEVER look for any update to a page that is already available in the cache?
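Squid fits both requirements: its cache_dir lives on disk and survives reboots, and the offline_mode directive tells it to serve whatever it has cached without revalidating against the origin server. A sketch (the sizes are just examples), with access restricted to the local machine so it is not public:
Code:
# /etc/squid/squid.conf - private, persistent, never revalidate
http_port 3128
cache_dir ufs /var/spool/squid 10000 16 256   # ~10 GB on-disk cache, survives reboots
maximum_object_size 200 MB
offline_mode on                               # prefer cached copies, don't check for updates
http_access allow localhost
http_access deny all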
I want to compile Squid from source into an RPM with no cache option. I tried it with the enable-storeio=null option in the squid.spec file. After installing the compiled RPM, I see that it still creates the Squid spool directories and caches objects, and otherwise works fine.
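Regardless of which store modules are compiled in, caching can also be switched off in the configuration, which may be simpler than chasing it at build time. On Squid 2.6 and later the directive is cache; on very old releases it was no_cache:
Code:
# squid.conf - run as a pure forwarding proxy, never store anything
cache deny all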
I'm running into a little trouble trying to configure BIND as a caching DNS server on CentOS 5.6. For debugging purposes I've got iptables and SELinux turned off, but I can't see the DNS service from my local network. On the server itself I can run nmap against it and see that port 53 is open, but if I try it from another computer on my network the port is closed.
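That symptom usually means named is bound to 127.0.0.1 only, which is what the stock caching-only config on CentOS 5 does. Adding the server's LAN address to listen-on and allowing queries from the subnet should make it visible; the addresses below are placeholders:
Code:
// /etc/named.conf - options block (sketch)
options {
        listen-on port 53 { 127.0.0.1; 192.168.1.10; };   // add the server's LAN IP
        allow-query { localhost; 192.168.1.0/24; };       // your subnet
        recursion yes;
};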
I'm experimenting with Squid [job task =(]. Squid is currently configured to cache deb, zip, tbz2, gz, flv, etc., but I don't know how to skip caching xml and other file types.
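The cache deny mechanism works by URL path as well as by domain: an ACL with urlpath_regex matching the unwanted extensions keeps those responses out of the cache. The extension list is just an example:
Code:
# squid.conf - never cache these file types
acl nocache_types urlpath_regex -i \.xml$ \.json$
cache deny nocache_types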
I have a Python script that I use to create Debian packages automatically. When running it under Ubuntu, it only requires the passphrase to be entered once to sign the changes and dsc files. However, when running under Debian, I am required to enter it every time. The script uses the debuild command to do the actual package building.
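My guess is that the Ubuntu desktop session is running a GnuPG agent for you while the Debian box is not, so each gpg invocation by debuild asks again. Running gpg-agent and enabling use-agent should restore the cached-passphrase behaviour; the TTL values below are just examples.
Code:
# ~/.gnupg/gpg.conf
use-agent

# ~/.gnupg/gpg-agent.conf - how long the passphrase stays cached (seconds)
default-cache-ttl 3600
max-cache-ttl 7200

# on a box with no desktop session, start the agent first, e.g.:
eval $(gpg-agent --daemon)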
I just upgraded to Fedora 13, with emacs 23.1. Now when I edit a .gpg (encrypted) file, emacs doesn't cache the passphrase, so when I save the file emacs demands that I repeat the passphrase twice. Previously, the following line in .emacs made it cache the passphrase:
Code:
(setq epa-file-cache-passphrase-for-symmetric-encryption t)
This is supposed to work, according to the documentation [URL], but in Fedora 13 emacs it seems to have stopped working.
My upstream DNS server is a bit slow, so I've installed the dnsmasq cacher locally. I have the service starting on runlevels 2, 3, and 5, but I can tell by Firefox's behavior that dnsmasq does not work upon boot. Firefox lets its own DNS cache expire after 60 seconds. When I do my second Google search five minutes after my first, the second DNS lookup for www.google.com is just as slow as the first. If I manually restart the dnsmasq service, I get the fast name resolutions I expect.
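A common cause is ordering: dnsmasq starts before the interface is up, or the DHCP client rewrites /etc/resolv.conf after dnsmasq has started, so queries go straight upstream until the service is restarted. Two quick checks right after boot, plus what resolv.conf should contain once dnsmasq is in the loop:
Code:
# Does dnsmasq answer at all right after boot?
dig @127.0.0.1 www.google.com
# /etc/resolv.conf should point at the local cache:
nameserver 127.0.0.1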
I run Ubuntu Netbook 10.04 on my EeePC 1005HA. I'm going to get a SSD for it eventually, but I can't afford one right now so it's running from a 200GB hard disk I scavenged off a dead laptop.
I went into power management and set the option that says "spin down hard drives whenever possible", but this accomplished a whole lot of nothing; whenever the computer is on, the drive is spinning. I ran hdparm -y and the drive clicked off, then promptly spun back up after a few seconds. iotop shows occasional tiny bursts of activity from "jbd2/sda1-8", which I don't really know how to interpret, but I don't have anything weird installed so I'm assuming this is normal system operation.
Now, what I need is some sort of application, utility, command - anything - that forces the computer to keep all filesystem changes in RAM with the drive shut down; every five/ten minutes or so (this would hopefully be configurable) it spins up the drive, dumps the filesystem changes to it, and spins it down again.
I realize this presents data loss risks related to crashing and poweroffs when the cache hasn't been dumped to disk, but I'm willing to risk it as Linux never really crashes at all, and since it's a netbook power failures won't cause unexpected shutdowns.
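Much of this can be approximated with the kernel's laptop-mode and writeback sysctls plus a longer ext4 journal commit interval (the jbd2/sda1-8 thread seen in iotop is the ext4 journal doing its periodic commits). The laptop-mode-tools package automates these settings; the raw values below are only examples and roughly mean dirty data can sit in RAM for ten minutes before the disk has to spin up.
Code:
# Let dirty pages stay in RAM for ~10 minutes before writeback (example values)
sysctl -w vm.laptop_mode=5
sysctl -w vm.dirty_writeback_centisecs=60000
sysctl -w vm.dirty_expire_centisecs=60000
sysctl -w vm.dirty_ratio=40
sysctl -w vm.dirty_background_ratio=30

# Stretch the ext4 journal commit interval and drop atime updates
mount -o remount,commit=600,noatime /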