General :: Squid Log Is Wrong Or SARG Badly Configured?
Oct 6, 2010
I have an Arch Linux PC running Squid as a proxy server and SARG to read Squid's access log. The Squid log only shows 192.168.0.0 making HTTP requests; I need to see the local IP of each PC on my network. I read on another page that I need to rebuild Squid with the follow_x_forwarded option. How can I do that? Is there another way to do it?
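A sketch of what that rebuild typically involves, assuming Squid is built from source and that a front device (the one appearing as 192.168.0.0) sets the X-Forwarded-For header; the proxy address below is a placeholder, and `log_uses_indirect_client` is the Squid 2.6+ directive name per its documentation:

```
# 1) rebuild Squid with X-Forwarded-For support (from the source tree):
#      ./configure --enable-follow-x-forwarded-for && make && make install
# 2) then, in squid.conf, trust the upstream device that sets the header:
acl upstream_proxy src 192.168.0.1        # hypothetical address of the front proxy
follow_x_forwarded_for allow upstream_proxy
# 3) log the indirect (original) client instead of the proxy's address:
log_uses_indirect_client on
```

With that in place, access.log records the real client IPs, and SARG reports pick them up on its next run.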
I have just installed Sarg-2.2.3.1.tar.gz, but when the compile finished I could not find sarg.conf (in /etc/.. or /etc/httpd/conf.d). May I know where it is? I'm not sure whether the compile worked correctly or not. Some logs are shown here:
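For what it's worth, a source build with stock autoconf defaults usually installs sarg.conf under the configured sysconfdir (typically /usr/local/etc) rather than /etc. A sketch, with the paths as assumptions for a default build:

```
# find where the build actually put it:
find /usr/local -name 'sarg.conf' 2>/dev/null
# or rebuild pointing the config directory at /etc explicitly:
#   ./configure --sysconfdir=/etc/sarg && make && make install
```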
I got the following task from my boss: find out whether there is an alternative tool to SARG for creating reports from Squid. We currently use SARG, but my boss told me that SARG's main problem is that it generates a huge number of files, which causes problems when migrating our servers. He gave me the following conditions for replacing the current tool (SARG):
* a standard Debian package
* generates fewer files; ideally, saves reports to a database
So I would like to ask whether you know of such a tool (I cannot find one via Google), and it would be best if you could share some practical experience with it.
I am using Squid to control access to the internet, and everything works fine except for one user who uses an outside organization's portal to connect to the internet. Whenever he tries to enter the portal by typing its (EXAMPLE) URL, Squid returns a Permission Denied error.
How can I allow this portal in Squid, so that Squid permits access to it?
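A minimal squid.conf sketch for whitelisting one site; the domain is a placeholder, and the allow rule must appear before whatever rule currently denies the user:

```
acl portal dstdomain .example-portal.com   # hypothetical portal domain
http_access allow portal
# ... existing access rules ...
http_access deny all
```

After editing, `squid -k reconfigure` reloads the rules without restarting the daemon.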
SARG seems OK, but it is not generating any reports: "Now generating Sarg report from Squid log file /var/log/squid/access.log squid and all rotated versions .... Sarg finished, but no report was generated. See the output above for details." There are also no previously generated reports to view.
When I start SARG, it blows up with a segmentation fault. Linux intra.local 2.6.18-53.1.13.el5 #1 SMP Tue Feb 12 13:02:30 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
I have already tried emptying all the Squid cache logs and reinstalling (yum remove and yum install again) the sarg RPM, without success.
I have just set up a new Debian 6 (squeeze) server that will work as a proxy server with Squid. But I have an issue when I want to install sarg (sarg-report): I cannot find the package through aptitude/apt-get, although it was present in the lenny distribution. Is it missing from squeeze, has it been discontinued, or should I add a specific entry to /etc/apt/sources.list (currently squeeze main, squeeze/updates main)?
I am using SARG for Squid report analysis, and it is working fine. But I want to know whether some custom configuration makes it possible to link the SARG image at the top of the page to a custom location instead of its default homepage on SourceForge. For ease of understanding, I am attaching a screenshot of the page here. The SARG image I am talking about is the blue one at the top. It is linked to [URL], and I want it to link to my SARG homepage.
I am getting an error when I generate a report with Squid's report generator (sarg). Is there a tool or way to find where in the log file the error is? The log file is 61,442 lines, and it will take me forever to find the error by hand.
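One low-tech approach is to bisect the log: split it in half, run sarg on each half, and recurse into whichever half still errors, narrowing 61,442 lines to one in about sixteen runs. A bash sketch; the paths and the sarg invocation are assumptions for a typical setup:

```shell
#!/bin/bash
# Bisect a large access.log to locate a line that sarg chokes on.

split_log() {                # usage: split_log <logfile> <outdir>
    local log="$1" out="$2"
    local total half
    total=$(wc -l < "$log")
    half=$((total / 2))
    head -n "$half" "$log" > "$out/first_half.log"
    tail -n +"$((half + 1))" "$log" > "$out/second_half.log"
}

# Run sarg on each half, then recurse into whichever half still errors:
#   split_log /var/log/squid/access.log /tmp
#   sarg -l /tmp/first_half.log  -o /tmp/sarg_test1
#   sarg -l /tmp/second_half.log -o /tmp/sarg_test2
```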
My Squid server works fine on a Fedora 11 system. Is there any web-like interface for admins to create, change, and modify Squid users and to view their logs?
I would like to ask for some help and a tutorial on setting up and configuring a Squid proxy server on my home PC server. I am a newbie with CentOS Linux. I have already installed CentOS 5.5 on my system. Now I want to configure it as my internet server; all four of my systems run Windows, including the laptop, and I want to connect them through my CentOS PC with username authentication. I assign all IP addresses statically. See the attachment for my setup. [url] I just want to know what I need to change and add in my Squid config file, and how to properly configure my CentOS box with two LAN cards as an internet server.
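For the username-authentication part, a commonly used starting point is basic auth against an htpasswd file. A squid.conf sketch, assuming the stock CentOS helper path (verify it with `rpm -ql squid`) and a placeholder LAN subnet:

```
# create users first:  htpasswd -c /etc/squid/passwd alice
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid proxy
acl lan src 192.168.0.0/24          # assumed LAN subnet behind the 2nd NIC
acl authed proxy_auth REQUIRED
http_access allow lan authed
http_access deny all
```

The Windows machines would then point their browsers at the CentOS box's LAN IP on port 3128 and get a username/password prompt.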
I am using Squid 3.0 STABLE25 on Fedora 11 with 512 MB of RAM and 25 users. The problem is that it uses all the RAM within about 30 minutes of booting. When I upgraded to 1 GB, the whole 1 GB was in use after some time. I think this is due to Squid's caching; how can I disable it?
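Two things are worth separating here: Linux itself parks otherwise-free RAM in the page cache, so `free` showing almost no free memory is often normal and harmless; and Squid's own memory appetite is governed mainly by cache_mem. A squid.conf sketch for capping or disabling caching (values are illustrative):

```
cache_mem 32 MB          # cap the memory pool for hot/in-transit objects
cache deny all           # stop caching entirely; Squid still proxies
# alternatively, configure no cache_dir so nothing is ever cached on disk
```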
I've just upgraded from Debian Lenny to Squeeze. It didn't go as smoothly as I had hoped, one of the casualties being the proprietary nvidia driver I had previously installed. I fixed that using the 'Debian way' rather than NVIDIA's installer, and that is now all fine. So far so good. However, when I boot up, before X starts, my screen is at a much lower resolution than it used to be (640x480, I think) — really chunky and ugly as it runs through the startup scripts. I assume that somewhere early in the startup scripts the nvidia driver is being loaded and set to a low resolution. Can someone advise me where this might be and what to look for? Am I on the right track? I stress that this is before X is started; once X has started, the screen resolution is as I want.
I'm using Squid as a proxy server on FC6, with squidGuard for website access restriction. I now want to make an exception for website access. For example, Squid user1 with IP 192.168.7.10/32 should not be able to access facebook.com, while all other Squid users, with IPs 192.168.7.11/32, 192.168.7.9/32, and so on, can access facebook.com, since facebook.com is not listed in the squidGuard .db files.
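A squidGuard.conf sketch of such a per-source exception, assuming a small 'facebook' destination list you maintain yourself; the group names and list paths are placeholders:

```
src restricted {
    ip 192.168.7.10
}
dest facebook {
    domainlist facebook/domains      # a file containing: facebook.com
}
acl {
    restricted {
        pass !facebook all           # user1: everything except facebook
    }
    default {
        pass all                     # everyone else: unrestricted
    }
}
```

After editing, rebuild the .db files (`squidGuard -C all`) and reload Squid so the new policy takes effect.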
But it gives me an error like this (the output of # squid -k parse): aclParseAclLine: Invalid ACL type 'arp' FATAL: Bungled squid.conf line 1234: acl block arp 00:13:45:d3:24:e4 squid Cache (Version 2.5.STABLE6): Terminated abnormally
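"Invalid ACL type 'arp'" usually means this particular Squid binary was built without ARP ACL support; in Squid 2.5 the arp type is only compiled in when the --enable-arp-acl configure option is used. A sketch of the rebuild, with the source tree and install steps as assumptions:

```
# from the squid-2.5 source tree:
./configure --enable-arp-acl
make && make install
# afterwards the original line should parse:
#   acl block arp 00:13:45:d3:24:e4
#   http_access deny block
```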
I have installed Squid 2.6 on my CentOS box. I have written a shell script that pings a network and writes the result to a file: '1' if the network is up and '0' if it is down. After that, a Perl script reads the file and performs the redirection: it redirects to a fixed URL [URL] if the network is down and does nothing when it is up. I have registered my Perl script in squid.conf as url_rewrite_program /my_file_path.
Below is my shell script for pinging:
Quote:
#!/bin/bash
while [ 1 ]
do
    HOST=143.148.137.134
[code]....
My problem is that client browsers are not redirected to www.google.com even when the network is down. When the user clicks any URL while the network is down, the browser should go to the fixed URL; instead it just shows "cannot resolve host".
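One common cause is the helper protocol itself: a url_rewrite_program must read one request per line on stdin and answer with exactly one line per request — the replacement URL, or an empty line to leave the request untouched. A bash sketch of that helper logic (the flag-file path and fallback URL are placeholders; the original uses Perl, but the contract is the same):

```shell
#!/bin/bash
# Sketch of a url_rewrite_program helper for Squid 2.6.
# Assumes the ping script writes "1" (up) or "0" (down) to $STATE_FILE.
STATE_FILE="${STATE_FILE:-/tmp/netstate}"
FALLBACK_URL="http://www.google.com"

decide() {
    # Emit "302:<url>" to force an HTTP redirect when the link is down,
    # or an empty line to tell Squid to leave the request untouched.
    if [ "$(cat "$STATE_FILE" 2>/dev/null)" = "0" ]; then
        echo "302:$FALLBACK_URL"
    else
        echo ""
    fi
}

main() {
    # Squid writes one request per line: URL ip/fqdn ident method
    while read -r url rest; do
        decide
    done
}
# main    # uncomment when installing; Squid drives the loop via stdin
```

The 302: prefix asks Squid 2.6 to send a real HTTP redirect to the browser rather than silently fetching the other page, which avoids the "cannot resolve host" the client currently sees.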
I can't seem to access my Squid server on port 80. I have port 80 allowed in the conf for this IP. Apache is listening on port 80, but only on the 2nd IP. iptables is allowing port 80 incoming. nmap shows no ports open on 80, though:
Code:
Starting Nmap 5.00 ( http://nmap.org ) at 2009-08-20 11:19 BST
NSE: Loaded 0 scripts for scanning.
Initiating SYN Stealth Scan at 11:19
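When the layers disagree like this, it helps to interrogate each one in order. A diagnostic sketch; the interface, IP, and config path are assumptions:

```
# 1. Is anything actually listening on port 80, and on which address?
netstat -lntp | grep ':80'
# 2. Is Squid really bound to port 80? (check http_port in squid.conf)
grep '^http_port' /etc/squid/squid.conf
# 3. Firewall: are the INPUT rules what you think they are?
iptables -L INPUT -n -v | grep -w 80
# 4. Scan again from outside, against the specific IP:
nmap -p 80 192.0.2.10
```

If step 1 shows nothing bound to the first IP on port 80, the problem is the bind address in squid.conf rather than the firewall.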
I have installed Squid and DansGuardian on my server, and I set up my iptables to forward port 80 traffic to port 3128 (Squid). I also uncommented the "bannedextensionlist" line in /etc/dansguardian/dansguardianf1.conf, hoping my server would block downloads. But it isn't: files still download no matter what I add to /etc/dansguardian/lists/bannedextensionlist. Oh yeah, I also added this line to my /etc/squid/squid.conf
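One likely culprit, offered as a guess: with transparent interception, port 80 has to be redirected to DansGuardian (which listens on 8080 by default and chains to Squid on 3128), not straight to Squid — redirecting to 3128 bypasses the filter entirely, which would explain why banned extensions still download. A sketch, with the interface and ports as assumptions (check filterport/proxyport in dansguardian.conf):

```
# wrong: sends browsers straight to Squid, skipping DansGuardian
#   iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
# right: send them to DansGuardian, which forwards to Squid itself
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
```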
I recently finished installing Squid 3.1.9, and I think the installation went correctly with only minor configuration changes. It accepts requests on port 3128 and created some numerical or binary (I guess) files in /usr/local/squid/var/logs. My problem is: how can I fully verify that the cache is really storing internet files? Some forums on the internet told me to try the command: cat /usr/local/squid/var/logs/cache.log. So I tried it, and it gives me this output:
2010/11/11 18:04:49| store_swap_size = 0
2010/11/11 18:04:50| storeLateRelease: released 0 objects
2010/11/11 18:05:41| Squid is already running! Process ID 5458
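store_swap_size = 0 suggests nothing has been written to the disk cache yet, or that no cache_dir is configured at all. A few ways to check, as a sketch using the default /usr/local/squid layout from the post (adjust paths to your build):

```
# is a disk cache configured at all?
grep '^cache_dir' /usr/local/squid/etc/squid.conf
# does the swap size grow after some browsing through the proxy?
grep store_swap_size /usr/local/squid/var/logs/cache.log | tail
# do repeated requests for the same page turn TCP_MISS into TCP_HIT?
tail /usr/local/squid/var/logs/access.log
```

Seeing TCP_HIT or TCP_MEM_HIT entries in access.log on a second fetch of the same URL is the most direct confirmation that objects are being served from the cache.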
I have CentOS with Squid version 2.6. I have to configure Squid to restrict some IPs to 10 KB uploads. For that I set request_max_body_max_size, but this directive applies to all IPs; I want to limit uploading only for some particular IPs.