Server :: Generating Apache Log Reports With Specific Format?
Jul 1, 2011
I'm trying to find a tool for generating reports from Apache access_log files (in Common Log Format). I found several (awstats, lire/logreport, Weblog Expert, Apache Logs Viewer, etc.), but they only produce global, general reports about the log file, and the Perl scripts I found just show the top X entries for various patterns. My question is how I can generate a report with output like this:
IPs | Total no. of connections | Number of pages visited | Total time of connection
So basically this is a list of every IP in the log with the respective numbers (connections/pages/time) next to it.
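Something along these lines is what I mean; a rough, untested gawk sketch, where the "pages" heuristic and the idea of approximating connection time as the span between an IP's first and last request are my own assumptions, since Common Log Format records no durations:
Code:
#!/usr/bin/gawk -f
# per_ip_report.awk -- rough sketch, not a polished tool.
# Usage: gawk -f per_ip_report.awk /var/log/apache2/access.log
# Assumes Common Log Format: host ident user [date:time zone] "request" status bytes

BEGIN {
    split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
    for (i = 1; i <= 12; i++) mon[m[i]] = sprintf("%02d", i)
}

{
    ip = $1
    hits[ip]++

    # $7 is the URL in "METHOD URL PROTO"; count non-static requests as "pages"
    if ($7 !~ /\.(gif|jpg|jpeg|png|css|js|ico)(\?|$)/) pages[ip]++

    # $4 looks like [10/Oct/2000:13:55:36 -- convert it to an epoch timestamp
    ts = substr($4, 2)
    split(ts, d, "[/:]")      # d[1]=day d[2]=Mon d[3]=year d[4]=h d[5]=m d[6]=s
    epoch = mktime(d[3] " " mon[d[2]] " " d[1] " " d[4] " " d[5] " " d[6])
    if (!(ip in first) || epoch < first[ip]) first[ip] = epoch
    if (epoch > last[ip]) last[ip] = epoch
}

END {
    printf "%-16s %12s %8s %14s\n", "IP", "Connections", "Pages", "Time (sec)"
    for (ip in hits)
        printf "%-16s %12d %8d %14d\n", ip, hits[ip], pages[ip], last[ip] - first[ip]
}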
SARG seems OK, but it is not generating any reports: "Now generating Sarg report from Squid log file /var/log/squid/access.log and all rotated versions ... Sarg finished, but no report was generated. See the output above for details." There is also no "view generated reports" link.
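If it helps to narrow it down, running sarg by hand with an explicit log and output directory usually shows what it is actually reading; the paths below are just examples:
Code:
# Parse the named Squid log and write the HTML report to the given directory
sarg -l /var/log/squid/access.log -o /var/www/html/sarg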
I am running a Squid proxy server on OpenSuSE 10.2. I noticed that when I generate a report it only shows me the last day's log file, although /var/log/squid contains logs for all previous dates. I really can't remember which file to modify so that I can see reports for all dates in HTML when I use the following command: Quote: cat access.log | /home/user/squint-0.3.10/squint.pl /home/user/report<date>
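One workaround (an untested sketch; file names are examples) is to feed squint.pl the current log plus all rotated copies in one stream, letting zcat -f pass plain files through and decompress the gzipped ones:
Code:
zcat -f /var/log/squid/access.log* | /home/user/squint-0.3.10/squint.pl /home/user/report-all
The shell glob may not list the rotated files in strict chronological order, so this assumes the report script sorts by timestamp itself.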
I installed an Apache server on Debian 5.0.2 Lenny. I am trying to write a script to analyse the web log files. I found the log files in /var/log/apache2; there is an access log file, `access.log`. My question is: which configuration file determines the location and the name of the access log file, and how can I change them? I used CustomLog in /etc/apache2/apache2.conf like below.
Code:
LogFormat ": %h %l %u %t "%r" %>s %b" common
CustomLog /home/test/my_log_file common
Apache2 created /home/test/my_log_file, but no logs were written to the file even after I ran `/etc/init.d/apache2 restart`. I changed the log file location; it still didn't work. However, Apache2 still wrote logs to the file `/var/log/apache2/access.log`.
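Two things seem worth checking here. First, the inner quotes in a LogFormat string have to be escaped (and the stray leading ": " dropped), otherwise the directive is malformed; the stock "common" format looks like this:
Code:
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog /home/test/my_log_file common
Second, on Debian the CustomLog that writes /var/log/apache2/access.log normally lives in the default virtual host (/etc/apache2/sites-available/default). Requests handled by a virtual host that defines its own CustomLog are logged only there, so a CustomLog added at the main-server level in apache2.conf can appear to do nothing; putting the directive inside the relevant VirtualHost (or editing the default site) is usually the fix.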
Code:
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
#Listen 12.34.56.78:80
Listen 80
And when I try to start the server, I get the following
Code:
(98)Address already in use: make_sock: could not bind to address 0.0.0.0:80

I did have an Apache web server up and running about 6 or 7 years ago, but I seem to have lost everything.
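That error just means something is already bound to port 80, often a stray copy of Apache itself. These commands (run as root) show which process it is:
Code:
# List the listener on port 80 and the owning PID/program name
netstat -tlnp | grep ':80 '

# Same information via fuser
fuser -v 80/tcp
If it turns out to be an old httpd instance, stopping it (e.g. `apachectl stop` or killing the listed PID) frees the port.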
I'm trying to setup an Apache server on my computer which will allow browsing of files in a specific directory and subdirectories, without needing any sort of authentication.
I've got the Apache2 server up and running through yast, and everything works fine as long as I try to point it to the /www/htdocs folder. However, I want to point it at another folder, which is on another partition. This partition is formatted as NTFS, if that matters at all (here's some background on some permissions issues I had with the NTFS partitions recently).
When I change the "Directory" setting in the Yast http server configuration utility to the directory on the NTFS partition I wish to use, attempting to access the server results in the following error:
Code:
Access Forbidden: You don't have permission to access the requested directory. There is either no index document or the directory is read-protected. If you think this is a server error, please contact the webmaster.

Error 403
192.168.1.100
Mon Jun 13 23:43:29 2011
Apache/2.2.17 (Linux/SUSE)
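For a 403 like this, two things usually have to be true: Apache needs a <Directory> block that grants access to the new DocumentRoot, and the Apache user (wwwrun on openSUSE) must be able to read and traverse the NTFS mount. A sketch, with /srv/share standing in for the real directory (Apache 2.2 syntax):
Code:
<Directory "/srv/share">
    # Indexes lets Apache generate a listing when there is no index file
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

# Example fstab entry for the NTFS partition; the device, mount point and the
# numeric uid/gid of wwwrun/www are placeholders to adjust:
# /dev/sdb1  /srv/share  ntfs-3g  ro,uid=30,gid=8,fmask=0133,dmask=0022  0 0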
I'm about to create a CSR and was reading this page in the Ubuntu docs: [URL] A couple of things:
* There's no date on the article. The documentation needs dates, because this information gets out of date! Check the MySQL docs, for instance -- they are organized by version.
* The instructions for generating a cert only specify 2048 bits. I believe that's kind of out of date? The VeriSign site has big red warnings saying you need 2048 bits if you want your cert to last past 2013, and that article is 4 years old!
* The instructions are confusing when discussing the passphrase. We enter a passphrase only to remove it immediately. We need some clarity here. Why do this?
How do I find the current best practices for generating an HTTPS cert for Apache and/or mail access?
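For reference, the usual CSR workflow looks roughly like the sketch below (file names are examples). The passphrase question has a mundane answer: the key is created encrypted, then the passphrase is stripped so Apache can restart unattended without someone typing it at the console, while the encrypted original is kept somewhere safe.
Code:
# 1. Generate a 2048-bit RSA key, protected by a passphrase for now
openssl genrsa -des3 -out server.key 2048

# 2. Strip the passphrase so the web server can start without prompting
openssl rsa -in server.key -out server.key.insecure

# 3. Create the CSR to submit to the CA; the Common Name must match the site's hostname
openssl req -new -key server.key.insecure -out server.csr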
I got the following task from my boss: find out whether there is an alternative to SARG for creating reports from Squid. We currently use SARG, but my boss told me its main problem is that it generates a huge number of files, which causes problems when we migrate our servers. He gave me the following conditions for replacing the current tool (SARG):
* it should be a standard Debian package
* it should generate fewer files; ideally it would save the reports to a database
So I would like to ask whether you know of such a tool (I could not find one via Google), and it would be best if you could share some practical experience with it.
I have installed the mailutils package with the following command: sudo apt-get install mailutils. Now I want to send mail using the following format: $ mail <username>@gmail.com. I am following the normal procedure, but the mail is not sent.
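Worth noting: `mail` only hands the message to the local MTA; if no MTA (Postfix, Exim, etc.) is installed and configured to relay outbound mail, nothing ever leaves the machine. A quick check, with a placeholder address:
Code:
# -s sets the subject; the body is read from stdin
echo "test body" | mail -s "test subject" someuser@gmail.com

# See whether the message is sitting in the local mail queue (Postfix/Exim)
mailq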
The above is the machine's actual FQDN. Because I also use it as a web server to access my website and webmail, I have a pointer record with my domain registrar that forwards all [URL] traffic to the same IP as [URL]. When I generate a self-signed SSL certificate for my server, do I generate one for [URL] or [URL]?
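Generally the certificate should carry whichever name clients actually type, and a single self-signed certificate can cover both names via subjectAltName. A sketch using placeholder hostnames (www.example.com and mail.example.com), since the real names aren't shown above:
Code:
# san.cnf -- substitute the real hostnames for the example.com placeholders
[req]
distinguished_name = req_distinguished_name
x509_extensions    = v3_req
prompt             = no

[req_distinguished_name]
CN = www.example.com

[v3_req]
subjectAltName = DNS:www.example.com, DNS:mail.example.com
Then generate the key and certificate in one step:
Code:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout server.key -out server.crt -config san.cnf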
I'm trying to configure our mail server to block email from a specific sender to a specific recipient. In other words, if one of our employees is being harassed by a 'stalker', how would one go about blocking, at the MTA (Sendmail) level, a specific sender email address from reaching a particular user's inbox? We do not want to capture the email, simply block it before it consumes server resources. The Sendmail server (MTA) is a front end to our Exchange server, so no user accounts exist on the Linux server. We simply use it as a spam and virus scanner and then forward clean email to the Exchange server.
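As a rough approximation (a sketch, not exactly what is asked for): Sendmail's access database can reject a given envelope sender outright. Out of the box this blocks the sender for every recipient, not just the harassed user; restricting it to a single sender-recipient pair generally needs custom check_compat rules or a milter. The address below is a placeholder.
Code:
# /etc/mail/access -- reject everything from this sender at SMTP time
stalker@example.com    ERROR:"550 Message refused"

# Rebuild the map (creates access.db) and restart Sendmail
makemap hash /etc/mail/access < /etc/mail/access
/etc/init.d/sendmail restart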
While running rsync, the system is generating a file with a .7ffulr extension; i.e., I am copying file x from the server to the backup server, and during the rsync it copies the original x file and also generates an x.7ffulr file alongside it.
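That looks like rsync's temporary file: the receiver writes the incoming copy under a randomized name and renames it over the destination when the transfer completes, so leftovers usually mean an interrupted or still-running transfer. If the temporary files themselves are the problem, two options (paths are examples):
Code:
# Update the destination file in place, with no temporary copy at all
rsync -av --inplace /data/x user@backupserver:/backup/

# Or keep rsync's temporary files in a separate directory on the receiving side
rsync -av --temp-dir=/tmp/rsync-tmp /data/x user@backupserver:/backup/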
I am trying to monitor server throughput with a centralized ntop instance running as a NetFlow aggregator and various NetFlow probes (nProbe, fprobe) on the servers. ntop shows the probe as a NIC correctly and receives the data, but it only shows one host under "Hosts", which is the server itself. I expected to see a host list just like the one shown when running ntop locally (i.e. the server ntop runs on and every host it contacted, listed separately). This happens both with nProbe and with fprobe. Have I misunderstood the concept of NetFlow aggregation, or am I using ntop/nprobe wrong?
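For comparison, a typical probe invocation (addresses and interface names are placeholders) exports flows for everything seen on the monitored interface, and ntop should then list every endpoint in those flows. If memory serves, ntop only shows hosts for the interface currently selected, so the NetFlow pseudo-device may need to be made the active interface in the web UI before its hosts appear.
Code:
# fprobe: capture on eth0 and export NetFlow to the collector on port 2055
fprobe -i eth0 192.168.1.10:2055

# nProbe: equivalent invocation
nprobe -i eth0 -n 192.168.1.10:2055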
I am trying to get Apache to listen on a specific IP address and I have read up on the Listen directive (http://httpd.apache.org/docs/2.0/bind.html). I can get virtual sites to work, but not Apache itself.
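A minimal sketch of binding to one address only (the IP and names are placeholders; Apache 2.0/2.2 syntax):
Code:
# Bind only to this address; add more Listen lines for additional addresses
Listen 192.168.1.10:80

# Name-based virtual hosts on that same address
NameVirtualHost 192.168.1.10:80
<VirtualHost 192.168.1.10:80>
    ServerName www.example.com
    DocumentRoot /srv/www/example
</VirtualHost>
Every VirtualHost then has to use the same address:port pair as the NameVirtualHost line, and no other Listen directive should grab 0.0.0.0.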
I am trying to solve a problem where Apache stats aren't displayed correctly in Munin. I've run through quite a few checks and tests of the Munin setup, but I think my issue is related to Apache, where my skill set is lacking.
First, system info. Monitored server: CentOS 5.3, kernel 2.6.18-128.1.1.el5.
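For what it's worth, Munin's apache_* plugins scrape mod_status at http://localhost/server-status?auto, so that URL has to work from the monitored host itself. A sketch of the usual mod_status stanza (Apache 2.2 syntax):
Code:
# httpd.conf: expose the status page to localhost only
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>

# Quick test of exactly what the plugins fetch:
#   curl http://localhost/server-status?auto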
I configured Squid to work with squidGuard, and everything works properly, but there is a problem. First, look at this squidGuard.conf:
Code:
dbhome /usr/local/squidGuard/db
logdir /usr/local/squidGuard/log
I am trying to figure this out and it seems I can't. I have a server which hosts various domains, each domain with multiple subdomains. All websites are set up with "VirtualHost" and they all work properly. The problem I'm having is that if I enter any subdomain of the main domain, I can still reach the webpage. Is there some way of telling Apache to drop, or display a forbidden message for, all subdomains which are not listed in the VirtualHosts?
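Apache serves any unmatched Host header from the first VirtualHost defined for that address and port, so a catch-all vhost loaded before all the others can refuse those requests. A sketch (names and paths are placeholders):
Code:
# Make sure this vhost is loaded first (e.g. name the file 000-catchall);
# it answers any hostname no other VirtualHost claims and returns 403.
<VirtualHost *:80>
    ServerName catchall.invalid
    DocumentRoot /var/www/empty
    <Directory /var/www/empty>
        Order allow,deny
        Deny from all
    </Directory>
</VirtualHost>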
I'm trying to run a CGI file with Apache2, but when I navigate to it, I just get the file back as plain text; it is not actually executed. What do I need to configure?
I've tried this
Code:
<Directory /var/www/>
    AddHandler cgi-script *.cgi
    Options +ExecCGI
</Directory>
And I've tried this
Code:
<Directory /var/www/>
    Options Indexes FollowSymLinks Includes ExecCGI
    AllowOverride All
    Order allow,deny
[Code]...
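One likely culprit in the first attempt is the AddHandler pattern: AddHandler takes file extensions, not shell globs, so `*.cgi` never matches anything. A version that should work (Apache 2.2 syntax, same path as above), assuming mod_cgi is enabled, the script is executable, and it prints a Content-Type header:
Code:
<Directory /var/www/>
    Options +ExecCGI
    AddHandler cgi-script .cgi
</Directory>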
I wanted to log some additional information in Apache, so I added the following to a VirtualHost definition:
Code:
CustomLog /var/log/apache2/site-resp_log resp
LogFormat "%{X-Forwarded-For} %D %t %T %v %O %b %A %B" resp
and restarted apache2. I got the following error:
Code:
* Restarting web server apache2
Syntax error on line 33 of /etc/apache2/sites-enabled/site.com:
Unrecognized LogFormat directive %
[fail]
root@server:/var/log/apache2# vi /etc/apache2/sites-available/sites.com
Here is a page I referred to. I am not able to understand the syntax error.
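The likely cause is that a braced format item needs a trailing letter saying what the braces refer to: `%{X-Forwarded-For}` by itself is rejected, while `%{X-Forwarded-For}i` means "the X-Forwarded-For request header", which is presumably what was intended. Corrected lines (note that %O also needs mod_logio to be enabled):
Code:
LogFormat "%{X-Forwarded-For}i %D %t %T %v %O %b %A %B" resp
CustomLog /var/log/apache2/site-resp_log resp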
I have a server running CentOS 5.3 (Final) Kernel version is:
2.6.18-128.el5 #1 SMP Wed Jan 21 10:44:23 EST 2009 i686 athlon i386 GNU/Linux
The output from df -h is as follows:
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.5G  3.7G  5.4G  41% /
/dev/sda5             4.6G  456M  3.9G  11% /var
[code]....
As you can see, /home claims to be 100% full, and yet there is actually 18 GB free. I seem to recall this could be something to do with running out of inodes?
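Two quick checks usually settle it: inode exhaustion shows up in `df -i`, and on ext2/ext3 about 5% of the blocks are reserved for root by default, which can also make a filesystem look full to ordinary users. Device names are placeholders:
Code:
# Inode usage; IUse% at 100% means no new files can be created even with free blocks
df -i /home

# Reserved-blocks setting on the /home filesystem (adjust the device name)
tune2fs -l /dev/sda6 | grep -i 'reserved block count'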
I have Squid running perfectly, and I added MySQL Squid Access Report 2.1.4, and the reports work just fine. The problem is that when I add a DansGuardian content filter, from that moment the only IP address that appears in the report is the box itself (I have everything running on the same box).
IPtables forwards requests to port 8080.
DansGuardian listens on port 8080 and forwards to Squid on port 3128.
Squid on port 3128 goes out to the Internet (here I review the logs with MySAR).
I know it is because the actual HTTP request to Squid comes from DansGuardian's IP address (that is the proxy's job). How can I get the real client IP address into the reports?
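One commonly suggested approach (a sketch; whether it works depends on the Squid build, since 2.x needs --enable-follow-x-forwarded-for compiled in) is to have DansGuardian add an X-Forwarded-For header and tell Squid to trust it from the filter, so access.log records the original client:
Code:
# /etc/dansguardian/dansguardian.conf
forwardedfor = on

# /etc/squid/squid.conf
acl dg_box src 127.0.0.1
follow_x_forwarded_for allow dg_box
follow_x_forwarded_for deny all
log_uses_indirect_client on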
I am upgrading my server and I have a lot of sites. Since I cannot take my server down for a few days, maybe a week, until I manage to migrate all the sites to the new machine, I figured I could migrate them one by one. After migrating one, I would somehow tunnel the requests for that name-based virtual host to my internal machine. When everything is migrated, I would switch the machines, update IPs and so on, and everything would work just fine.
However, I cannot seem to find a way to do this tunneling. Is this at all possible? If not, what alternatives do I have?
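This is essentially what a reverse proxy does: the old server keeps answering on the public IP and forwards the already-migrated vhosts to the new machine with mod_proxy. A sketch (hostname and internal address are placeholders; mod_proxy and mod_proxy_http must be enabled):
Code:
# On the old server, one such block per site that has already moved
<VirtualHost *:80>
    ServerName migrated-site.example.com
    ProxyPreserveHost On
    ProxyPass        / http://10.0.0.5/
    ProxyPassReverse / http://10.0.0.5/
</VirtualHost>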
I would like to ask if there is a program that can archive all email from my employees to a certain server and can generate reports, specifically for all types of email, incoming and outgoing. My employees are aware of my policy, given the many confidential files within our office.
1. Web server (CentOS 5.5)
2. Mail server (CentOS 5.5)
We have configured autossh successfully to create/manage the SSH tunnel into the mail server in order to dump all emails to a localhost port.
To auto-start autossh at boot time we have included the following in /etc/rc.d/rc.local:
Quote:
So whenever our web application wants to send out email, it dumps it all to localhost:33465. Easy peasy, everything works great.
Now we have a requirement that logwatch reports should be delivered via the same SSH tunnel, rather than installing Postfix and configuring it as a relay.
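Logwatch normally pipes its report to a local sendmail binary, so one hedged option is to point Logwatch's mailer at a lightweight SMTP client such as msmtp that talks straight to the forwarded port. Paths, the sender address, and the exact config file locations are assumptions to adapt:
Code:
# /etc/msmtprc -- hand mail to the SSH-forwarded port instead of a local MTA
account default
host 127.0.0.1
port 33465
from webserver@example.com
auth off
tls off

# /etc/logwatch/conf/logwatch.conf -- make Logwatch use it
mailer = "/usr/bin/msmtp -t"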
I am installing Big Brother on CentOS 5.2 running the default Apache 2.2.3. When I try to access any web page I get the following error:
Code:
Forbidden
You don't have permission to access /bb/ on this server.
Apache/2.2.3 (CentOS) Server at fmsubbnix Port 80
So far I have:
1) Set the Directory options to FollowSymLinks
2) Verified all directory and file permissions are 755
3) Set permissions temporarily to 777 and received the same error, so I am assuming the issue is in a config file somewhere
4) Verified that <Files ~ "^\.ht"> in httpd.conf is correct
5) Verified the "default" directory is correct (/var/www/html)
I have read and tried several ideas from posts on the web, but to no avail, and I am at a loss as to what to look for next.
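Two things commonly cause this on CentOS even when Unix permissions are fine: there is no Alias/<Directory> block granting Apache access to Big Brother's www directory, and SELinux labels stop httpd from reading anything outside /var/www. A sketch; the Big Brother install path below is an assumption to adjust:
Code:
# /etc/httpd/conf.d/bb.conf
Alias /bb/ "/home/bb/bb/www/"
<Directory "/home/bb/bb/www">
    Options Indexes FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>

# If SELinux is enforcing, relabel the tree so httpd may read it
chcon -R -t httpd_sys_content_t /home/bb/bb/www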
I'm running a Linux cloud server with the following config: 1.2 GHz processor allocation, 752 MB RAM.
The site loads slowly and clicking a link almost freezes the page for a second; the page loads could be much faster. We've been running mysqltuner and have pretty much optimized all the slow queries. Is there anything we can do to fine-tune the server so it is faster and more responsive?
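With MySQL already tuned, Apache's defaults are usually the next thing to look at on a 752 MB box. The sketch below (prefork MPM, CentOS-style module name; the numbers are guesses to be sized against the real per-process memory footprint) keeps the worker pool small enough to stay out of swap and trims keep-alive waits:
Code:
# httpd.conf / apache2.conf -- on Debian/Ubuntu the test is <IfModule mpm_prefork_module>
<IfModule prefork.c>
    StartServers         4
    MinSpareServers      2
    MaxSpareServers      6
    MaxClients          30
    MaxRequestsPerChild 500
</IfModule>

KeepAlive On
KeepAliveTimeout 3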