Fedora :: Program To Archive Emails From Employees To Certain Server And Generate Reports?
Jul 23, 2010
I would like to ask if there's a program that can archive all emails from my employees to a certain server and generate reports, specifically all types of email, incoming and outgoing. My employees are aware of my policy, due to the many confidential files within our office.
I got the following task from my boss. I have to find out if there is an alternative tool to SARG for creating reports from Squid. We currently use SARG, but my boss told me that its main problem is that it generates a huge number of files, which causes problems when migrating our servers. He gave me the following conditions for replacing the current tool (SARG):
* a standard Debian package
* generates fewer files; ideally, saves reports to a database
So I would like to ask if you know of such a tool (I cannot find one via Google), and ideally if you could share some practical experience with it.
I work for a not-for-profit research organisation that works in the clinical sector. Currently I create summary reports for each of our studies, which are then made available through our website. These reports are generated from data stored in a MySQL database residing on a Red Hat Enterprise Linux server. The data analysis, however, is performed using Microsoft Excel, and I then use Excel to create the final report pages, which are finally PDF'd.
This approach has served me well; however, I am currently working to port the data analysis to R and have the analysis performed on the server side. Now I have to consider how to generate reports. Here is my question: could anyone recommend software that I could use to prepare and review reports generated directly from the data residing in the MySQL database? I have no experience in this area, as my background is in microbiology.
All operations result in a separate window displaying the progress of the compression, but with an error:
Error 127, cannot execute requested operation.
Then, when the progress bar reaches full, it resets, and resets again, continuing the same loop and becoming slower with every repetition. No information is posted in the window's report log, except that the task has started.
I'm trying to simply archive a file and password-protect it. This shouldn't be such a difficult task.
P7zip also gives me its own set of errors when archiving.
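For what it's worth, exit status 127 from a shell normally means "command not found", so the archiver the GUI shells out to may simply be missing. A minimal command-line sketch of creating a password-protected archive, assuming the p7zip package (which provides 7za) is installed; file names are just examples:

```shell
# Verify the helper binary exists; if the GUI reports error 127,
# the archiver it calls is probably not installed at all.
command -v 7za >/dev/null || echo "p7zip is not installed"

# Create a password-protected archive (names are examples).
echo "secret data" > example.txt
7za a -pMyPassword archive.7z example.txt

# List the contents to confirm the archive was written.
7za l archive.7z
```

If this works from a terminal but the GUI still loops, the problem is in the GUI's helper configuration rather than in p7zip itself.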
What will it take for me to write a fully functioning Perl program to filter emails against specific rules? Is that possible? What I am actually trying to do is write a Perl program and attach it to a mail server, so that when mail comes in, the Perl script gets called, and the Perl program then lets another external program (one that is not on the server) run and check or filter the mails.
I'm not sure I am posting in the right place; forgive me if I posted incorrectly. I have just set up my first ever Fedora computer/server. I've got clients to log into it via network booting using LTSP. I'm really pleased, but I have a few questions I need to ask, as I want to use this for work, i.e. I'm setting up a small business.
1. I need my client computers/employees to use a certain file on the desktop to take people's orders, another file to search, another file to edit, etc. I am intending to use a PHP web file that will save info to a SQL database. Can this be done, and how?
If I save a web document while I am admin on the desktop, my other users don't see it.
2. I want to restrict certain websites during work hours, like Facebook etc. (haha, I want them to have access during their breaks, though).
I want to write a program in C which will generate a maze randomly and find the solution for it.
The idea behind is in [url]
How is the 16-bit integer stored in a variable? Earlier I wrote a program on trees and displayed it using dotty. Is there any such tool to display a maze? I am using Ubuntu 10.04.
Normally I just use the backup feature in Evolution, but how would I go about saving my emails if I wanted to import them into another email program? I am going to install Mint 9 KDE on a separate partition, and I have found Evolution does not play nicely in KDE, so I was hoping there was a way of importing my Evolution mail into Mint KDE's email program.
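A hedged sketch: Evolution 2.x (the version current around Mint 9's release) keeps local folders as standard mbox files, usually under ~/.evolution/mail/local — verify the path on your install. KMail can import mbox files, so copying them out may be all that's needed:

```shell
# Copy Evolution's local mbox folders somewhere safe to import from.
# Paths are the usual Evolution 2.x defaults -- verify on your system.
mkdir -p ~/mail-export
cp ~/.evolution/mail/local/Inbox ~/mail-export/
cp ~/.evolution/mail/local/Sent  ~/mail-export/
```

KMail can then pull these files in via its mbox import (the kmailcvt import wizard).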
I am the system admin for a BPO here in my hometown. My network has systems running Windows. Now I need to monitor what they are doing from my system. Is there any software available for that? I searched the Internet and found this: LAN Employee Monitor [URL], but it's a Windows application. Any suggestions or equivalents in Linux? Specifically for Debian 5.0?
I am trying to monitor server throughput with a centralized ntop instance running as a NetFlow aggregator and various NetFlow probes (nProbe, fprobe) on the servers. ntop shows the probe as a NIC correctly and receives the data, but it only shows one host under "Hosts", which is the server itself. I expected to see a host list just like the one shown when running ntop locally (i.e. the server ntop runs on and every host it contacted, separately). This happens with both nProbe and fprobe. Have I misunderstood the concept of NetFlow aggregation, or am I using ntop/nProbe wrong?
I'd like to ask about the archive mounter feature: can I mount a zip file in read-write mode? Can gvfsd-archive do that, or must I use fuse-zip to mount it? If I must use fuse-zip, how do I wrap it so I can use it via Nautilus or via gvfs-fuse-daemon?
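As far as I know, gvfsd-archive mounts are read-only, so fuse-zip is the usual route for read-write access. A minimal sketch, assuming fuse-zip is installed; the archive name is just an example:

```shell
# Mount a zip archive read-write via FUSE (archive name is an example).
mkdir -p ~/mnt/zip
fuse-zip archive.zip ~/mnt/zip

# Use it like a normal directory...
ls ~/mnt/zip

# ...then unmount; fuse-zip writes any changes back on unmount.
fusermount -u ~/mnt/zip
```

Since the mount point is just a directory in your home, it shows up in Nautilus as an ordinary folder without any extra wrapping.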
SARG seems OK, but it is not generating any reports: "Now generating Sarg report from Squid log file /var/log/squid/access.log squid and all rotated versions .... Sarg finished, but no report was generated. See the output above for details." There are also no previously generated reports to view.
I have a server running CentOS 5.3 (Final) Kernel version is:
2.6.18-128.el5 #1 SMP Wed Jan 21 10:44:23 EST 2009 i686 athlon i386 GNU/Linux
The output from df -h is as follows:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.5G  3.7G  5.4G  41% /
/dev/sda5       4.6G  456M  3.9G  11% /var
[code]....
As you can see, /home claims to be 100% full, and yet there is actually 18 GB free. I seem to recall this could be something to do with running out of inodes?
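A sketch of how to confirm the inode theory: `df -i` shows inode usage alongside the block usage from `df -h`, and the loop (paths are examples) helps find directories full of small files:

```shell
# Block usage says 100%, but running out of inodes also makes a
# filesystem "full" -- compare the two views:
df -h /home
df -i /home

# If IUse% is at 100%, hunt for directories with huge file counts
# (every file, however small, consumes one inode):
for d in /home/*/; do
    printf '%s %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -n | tail
```

Typical culprits are mail queues, session files, or cache directories with millions of tiny files.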
I have Squid running perfectly, and I added MySQL Squid Access Report 2.1.4, and the reports work just fine. The problem is when I add a DansGuardian content filter: from that moment, the only IP address that appears on the report is the box itself (I have everything running on the same box).
IPtables forwards requests to port 8080.
DansGuardian, listening on port 8080, forwards to Squid on port 3128.
Squid on port 3128 goes to the internet (here I review the logs with MySAR).
I know it is because the actual HTTP request to Squid came from DansGuardian's IP address (that's the job of the proxy). How do I get the real IP address on the reports?
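One common approach (hedged: directive names vary between versions, and Squid must be built with --enable-follow-x-forwarded-for) is to have DansGuardian add an X-Forwarded-For header and tell Squid to trust it for logging:

```
# dansguardian.conf: add the real client address to an
# X-Forwarded-For header on requests passed to Squid.
forwardedfor = on

# squid.conf: trust X-Forwarded-For only from the local
# DansGuardian, and log the indirect (real) client address.
acl localhost src 127.0.0.1/32
follow_x_forwarded_for allow localhost
log_uses_indirect_client on
```

With that in place, access.log (and hence MySar) should record the original client IP rather than 127.0.0.1.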
I'm trying to find a tool for generating reports based on Apache access_log files (in Common Log Format). I found some (awstats, lire/logreport, Weblog Expert, Apache Logs Viewer, etc.), but they generate global, general reports about the log file. Some Perl scripts I found just show the top X entries for various patterns. My question is how I can generate a report similar to this output:
IPs | Total no. of connections | Number of pages visited | Total time of connection

So basically this is a list of every IP in the log with the respective numbers (connections/pages/time) associated.
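Since Common Log Format records individual requests rather than sessions, "total time of connection" can only be approximated from timestamps; for the per-IP request and page counts, a minimal awk sketch (the asset-extension filter is an assumption you may want to tune):

```shell
# Summarise an Apache access_log (Common Log Format): one row per IP
# with total requests and "page" requests (ignoring static assets).
awk '{
    total[$1]++                      # field 1 is the client IP
    if ($7 !~ /\.(css|js|png|jpe?g|gif|ico)([?;]|$)/)
        pages[$1]++
} END {
    printf "%-15s %10s %10s\n", "IP", "requests", "pages"
    for (ip in total)
        printf "%-15s %10d %10d\n", ip, total[ip], pages[ip]
}' access_log
```

Extending this to session time would mean parsing field 4 (the timestamp) and tracking first/last hit per IP.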
I am running a Squid proxy server on openSUSE 10.2. I noticed that when I generate a report it only shows the last date's log file, although /var/log/squid contains logs for all previous dates. I really can't remember which file to modify so that I can see reports for all dates in HTML when I use the following command: cat access.log | /home/user/squint-0.3.10/squint.pl /home/user/report<date>
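A hedged sketch: rather than changing squint itself, you can merge the current log with the rotated (possibly gzipped) ones and feed the whole stream through in one run; paths follow the command quoted above:

```shell
# Merge all rotated Squid logs (zcat -f decompresses .gz files and
# passes plain files through) with the current log, then run squint
# once over the combined stream.
cd /var/log/squid
{ zcat -f access.log.*; cat access.log; } \
    | /home/user/squint-0.3.10/squint.pl /home/user/report-all
```

The glob access.log.* matches only the rotated copies, so the live access.log is not read twice.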
Could it be the IMAP file is corrupt? I have set up a mail server on CentOS to receive via Dovecot. One of my user accounts (a single account out of a hundred) cannot receive their mail.
I am a newbie to Postfix. I added a new domain to my Postfix server in main.cf under the mydestination variable and in the relay_domains file. I also added this domain to my backend Exchange server. When I send a test message from the new domain, messages from that domain appear to be stuck in the active queue. What does it mean when messages are stuck in this queue? Does it mean that my backend email server (Exchange 2003) isn't allowing messages from this new domain, or that the Postfix server still needs configuring?
1. Webserver (Centos 5.5) 2. Mail server (Centos 5.5)
We have configured autossh successfully to create/manage an SSH tunnel into the mail server, in order to dump all emails to a localhost port.
To auto-start autossh at boot time, we have included the following in /etc/rc.d/rc.local:
Quote:
So whenever our web application wants to send out emails, it dumps them all to the localhost:33465 port. Easy peasy; it's all working great.
Now we have a requirement that logwatch reports should be delivered via the same SSH tunnel, rather than installing Postfix and configuring it as a relay.
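A sketch of the setup, with host names, the remote port, and the choice of msmtp all being assumptions: the rc.local line keeps the tunnel up, and the logwatch output is handed to a lightweight SMTP client pointed at the tunnel instead of relaying through a local Postfix (note that older logwatch releases use --print instead of --output stdout):

```shell
# /etc/rc.d/rc.local: keep an SMTP tunnel to the mail server alive
# (user, host, and local port 33465 follow the setup described above).
autossh -M 0 -f -N -L 33465:localhost:25 tunneluser@mail.example.com

# Hand the logwatch report to a lightweight SMTP client (msmtp here)
# pointed at the tunnel, instead of a locally installed MTA.
logwatch --output stdout --format text \
    | msmtp --host=localhost --port=33465 --from=web@example.com admin@example.com
```

Run from cron, this keeps all report delivery on the same path the web application already uses.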
I have a server, and there is a website which generates a lot of core files; these core files take up a lot of space over time, so I decided to stop them. In WHM the core dump option is disabled, and I used the following command:
Code: ulimit -c 0 but nothing stopped generating core files. I also used the following method: http://www.cyberciti.biz/faq/linux-disable-core-dumps/ but this method doesn't stop the generation of core dump files either. My question is: how can I permanently stop core files from being generated, for all users or for a specific user?
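One likely reason: `ulimit -c 0` only affects the shell it is run in, not processes already running or started elsewhere. A sketch of making the limit persistent (run as root; the sysctl line is belt-and-braces for setuid programs):

```shell
# Make the zero core limit persistent for every user via PAM limits:
cat >> /etc/security/limits.conf <<'EOF'
*       soft    core    0
*       hard    core    0
EOF

# For one specific user, use the username instead of *:
#   someuser  hard  core  0

# Also stop setuid binaries from dumping, for good measure:
echo 'fs.suid_dumpable = 0' >> /etc/sysctl.conf
sysctl -p
```

After this, restart the web server processes so they pick up the new limit; long-running daemons keep the limit they started with.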
I need to generate a graph for a Huawei AR4620 router. As I don't know the MIB for CPU usage, I cannot create a CPU-usage graph for the Huawei AR4620 router, but I can create a graph of interface traffic for it.
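I don't know Huawei's private CPU MIB offhand, but two hedged starting points: some routers answer the standard HOST-RESOURCES processor-load OID, and Huawei's registered enterprise subtree is 1.3.6.1.4.1.2011, which you can walk to hunt for the right OID (the community string and address below are placeholders):

```shell
# Standard HOST-RESOURCES processor load; some routers expose it
# even when the vendor MIBs are unknown:
snmpwalk -v2c -c public 192.0.2.1 1.3.6.1.2.1.25.3.3.1.2

# Huawei's registered enterprise subtree; walking it is slow but can
# reveal the vendor CPU-usage OID to plug into the grapher:
snmpwalk -v2c -c public 192.0.2.1 1.3.6.1.4.1.2011
```

Once an OID returns a sensible percentage, it can be graphed the same way as the interface-traffic OIDs you already have.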
I'm using RHEL 5.4 and trying to use system-config-kickstart to generate a ks.cfg file with all the settings already appended. After running the "system-config-kickstart --generate ks.cfg" command, the file gets created, but it's missing the firewall configuration, partition information, and so on.
How can these settings also be generated with the system-config-kickstart?
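As far as I can tell, --generate only records what it can read from the running system, so the firewall and partitioning sections have to be added by hand using normal RHEL 5 kickstart directives; the disk layout below is purely an example:

```shell
# Append the sections --generate leaves out; adjust sizes and
# devices to your hardware before using this for real installs.
cat >> ks.cfg <<'EOF'
firewall --enabled --port=22:tcp
clearpart --all --initlabel
part /boot --fstype ext3 --size=200
part swap --size=2048
part / --fstype ext3 --size=1 --grow
EOF
```

Alternatively, running system-config-kickstart interactively (without --generate) lets you fill in those sections through the GUI and save a complete file.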
I love to submit all bugs, as I know the importance this can have for further development. When I click on Bugzilla, it allows me to enter information etc., and I even sign in with my forum ID (I know that is not necessarily correct), but I didn't know what else to do and thought it would work. When I sign in, it does not reject me until the very final step of the bug reporting process. My question to anyone is: how can I get my bug reports to be accepted, or how can I sign in to this area of Fedora if required?
I want to generate core dump files from my program when it crashes. It's a pretty big process and has about 10-11 threads in it. I have followed the documentation to enable core dumps by setting ulimit to unlimited, etc. I quickly tried "A demo program creating a core dump" from the following webpage, which succeeds in segfaulting and dumping a core file in the directory that I configured. However, I tried running my original program and caused it to crash. I did this by making calls to kill(), raise(), or the same null-pointer access as shown in the webpage above. In each case, my program crashed but did not generate a core dump file. Am I missing something? My program is in C++ and my environment is Red Hat 9.0 (kernel 2.4.20).
Going through the "Why do I NOT get a core dump?" section on the same webpage as above, I can see two potential problems. One: there are issues with suid/sgid (bullet #6). I am not able to change any settings with suid, because my system contains neither /proc/sys/fs/suid_dumpable nor /proc/sys/kernel/suid_dumpable. Two: my program has threads in it, and bullet #8 is the problem.
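Two things worth checking, sketched below: the crashing process must actually inherit the raised limit (a daemon or program launched from elsewhere does not inherit an interactive shell's ulimit), and on 2.4-era LinuxThreads a crash in a non-main thread often produces no usable core at all, which matches bullet #8. my_program is a placeholder name:

```shell
# Raise the limit and start the program from the SAME shell, so the
# crashing process really inherits it (my_program is a placeholder):
ulimit -c unlimited
./my_program

# The core, if written, lands in the process's current working
# directory -- make sure that directory is writable and look there:
ls -l core*

# Sanity check that this shell/kernel dumps for a plain,
# single-threaded crash at all:
sh -c 'kill -SEGV $$'
ls -l core*
```

If the single-threaded sanity check dumps but the threaded program never does, the LinuxThreads limitation is the most likely culprit, and upgrading to a 2.6 kernel with NPTL is the usual fix.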