I have CentOS 5. For some time logrotate has not been working, and maillog, for example, is very big. It is the same for all log files. I ran "logrotate -d -f /etc/logrotate.conf" but nothing happened. Cron seems to be running, as I can see it with ps -ef | grep cron.
Do the logrotate.conf settings apply globally to what is in logrotate.d/? I have olddir /var/log/old_logs in logrotate.conf, but logrotate is not placing the old rsyslog files in /var/log/old_logs for logrotate.d/rsyslog.
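For reference: -d is a dry run (no files are actually touched), so a forced real run would be plain "logrotate -f /etc/logrotate.conf". Directives in /etc/logrotate.conf act as global defaults, and the per-service files in /etc/logrotate.d/ can override them; olddir in particular only works if the target directory already exists and sits on the same filesystem as the logs. A per-service sketch - the log names, schedule and pid file are assumptions, not the stock CentOS file:
Code:
# /etc/logrotate.d/rsyslog (illustrative sketch only)
/var/log/messages /var/log/maillog /var/log/secure {
    weekly
    rotate 4
    compress
    missingok
    olddir /var/log/old_logs
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2>/dev/null` 2>/dev/null || true
    endscript
}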
I need to rotate the logs in the directories under /var/log/httpd/.
There are 4 directories in /var/log/httpd/: /var/log/httpd/access/, /var/log/httpd/debug/, /var/log/httpd/error/ and /var/log/httpd/required/.
Each of the access, required, error and debug directories holds around 20 to 30 log files for different locations, for example mumbai-access.log, pune-access.log, etc.; the same is the case for the 'error', 'required' and 'debug' directories in /var/log/httpd/.
I need to clean up the log files in all 4 directories: access, error, debug and required.
I have made a custom logrotate file as follows:
Is the above config correct?
Am I missing something? Will this rotate the files in /var/log/httpd/access, /var/log/httpd/error, /var/log/httpd/required and /var/log/httpd/debug?
Do I need to include the following line in postrotate: "/bin/kill -HUP `cat /var/run/httpd.pid 2>/dev/null` 2> /dev/null || true"?
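For comparison, a single config that covers all four directories could look roughly like the sketch below (the retention settings are assumptions, and this is not the poster's file, which is not shown above). Yes, some signal in postrotate is needed: Apache keeps writing to its old file handle after rotation, and the kill -HUP line quoted above (or a graceful restart) is what makes it reopen the new logs.
Code:
/var/log/httpd/access/*.log /var/log/httpd/error/*.log /var/log/httpd/debug/*.log /var/log/httpd/required/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/httpd.pid 2>/dev/null` 2>/dev/null || true
    endscript
}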
Recently I noticed that on my CentOS 5.4 system, yum no longer works and gives segmentation faults. I can run "yum --help" and it works, but if I try to run something like "yum upgrade php" it faults. I also noticed that other things are segfaulting as well, like /usr/sbin/logrotate and /usr/bin/certwatch.
I am guessing there is some sort of common library that needs fixing, but I have no idea which. I've read other posts about the yum segmentation fault and have tried the various steps suggested, but so far no luck in getting it to work again. It used to work, and I rarely change this system, so I'm not sure what could have caused it.
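One rough way to narrow down a shared culprit - a diagnostic sketch, not a fix - is to list the shared libraries the faulting programs have in common and then let rpm verify the packages that own them (yum is Python, so /usr/bin/python stands in for it here):
Code:
# count which libraries appear in all three faulting programs
for b in /usr/sbin/logrotate /usr/bin/certwatch /usr/bin/python; do
    ldd "$b" | awk '/=>/ {print $1}'
done | sort | uniq -c | sort -rn | head

# then verify whatever package owns each common library, for example:
rpm -Vf /usr/lib/libz.so.1    # libz is only an example path; substitute what ldd reports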
I am trying to configure logrotate on APP/DB servers. As per my backup policy, logs are compressed on a daily basis and then moved to a central storage device.
My Tomcat generates several application logs, some with a date extension and some with a .log extension, e.g. app.log, app.log.2010-10-23-14, catalina.out, catalina.2010-10-25.log, etc.
Currently my Tomcat log rotation lives in /etc/logrotate.d/:
# cat /etc/logrotate.d/tomcat
/usr/local/tomcat/logs/*log {
[code]....
But it is rotating only the logs with a .log extension, i.e. app.log.2010-10-23-14 (with the date extension) is not being rotated. If I put "*" instead of "*log", it rotates all files, including already-rotated ones. How can I rotate the files that have a date extension? Also, I don't want to keep rotated logs for more than 3 days.
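Since logrotate is awkward with files that already carry a date stamp (and a bare "*" makes it re-rotate its own output), one workable sketch is to leave *.log to logrotate and handle the date-stamped files from a daily cron job with find; the path and name pattern below are assumptions:
Code:
# compress date-stamped Tomcat logs that are at least a day old
find /usr/local/tomcat/logs -name '*.20[0-9][0-9]-*' ! -name '*.gz' -mtime +0 -exec gzip {} \;
# remove date-stamped logs older than 3 days
find /usr/local/tomcat/logs -name '*.20[0-9][0-9]-*' -mtime +3 -exec rm -f {} \;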
I am new to Linux (using Ubuntu 10.04). I have noticed that when replacing a file, no date and size of the new and old files are shown in the dialogue box, so how can I make it show those (like the dialogue in Windows)?
I know it is an easy question, but I really don't know how to do it. By the way, I have checked the folder preferences and System --> Preferences, but I did not find anything for that.
Whenever the log file (test.log) exceeds 100M, a new file is created with a name like test.'date'.gz (i.e. a new file with the current date, compressed as gz, and with the permissions mentioned above). I really don't know what the role of rotate is (will this be carried out only for the next 4 times, i.e. up to 400MB, 4 times the 100MB limit?), and also what the purpose of postrotate could be.
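For reference, rotate N is not a one-shot counter: it is how many old archives logrotate keeps, so with rotate 4 the oldest archive is deleted once a fifth one appears; postrotate ... endscript wraps commands that run once after each rotation, usually to tell the writing daemon to reopen its log. A commented sketch along the lines of the setup being described (paths, permissions and the pid file are assumptions):
Code:
/var/log/test.log {
    size 100M               # rotate only once the file grows past 100 MB
    rotate 4                # keep at most 4 rotated archives, delete older ones
    compress                # gzip the rotated copy
    dateext                 # name archives by date instead of .1, .2, ...
    create 0640 root root   # recreate an empty test.log with these permissions
    postrotate
        # runs once after rotation, typically to make the writer reopen the log
        /bin/kill -HUP `cat /var/run/mydaemon.pid 2>/dev/null` 2>/dev/null || true
    endscript
}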
I'd like to have logrotate compress logs that are older than 3 days. Is this possible with logrotate, or do I just schedule a cron job to bzip everything under the folder that is older than 3 days?
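logrotate only compresses at rotation time (delaycompress just pushes that back one cycle), so there is no built-in "compress anything older than N days"; a cron job with find is the usual route. A sketch, with the path as an assumption:
Code:
# compress files older than 3 days, leaving already-compressed ones alone
find /path/to/logs -type f ! -name '*.bz2' -mtime +3 -exec bzip2 {} \;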
My apache2 logs aren't being rotated, I have 1 log nearing 100MB in size.
The error shown below appears when logrotate runs on the apache2 logs:
Code:
error: other_vhosts_access.log:5381 unknown option 'jack' -- ignoring line
error: other_vhosts_access.log:5381 unexpected text
("jack" is a sub-domain.)
We started hosting some very large content on our site, and the usage patterns in cacti have revealed that the HTTP sessions through our load-balancers drop off dramatically right at midnight.
The logrotate process runs right at midnight, and issues a reload command through the service tool (CentOS 5.5):
Code:
$ cat /etc/logrotate.d/httpd
/data/websites/logs/*_log /var/log/httpd/*log {
    missingok
    daily
    dateext
    compress
    rotate 7
    sharedscripts
    postrotate
        /sbin/service httpd reload > /dev/null 2>/dev/null || true
    endscript
}
Looking at the init script reveals that the reload section is supposed to trigger a HUP of the httpd process:
Code:
reload() {
    echo -n $"Reloading $prog: "
    if ! LANG=$HTTPD_LANG $httpd $OPTIONS -t >&/dev/null; then
        RETVAL=$?
        echo $"not reloading due to configuration syntax error"
        failure $"not reloading $httpd due to configuration syntax error"
    else
        killproc -p ${pidfile} $httpd -HUP
        RETVAL=$?
    fi
    echo
}
In which case, Apache should reload its configuration and start the new logfile without breaking current sessions. However, that clearly isn't what is going on. I'm tempted to edit the logrotate script to trigger a HUP directly by cat'ing the PID file. Is this normal behavior for Apache when signaled with a HUP?
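For what it's worth, a HUP is Apache's hard restart: the parent re-reads its configuration and kills off the current children, so in-flight transfers are dropped, whereas SIGUSR1 (the graceful restart) lets children finish their current requests first. The midnight drop is therefore consistent with the reload/HUP, and one thing worth testing is a graceful restart in postrotate instead - a sketch:
Code:
postrotate
    # graceful restart (SIGUSR1): config and logs are reopened, but existing
    # connections are allowed to complete before their children exit
    /usr/sbin/apachectl graceful > /dev/null 2>&1 || true
endscript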
Before I start writing my own file-maintenance script, maybe such a program/script already exists somewhere. I am looking for a configurable file-maintenance script/application that I can use to process files against certain criteria, for example removing files that are X days old, gzip'ping files if they are core dumps, removing zero-sized files, etc. I am not sure logrotate is the solution I am looking for.
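On CentOS, tmpwatch already covers the age-based part; for the rest, a small find-driven script is usually enough. A rough sketch with assumed paths and ages:
Code:
#!/bin/sh
DIR=/path/to/maintain        # assumption: the directory to clean up
find "$DIR" -type f -mtime +30 -exec rm -f {} \;                    # files older than 30 days
find "$DIR" -type f -name 'core*' ! -name '*.gz' -exec gzip {} \;   # compress core dumps
find "$DIR" -type f -size 0 -exec rm -f {} \;                       # remove zero-byte files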
I have a few situations for which I do not see anything in the du man pages. Quote: 1) I want to see only the files in a subdirectory that are larger than a particular size. 2) I use du -sh > du_output.txt and I see the output as described for the -s and -h options; however, what I am more interested in is getting the output in a format like, say, for example...
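du reports directory usage rather than filtering individual files, so find is the better fit for the first point, and piping du through sort helps with the second. A sketch with example paths and thresholds:
Code:
# 1) only the files in a subdirectory that are larger than, say, 100 MB
find /some/subdir -maxdepth 1 -type f -size +100M -exec du -h {} \;
# 2) per-item usage in kilobytes, sorted so the largest comes last
du -sk /some/subdir/* | sort -n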
I need a command that will check the size (pixel dimensions, not file size) of every image in a folder and its subfolders, and make a copy (or even better, a hard link) of the file in a second directory if the image is larger than 1920x1080 pixels (in both dimensions, not just total area). Also, lots of these file names have spaces, so the command needs to be space-tolerant.
I'm guessing I would need to use one of the imagemagick commands and find, but I'm not sure where to start. I'm still reading man pages, but I thought someone here might save me some time.
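A sketch along those lines, using ImageMagick's identify together with find and null-separated names so spaces are safe; the source and destination paths are assumptions, and hard links only work if both directories are on the same filesystem:
Code:
#!/bin/bash
SRC=/path/to/images      # assumption
DEST=/path/to/large      # assumption; must be on the same filesystem for ln
find "$SRC" -type f \( -iname '*.jpg' -o -iname '*.png' \) -print0 |
while IFS= read -r -d '' img; do
    read -r w h < <(identify -format '%w %h\n' "$img" 2>/dev/null | head -n1)
    [ -z "$w" ] && continue                        # skip files identify cannot read
    if [ "$w" -gt 1920 ] && [ "$h" -gt 1080 ]; then
        ln "$img" "$DEST/$(basename "$img")"       # use cp instead of ln to copy
    fi
done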
I need help with this issue: how do I find files with unusual sizes and with unusual names, e.g. names that are just dots, names ending with space(s), names containing shell wildcard characters, or names containing non-ASCII (control) characters?
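A few illustrative find invocations for exactly those cases; the starting point, the size threshold and the patterns are examples to adjust, and -xdev just keeps the search on one filesystem:
Code:
find / -xdev -type f \( -size +100M -o -size 0 \)   # unusually large or empty files
find / -xdev -name '* '                             # names ending with a space
find / -xdev -name '*[*?[]*'                        # names containing shell wildcards
LC_ALL=C find / -xdev -name '*[! -~]*'              # names with non-ASCII or control characters
find / -xdev -name '...'                            # a name that is just dots (three, here)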
I made an account under freeshell.org and it has been very satisfactory so far. I recommend everyone get an account under freeshell.org. But anyway, how do I find files over, for example, 500 KB in my entire shell account?
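On a plain shell account find alone does this; a sketch starting from the home directory, assuming GNU find (on other systems the size test may need 512-byte blocks, e.g. -size +1000, instead of the k suffix):
Code:
find ~ -type f -size +500k -exec ls -lh {} \; 2>/dev/null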
We have a problem where there is not enough space in our /tmp partition. We are trying to fix our MySQL database and keep running into the space issue; the error we are getting says:
myisamchk: Disk is full writing '/tmp/STGL3SGd' (Errcode: 28). Waiting for someone to free space... (Expect up to 60 secs delay for server to continue after freeing disk space)
Our /tmp partition is currently set at 485M, but it is not large enough to handle the database fix...
Does anyone know of a workaround - perhaps a way to assign a different directory for the temp files?
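Both mysqld and myisamchk honour a tmpdir setting, so pointing them at a directory on a larger partition avoids growing /tmp; the paths below are assumptions and the directory must exist and be writable by the mysql user:
Code:
# one-off, on the command line (the table path is hypothetical):
myisamchk --tmpdir=/home/mysqltmp --recover /var/lib/mysql/mydb/mytable.MYI

# or permanently in /etc/my.cnf:
[mysqld]
tmpdir = /home/mysqltmp

[myisamchk]
tmpdir = /home/mysqltmp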
I have a CentOS 5 virtual server running on ESXi (vSphere 4.1) and I have to increase its disk space. I increased the size of the virtual machine's disk in vCenter, but I already have 4 primary partitions. When I run fdisk /dev/sda, I get this:
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        3916    31350847+  8e  Linux LVM
/dev/sda3            3917        6527    20972857+  83  Linux
/dev/sda4            6528       13054    52428127+  83  Linux
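With all four primary-partition slots already used, the usual way out on ESXi is to add a second virtual disk instead of growing sda, and fold it into the existing LVM volume group (sda2 here). A sketch only: the device name /dev/sdb and the VolGroup00/LogVol00 names are the CentOS defaults and may differ, and this assumes the filesystem to grow actually lives on that LVM volume:
Code:
pvcreate /dev/sdb                            # prepare the new virtual disk for LVM
vgextend VolGroup00 /dev/sdb                 # add it to the existing volume group
lvextend -L +20G /dev/VolGroup00/LogVol00    # grow the logical volume by 20 GB
resize2fs /dev/VolGroup00/LogVol00           # grow the ext3 filesystem to match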
Is there any utility that can provide a list of all files with both the file size and the MD5 hash value, preferably also including other hash values? I've got 1.5 TB of files to go through and duplicates to delete. Neither fdupes nor fslint is up to the task - both claim files to be duplicates when they definitely are not. (A movie and an OOo document are not identical, even if one is the script for the other, which in this case it isn't. Both fdupes and fslint claimed that those two files were identical. And yes, I did look at them.)
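If no purpose-built tool turns up, a short shell loop produces such a list: size in bytes plus MD5 and SHA-1 per file. The starting path and output file are assumptions, and expect it to take a long while over 1.5 TB:
Code:
find /data -type f -print0 |
while IFS= read -r -d '' f; do
    size=$(stat -c '%s' "$f")
    md5=$(md5sum "$f" | awk '{print $1}')
    sha1=$(sha1sum "$f" | awk '{print $1}')
    printf '%s\t%s\t%s\t%s\n' "$size" "$md5" "$sha1" "$f"
done > file-hashes.tsv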
I burned some mp3 music onto a CD using Windows. Now, when copying from the CD to the computer using Ubuntu 9.1, the sound is fine and the tracks play great, but I noticed that the file format is now .wav and the file size is about 13 times bigger: I burned files no larger than 3-5 MB and now the same files are 60-70 MB. Is there an easy way to shrink these files back to a more manageable size?
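An audio CD stores uncompressed PCM, which is why the ripped tracks come back as large .wav files; re-encoding shrinks them again (with some quality loss on top of the original mp3 encoding). A sketch using the lame encoder, assuming it is installed:
Code:
# run inside the directory holding the ripped .wav files
for f in *.wav; do
    lame -V2 "$f" "${f%.wav}.mp3"    # VBR around 190 kbit/s; a higher -V number gives smaller files
done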
I have a problem getting my hard-disk space back after deleting a large number of files. My root partition is 196 GB. The actual data in that partition is about 70 GB, but with df -h the used size shows as 139 GB. I have no idea how to find the hidden files. This happened after I deleted a large number of files (almost 60 GB) because the hard disk was almost full.
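A df/du mismatch of this size usually means deleted files that some process still holds open: the directory entries are gone, so du cannot see them, but the blocks stay allocated until the process closes them. lsof can confirm it; a sketch:
Code:
# list open-but-deleted files together with the owning process;
# restarting or reloading that process releases the space
lsof +L1 | grep -i deleted
# compare what du can see on the root filesystem with what df reports
du -shx /
df -h /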
I managed to delete some files from the system and now I need to recover them. I know the inode numbers (through ext3undel) and also the sizes. Quote: "Unfortunately, we cannot automatically obtain the name of a deleted file from Unix file systems - since the connection between the iNode (which holds the MetaData, including the file name) and the real data is dropped on deletion. However, we can obtain a list of names from the deleted files." How can I use this information to recover the files? Also, can I search for text on a partition (the file no longer exists)? I need the figures.
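Two cautions and a sketch: ext3 zeroes the block pointers in the inode on deletion, so the inode number alone is usually not enough to get the data back (content-carving tools such as PhotoRec work around that), but raw text can indeed be searched straight off the partition with grep, since reading the device is harmless. The device name below is an assumption, and the output should be written to a different filesystem than the one being searched:
Code:
# -a treats the raw device as text, -b prints byte offsets, -C 2 adds context lines
grep -a -b -C 2 'a phrase you remember from the lost file' /dev/VolGroup00/LogVol00 > /mnt/usb/hits.txt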