Server :: Get File Modification Times Measured To Less Than A Second?
Mar 17, 2010. Is it possible to get file modification times measured to less than a second? A millisecond, a nanosecond, a tenth of a second?
It seems like a simple question, but I couldn't figure it out exactly. Say I copy a file, preserving its original modification time, using the command
Code:
cp -p file1 file2
Now, later, I want to know when file2 was copied. How do I find that out?
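One way to approach this (a sketch, not from the original thread): cp -p preserves the modification time (mtime), but the copy gets a fresh inode, so its change time (ctime) is set when the copy is made, and stat can show it:
Code:
stat file2
# "Modify" is preserved from file1 by cp -p;
# "Change" reflects when file2's inode was created, i.e. roughly when the copy happened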
The tail of my log is the following:
BUT:
So how is it possible that some process writes to the file while the file's modification time remains untouched?
I don't think there is a way of doing this with the date or clock commands. But maybe the process is writing to some file whose modification time I can look at. dmesg and /var/log/messages show nothing relevant.
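One hedged guess: the process may be writing to a renamed or rotated inode while you watch the old path, so the path you check never changes. These standard checks can confirm it (the log path below is only an example):
Code:
lsof +L1                 # open files with link count 0: deleted/rotated but still being written
lsof /var/log/mylog.log  # which processes hold this exact path open (path is hypothetical)
stat /var/log/mylog.log  # compare the Modify and Change times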
How can I generate a list of the files in a directory [e.g. "/mnt/hdd/PUB/"] ordered by the files' modification time, in descending order so that the oldest-modified file is at the list's end?
ls -A -lRt would be great: [URL]
But if a file changes inside a directory, it lists the whole directory, so the pastebinned output is no good [I don't want a list ordered by directories; I need a per-file ordered list].
OS: OpenWrt [no perl (not enough space for it) and no stat or file command].
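A BusyBox-only sketch: hand every file to a single ls invocation so the time sort is global. This holds as long as the whole file list fits into one exec batch; if your find lacks '-exec ... +', pipe through xargs instead, but the sort then only holds within each batch:
Code:
find /mnt/hdd/PUB -type f -exec ls -lt {} +
# newest first, oldest at the end; use ls -ltr to flip the order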
I need a script but I don't know how to write it. I have one folder with several folders inside, and each of those folders has one MKV or AVI file inside. What I need is a script that changes the "modification date" of each folder to the "modification date" of the MKV or AVI inside it.
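A minimal sketch, assuming one level of subfolders under a parent directory (the path is a placeholder): touch -r stamps a target with the timestamps of a reference file, and it works on directories too:
Code:
#!/bin/sh
for dir in /path/to/parent/*/; do
    # first MKV or AVI directly inside this folder
    video=$(find "$dir" -maxdepth 1 -type f \( -iname '*.mkv' -o -iname '*.avi' \) | head -n 1)
    [ -n "$video" ] && touch -r "$video" "$dir"
done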
View 16 Replies View RelatedI'm running postfix with virtual domains and want to modify the delivery path. Right now, I have one path for each user that's found with a database lookup.Before mail hits Postfix, it will have an x-spam-header: yes/no/uncertain field. When mail with x-spam-header: yes the lookup for the path would return /var/mail/domain/username/.Inbox/.spam.
What I think I'd like to do is parse the x-spam-header value in postfix, populate a variable, then use the variable to modify the path lookup in the database. header_checks has a FILTER option, but that's just beyond my skillset at the moment.Or, maybe I'm better off modifying the path with a procmail recipe? Currently, my mailbox_command = procmail -a "$EXTENSTION"
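If the procmail route is acceptable, here is a minimal recipe sketch, assuming the header literally reads "x-spam-header: yes" and the maildir layout quoted above (procmail regexes are case-insensitive by default):
Code:
MAILDIR=/var/mail/domain/username   # per-user maildir from the lookup above
:0
* ^x-spam-header: yes
.Inbox/.spam/                       # trailing slash = deliver in maildir format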
I'm willing to upgrade to 10.04, but first I would like to have the regular upgrades installed. They total about 100 megabytes, but my problem is that the download speed is terribly slow, around 2000 bits per second. I don't know where the problem is. I have broadband, and the internet works just fine for the whole system and its applications.
Sorry for my English in case you didn't understand something!
I am running a test to determine when packet drops occur. I'm using a Spirent TestCenter through a switch (necessary to aggregate Ethernet traffic from 5 ports onto one optical link) to a server using a Myricom card. While running my test, if the input rate is below a certain value, ethtool does not report any drops (except dropped_multicast_filtered, which increments at a very slow rate), yet tcpdump reports X packets "dropped by kernel". If I then increase the input rate, ethtool reports drops but "ifconfig eth2" does not; in fact, ifconfig doesn't seem to report any packet drops at all. Do they all measure packet drops at different "levels", i.e. ethtool at the NIC level, tcpdump at the kernel level, etc.? And am I right to say that in the journey of an incoming packet, the NIC is the first level, then the kernel, then the user application, so any drop is likely to happen first at the NIC, then the kernel, then the application? So if there is no packet drop at the NIC, but there is one at the kernel, then the bottleneck is not the NIC?
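Roughly, yes: each tool reads counters at a different layer, and a drop-free NIC combined with kernel drops points past the card. A sketch for comparing the layers side by side (eth2 taken from the question):
Code:
ethtool -S eth2 | grep -i drop   # NIC/driver-level counters
ip -s link show eth2             # kernel interface statistics, RX/TX drops
cat /proc/net/dev                # the same interface stats in raw form
# tcpdump's "dropped by kernel" counts packets lost in the capture path
# (socket buffer overflow), which sits after the NIC, inside the kernel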
My systemd-udev-settle.service is failing for some reason.
Output of systemctl status systemd-udev-settle.service -a:
Code:
● systemd-udev-settle.service - udev Wait for Complete Device Initialization
   Loaded: loaded (/lib/systemd/system/systemd-udev-settle.service; static)
...
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
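A hedged follow-up: pull the unit's logs for the current boot, and make the journal persistent so rotation doesn't eat them next time:
Code:
journalctl -u systemd-udev-settle.service -b   # messages from the current boot
mkdir -p /var/log/journal                      # enables persistent journal storage
systemctl restart systemd-journald             # journald starts writing there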
The ls man page mentions several times, e.g. ctime. What are the timestamps that ls deals with?
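For context, every file carries three timestamps: atime (last access), mtime (last content modification), and ctime (last inode/metadata change). ls can show each of them:
Code:
ls -l  file   # mtime, the default
ls -lu file   # atime
ls -lc file   # ctime
stat   file   # all three at once, where stat is available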
I'm looking to find out how many times a file has been displayed on the web. I've set up Google Analytics link tracking, but I'm trying to figure out how many times people clicked and viewed a PDF file BEFORE Google Analytics was set up.
Is there a script that I could run over the log files to find out how many times it has been served?
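A hedged sketch, assuming an Apache-style access log and a made-up path for the PDF; count the GET requests for it:
Code:
grep -c 'GET /files/report.pdf' /var/log/apache2/access.log   # total hits (path and log location are hypothetical)
# per-day breakdown: $4 holds "[day/month/year:time" in the combined log format
awk '/GET \/files\/report\.pdf/ { split($4, t, ":"); day = substr(t[1], 2); hits[day]++ }
     END { for (d in hits) print d, hits[d] }' /var/log/apache2/access.log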
Is it possible to make an AWK script that goes over the file multiple times, doing something different to it on each pass?
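Yes. The usual idiom is to pass the file twice on the command line; NR==FNR is true only while awk reads the first copy. A sketch that finds the maximum in pass one and normalizes in pass two (assumes positive numeric values in column 1):
Code:
awk 'NR == FNR { if ($1 > max) max = $1; next }   # pass 1: remember the largest value
     { print $1 / max }                           # pass 2: scale each value by it
' data.txt data.txt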
This seems to be a strange question, I know. I have an SQLite database file called "a.db". I need to copy it 20 times (once per letter of the alphabet) to get b.db, c.db, d.db, and so on.
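A bash sketch using brace expansion; {b..u} yields exactly 20 names, and {b..z} would cover the rest of the alphabet:
Code:
for letter in {b..u}; do
    cp a.db "${letter}.db"
done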
I have a server with 8 GB of RAM, a quad-core Q9550 CPU, 500 GB of hard-disk space, and a 100 Mbps net connection. I have separated it into 3 VPSes with 2 GB of RAM each via virtualization. The problem is that one of the VPSes, created with CentOS and WHM/cPanel, would go down very often, as many as 60 times per day! I asked one of my friends to have a look at it, and he told me that Tomcat was the reason. Things got a little better (ask me what I changed about Tomcat?!), and after that the server would go down only about 20 to 30 times per day. I asked the friend again, and he told me that the cPanel-integrated antivirus caused the high load periodically. (Ask me again, what did I change?) After that the server would go down only about 4 to 10 times per day. Now, as far as I can tell, there is nothing more that would or should cause the server to go down.
About 30 minutes before writing this post, the hosted sites were unavailable again for about 30 seconds. I checked the load through SSH and saw that it was 3, so I went to the main server, checked the VPS's RAM, and noticed that only 10% of the CPU and only 307 MB of the available RAM were in use.
Everything shows that the server should be functioning OK, but it still becomes unavailable far too often.
I used the top command and saw nothing using the CPU or the RAM; everything was at 0. I used the mysqladmin proc command, and again nothing noticeably weird was obvious at the time. I also checked the other VPSes, and they were also in relatively low-resource-consumption states. Also, sometimes when the load goes high and I run top, I see that all of the users' sites are using CPU, even those with no visitors, as if some refreshing or updating task touches every account (MySQL, for example). Please help me figure out what else might be causing these high loads and the unavailability.
As we know, Squid can allow/deny on a TIME basis, but I want to know how to block (deny) a particular website at certain times of the day for a particular IP or IP block. Suppose I want to block xyz.com every day from 10:00 to 13:00 for the 192.168.81.0/24 block.
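A hedged squid.conf sketch using the values from the question (the ACL names are made up; a time ACL without day letters applies every day):
Code:
acl lan        src       192.168.81.0/24
acl badsite    dstdomain .xyz.com
acl blockhours time      10:00-13:00
http_access deny lan badsite blockhours
http_access allow lan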
Starting File Manager opens up a million and one times in FC12.
Is there a way to stop it?
I have Squid on my RHEL5 server and a number of Windows clients. On the clients, some sites open without any error, but other sites fail with "unable to resolve hostname" while opening. Why does this kind of problem occur? It may be a DNS problem, but then it should happen for all addresses, not just some.
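A first diagnostic step (a sketch; example.com stands in for one of the failing sites): test resolution from the Squid box itself and compare against a public resolver:
Code:
dig example.com            # uses the resolvers in /etc/resolv.conf
dig example.com @8.8.8.8   # compare against a known-good public resolver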
My 5.5 server has crashed on me multiple times over the past few weeks. I have 2 terabyte drives mirrored. It automatically reboots and seems to run fine after rebooting. I've attached a copy of /var/log/messages to this post; I was unable to paste the file. I'm not sure whether a bad disk is causing the problem or not.
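To rule the disks in or out, smartmontools is the usual first check (the device names are assumed; adjust them for your mirror members):
Code:
smartctl -H /dev/sda               # overall SMART health verdict
smartctl -A /dev/sda | grep -i -e reallocated -e pending -e uncorrectable
smartctl -H /dev/sdb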
I've done some searching around but can't find anything conclusive on this error. The tech at my remote site restarted the 9.04 server (not sure if it was accidental or planned), and when it started the boot process, an error like the following showed up...
"The display server has restarted 6 times in the past 90 seconds. This indicates that something bad is happening."
I have a scenario where I want to monitor at disk performance (cpu and memory also if possible) on a RHEL 5 server functioning as a NAS. I have several machines that backup content to this server via scheduled cronjobs and I'm curious to see if the machine is hitting a bottleneck under load.I attempted to setup cacti on one of our LAMP servers and had a miserable time due to running PHP 5.3 and deprecated function issues.Can anyone recommend an alternative keeping in mind I have only very basic experience with SNMP?
I just switched from a basic digital camera to a more advanced one that stores both JPEG and Raw (.NEF; it's a Nikon) files for me. When importing files in Digikam, I rename them so that they start with the date and time, for example 20110121-223748.JPG for a photo taken on Jan 21st 2011 at 22:37:48. I was a bit surprised, when importing both the JPEG and the Raw version of the same photo, that the filenames differ by a few seconds (no constant offset; sometimes they are the same):
20110121-223748.JPG
20110121-223750.NEF
I did some "research" by looking at the exif data of both files (using "exiftool 20110121-223748.JPG" from the command line). Here is what I got back
(amongst other data):20110121-223748.JPG
File Modification Date/Time : 2011:01:21 22:37:48+01:00
Modify Date : 2011:01:21 22:37:48
Date/Time Original : 2011:01:21 22:37:48
[code]....
So it seems that Digikam is using the "File Modification Date/Time" (different between my camera's JPEGs and Raws) rather than the "Create Date" (the same for both JPEG and Raw). (The few seconds' difference in "File Modification Date/Time" between the two versions of the same photo is probably the time my camera needs to write the data to the SD memory card, I guess.) Is there a way to have Digikam use the Create Date (or the Date/Time Original)?
PS: I'm on Ubuntu 10.04 LTS, using Digikam 1.2.0.
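If Digikam itself can't be told which tag to use, a hedged workaround is to let exiftool rename the files from the embedded Create Date after import:
Code:
exiftool '-FileName<CreateDate' -d %Y%m%d-%H%M%S%%-c.%%e /path/to/import/dir
# %%-c appends a copy number on collisions; %%e keeps the original extension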
I have an Ubuntu 11.04 server set up for my small office whose sole job is to run as a Samba file server. The problem is that it randomly hangs. For example, I can connect the clients just fine, but if the server is left idle it takes a few moments to respond when you go back into the shared folder or drive. The client behaves as though it is disconnected and searching for the drive, only to return to normal a few moments later.
If I ping the server while this is happening, my requests time out for a little while and then just start working. The same thing happens when I try to connect with PuTTY or through Webmin: one second it's unreachable, the next it's fine. I have already tried swapping out patch cables (which actually seemed to work for about two weeks), and I have patched it to an alternate port on the switch. The only thing I have not done yet is change the NIC. All the clients run Windows 7, with the exception of two XP machines. Simply put, it's like the machine goes dormant for a while until you ping at it long enough to wake it back up.
Both the server (web01), in the DMZ, and the client (app01), on the intranet, run Linux OEL 5.5.
A few hints follow:
1- I ssh to web01, in the DMZ, from an intranet machine at my desk, export my DISPLAY, and try to bring up xclock; nothing happens.
2- traceroute from web01 to app01 fails.
3- NFS mount from web01, nfs server, to nfs client, app01, fails.
How should the network configuration on web01 or app01 be set up to make this NFS mount from web01 to app01 work? Each server has 4 NICs; I've only used eth0.
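Since traceroute already fails, routing or the DMZ firewall is suspect before NFS itself. A diagnostic sketch (app01's address below is a placeholder; note that NFS needs the portmapper and mountd ports open, not just 2049):
Code:
ip route get 10.0.0.5   # on web01: which NIC/gateway would carry traffic to app01?
rpcinfo -p              # on web01: the portmapper/mountd/nfs ports the firewall must pass
showmount -e web01      # on app01: can the client even query the server's export list?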
I run CentOS 5.1 using VMServer on XP. From home I can successfully 'cvs login' to my CVS server, but when I start 'cvs update', the connection times out.
Netstat shows the connection as established:
# netstat -an | grep 2401
tcp 0 0 192.168.1.35:58651 85.25.xx.xx:2401 ESTABLISHED
The CVS server's domain is managed with a dnsalias service (dyndns.org).
Using the same computer at work (different ISP), I have no problems; cvs update works just fine.
Can I assume that it is not a port/firewall issue, since "cvs login" succeeds? Any clues where to start digging?
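Not quite: login succeeding only proves that small packets get through, while update moves large replies, so MTU or stateful-firewall trouble is still possible. A sketch for digging (the interface name is assumed; the server IP stays masked as in the question):
Code:
tcpdump -i eth0 -n port 2401      # watch the session while 'cvs update' hangs
ping -M do -s 1472 85.25.xx.xx    # probe path MTU; lower -s until it succeeds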
I've just recently upgraded to lenny using aptitude (following the instructions on debian.org). All went smoothly, and almost everything is working fine. I have had my X server crash twice since then, both times when I simply double-clicked a running application. There is some indication of problems in the Xorg.log file, but it's not helpful to me; can you help me understand it?
I have no idea why, but my website times out as soon as you try to use the cart (which invokes SSL). It seems to have started after I went live from testing, but I checked my configuration and can't see anything awry in my apache2 confs. Has anyone run into this before? I don't know how or why this happened, or how to even begin fixing it.
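A quick way to separate Apache from the network (a sketch; example.com stands in for the live site) is to test the TLS handshake directly:
Code:
openssl s_client -connect example.com:443
# a hang here points at port 443 being blocked or not forwarded;
# an immediate handshake points back at the Apache SSL vhost config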
Seismicmike here. My first post. I'll try to be as clear and concise as possible. For the sake of this post, I'm going to use 1.2.3.4 as a placeholder for my public IP. On my web server, I would like to be able to access the /var/ftp directory through a web browser. I have successfully done so with Google Chrome, but I cannot access the directory in Firefox or IE. Both FF and IE ask me for authentication but then time out attempting to load the directory.
I suspect there may be something up with switching to passive mode, and/or that this issue lies more with my configuration of Firefox than with the server (seeing as Chrome works). Another possibility is SSL: when I connect with FileZilla, I have to use the FTP over Explicit SSL/TLS option in order to connect. In any case, I would still like to fix it, and I would like to avoid having to install FireFTP if at all possible. A passive-mode config sketch follows the reproduction steps below.
Steps to reproduce (not that you can without my actual IP =J):
* Open Chrome
* Go to ftp://1.2.3.4
* Enter username
* Enter password
[code]....
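If the server happens to be vsftpd (an assumption; other FTP daemons have equivalent settings), note that browsers use passive FTP, so the passive port range must be open on the firewall and, behind NAT, advertised with the public address:
Code:
# vsftpd.conf sketch
pasv_enable=YES
pasv_min_port=50000     # open this range on the firewall
pasv_max_port=50100
pasv_address=1.2.3.4    # the public-IP placeholder used above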
I recently set up a new Linux server running Fedora 10. For some reason, all ping response times are rounded to the nearest 10 ms. For example, running the simple command "ping yahoo.com" gives the following sample results:
64 bytes from ir1.fp.vip.re1.yahoo.com (69.147.125.65): icmp_seq=12 ttl=57 time=60.0 ms
64 bytes from ir1.fp.vip.re1.yahoo.com (69.147.125.65): icmp_seq=13 ttl=56 time=50.0 ms
64 bytes from ir1.fp.vip.re1.yahoo.com (69.147.125.65): icmp_seq=14 ttl=56 time=40.0 ms
64 bytes from ir1.fp.vip.re1.yahoo.com (69.147.125.65): icmp_seq=15 ttl=56 time=50.0 ms
I could post a larger result set, but it's all the same: every response is rounded to a multiple of 10 ms. This wouldn't be a big deal, except that the server runs Nagios for monitoring, so accurate stats are important. The Nagios check_ping and check_icmp commands also return rounded-off results. How can I get ping to report the actual response times rather than rounded-off numbers?
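One hedged place to look: timings quantized like this often mean the kernel fell back to a coarse, jiffies-based clocksource. You can check and override it:
Code:
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
# if 'jiffies' is current while 'tsc' or 'hpet' is available,
# try booting with the kernel parameter clocksource=hpet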