I don't think there is a way of doing this with the date or clock commands. But maybe they write to some file whose modification time I can check. dmesg and /var/log/messages show nothing relevant.
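For reference, a couple of ways to read a file's modification time from the shell (date -r and --full-time are GNU extensions and may be missing on minimal systems; the path is just an example):
Code:
ls -l --full-time /var/log/messages
date -r /var/log/messages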
How can I generate a list of the files in a directory (e.g. "/mnt/hdd/PUB/") ordered by modification time, in descending order, so the oldest modified file is at the end of the list? ls -A -lRt would be great: [URL] But if a file changes inside a directory, it lists the whole directory, so the pastebinned output isn't good (I don't want a list ordered by directories; I need a per-file ordered list). OS: OpenWrt (no Perl, not enough space for it, and no stat or file command).
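One possible approach on a BusyBox-only system, as a sketch (it assumes BusyBox's find, xargs, and ls applets are present):
Code:
# List every file under /mnt/hdd/PUB/ sorted by mtime, newest first.
# Caveats: breaks on filenames containing spaces, and with very many
# files xargs may split them into separately-sorted batches.
find /mnt/hdd/PUB/ -type f | xargs ls -lt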
I know there is a touch command to change the date of files. However, I want to change the files inside a directory and the directory's time as well. Is there a recursive option like -R? Please give me an example of the command.
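touch itself has no -R flag, but find can supply the recursion. A minimal sketch (the path and timestamp below are made up for the example):
Code:
# Set every file and directory under /path/to/dir, and the directory
# itself, to 2010-10-23 11:59:23 (touch -t format: CCYYMMDDhhmm.ss).
find /path/to/dir -exec touch -t 201010231159.23 {} +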
I'd like to change a file's modification date only, without changing the time. I'm aware of the touch command, but it seems to only allow changing both the date and time, not just one of them. Any ideas on an easy way to change a file's modification date without also changing its time? (I have a long list of files and would therefore like to run one command to change them all.) Example: change a file's month so its timestamp goes from "2010-09-23 11:59:23" to "2010-10-23 11:59:23". Background: I accidentally set the wrong month on my camera and ended up with all photos having a modification timestamp with the wrong month.
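One hedged approach, assuming GNU date and touch are available (the *.JPG glob is only an example): read each file's current mtime and re-apply it shifted by one month.
Code:
# Shift each photo's modification time forward one month while
# keeping the time of day intact (GNU date/touch only).
for f in *.JPG; do
    touch -d "$(date -r "$f" '+%Y-%m-%d %H:%M:%S') + 1 month" "$f"
done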
I installed Thunderbird some time back on Ubuntu. I want to know the date and time of installation. How can I get this information? I tried "stat thunderbird", but it did not give the installation time.
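If it was installed through APT/dpkg, the install was probably logged. A sketch, assuming the relevant logs haven't been rotated away since then:
Code:
# dpkg records every install with a timestamp
grep " install thunderbird" /var/log/dpkg.log
# older, rotated logs
zgrep " install thunderbird" /var/log/dpkg.log.*.gz 2>/dev/null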
I get the pinkish Ubuntu screen and a message such as "checking disk 1 of 4". I assume that it is doing an fsck. However, the time it takes does not seem to match the time it takes if I do a manual fsck (almost instantaneous) or fsck -c (several minutes to half an hour depending on the drive). I also wonder what counts as a "disk". I have in the system:
I want to monitor an application, let's say apache2, to see in real time how much network bandwidth it uses (upload/download per second). How can I do that on Linux (command line, not GUI)? I know it's possible because I can see this on Windows in my NOD32 firewall monitor.
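On the command line, nethogs is one tool that does this: it groups live bandwidth usage per process rather than per connection. It usually has to be installed first and needs root; the interface name here is just an example:
Code:
sudo nethogs eth0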
I need this script but I don't know how to write it. I have one folder with several folders inside. Each folder has one MKV or AVI file inside. What I need is a script that changes the "modification date" of each folder to the "modification date" of the MKV or AVI file inside it.
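A minimal sketch of such a script, assuming it is run from the parent folder (matching is case-insensitive, and only the first video found in each folder is used):
Code:
#!/bin/sh
# For each subdirectory, copy the mtime of the video file inside it
# onto the directory itself (touch -r takes a reference file).
for dir in */; do
    video=$(find "$dir" -maxdepth 1 \( -iname '*.mkv' -o -iname '*.avi' \) | head -n 1)
    [ -n "$video" ] && touch -r "$video" "$dir"
done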
Sometimes at startup I get the message "Checking disk 1 of 1". Does that mean it's checking all partitions on the hard drive? After a bad shutdown there is no prompt for fsck to run and the system just boots up. In fstab I have both options set to "1" for the partition Ubuntu is on, and all others set to "0". Any ideas on both?
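For reference, the sixth field in /etc/fstab is the fsck pass number: 1 is conventionally reserved for the root filesystem, 2 for other filesystems, and 0 disables checking (the device names and filesystem type below are just examples):
Code:
/dev/sda1  /      ext4  defaults  0  1
/dev/sda2  /home  ext4  defaults  0  2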
How do I check whether a file is a soft link from the terminal? They're usually color-coded, but I gave permissions to a colleague and now every file shows green. Is there a simple command for this?
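Yes: test -L checks for a symbolic link, and ls -l shows the link type and target regardless of color settings ("somefile" is a placeholder):
Code:
[ -L somefile ] && echo "symlink" || echo "not a symlink"
ls -l somefile    # symlinks show a leading "l" and "-> target"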
I downloaded the 64-bit DVD ISO of openSUSE 11.2 from openSUSE. I checked the ISO file's checksum after downloading and it was correct. However, after burning it to DVD, I booted from the disc and started the install. After getting past the initial settings, when it starts to extract the packages, each file fails its checksum and will not install. I tried downloading again on a different computer and burning again with UltraISO using the Disc-At-Once method, again checking the ISO file's checksum before burning.
It still gives the same errors. So I loaded Windows and started the Windows-based install, and my anti-virus (Kaspersky) says the disc is infected with a trojan. How can the disc be infected when the ISO file's checksum is correct? The computer I am using to burn the disc is virus-free according to Kaspersky and Norton Technician Toolkit.
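One way to rule the burn itself in or out (a sketch; the ISO filename and drive device are assumptions) is to checksum the burned disc against the ISO, reading exactly as many 2048-byte blocks as the ISO contains so trailing disc padding doesn't skew the hash:
Code:
iso=openSUSE-11.2-DVD-x86_64.iso          # hypothetical filename
blocks=$(( $(stat -c%s "$iso") / 2048 ))
md5sum "$iso"
dd if=/dev/sr0 bs=2048 count=$blocks | md5sum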
I want to create a file in the /root directory and then make sure it exists. The following code keeps telling me that the file doesn't exist even though it does.
Code:
#!/bin/bash
echo -e "username=someusername\npassword=somepassword" | sudo tee /root/.credentials
if [ -e /root/.credentials ]; then
    echo "File exists!"
[Code]...
[Edit] Added second double quotation mark at the end of "somepassword"
Status for ACCOUNT_MISSING_FRM_RCIS_LINK- mismatch
Status for ACCOUNT_MISSING_FRM_RCIS_LINK is ACCOUNT_MISSING_FRM_RCIS_LINK- does not exist in DB
This should appear just once, as:
The same goes for the last line.
For further information: ACCOUNT_MISSING_FRM_RCIS_LINK is a table name, and its row count is taken from a log; the database is then checked against that row count to see whether it is a match, a mismatch, or whether the table does not exist!
I am getting the desired output; I just need to do something to this output file.
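If the goal is simply to collapse repeated status lines, one common idiom is the awk one-liner below (a guess at the intent, since the quoted example above is missing; the filenames are hypothetical):
Code:
# print each distinct line only once, preserving original order
awk '!seen[$0]++' status.log > status.dedup.log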
I have an old Red Hat 7.0 Linux system that crashed due to a power failure. On reboot the system goes to "Checking Root File System", gets to 92.5%, and fails.
Here are the error messages I get.
I don't know what to do at this point, so I say yes and it goes into some weird mode.
So I ran fsck manually, but I get an error: PARALLELIZING FSCK.
I can't fix the corrupted parts to get the system to boot again. THIS IS VITAL.
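In this situation the usual manual step is to run e2fsck on the unmounted root partition from the maintenance shell (the device name below is a guess; check /etc/fstab for the real one):
Code:
e2fsck -fy /dev/hda1    # -f force check, -y answer yes to all fixes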
When I reboot my computer, it starts up fine and loads Ubuntu, but instead of going to the login screen it begins checking the file system and then stops completely at
[Code]....
I am using Ubuntu 8.10. I love that Ubuntu version, and that's why I haven't upgraded yet; I've tried the other versions and didn't really like them. I have a 40GB HD, 512MB RAM, and a 1.8GHz Pentium 4 processor.
I have a bash script that checks a folder's contents every 15 seconds and then acts on them. This works great for average-size files, but with very large files it starts acting on a file before it's completely written. Is there a facility in the bash shell to get a "file complete" signal or similar? Here is the trigger that launches the larger script.
Code:
#!/bin/sh
while true
do
    $HOME/bin/hpgl.sh >/dev/null 2>&1 &
    sleep 15
done
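There is no "file complete" signal in plain bash, but one common workaround (an assumption, not part of the poster's setup) is to skip any file that some process still holds open:
Code:
# Only act on files no process has open for writing; process_file
# is a hypothetical stand-in for the real handler.
for f in /path/to/watch/*; do
    lsof -- "$f" >/dev/null 2>&1 || process_file "$f"
done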
I downloaded Go-OpenOffice from SlackBuilds.org, but I can't build it. make terminates with a configure error: checking for C compiler default output file name. configure: error: in `/tmp/SBo/ooo-build-3.1.1.5': configure: error: C compiler cannot create executables. See `config.log' for more details.
(I can't find config.log anywhere.) I use a quite "light" installation (no xap or ap package series), and I suspect I have unmet dependencies, but the error message gives no information about what software is needed (I've installed all the dependencies listed on SlackBuilds.org). I'm using Slackware64 13 with Xfce.
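That particular configure error almost always means the toolchain itself is broken or incomplete rather than a missing library. A quick sanity check (paths are arbitrary):
Code:
# if this fails, gcc, binutils, or the glibc headers are missing
echo 'int main(void){return 0;}' > /tmp/t.c
gcc /tmp/t.c -o /tmp/t && echo "compiler OK"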
If you have a contiguous partial piece of an ext4 file system (assuming it's perfectly clean), starting from the beginning of the partition, is there any way to check it, or to mount it to get the files whose parents, inodes and data are all completely contained inside?
I have (or maybe had) a very large 11TB RAID 6 array, filled with a single large ext4 partition. Something strange happened when a single drive failed, and the array ended up failing 13 out of the 11 drives. I had trouble getting the array restarted and reached the point where I had exhausted all of the options I considered completely safe. I considered a few things that might have worked, but mdadm doesn't seem to have a definite "do not change anything" option. So I decided the only way to be absolutely safe would be to clone the disks before proceeding; then I realized how much time that would take and sent the drives off to a recovery service so they could image them and check things out.
Before doing so, I copied the first 2GB from each disk. I XORed the images from the working drives to reconstruct the data chunks that were on the failed disk, manually assembled the chunks, and am very confident that I have 22GB of "correct" data in a single file. The parity and Q syndromes all matched (with RAID 6 you can still check with only one missing device). I've learned the fine details of ext4 from [URL] and have looked at lots of raw data from the reconstructed partition, and it all looks good. The recovery company says they're not finding many inodes, but I found a lot of them, exactly where they're supposed to be. I tried to mount and run e2fsck, but both processes seem extremely unhappy that the device size doesn't match the size implied by the file system geometry.
I considered hacking the superblock to manually reduce the size, but I figure that wouldn't work because there would then be more group descriptor blocks after the superblocks than it would expect. I might try doing that anyway and compensating by incrementing the "reserved block count". Alternatively, if there is some way to make the file appear to be the expected size, with nothing but zeroes after the end of the actual data, maybe I could mount it and not get any errors until the kernel reads past the true end of the file.
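For the "make the file appear the expected size" idea, one possible route (a sketch under assumptions: the real target size and filenames are unknown) is to extend the image as a sparse file and loop-mount it read-only, so reads past the real data return zeroes instead of I/O errors:
Code:
truncate -s 11T reconstructed.img             # hypothetical full size
mount -o ro,loop reconstructed.img /mnt/recovered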
I just switched from a basic digital camera to a more advanced one that stores both JPEG and Raw (.NEF; it's a Nikon) files. When importing files in Digikam, I rename them so that they start with the date and time, e.g. 20110121-223748.JPG for a photo taken on Jan 21st 2011 at 22:37:48. I was a bit surprised, when importing both the JPEG and the Raw version of the same photo, that the filenames differ by a few seconds (there is no constant offset; sometimes they are the same):
20110121-223748.JPG
20110121-223750.NEF
I did some "research" by looking at the exif data of both files (using "exiftool 20110121-223748.JPG" from the command line). Here is what I got back
(amongst other data):

20110121-223748.JPG
File Modification Date/Time : 2011:01:21 22:37:48+01:00
Modify Date                 : 2011:01:21 22:37:48
Date/Time Original          : 2011:01:21 22:37:48
[code]....
So it seems that Digikam is using the "File Modification Date/Time" (which differs between the JPEG and Raw versions from my camera) rather than the "Create Date" (which is the same for both). (The few seconds' difference in "File Modification Date/Time" between the two versions of the same photo is probably the time my camera needs to write the data to the SD memory card, I guess.) Is there a way to have Digikam use the Create Date? (Or the Date/Time Original?)
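A possible workaround outside Digikam (an assumption, not something the post confirms works) is to let exiftool copy each file's embedded CreateDate onto its filesystem modification time, so the JPEG and NEF of one shot get identical mtimes before import:
Code:
exiftool '-FileModifyDate<CreateDate' *.JPG *.NEF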