Ubuntu :: Orphaned Inodes (Each Time The Number Is Different) On EXT4
Dec 28, 2010
Each time I start my Ubuntu 10.10, I notice these messages in dmesg:
[Code]...
Each time the inode number is different. I ran SMART tests on the disk, and they all passed. Do I have to worry? Could it be related to a wrong shutdown? Update: I have just run an fsck at boot, but when I logged in, the same orphan_cleanup messages were in dmesg.
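A minimal sketch of one way to rule out real damage, assuming the root partition is /dev/sda1 (a placeholder; substitute your own): boot a live CD/USB so the partition is not mounted and force a full check there, since the fsck run at boot against the root filesystem is limited.

Code:
# From a live CD/USB, with the partition unmounted:
sudo fsck.ext4 -f /dev/sda1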
I recently used up all the free inodes on my server. I had a bunch of mail messages sitting there using up a lot of them, so I cleared the postfix queue. That gave me some room. What I'd like to do is get a listing of the directories using the most inodes (or containing the largest number of files), so that I can find the other culprits. Basically I want the output of "df -i", but recursively on a specific directory.
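df -i has no recursive mode, but counting entries per subdirectory gives roughly the same information. A minimal sketch, assuming the directory to inspect is /var (a placeholder):

Code:
# Count everything under each immediate subdirectory of /var,
# sorted so the biggest inode consumers come last:
for d in /var/*/; do
    printf '%s\t%s\n' "$(find "$d" | wc -l)" "$d"
done | sort -n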
I know newer filesystems support crtime values, even with nanosecond granularity. Ext4 does it, and NTFS mounted via ntfs-3g should expose it. Still, what is the command to get these values?
getfattr -d <some file>
gives me zero results, and as far as I know ls has no means to access creation time.
getfattr -n ntfs_crtime /mnt/<some ntfs fs> gives me "Operation not permitted"..?
I know about the difference between ctime (inode change time) and crtime (creation time/file birth time). I want to migrate an NTFS partition to Ext4 without losing the creation dates...
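A couple of approaches that may work, depending on kernel and tool versions; treat the device, paths and attribute names below as assumptions to verify on your own system:

Code:
# ext4: debugfs prints crtime (run as root; /dev/sda5 is a placeholder, and the
# path is relative to that filesystem's root, not to the mount point):
debugfs -R 'stat /home/user/somefile' /dev/sda5 | grep crtime

# Newer coreutils show a "Birth" field, though it is often "-" because older
# kernels have no interface to report it:
stat somefile

# ntfs-3g: the attribute lives in the "system" xattr namespace (assumption:
# your ntfs-3g build exposes system.ntfs_crtime; system.ntfs_times is another
# candidate), which is why the bare name was rejected:
getfattr -n system.ntfs_crtime -e hex /mnt/windows/somefile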
Can anyone tell me whether this behavior of my openSUSE 11.2 installation is normal? I use 64-bit openSUSE 11.2 with kernel 2.6.31.x and an ext4 root partition. After adding the repository kernel:/HEAD/ etc. and updating to 2.6.34-rc4 I cannot boot anymore because the ext4 module is missing. I thought ext4 was stable and built into current kernel releases by now, isn't it? The error message at boot time: FATAL: Module ext4 not found. Which is right, because in /lib/modules/<kernelversion>/kernel/ there is NO 'fs' subfolder. Isn't the kernel:/HEAD/ repository the official update path to get a newer major kernel (besides openSUSE's updates for security reasons)? Do you know how I can fix it without compiling it myself?
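Not a definitive fix, but if the kernel package from that repository was simply installed incompletely, forcing a reinstall sometimes restores the missing module tree. The package name below is an assumption; use whatever the search actually shows as installed:

Code:
# See which kernel packages are installed and from which repository:
zypper search -si 'kernel*'

# Force a reinstall of the running kernel flavour (kernel-desktop is a guess):
zypper install -f kernel-desktop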
Trying to install OpenSuse 11.3 on a Dell Precision T3500, it keeps showing the following message:
"Make sure CD number 1 is in your drive"
I ordered the DVD from the OpenSuse site, so it is NOT a burned copy. I also tried burning images myself, and the same problem happens. No matter what I do, it does not go beyond this message.
The time() API gives the number of seconds since 1970 Jan 1st 00:00:00 without considering leap seconds. How do I get the number of leap seconds which needs to be considered on top of the value returned by time()? (gmtime() will convert a time_t to a struct tm* and considers leap seconds; I am trying to write an API which does the same thing as gmtime.)
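One shell-level way to see where that number comes from, assuming the tzdata "right/" (leap-second-aware) zones are installed, which is optional on many distros: the right/UTC zone interprets time_t as including leap seconds, so the same epoch value displayed under right/UTC and plain UTC differs by exactly the accumulated leap-second count.

Code:
# The two outputs differ by the leap seconds accumulated up to that date
# (the timestamp is an arbitrary example):
TZ=right/UTC date -d @1300000000
TZ=UTC date -d @1300000000

# The underlying table, where installed, lives under /usr/share/zoneinfo/right/
ls /usr/share/zoneinfo/right/ | head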
I have installed some programs from source and found no trace of where and what was installed, and I would like to remove those installed files. So I am looking for a script or app to list all orphaned files (I mean files not related to any installed package). I am using Ubuntu Server 9.10 without any fancy X11 stuff, so a console version is preferred. I have found BleachBit and Computer Janitor in this forum, but they are X11 apps.
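The cruft package in the Debian archive tries to do exactly this (it may or may not be in your Ubuntu repos). Failing that, a rough sketch is to diff the filesystem against dpkg's file lists; expect plenty of legitimate non-packaged files in the output, so treat it as a starting point only:

Code:
# Everything dpkg knows about:
LC_ALL=C sort -u /var/lib/dpkg/info/*.list > /tmp/packaged

# Everything actually present under /usr (restrict the tree to keep noise down):
find /usr -xdev -type f | LC_ALL=C sort > /tmp/present

# Files on disk that no package claims:
LC_ALL=C comm -23 /tmp/present /tmp/packaged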
Several people have said that those of us who are having problems with Ubuntu (10.04) should ask some specific questions. Here is one below which I cannot get an answer to and which never happens in Windows. Can any Ubuntu expert answer it for me? It would really restore my faith in Ubuntu (and then I could go on to the other problems I have with it). I think I am running out of inodes on my eeepc 701. It happened before when I was using Xandros, but now I am using Ubuntu 10.04. I get the following output:
Code:
df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
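To find where the inodes actually went, newer coreutils have du --inodes (added around coreutils 8.22, so later than what 10.04 ships); a find-based count per top-level directory works everywhere. Both lines below are a sketch to adapt:

Code:
# With a recent coreutils:
du --inodes -x / 2>/dev/null | sort -n | tail -20

# Fallback: count entries per top-level directory on the root filesystem:
find / -xdev 2>/dev/null | cut -d/ -f2 | sort | uniq -c | sort -n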
I'm referring to Exaile, which seems to have lost the interest of the maintainers, judging by how far behind they are on releases and the lack of any explanation or further info on the package pages.
I had a problem booting Ubuntu 10.04 RC, but I solved it by replacing the root partition UUID in the GRUB boot menu; then I disabled UUID passing to Linux entirely in /etc/default/grub. But I noticed something else: why did GRUB choose "insmod ext2" and not ext4, especially since I now use ext4? I tried editing the GRUB boot menu, replacing "insmod ext2" with "insmod ext4"; it booted, and the three error lines during boot that I had been seeing since Ubuntu 9.10 totally disappeared. I really don't understand; can anybody explain this to me, and if what I did was right, can anybody tell me how to make GRUB always and permanently detect ext4 as ext4, not as ext2?
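For context: GRUB 2's "ext2" module reads ext2, ext3 and ext4, so "insmod ext2" in a generated grub.cfg is normal rather than a misdetection. Rather than editing grub.cfg by hand (it is overwritten on every update), the usual Ubuntu flow is roughly:

Code:
# Ask GRUB's own probe what filesystem driver it will use for /boot
# (it typically reports "ext2" even for ext3/ext4, since that module covers all three):
sudo grub-probe --target=fs /boot

# Regenerate /boot/grub/grub.cfg from /etc/default/grub and /etc/grub.d:
sudo update-grub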
I have been having a lot of trouble lately with installing from CD/DVD. The DVD reader/writer in this laptop is new. Nevertheless, trying to install Ubuntu onto an external HD, I get 'input/output error on sr0, logical block (a large number)'. After a long time the booting proceeds to a point, but I never get the actual installation started, and have to shut down manually.
The CD is fine, says the Ubuntu checker. I just installed using my son's laptop, and there was no trouble. Question: does this indicate a motherboard failure? A damaged memory block? Do you know of a diagnostic tool I can use to check the reading of a CD/DVD?
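One simple read diagnostic, assuming you still have the original .iso file around (the filename below is a placeholder): read the whole disc back and compare checksums. Persistent I/O errors during the dd step point at the drive or the disc rather than the motherboard; for RAM there is memtest86+ on the Ubuntu boot menu.

Code:
# Read back exactly as many 2048-byte sectors as the ISO contains:
blocks=$(( $(stat -c %s ubuntu.iso) / 2048 ))
sudo dd if=/dev/sr0 bs=2048 count=$blocks | md5sum
md5sum ubuntu.iso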
If we update or remove some packages (in addition to manually installed software), some files such as previous-version dynamic library files are left behind, and they may sometimes conflict with new ones. Is there an efficient way to remove all these kinds of orphaned files automatically?
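On Debian/Ubuntu, apt-get autoremove handles libraries that were pulled in as dependencies and are no longer needed, and deborphan catches some leftovers it misses. Neither knows about things installed manually from source, so review the lists before purging anything. A sketch:

Code:
sudo apt-get autoremove

# deborphan lists libraries no installed package depends on any more
# (install it first if needed); check the list by eye before purging:
deborphan
deborphan | xargs -r sudo apt-get -y remove --purge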
I have inherited a wordpress theme with a folder of images that I think are no longer being used. I wanted to find the orphaned images using grep, so I wrote this script:
Code:
#!/bin/bash
echo $PWD
for i in *.*; do
cd ..
[Code].....
It seems like I got some false positives out of it, but it worked pretty OK, I guess. :| Of course, it is not checking for images referenced in the content of the database.
Orphan finding has to be a wheel that is already invented.
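Since the posted script is only partially quoted above, here is a rough sketch of the same idea: for every image in the folder, grep the theme's source tree for its filename and report the ones that never appear. Paths are placeholders, and anything referenced only from the database will still be missed.

Code:
#!/bin/bash
# Run from inside the theme's images directory.
theme_dir=..                    # assumption: the theme root is one level up
for img in *.png *.jpg *.gif; do
    [ -e "$img" ] || continue   # skip literal globs when nothing matches
    if ! grep -rqF --exclude-dir=images "$img" "$theme_dir"; then
        echo "possibly orphaned: $img"
    fi
done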
My server started acting flaky this weekend and my Webmin interface was throwing strange errors. I finally tracked it down to the fact that I was out of inodes on my primary partition. I'm fairly certain that the /tmp folder has an outrageous number of files in it. I can't do an ls on the directory because the console just sits there forever after I issue the command. I also tried to do an rm -rf on the /tmp directory and it did the same thing. How can I clear out this directory?
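With that many entries, plain ls sorts the whole listing before printing anything, and rm -rf still has to walk every entry. Something along these lines is usually more workable, though be careful on a live system, since running programs keep sockets and lock files in /tmp:

Code:
# Unsorted listing, so it starts printing immediately:
ls -f /tmp | head

# Delete the contents without deleting /tmp itself (run as root):
find /tmp -xdev -mindepth 1 -delete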
I just installed Lenny 505 LXDE (from the LXDE/XFCE install CD) on an old IBM Thinkpad 600E. It works fine (and fast). However, both apt-get and aptitude tell me that dozens of packages are orphaned, and these packages include essential things such as the entirety of OpenOffice and LXDE. Apt-get keeps reminding me that I can purge these packages (and wreck my system), and aptitude wants to remove them before doing anything. How can I force Debian to recognize that these packages were purposefully installed? 'aptitude keep-all' sort of worked: autoremove no longer tries to remove the entire system, but deborphan still goes crazy and says all my packages are orphaned.
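If the root cause is that everything got flagged as automatically installed, marking the packages as manually installed should stop aptitude and autoremove wanting to pull them out; deborphan's output is advisory and can be filtered with its keep list, if your version supports --add-keep. A sketch (run as root; the blanket ~i pattern marks every installed package, which is heavy-handed but safe, and the package names are only examples):

Code:
# Mark all currently installed packages as manually installed:
aptitude unmarkauto '~i'

# Or be more targeted:
aptitude unmarkauto openoffice.org lxde

# Tell deborphan never to report a given package:
deborphan --add-keep lxde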
I had a 5.4 machine. Upgraded to 5.5 today via yum upgrade. All went fine. Rebooted. Wanted to convert the root partition to ext4 (I have three partitions: /boot, / and swap), all of them on software RAID 1 (root is /dev/md2). I did the following to convert it:
Code:
yum install e4fsprogs
tune2fs -O extents,uninit_bg,dir_index /dev/md2
nano /etc/fstab   # I indicated here that my /dev/md2 is of ext4
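For anyone following the same path: after flipping those feature flags the filesystem must be checked, and on CentOS 5 the initrd also needs rebuilding so the ext4 driver is available at boot. The binary name comes from the e4fsprogs package and may differ on your system, so treat this as a sketch:

Code:
# Required after enabling extents/uninit_bg (may be installed as fsck.ext4
# instead of e4fsck, depending on the package):
e4fsck -fyD /dev/md2

# Rebuild the initrd so the root filesystem can be mounted as ext4:
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)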
I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use and how should I configure it? As far as I understand, the reason why ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with:
Create the filesystem on an SSD, because seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive.
Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?
Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print.
Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count of more than 1, but that's not a problem, since I have only a few dozen such files in my use case.
Adjust some settings in /proc or sysctl so that inodes are locked in system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also, I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?
Use a filesystem which has an online defragmentation tool that can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel's in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
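On the /proc/sysctl point specifically: there is no switch that truly locks inodes in RAM, but vm.vfs_cache_pressure controls how eagerly the kernel evicts dentry and inode caches, and setting it very low plus pre-warming the cache comes close to the described behaviour (0 disables reclaiming those caches entirely, which can exhaust memory, so this is a sketch to test rather than a recommendation):

Code:
# Strongly prefer keeping dentries/inodes cached (run as root):
sysctl vm.vfs_cache_pressure=1

# Pre-warm the cache once so later runs hit memory instead of the disk:
ls -laR /media/myfs > /dev/null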
I updated the package libcgic-devel to a newer release of the same version. The change in the distributed files includes renaming a file cgic.html to index.html. I have both files installed now and cgic.html is orphaned.
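Assuming this is an RPM-based system, a quick way to confirm the old file really is unowned before deleting it by hand (the path below is a guess; use wherever the package actually installs its docs):

Code:
# If no package claims the file, rpm says so and it is safe to remove manually:
rpm -qf /usr/share/doc/libcgic-devel/cgic.html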
I managed to delete some files from the system, and now I need to recover them. I know the inode # (through ext3undel) and also the size. Quote: "Unfortunately, we cannot automatically obtain the name of a deleted file from Unix file systems - since the connection between the iNode (which holds the MetaData, including the file name) and the real data is dropped on deletion. However, we can obtain a list of names from the deleted files." How can I use this information to recover the files? Also, can I search for text on a partition (the file doesn't exist)? I need some figures from it.
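Two things that may help, with the filesystem unmounted or mounted read-only (the device and inode number below are placeholders): extundelete can restore by inode number on ext3/ext4, and grep -a works directly on the block device for finding known text.

Code:
# Restore a specific inode (files land in ./RECOVERED_FILES/ by default):
sudo extundelete /dev/sda3 --restore-inode 123456

# Search the raw partition for a known string and print byte offsets:
sudo grep -a -b 'some text I remember' /dev/sda3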
My recent borked upgrade to -current inspired me to try to come up with a way to sanity-check the lib and bin dirs for broken library symlinks (possibly indicating missing libs) and for binaries and libraries that belong to no installed package, as well as missing dependencies.
This script is the result.
I've checked the script results manually, and it appears to be accurate, so I figured I'd post it here for a second opinion, and/or because others may find it useful too. I'm not aware of another popular method of doing this on Slackware, so here it is:
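Since the script itself is not quoted here, a minimal sketch of the two checks it describes, using Slackware's plain-text package database under /var/log/packages (the directory list and the lack of a dependency check are simplifications; add /lib64 and friends as needed):

Code:
#!/bin/sh
# 1. Broken symlinks in the library directories:
find -L /lib /usr/lib -type l

# 2. Files in those directories that no installed package claims
#    (package file lists store paths without the leading slash):
find /lib /usr/lib -type f | while read -r f; do
    grep -qxF "${f#/}" /var/log/packages/* || echo "unowned: $f"
done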
The Linux file system uses file path notation to abstract how data is accessed. The path really must be an environment variable for the application that converts the path name to an inode, so what is this application's/daemon's name?
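For what it's worth, the conversion is not done by a user-space application or daemon: path lookup happens inside the kernel's VFS on each system call that takes a path. You can watch it from the outside, for example:

Code:
# The kernel resolves the path and hands back the inode number:
stat -c 'inode %i on device %d' /etc/hostname

# strace shows the path string being passed straight to the kernel in the syscall:
strace -e trace=file stat /etc/hostname > /dev/null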