Ubuntu :: List Directories By Number Of Inodes Used?
Mar 21, 2010
I recently used up all my free inodes on my server. A bunch of mail messages were sitting there using up a lot of them, so I cleared the postfix queue, which gave me some room. What I'd like to do is get a listing of the directories using the most inodes (or containing the most files), so that I can find the other culprits. Basically I want the output of "df -i", but run recursively on a specific directory.
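A minimal sketch of one way to do this, assuming GNU find and a bash shell; it counts every entry (each one costs an inode) under each top-level directory of the current location and sorts by the count. Hidden directories are skipped for brevity:
Code:
for d in */; do
    printf '%8d %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -rn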
I'm looking for a way to produce a list of all the directories in the current working directory, sorted by the total number of files contained within them.
Initially I thought that Nautilus could be used for this, but then I realised it doesn't count files in the subdirectories.
The best I've got for a command-line solution so far is this
Code:
The use case for this is a situation where a user has a quota applied to their home directory which limits the number of files they are allowed to have, and they have exceeded that limit.
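A hedged sketch, assuming GNU find; it counts regular files recursively under each directory of the current working directory (including hidden directories and names with spaces), sorted by count:
Code:
find . -mindepth 1 -maxdepth 1 -type d \
    -exec sh -c 'echo "$(find "$1" -type f | wc -l) $1"' _ {} \; |
sort -rn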
Each time I start my Ubuntu 10.10 machine, I notice these messages in dmesg:
[Code]...
Each time the inode number is different. I ran SMART tests on the disk and everything passed. Do I have to worry? Could it be related to an unclean shutdown? Update: I have just run an fsck at boot, but when I logged in, the same orphan_cleanup messages were in dmesg.
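One way to force a full filesystem check on the next boot, assuming an ext3/ext4 root on an Ubuntu 10.10-era system (the device name below is a placeholder, not taken from the question):
Code:
sudo touch /forcefsck     # schedule fsck of all fstab filesystems at next boot
# or check an unmounted partition directly:
sudo fsck -f /dev/sdXN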
I have an NTFS-formatted drive mounted in Debian and a script, run from a cron job, that creates directories on it. This was running flawlessly until recently, when I started getting 'mkdir: cannot create directory 'dirname': Operation not supported'. I have been unable to find any help with this issue online. I assume it is an NTFS mount issue. I know it isn't about file size, as the drive is 1TB. Here is my fstab entry for reference:
Code:
/dev/sdb1 /mnt/backupLocation ntfs-3g rw 0 0
Do you know what the issue is? Like I said, up until recently I was able to create seemingly unlimited directories here, but it now stops after about 7-10 directories have been created.
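One speculative first step: if the volume was marked dirty (for example by an unclean Windows shutdown), ntfs-3g can refuse some operations. A hedged sketch, assuming the ntfs-3g tools are installed and using the mount point from the fstab entry above:
Code:
sudo umount /mnt/backupLocation
sudo ntfsfix /dev/sdb1       # clears the dirty flag, schedules a Windows chkdsk
sudo mount /mnt/backupLocation
dmesg | tail                 # look for NTFS errors around the failed mkdir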
I'm looking for a shell script to loop through a number of directories and subdirectories, looking for files that contain a particular substring, and renaming each file by replacing the search string with a different substring. For example, imagine a directory full of folders containing digital photos (along with various other files which would need to remain unaffected), where the intent is to remove the "DSC_" prefix from several thousand files buried within. I've whipped up a rather long-winded solution that works well for this purpose but chokes on directory names with spaces. I am reasonably sure there's a two- or three-liner that would accomplish this exact same task.
function investigate {
    path=$1
    for file in `ls $1`    # this `ls` is what chokes on names with spaces
    #for file in *
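A space-safe sketch of the rename, assuming GNU find and bash; the "DSC_" prefix is the example from the question, and the null-delimited pipeline is what keeps names with spaces intact:
Code:
find . -depth -type f -name 'DSC_*' -print0 |
while IFS= read -r -d '' f; do
    base=${f##*/}                        # file name without its path
    mv -- "$f" "${f%/*}/${base#DSC_}"    # strip the DSC_ prefix
done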
I have a large music directory, and I'd like to generate a list of every sub-folder within it and then get that list into a spreadsheet format. Is there a way to do this?
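A simple sketch, assuming GNU find; the music path and output file name are examples. A plain list of paths, one per line, imports directly into any spreadsheet:
Code:
find ~/Music -mindepth 1 -type d -printf '%P\n' | sort > folders.csv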
I need a command to list the total sizes of all the directories on a mounted drive. I tried df and du. df lists the total size of the mounted drive; du, depending on what options I give it, either lists the total size or the sizes of every file on the drive. All I want is the sizes of all the directories on the mounted drive. This is a Windows Vista hard drive, and for some reason Ubuntu is reporting a 50GB partition with only 10GB free. I want to know what's taking up all the space. I can't find anything in the file browser; so far I've only managed to count up about 10GB of used space, so where is the other 30GB?
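One way, assuming the drive is mounted at /media/vista (an example path); this prints one cumulative total per top-level directory:
Code:
sudo du -h --max-depth=1 /media/vista | sort -h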
I have generated a list of directories that I would like to run ls and grep on, but it is not working. I am using the command
Code:
cat directories.dat | xargs ls
and I get a whole lot of these errors:
Code:
ls: cannot access ./foo/bar/baz/grault/*: No such file or directory
but when I try the directories manually, one at a time, I find that they all exist and all have files in them. The same thing happens if I try to grep anything. What is going wrong?
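One likely explanation, judging from the error text: the list contains glob patterns (note the trailing *), and xargs runs ls directly, with no shell, so the * is never expanded. A hedged workaround is to let a shell expand each line:
Code:
while IFS= read -r d; do
    ls -d -- $d        # deliberately unquoted so the shell expands any globs
done < directories.dat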
I am trying to write a script that picks the directory name from a list of files. Here is a detailed picture. I have a file named LIST which contains, for example:
/apps/oracle/product/test1
/apps/oracle/product/test2
/apps/oracle/product/test3
I need a script that reads these lines from LIST and creates folders: /backup/date/test1 after reading the first line, /backup/date/test2 after reading the second line, /backup/date/test3, and so on.
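A minimal sketch in bash, assuming "date" in the target path means the current date (the date format and the /backup root are assumptions):
Code:
today=$(date +%Y%m%d)
while IFS= read -r line; do
    mkdir -p "/backup/$today/$(basename "$line")"
done < LIST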
How do I write a short script that reads a text file containing a list of directory names and deletes everything in each one? There are 10,000 directories, so there is NO WAY I can do it manually.
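A hedged sketch, assuming the list file is named dirlist.txt with one absolute path per line; rm -rf is destructive, so dry-run it first by replacing rm with echo:
Code:
while IFS= read -r d; do
    [ -d "$d" ] && rm -rf -- "${d:?}"/*    # ${d:?} aborts on an empty line
done < dirlist.txt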
I need to, through a bash script, go through a given directory (given as argument 1) and list the relative path (including $1) of each subdirectory which contains files. Directories which contain only . and .. and possibly only subdirectories SHALL NOT be listed. It is this last requirement that makes it difficult for me.
I have been using the tree command so far, but I have not found an easy way to ignore paths to directories which contain only other subdirectories or nothing at all. I could of course test each directory after it is listed, but that adds an extra loop, and I believe it should be possible to do it directly when creating the list. I guess it should be possible using find or ls in conjunction with the tree command, or by itself, but I am not too conversant with nested script commands.
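A sketch, assuming GNU find: print the parent directory of every regular file, then de-duplicate. Only directories that directly contain at least one file survive, which matches the requirement above:
Code:
find "$1" -type f -printf '%h\n' | sort -u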
But when I install new updates for Ubuntu and restart my computer to apply those updates, a new Ubuntu entry with a different number appears in the list of operating systems that I have to choose from.
[Code]...
Does this mean that every time I update Ubuntu, I'll have an additional OS on my device?
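Those entries are usually just older kernel versions kept by GRUB, not extra operating systems. Old kernels can be removed with apt; the version number below is only an illustration, not one taken from this machine:
Code:
dpkg -l 'linux-image-*'                               # list installed kernels
sudo apt-get remove linux-image-2.6.35-22-generic     # example old version
sudo update-grub                                      # rebuild the boot menu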
Is there any Linux application for finding the folders with the largest number of files? baobab sorts folders by their total size; I'm looking for a tool that lists folders by the total number of files in them.
The reason I'm looking is that copying tens of thousands of small files is excruciatingly slow (much slower than copying a few large files of the same total size), so I want to archive or delete the folders with high file counts that slow down the copying (it won't speed things up now, but it will be faster when I need to move or copy the data again in the future).
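A command-line sketch, assuming GNU find; it counts how many files sit directly in each directory and shows the twenty busiest:
Code:
find . -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head -20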
I am trying to write a simple backup script in Python where I try to list the files that are 24 hours old in specific directories that I would choose. I read the manual of find and used
find . -mtime 1 > log.dat
to get the list of files into log.dat; however, I also get the path information for each entry in that list.
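Two hedged notes, assuming GNU find: -printf '%f\n' prints only the file name, with no path, and -mtime -1 (rather than 1) matches files modified within the last 24 hours, whereas -mtime 1 matches 24-48 hours ago:
Code:
find . -mtime -1 -type f -printf '%f\n' > log.dat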
In my project I cannot determine the number of checklist items initially; I will only know it dynamically during execution. How do I specify the number of checklist items dynamically in zenity?
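zenity --list --checklist takes its rows as ordinary command-line arguments, so a bash array built at run time works; a minimal sketch (the item values are examples):
Code:
items=("apple" "banana" "cherry")    # discovered at run time
rows=()
for i in "${items[@]}"; do
    rows+=(FALSE "$i")               # initial check state + label per row
done
zenity --list --checklist --column "Pick" --column "Item" "${rows[@]}"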
I am using the Quantum ESPRESSO package to get electron-phonon coefficients for monolayer graphene. While running one of the executables, I got the error: "At line 356 of file q2r.f90 (Unit 51 "a2Fq2r.51") Traceback not available: compile with -ftrace=frame or -ftrace=full. Fortran runtime error: bad real number in item 1 of list input"
Several people have said that those of us who are having problems with Ubuntu (10.04) should ask some specific questions. Here is one below which I cannot get an answer to and which never happens in Windows. Can any Ubuntu expert answer it for me? It would really restore my faith in Ubuntu (and I'd move on to the other problems I have with it). I think I am running out of inodes on my eeepc 701. It happened before when I was using Xandros, but now I am using Ubuntu 10.04. I get the following output:
Code:
df -i
Filesystem  Inodes  IUsed  IFree  IUse%  Mounted on
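To find which top-level directory is eating the inodes, a sketch that counts every entry per directory (each entry costs one inode), assuming GNU find:
Code:
for d in /*; do
    [ -d "$d" ] && printf '%8d %s\n' "$(sudo find "$d" -xdev 2>/dev/null | wc -l)" "$d"
done | sort -rn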
How can I list only directories that do not have any child directories?
Imagine a structure like:
/A
/A/AA
/A/AB
/A/AB/ABB
/B
/C
/C/CC
/C/CC/CCC
/C/CC/CCC/CCCC
I would like to use find to list only /A/AA, /A/AB/ABB, /B and /C/CC/CCC/CCCC.
The starting point would be find . -type d, but neither -mindepth nor -maxdepth helps here. Can -noleaf help (I could not get it to react the way I wanted)?
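On filesystems that keep classic Unix link counts (ext2/3/4 do; some, like btrfs, do not), a directory with no subdirectories has exactly two links (its own entry plus its . entry), so a hedged one-liner is:
Code:
find . -type d -links 2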
In the Linux bash shell, for a given directory, how can I list: the creation date of that directory, the number of files in that directory, and the number of subdirectories in that directory?
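A sketch assuming GNU coreutils and find; note that most Linux filesystems do not record a true creation time, so stat's %w (birth) field is often just "-", and ctime (%z) is the last metadata change, not creation:
Code:
dir=/some/dir                                          # example path
stat -c 'born: %w  changed: %z' "$dir"
find "$dir" -mindepth 1 -maxdepth 1 -type f | wc -l    # files
find "$dir" -mindepth 1 -maxdepth 1 -type d | wc -l    # subdirectories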
My server started acting flaky this weekend and my Webmin interface was throwing strange errors. I finally tracked it down to the fact that I was out of inodes on my primary partition. I'm fairly certain that the /tmp folder has an outrageous number of files in it. I can't do an ls on the directory because the console just sits there forever after I issue the command. I also tried an rm -rf on the /tmp directory and it did the same thing. How can I clear out this directory?
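For a directory with a huge number of entries, ls stalls because it reads and sorts everything before printing, and rm -rf builds a full listing too. A hedged approach that deletes entries as it scans, assuming GNU find:
Code:
find /tmp -xdev -mindepth 1 -delete    # streams deletions, no sorting, stays in /tmp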
I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10,000 files. Which filesystem should I use and how should I configure it? As far as I understand, the reason ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2) calls), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with: Create the filesystem on an SSD, because seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or is prohibitively expensive.
Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)? Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print. Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count greater than 1, but that's not a problem, since I have only a few dozen such files in my use case.
Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also, I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that? Use a filesystem which has an online defragmentation tool that can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel's in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
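For the "lock inodes in memory" idea, the closest standard knob I know of is vm.vfs_cache_pressure; it does not truly pin anything, but a low value strongly biases the kernel toward keeping the inode and dentry caches resident. A sketch, not a guarantee:
Code:
sudo sysctl -w vm.vfs_cache_pressure=1    # prefer keeping inode/dentry caches
ls -laR /media/myfs > /dev/null           # warm the cache once; later runs hit memory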
I managed to delete some files from the system. Now I need to recover them. I know the inode numbers (through ext3undel) and also the sizes.
Quote: Unfortunately, we cannot automatically obtain the name of a deleted file from Unix file systems - since the connection between the iNode (which holds the MetaData, including the file name) and the real data is dropped on deletion. However, we can obtain a list of names from the deleted files.
How can I use this information to recover the files? Also, can I search for text on a partition (where the file no longer exists)? I need the figures back.
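Two hedged sketches, assuming an ext3 partition at /dev/sdb1 (an example device) and a known inode number (12345 is a placeholder): debugfs can dump an inode's data blocks directly, and grep can search the raw device for known text:
Code:
# dump the contents of a known inode to a file (fails if its blocks were reused)
sudo debugfs -R 'dump <12345> /tmp/recovered.bin' /dev/sdb1

# search the raw partition for a known string; -a treats binary as text,
# -b prints the byte offset so the area can be extracted later with dd
sudo grep -a -b 'some unique phrase' /dev/sdb1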