My server started acting flaky this weekend and my Webmin interface was throwing strange errors. I finally tracked it down to the fact that I was out of inodes on my primary partition. I'm fairly certain that the /tmp folder has an outrageous number of files in it. I can't do an ls on the directory because the console just sits there forever after I issue the command. I also tried to do an rm -rf on the /tmp directory and it did the same thing. How can I clear out this directory?
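One approach that often works when a directory is too big to list: let find delete entries as it streams them, so nothing ever has to build the whole file list in memory. A minimal sketch, assuming GNU find and that you want to empty /tmp in place:

Code:
# delete everything under /tmp without listing it first;
# -xdev keeps find from crossing into other mounted filesystems
find /tmp -mindepth 1 -xdev -delete
# older find without -delete: stream names to rm in batches
find /tmp -mindepth 1 -xdev -print0 | xargs -0 rm -rf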
Several people have said that those of us who are having problems with Ubuntu (10.04) should ask some specific questions. Here is one below which I cannot get an answer to and which never happens in Windows. Can any Ubuntu expert answer it for me? It would really restore my faith in Ubuntu (and I could go on to the other problems I have with it). I think I am running out of inodes on my Eee PC 701. It has happened before when I was using Xandros, but now I am using Ubuntu 10.04. I get the following output:
Code:
df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
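To confirm where the inodes are going, it may help to compare df -i with a rough per-directory entry count, since every file or directory entry costs at least one inode. A sketch (the paths are examples):

Code:
# inode usage per filesystem
df -i
# approximate entry count per top-level directory, biggest first
for d in /*; do echo "$(find "$d" -xdev 2>/dev/null | wc -l) $d"; done | sort -rn | head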
I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use and how should I configure it? As far as I understand, the reason ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with:

Create the filesystem on an SSD, because seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive.

Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?

Use find /media/myfs on ext2, ext3 or ext4 instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all the file sizes as well, which find /media/myfs doesn't print.

Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count of more than 1, but that's not a problem, since I have only a few dozen such files in my use case.

Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also, I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?

Use a filesystem which has an online defragmentation tool that can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel's in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
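On the /proc and sysctl idea: as far as I know there is no switch that pins inodes outright, but vm.vfs_cache_pressure controls how eagerly the kernel reclaims the dentry and inode caches, and setting it very low keeps them resident after a warm-up pass. A hedged sketch (1 is an example value, not a recommendation; 0 can starve the rest of the system of memory):

Code:
# discourage reclaim of cached dentries/inodes (default is 100)
sysctl -w vm.vfs_cache_pressure=1
# warm the cache once; later runs should be served from memory
ls -laR /media/myfs > /dev/null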
I recently used up all my free inodes on my server. I had a bunch of mail messages sitting there using up a lot of them, so I cleared the Postfix queue. That gave me some room. What I'd like to do is get a listing of the directories using the most inodes (or containing the largest number of files), so that I can find the other culprits. Basically, I want the output of "df -i", but recursively on a specific directory.
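There's no recursive mode for df -i, but since each file or directory entry consumes an inode, counting names per subdirectory gives the same ranking. A sketch, run from the directory you suspect:

Code:
# inode (entry count) ranking of immediate subdirectories, biggest first
for d in */; do echo "$(find "$d" | wc -l) $d"; done | sort -rn | head -20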
Each time I start my Ubuntu 10.10, I notice these messages in dmesg:
[Code]...
Each time the inode number is different. I ran SMART tests on the disk, and all went fine. Do I have to worry? Could it be something related to a wrong shutdown? Update: I have just run an fsck at boot, but when I logged in, the same orphan_cleanup message was in dmesg.
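Orphan-inode cleanup at every boot usually means the filesystem is never unmounted cleanly, and a boot-time fsck on the root filesystem can be skipped or limited. One way to check it thoroughly, assuming the root filesystem is on /dev/sda1 (adjust for your disk), is from a live CD with the partition unmounted:

Code:
# from a live/rescue environment, with the filesystem NOT mounted
e2fsck -f /dev/sda1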
I managed to delete some files from the system. Now I need to recover them. I know the inode number (through ext3undel) and also the size. Quote: Unfortunately, we cannot automatically obtain the name of a deleted file from Unix file systems - since the connection between the inode (which holds the metadata, including the file name) and the real data is dropped on deletion. However, we can obtain a list of names from the deleted files. How can I use this information to recover the files? Also, can I search for text on a partition (the file doesn't exist any more)? I need the figures.
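Two hedged sketches, assuming an ext3 filesystem on /dev/sda1 and inode number 123456 (both placeholders): extundelete can restore by inode number on an unmounted partition, and grep -a can search the raw partition for known text even when the file is gone:

Code:
# restore a deleted file by inode (partition must be unmounted)
extundelete /dev/sda1 --restore-inode 123456
# search the raw partition for a known phrase; -b prints the byte offset
grep -a -b 'some phrase you remember' /dev/sda1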
The Linux file system uses file path notation to abstract how data is accessed. A path must ultimately be converted to an inode on behalf of the application, so what is the name of the application or daemon that converts the path name to an inode?
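For what it's worth, path-to-inode translation is done inside the kernel's VFS layer (the namei/lookup code), not by a user-space daemon or an environment variable. You can watch a path being resolved component by component with the namei utility, and read inode numbers with ls -i or stat:

Code:
# show each component of the path as it is resolved
namei /var/www/html
# print the inode number a path resolves to
ls -i /etc/passwd
stat -c '%i %n' /etc/passwd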
I am configuring named and have stumbled upon the following problem: named is serious about user rights; every config file named uses should be owned named:named. I set the rights to named:named as follows, but they get changed to root:named when I restart named as root. The same thing happens with the SELinux context. This results in access-denied type errors.
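A sketch of the usual fix, assuming the config lives at /etc/named.conf (it may be under a chroot on your setup) and the stock BIND SELinux policy is installed; restorecon resets the context to whatever the policy says it should be:

Code:
# reset ownership and SELinux context on the config
chown named:named /etc/named.conf
restorecon -v /etc/named.conf
# check what named actually sees
ls -lZ /etc/named.conf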
I've recently installed CentOS 5, because I needed a good OS to run VMware Server. Just a heads-up: I'm not very familiar with RH/CentOS distros; I usually use Arch Linux. My VMware install went fine, and the config is standard settings. Now I'm trying to access the VMware Infrastructure Web Access using ports 8222 (http) and 8333 (https), but it's a no-go. I'm connecting from another machine on the LAN, as the CentOS box is headless. I restarted the VMware services, and they seem to be launching fine. I don't know much about VMware though. I verified with netstat that ports 8222 and 8333 are listening.
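Since the ports are listening locally, the usual suspect on a stock CentOS 5 box is the firewall. A quick check, plus a hedged temporary rule (not persistent across reboots):

Code:
# confirm the listeners and look for REJECT rules
netstat -tlnp | grep -E '8222|8333'
iptables -L -n
# temporarily allow the VMware web ports
iptables -I INPUT -p tcp --dport 8222 -j ACCEPT
iptables -I INPUT -p tcp --dport 8333 -j ACCEPT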
I have searched Google for a couple of days, and I keep hearing about an inode limit on filesystems, but that doesn't seem to be the case here. Now whenever I try to download something, watch a ..... video, or listen to Pandora radio, it just stops playing after 2 seconds. Downloading says "No space left on device". I also get the error as root. I do have 5% and more of my HDD space free. After reading the similar posts I checked all of this, so if I am overlooking something on the forum, I apologize for an extra post about it.
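"No space left on device" with plenty of blocks free is the classic symptom of inode exhaustion, which you can confirm by comparing the two views of df:

Code:
df -h    # block usage - may look fine
df -i    # inode usage - look for IUse% at 100%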
This is my first post; I hope I'm in the right place. I installed mysql, mysql-server, php-mysql, perl-DBD-mysql and libdbi-dbd-mysql via "yum install -y" on a server running CentOS 5.3 x86_64. The install completes successfully with no errors. Once I start mysqld via "chkconfig --level 35 mysqld on" and "service mysqld start", there are no errors in /var/log/mysqld.log, netstat shows mysqld listening on 3306, and localhost is in /etc/hosts.
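If the trouble is connecting to that mysqld, a couple of hedged checks (the bind-address line only exists if someone added it to my.cnf):

Code:
# try a TCP connection explicitly, rather than the unix socket
mysql -h 127.0.0.1 -P 3306 -u root -p
# see whether mysqld is restricted to localhost or sockets only
grep -i 'bind-address\|skip-networking' /etc/my.cnf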
I am trying to install CentOS-DS on version 5.4 x86_64. I cannot get to the Extras repo due to lack of wired Internet access. I have wireless (but not on the server), and I have big UFD drives.
I have an MSI K9A2 Platinum mobo, which has a built-in Realtek 8111B 10/100/1000 Ethernet chip, a D-Link DIR-655 router and a DSL modem. Compared to Windows Vista and other Linux distros (Fedora 11, Suse 11.1, Mandriva 2009.1), access to the internet is much slower. When running CentOS 5.3 there is a noticeable delay before internet access kicks in each time I am surfing the web or updating my system.
Is there any way I can speed things up, or determine why CentOS 5.3 seems so much slower?
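A per-connection delay before pages start loading is very often slow DNS resolution rather than slow bandwidth. A quick way to separate the two (the hostnames are just examples):

Code:
# time the name lookup by itself
time host google.com
# time a whole fetch; compare against the lookup time
time curl -s -o /dev/null http://www.google.com/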
I can't seem to load anything in the CGI bin; it gives me a 403 Forbidden for the cgi-bin folder, and when I try to load hello.pl it won't load, it just shows a blank document. Permissions are set to 755, and I re-uploaded the files via FTP too.
Here's my httpd.conf:
Code:
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
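For a 403 plus a blank page, the usual missing pieces on CentOS 5 (Apache 2.2) are an ExecCGI directory block for the aliased path and a valid shebang line in the script. A sketch of the stanza, assuming the paths above:

Code:
<Directory "/var/www/cgi-bin">
    Options +ExecCGI
    Order allow,deny
    Allow from all
</Directory>

After restarting httpd, it may also be worth checking that hello.pl starts with something like #!/usr/bin/perl and was uploaded in text mode; Windows line endings on the shebang line commonly cause CGI scripts to fail like this.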
I can see a CD-RW when I go to Computer, and when I click on it I get the error "Unable to mount media: there may be no media in the drive", and then I can see that CD-RW plus a DVD-RW as well. I have a DVD and CD writer, yet I am unable to mount them, even though they both show up in the device manager as their correct models. Because of that, when I try to install some software I get errors such as:
"file:///media/cdrecorder/repodata/repomd.xml: [Errno 5] OSError: [Errno 2] No such file or directory: '/media/cdrecorder/repodata/repomd.xml' Trying other mirror. file:///media/cdrom/repodata/repomd.xml: [Errno 5] OSError: [Errno 2] No such file or directory: '/media/cdrom/repodata/repomd.xml'
I am still somewhat new to Linux, having recently switched from Mandriva, where I didn't need to do anything special, but I am thinking maybe I am missing some drivers.
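One way to see what is actually failing is to mount the disc by hand and read the kernel's complaint. A sketch, assuming the drive is /dev/cdrom (often a symlink to /dev/hdc or /dev/scd0):

Code:
mkdir -p /media/cdrom
mount -t iso9660 /dev/cdrom /media/cdrom
# if it fails, the kernel usually says why:
dmesg | tail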
I have a directory in /var/www/html with 777 permissions. When I try to access it from another machine (on the same network - local), I get a permissions error. I thought that if I opened it up recursively (chmod -R 777 directory_name), I would be allowed to see the pages...
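If the 777 bits are right but Apache still refuses, two things worth checking are the execute bits on the parent directories and the SELinux context; files copied into the web root often carry the wrong type. A sketch (directory_name is the placeholder from above):

Code:
# every parent needs at least o+x for the web server to descend
ls -ld /var /var/www /var/www/html /var/www/html/directory_name
# check and, if enforcing, fix the SELinux type
getenforce
chcon -R -t httpd_sys_content_t /var/www/html/directory_name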
CentOS 5.4 on an HP 380 G4 server, fresh install. I can ssh into the server from within the company, and I can access the "website" on 192.168.1.110 (the server's static IP). From the command line on the server I can ping google.com. What I cannot do is reach the server from outside the company; on the router I have forwarded port 22 and www to 192.168.1.110.
I use No-IP for my dynamic IP forwarding; this is all set up correctly.
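To tell whether the forwarded packets ever reach the box (a router problem) or reach it and get dropped (a firewall problem), watch the wire while someone connects from outside. A sketch, assuming the interface is eth0:

Code:
# watch for inbound connection attempts on the forwarded ports
tcpdump -n -i eth0 'port 22 or port 80'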
I have CentOS 5.5, which had full access to the internet until I did something to it that I can't figure out. This is not a cable issue, since I can ping and ssh to other machines within my LAN. Apache is also running fine, which is probably irrelevant. When I type traceroute cnn, it simply hangs, later saying [URL]: Temporary failure in name resolution, Cannot handle "host" cmdline arg '[URL]' on position 1 (argc 1). This is a desktop install of CentOS 5.5; when I go to Administration - Network, it says eth0 is active, and the primary and secondary DNS are set correctly to my Comcast DNS servers at 68.94.15x.1
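"Temporary failure in name resolution" points at DNS rather than routing. Two hedged checks - what the resolver is actually configured to use, and whether an outside server answers (8.8.8.8 is just a well-known public resolver):

Code:
# which nameservers is the resolver really using?
cat /etc/resolv.conf
# does DNS work when we pick the server explicitly?
dig @8.8.8.8 cnn.com
# does raw IP connectivity still work?
ping -c 3 8.8.8.8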
Is it possible to use yum in any way when a computer is not connected to the internet (but full install media is available, for example the DVD)? Or should I resolve all dependencies manually using rpm?
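Yes - yum can treat the mounted DVD as a repository, so it still resolves dependencies for you. A sketch, assuming the disc mounts at /mnt/cdrom and carries its repodata at the top level (true for CentOS media); "some-package" is a placeholder:

Code:
mkdir -p /mnt/cdrom
mount /dev/cdrom /mnt/cdrom
cat > /etc/yum.repos.d/dvd.repo <<'EOF'
[dvd]
name=CentOS install media
baseurl=file:///mnt/cdrom/
enabled=1
gpgcheck=0
EOF
# use only the DVD repo so yum never touches the network
yum --disablerepo='*' --enablerepo=dvd install some-package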
I set up a FreeNX server and set up a client on the same machine, and now when I go to my logs it crashes; Bugzilla saves before it can crash. When I go to send the info, the page says Bugzilla won't work and gives a 999 code - do I need a newer GNOME?
Where is the root access log located in CentOS 5? I am running a server with WHM/cPanel, and I need to check the IP addresses logged over the past 4 weeks. If there isn't such a log file, how can I create one?
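On a stock CentOS 5 box, SSH logins land in /var/log/secure (rotated weekly, so four weeks back is usually still there as secure.1 through secure.4), and login history is kept in wtmp, which the last command reads. A sketch:

Code:
# SSH logins, with source IPs
grep 'Accepted' /var/log/secure*
# login history for root, including source host/IP
last root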
I have my CentOS server behind a Linksys router. I'd like to access the router's web interface to open and close certain ports. How do I access the router's interface?
Prior to this, I had a Windows machine: I could type the router's IP, 192.168.1.1, into the browser and get access to the interface.
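Since the CentOS box is on the same LAN as the router, you can either probe the router's pages from the server itself, or tunnel a graphical browser through the server from another machine. Sketches, assuming the router is still at 192.168.1.1 and "youruser@server" stands in for your real login:

Code:
# from the CentOS box: check that the web interface answers
curl -I http://192.168.1.1/
# from a desktop elsewhere: tunnel to the router via the server,
# then browse http://localhost:8080/
ssh -L 8080:192.168.1.1:80 youruser@server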
I'm planning to use a virtual CentOS box for web development (to use the same software as on the real server). I configured Samba to allow root guest access to /var/www/, but it doesn't let me into /var. chmod 777 doesn't help. Nevertheless, I have full access to /sbin and /etc.
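chmod not helping is a classic SELinux symptom: Samba is only allowed to export directories labeled for it. Two hedged options - relabel just the tree you share, or flip the broad boolean that lets Samba export anything read/write:

Code:
# option 1: label the shared tree for Samba
chcon -R -t samba_share_t /var/www
# option 2: allow Samba to export any directory (broad; use with care)
setsebool -P samba_export_all_rw on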