Slackware64 13.0. I have a website that has been migrated to a new server. After a few days I'm still pointing at the old server, and the hosting company has recommended that I flush my DNS cache. A quick Google for this on Linux seemed to recommend restarting the nscd daemon, but nscd -g tells me it's not running anyway. I connect to Virgin Media via wired Ethernet through a Belkin N1 router, so I wonder if there is a way I can do this, either via Virgin Media or Slackware?
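For what it's worth, without nscd running there is usually no system-wide DNS cache on Slackware, so the stale answer is more likely cached upstream (the Belkin router or Virgin Media's resolvers). A minimal way to check, assuming dig from bind-utils is installed and example.com stands in for the actual site:

Code:
# invalidate nscd's hosts cache only if the daemon is actually running
nscd -g 2>/dev/null && nscd -i hosts
# see which resolver you are actually asking
cat /etc/resolv.conf
# query a public resolver directly to bypass the router's cache
dig example.com @8.8.8.8

If the direct query already returns the new address, the stale entry lives in the router or ISP resolver, and rebooting the router or waiting out the record's TTL is what remains.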
I have recently upgraded my Debian unstable system to a new custom 2.6.34 kernel. The system is unresponsive, and top shows two processes (flush-8:0 and flush-8:16) taking up most of the CPU time.
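On 2.6.32+ kernels the flush-&lt;major&gt;:&lt;minor&gt; threads are the per-block-device writeback threads, so they spin when a lot of dirty pages are queued for disk. A quick way to see whether that is the case, with illustrative sysctl values (not tuned recommendations, and not persistent across reboot):

Code:
# how much dirty data is waiting to be written back
grep -E 'Dirty|Writeback' /proc/meminfo
# start background writeback earlier, in smaller bursts
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10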
Is it possible to remove the "flush" flag when mounting removable disks in KDE4 without recompiling KDE4? Can it be done in some config file(s)? Thanks!
Policy is to back up MySQL with mysql-zrm. However, at a certain stage it hangs forever, at the "flush logs" step. I tried this manually and it gave the same result, even after restarting MySQL and the host. After some googling and experimenting I found that "flush tables with read lock" gives the same result. The tables seem to be MyISAM. I did a mysqldump on one server and restored it on a test VM with the same config, and flush logs still hangs there too. I also tried changing some configuration directives, but with the same result.
Edit: BTW, I checked the logfile (/var/log/mysqld.log) and didn't find anything.
Edit 2: I also ran myisamchk -s *.MYI in all directories with DB files (actually did it with a find command) and it did not return anything, so the data files seem OK.
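One common cause of this symptom is a long-running query keeping a table open, which blocks FLUSH LOGS and FLUSH TABLES WITH READ LOCK indefinitely. A minimal check from a second session while the flush hangs, assuming root credentials:

Code:
# list all threads; look for one stuck in "Flushing tables" and for
# whatever long-running query it is waiting on
mysql -u root -p -e 'SHOW FULL PROCESSLIST;'
# if it is safe to do so, kill the blocking query by its Id:
# mysql -u root -p -e 'KILL <id>;'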
I'm doing a bit of housekeeping and tidying up of programs I no longer need or use, and as some of these were installed using sbopkg I thought I would also tidy those. However, when attempting to view obsolete sources via the utilities sub-menu, sbopkg seems to crash from the ncurses interface to the CLI and continuously spews the error below. This then continues until I Ctrl+C the process, which of course leaves the sbopkg pid file still showing active in /var/run. Has anyone else come across this, and a possible way to prevent it?
/usr/sbin/sbopkg: cannot make pipe for command substitution: Too many open files
stty: standard input: Bad file descriptor
/usr/sbin/sbopkg: line 569: read: read error: 0: Bad file descriptor
stty: standard input: Bad file descriptor
/usr/sbin/sbopkg: redirection error: cannot duplicate fd: Too many open files
I don't understand this error, nor do I know how to solve the issue that is causing it. Anyone care to comment?
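The "Too many open files" messages suggest the shell's per-process file-descriptor limit is being exhausted by all the command substitutions. A sketch of a workaround, assuming the default limit is the culprit (the pid file path below is a guess; adjust it to whatever sbopkg actually leaves behind):

Code:
ulimit -n          # show the current per-process fd limit
ulimit -n 4096     # raise it for this session, then retry
sbopkg
rm -f /var/run/sbopkg/sbopkg.pid   # clean up the stale pid file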
Quote:
Error: Caching enabled but no local cache of //var/cache/yum/updates-newkey/filelists.sqlite.bz2 from updates-newkey
I know, JohnVV: "Install a supported version of Fedora, like Fedora 11." This is on a box that has all 11 releases of Fedora installed. It's a toy and I like to play around with it.
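That particular error usually means yum was run in cache-only mode (-C) without a populated local cache. A minimal fix sketch, run with network access to the (old) repos:

Code:
yum clean metadata   # drop the stale local metadata
yum makecache        # repopulate the cache from the repositories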
I was laughing about klackenfus's post with the ancient RH install, and then work had me dig up an old server that has been out of use for some time. It has some proprietary binaries installed that intentionally try to hide files to prevent copying (and we are no longer paying for support, nor do we have the install binaries), so a clean install is not preferable.
Basically it has been out of commission for so long that the apt-get upgrade download is larger than the /var partition (apt caches to /var/cache/apt/archives).
I can upgrade the bigger packages manually until I get under the threshold, but then I learn nothing new. So I'm curious: can I redirect apt's cache to a specified folder, either on the command line or via a config setting?
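Apt's archive directory can indeed be pointed elsewhere. A sketch of both approaches, where /mnt/bigdisk/apt-archives is a hypothetical path with enough free space (apt also expects a partial/ subdirectory inside it):

Code:
mkdir -p /mnt/bigdisk/apt-archives/partial
# one-off, on the command line
apt-get -o dir::cache::archives=/mnt/bigdisk/apt-archives upgrade
# or persistently, via a config fragment
echo 'Dir::Cache::Archives "/mnt/bigdisk/apt-archives/";' \
    > /etc/apt/apt.conf.d/99cache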
I installed Squid on my Ubuntu Server 10.10 box and it works fine, but I want to know how to make it cache all file types, like .exe, .mp3, .avi, etc. The other thing I want to know is how to make my clients fetch files from the cache at full speed, since I am using a MikroTik system to provide PPPoE for the clients, paired with my Ubuntu Squid box.
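Which file types get cached is controlled by refresh_pattern rules and the object-size limits in squid.conf. A sketch with illustrative values only, not a tuned config; note that whether cache hits reach clients at full speed is governed by the MikroTik queue rules rather than by Squid itself:

Code:
# append example caching rules to squid.conf, then restart squid
cat >> /etc/squid/squid.conf <<'EOF'
maximum_object_size 512 MB
refresh_pattern -i \.(exe|mp3|avi|zip|iso)$ 10080 90% 43200
EOF
/etc/init.d/squid restart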
I have an HP printer on my Lenny box which has worked for some years, but I don't remember what method I used to install it, so that is one piece of the puzzle I can't see. Like I said, though, the printer works. One day I accidentally printed more than I had paper in the printer, and then out of frustration I stacked up a lot of print jobs in the queue. So now, whenever I reboot the PC, Lenny wastes paper by printing the jobs that got stuck in the printer queue. This weird behavior is not very environmentally friendly.
So next time this happens, how do I flush the printer queue so Lenny doesn't remember what happened before the reboot? I followed these instructions earlier, but it only swapped one weird behavior for another, so it didn't work for my Lenny, and I couldn't find any better solutions on the Internet. [URL]...
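With CUPS (the usual print system on Lenny), queued jobs can be thrown away from the command line. A sketch, where Lenny_HP is a placeholder for the real queue name reported by lpstat -p:

Code:
cancel -a                 # cancel every job in every queue
lprm -P Lenny_HP -        # or clear just one queue ("-" means all jobs)
cupsdisable Lenny_HP      # stop the queue so nothing prints after reboot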
I believe I have unwanted ' characters left in a 9-element character array that are causing subsequent operations on it to fail. I see wildly differing views on the web about the proper way to flush them; it's clearly not as simple as it would appear at first sight. What's currently the best (or else "least deprecated") method?
I was looking for a way to stop my menus from taking a few seconds to load their icons when I first open them, and found a few guides suggesting the gtk-update-icon-cache command. But with the Any Colour You Like (ACYL) icon theme I'm using (stored in my home folder's .icons directory) I kept getting a "gtk-update-icon-cache: The generated cache was invalid." error. I used the built-in facility in the ACYL script to copy the icons to the /usr/share/icons directory and tried the command again, this time using sudo gtk-update-icon-cache --force --ignore-theme-index /usr/share/icons/ACYL_Icon_Theme_0.8.1/, but I still get the same error. I tried with several of the custom icon themes I've installed, and only one of the first 7 or 8 successfully created the cache.
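One possible cause of "The generated cache was invalid" is a missing or unreadable index.theme in the theme directory, since the generated cache is validated against the theme's contents. Worth checking before anything else:

Code:
# the cache tool expects a readable index.theme alongside the icons
ls -l /usr/share/icons/ACYL_Icon_Theme_0.8.1/index.theme
# rebuild without skipping the index this time
sudo gtk-update-icon-cache --force /usr/share/icons/ACYL_Icon_Theme_0.8.1/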
Poking around reveals that one Dave Chinner, Principal Engineer, SGI Australian Software Group, submitted a series of patches to deal with preventing the flushing of stale inodes. I'm trying to figure out just how relevant those patches are to the stability of XFS, especially on pre-Lucid Ubuntu systems: are they in danger of data loss? Some quick search results thrown together in an effort to assess this issue: [URL]... Quote:
Dave Chinner (9):
  xfs: Don't flush stale inodes
  xfs: Ensure we force all busy extents in range to disk
  xfs: reclaim inodes under a write lock
  xfs: Avoid inodes in reclaim when flushing from inode cache
Why does my virtual machine freeze when I flush iptables rules? I have installed virtual machines three times, and every time I flush iptables on the host, the virtual machine freezes. What can be the issue? Is it with the host installation or something else?
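A frequent culprit: if the chains' default policies are DROP, flushing the rules removes the ACCEPT rules that were carrying the VM's bridged/NATed traffic, so the guest appears to freeze. A safer flush sequence, as a sketch:

Code:
# make the policies permissive *before* flushing
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F
iptables -t nat -F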
I am attempting to write a backup script that will do the following:
1) lock and flush tables on a MySQL DB
2) dump the DB to a file
3) unlock the tables
4) rsync the file to offsite storage
It all seems to be going well. However, I obviously don't want to set up passwordless SSH to the storage server on another network as the root user, so I am attempting to su as the backup user inside the script. But when I run the script, everything happens as it should until I try to su; then it jumps out of the script, asks me to log in as the backup user, proceeds to rsync to the offsite storage, and then resumes executing the script. It is not going to be set up as a cron job; it will be executed manually. Assuming that is the case, how can I get the script to run without prompting for a password?
Here is what I've come up with so far, assuming the script is run as root and the identity of the backup user needs to be assumed inside the script without pestering the user for the backup user's password.
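A sketch of the whole flow, assuming key-based SSH has been set up once for the backup user (ssh-keygen as backup, then ssh-copy-id backup@storage), which is what removes the password prompt. dbname, backup, storage:/backups/, and MYSQL_ROOT_PW are placeholders, not names from the original script:

Code:
#!/bin/bash
DUMP=/tmp/db-$(date +%F).sql
# mysqldump's table locking flushes and locks the MyISAM tables for a
# consistent dump, then releases them when the dump finishes
mysqldump --lock-tables -u root -p"$MYSQL_ROOT_PW" dbname > "$DUMP"
# run the transfer as the backup user; the SSH key does the authenticating
su - backup -c "rsync -av '$DUMP' storage:/backups/"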
On Windows I have used XBBrowser, which provides a custom version of Firefox suited to using Tor. XBBrowser provides a button, "flush Tor circuit", which will set up an entirely new connection and exit node. I am wondering how to do the equivalent thing on Linux. All I can do is restart Tor, which does not seem to make any difference.
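Tor itself accepts a NEWNYM signal over its control port, which is essentially what that button sends. A sketch, assuming ControlPort 9051 is enabled in torrc with no authentication configured (adapt the AUTHENTICATE line if you use a password or cookie):

Code:
# ask the running tor daemon for a fresh circuit
printf 'AUTHENTICATE ""\r\nSIGNAL NEWNYM\r\nQUIT\r\n' | nc 127.0.0.1 9051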
My computer is literally doing nothing, yet 61% of my RAM is being used as cache. I don't know if I created a swap space correctly; I loaded up GParted and I see that I do have a 2GB partition labeled linux-swap. Why am I completely out of RAM? I have 4GB, by the way.
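Memory used as cache is not "out of RAM": the kernel hands cached pages back the moment a program asks for them, so only the "-/+ buffers/cache" figure reflects real memory pressure. To check both that and the swap:

Code:
free -m      # the "-/+ buffers/cache" row is what applications use
swapon -s    # confirms the 2GB swap partition is actually enabled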
I am using Fedora 11 64-bit. The problem is, when I start to install any updates or new packages, the system returns the following error: "No package cache is available. The package list needs to be rebuilt. This should have been done by the backend automatically."
Cannot retrieve repository metadata (repomd.xml) for repository: fedora. Please verify its path and try again. I am completely new to Linux, meaning I have never, ever used a Linux-based system before. I have only recently accustomed myself to the basic commands (ls, yum, help, g++, etc.).
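A common first step for both messages is rebuilding the package cache and checking that the mirror is even reachable, as a sketch:

Code:
yum clean all    # throw away all cached metadata and packages
yum makecache    # rebuild the cache from the repositories
ping -c3 mirrors.fedoraproject.org   # rule out a plain network problem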
I've been troubled by the high amount of RAM used by my Ubuntu desktop. Before I installed Ubuntu 10.04 (desktop, 64-bit) my system used around 500MB of RAM with no applications open (just a cold boot into GNOME under 9.10). But since I installed Lucid, I've noticed that my used memory is reported as 1GB or more as soon as I log into the system.
I want to figure out what is using all the extra RAM, but I can't seem to find the culprit. I looked at all of the processes and the numbers just don't add up. I exported the list of processes into a file and summed up the memory used by every process in a spreadsheet. The total came to around 700MB. Yet both System Monitor and "free" reported at the time that the system was using over 1GB of memory. This means that at least 300MB of RAM is used, but not by any process, at least as reported by "ps".
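Part of that gap is expected: kernel-side allocations (slab caches, page tables, buffers) never show up in any process's RSS, and summing per-process RSS also double-counts shared pages. A quick comparison of the two views:

Code:
# kernel memory that belongs to no process
grep -E '^(Slab|SReclaimable|Buffers|Cached|PageTables)' /proc/meminfo
# per-process view sorted by resident size
ps aux --sort=-rss | head -15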
I was wondering if there is a way to know which files are in the Linux page cache. I've searched, but the only thing I've found is meminfo. Is there anything out there that can give more detail about what is being cached?
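meminfo only gives totals; per-file residency needs a helper that calls mincore() on each file's pages. The third-party vmtouch tool (built from source, not in the stock repos of that era) is one such option; a sketch of its use:

Code:
# show how much of a given file is resident in the page cache
vmtouch -v /var/log/syslog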
When I type arp -n it shows nothing. If I type arp -v it shows: skipped: 0, found: 0. I want to look at the IP and MAC addresses in my ARP cache. If I change my Ethernet IP address, it is not reflected or stored in the cache.
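The ARP cache is only populated by actual traffic to hosts on the local segment, which would explain the empty output; it also only maps other hosts' addresses, so your own interface's IP/MAC never appears in it (ip addr show reports those). A sketch, where 192.168.1.1 stands in for the real gateway:

Code:
ping -c1 192.168.1.1   # generate some local traffic first
arp -n                 # now the neighbour should be listed
ip neigh show          # the iproute2 equivalent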