Whenever I plug in my 8 GB USB pendrive to copy a large file, it takes ages. I'm using Ubuntu 9.10 for AMD64. At first the file copies at high speed, because it's not actually being written to the USB drive but to the cache; at around 500-700 MB the write cache fills up and flushes, *while* the rest of the file is still being copied. Since the system is trying to access the USB drive in two different places at the same time, the transfer speed drops, often to 200-300 KB/s. The whole process, instead of taking 2 minutes, takes an hour.
This continues until about 1.4 GB, when the cache finishes flushing; the copy can then continue smoothly and the last 200-300 MB are copied at quick speeds again. I ran a test: I copied a 1.6 GB file to the USB drive and waited for the cache to fill up. Once that happened, I started copying another file of the same size to the drive, and as soon as the second copy started, I cancelled the first one. Cancelling it could take 1-2 minutes, but once it was cancelled, the second copy went amazingly fast, since the pendrive no longer had any access contention.
However, that means that for the trick to work I can't copy four 1.6 GB files to the 8 GB pendrive, only three. The fourth one would need an hour, unless I start copying it just before the first three finish, so that the write cache is still not in use, or something like that. So what I want is that every time I plug in this specific USB drive (identified by its label or UUID), the automount parameters disable write caching for it. Is that feasible? Using fstab, or how?
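One way to sketch this is an fstab entry keyed to the drive's UUID with the `sync` mount option, which pushes writes straight to the stick instead of buffering them (the UUID and mountpoint below are placeholders; note that `sync` can increase wear on some flash drives and slows the nominal top speed):

```
# /etc/fstab — example entry for one specific pendrive.
# "sync" bypasses the write cache, so the copy dialog shows the real
# transfer rate from the start instead of stalling at the flush.
UUID=0000-0001  /media/pendrive  vfat  sync,noauto,user  0  0
```

With `noauto,user` set, the drive isn't mounted at boot but any user can mount it, and automounters that consult fstab should pick up the options for that UUID.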
I want to be able to easily mount and unmount my pendrives, and I want them to be automounted as soon as they are plugged in. However:

- The Device Notifier plasmoid does not automount.
- The Device Notifier with automount plasmoid is not being developed anymore (AFAIK), and I haven't been able to install it.
- Whenever I mount a pendrive with the Device Notifier and write something to it with Thunderbird, I can't unmount it afterwards.
- I don't know how to get ivman working.
- Halevt is not in the repositories.

Therefore, I tried the following messed-up alternative:

1) I placed the following script in the autostart folder:
I don't understand this error nor do I know how to solve the issue that is causing the error. Anyone care to comment?
Quote:
Error: Caching enabled but no local cache of //var/cache/yum/updates-newkey/filelists.sqlite.bz2 from updates-newkey
I know JohnVV. "Install a supported version of Fedora, like Fedora 11". This is on a box that has all 11 releases of Fedora installed. It's a toy and I like to play around with it.
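The error above typically appears when yum is asked to run entirely from its local cache (the -C flag, or cache=1 under [main] in /etc/yum.conf) before that cache has been populated. A sketch of the two ways out (for an end-of-life release like this, the repo baseurls may also need pointing at the Fedora archive mirrors, which is a separate problem):

```
# /etc/yum.conf — make sure cache-only mode is off:
[main]
cache=0
keepcache=1

# ...or repopulate the local cache once, so cache-only runs can work:
#   yum makecache
```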
I was laughing about klackenfus's post with the ancient RH install, and then work had me dig up an old server that has been out of use for some time. It has some proprietary binaries installed that intentionally try to hide files to prevent copying (and we are no longer paying for support, nor do we still have the install binaries), so a clean install is not preferable.
Basically, it has been out of commission for so long that the apt-get upgrade download is larger than the /var partition (apt caches to /var/cache/apt/archives).
I can upgrade the bigger packages manually until I get under the threshold, but then I learn nothing new. So I'm curious: can I redirect apt's cache to a specified folder, either on the command line or via a config setting?
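Apt's download location is configurable via `Dir::Cache::archives`. A sketch, with the target path as an example (apt expects a `partial` subdirectory inside it, so create that first):

```
# One-shot, on the command line:
#   mkdir -p /home/aptcache/partial
#   apt-get -o Dir::Cache::archives=/home/aptcache upgrade
#
# Or permanently, in /etc/apt/apt.conf (or a file under /etc/apt/apt.conf.d/):
Dir::Cache::archives "/home/aptcache/";
```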
I installed Squid cache on my Ubuntu Server 10.10 and it works fine, but I want to know how to make it cache all file types, like .exe, .mp3, .avi, etc. The other thing I want to know is how to make my clients fetch files that are in the cache at full speed, since I'm using a MikroTik system to provide PPPoE for the clients, and I've matched it with my Ubuntu Squid box.
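A sketch of the relevant squid.conf directives (the size limits, durations, and extension list are examples to tune, not recommended values):

```
# Allow large downloads into the cache at all; the default
# maximum_object_size is far too small for .iso/.exe files.
maximum_object_size 500 MB

# Keep binaries and media around longer than Squid's default freshness
# heuristics would: min 10080 minutes (1 week), max 43200 (30 days).
refresh_pattern -i \.(exe|mp3|avi|zip|iso)$ 10080 90% 43200
```

Cache hits are served from the Squid box's disk, so they aren't limited by the upstream link; whether clients see them at "full speed" then depends on the PPPoE rate limits configured on the MikroTik side.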
I was looking for a way to stop my menus from taking a few seconds to load their icons when I first open them, and found a few guides suggesting the gtk-update-icon-cache command. But with the "any colour you like" (ACYL) icon theme I'm using (stored in my home folder's .icons directory), I kept getting a "gtk-update-icon-cache: The generated cache was invalid." error. I used the built-in facility in the ACYL script to copy the icons to the /usr/share/icons directory and tried the command again, this time using sudo gtk-update-icon-cache --force --ignore-theme-index /usr/share/icons/ACYL_Icon_Theme_0.8.1/ but I still get the same error. I tried with several of the custom icon themes I've installed, and only 1 of the first 7 or 8 successfully created the cache.
My computer is literally doing nothing, and 61% of my RAM is being used as cache. I don't know if I created a swap space correctly. I loaded up GParted and I see that I do have a 2 GB partition labeled linux-swap. Why am I completely out of RAM? I have 4 GB, btw.
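Memory "used as cache" is not lost: the kernel parks otherwise-idle RAM in the page cache and hands it back the moment applications ask for it. You can read both the cache figure and whether the swap partition is actually enabled straight from /proc/meminfo, e.g.:

```shell
# Page cache is reclaimable; "61% used as cache" is not the same as
# being out of RAM.
cached_kb=$(awk '/^Cached:/ {print $2; exit}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2; exit}' /proc/meminfo)
echo "page cache:      ${cached_kb} kB"
echo "swap configured: ${swap_kb} kB"   # 0 means the swap partition is not enabled
```

If swap shows 0, the partition exists but was never activated; `sudo swapon /dev/sdXN` (plus an fstab entry) turns it on.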
I've just installed Ubuntu 10.10 amd64, and can't get DVD Creator to work. I just slide in a blank DVD-R, copy/paste the files onto it, and click "write on disc", the usual way... It does it, and then goes on with a checksum calculation that takes almost as long as the copy itself. And then it says the copy failed, and sure enough, all the files are corrupted.
PS: I don't know how it happens, but with every new release of Ubuntu they break something with UDF DVDs. 10.04 couldn't read UDF DVDs until the RC stage, a few days before the actual release (they'd broken /etc/fstab). 10.10 couldn't read UDF in the early beta stages. And now, even after release, it still can't burn data DVDs.
I've been troubled by the high amount of RAM used by my Ubuntu desktop. Before I installed Ubuntu 10.04 (desktop, 64-bit), my system used around 500 MB of RAM with no applications open (just a cold boot into GNOME under 9.10). But since installing Lucid, I've noticed that my used memory is reported as 1 GB or more as soon as I log into the system.
I want to figure out what is using all the extra RAM, but I can't seem to find the culprit. I looked at all of the processes and the numbers just don't add up. I exported the list of processes to a file and summed up the memory used by every process in a spreadsheet. The total came to around 700 MB. Yet both System Monitor and "free" reported at the time that the system was using over 1 GB of memory. This means that at least 300 MB of RAM is used, but not by any process, at least as reported by "ps".
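Part of that gap is memory the kernel itself holds, which no process "owns" and ps never attributes to a PID; shared libraries also get counted once per process, skewing spreadsheet totals. A quick look at the kernel-side consumers (a sketch, not a complete accounting):

```shell
# Kernel memory that per-process tools don't show: slab caches,
# page tables, I/O buffers, kernel stacks.
awk '/^(Slab|PageTables|Buffers|KernelStack):/ {printf "%-13s %8d kB\n", $1, $2}' /proc/meminfo
```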
I was wondering if there is a way to know which files are in the Linux page cache. I've searched, but the only thing I've found is meminfo. Is there anything out there that can give more details about what is being cached?
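One per-file option is `fincore` (an assumption about your tooling: it ships with newer util-linux, so it won't be on older systems; the third-party `vmtouch` does the same job). A small demonstration:

```shell
# Show how much of a given file is resident in the page cache.
f=$(mktemp)
head -c 16384 /dev/zero > "$f"
cat "$f" > /dev/null      # read the file, pulling its pages into the cache
fincore "$f"              # columns: resident bytes, pages, file size, name
rm -f "$f"
```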
I had occasion to boot an Ubuntu live CD yesterday on a Dell machine with XP installed. I pressed the Escape key during boot to see what might be happening behind the graphics... and got a rather upsetting surprise: apparently Ubuntu was reading and writing to the XP partition, without any notification to me. I don't have the exact wording to hand, but what it said was that the XP install was 'dirty' and it was fixing it... fixed! I used a 9.10 CD. Is there any information out there that addresses this behaviour? Is there a way to boot the CD and prevent this from happening? I do not expect or want ANY live CD to write to ANY partition on ANY PC I might boot with it, without my explicit permission/instruction. That this live CD did write to a partition without asking or informing me is sufficient cause for me never to use such a live CD again.
I've rolled a custom version of the live CD, and everything's going great with it. I was asked to make it a bootable USB stick as well. This was simple enough: I installed UltraISO on my Windows box and wrote the ISO to a USB stick through the 'Bootable' menu. But for some reason it boots into a slightly less pretty version of GRUB. That's all well and good, but now the language selection menu is gone. Is there any way to get that menu in the less pretty GRUB? At the very least, is there a way to hard-code the language, so certain people have sticks with certain languages?
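Hard-coding the language should be possible by appending boot parameters to the kernel line in the stick's bootloader config. A sketch (the file names, paths, and the Spanish locale here are examples; they vary with how the stick was built):

```
# e.g. in the stick's grub.cfg (or syslinux .cfg) kernel entry:
linux /casper/vmlinuz boot=casper locale=es_ES.UTF-8 console-setup/layoutcode=es quiet splash --
initrd /casper/initrd.lz
```

With `locale=` and the keyboard layout fixed on the kernel line, the live session skips asking, so one stick per language is just one edited config file per stick.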
I've got a dedicated server set up to run the Deluge daemon, and I access it through the web UI. I can add and even delete torrents just fine, but when I go into the options and try to change values like upload speed limits etc., the changes don't take. I'm assuming this is a permissions issue, but I don't know what files or directories to change.
I have a flash card that has been corrupted, and I have obtained a working .img file. I was told that I could not just drag and drop the .img onto the card, and that it needs to be written bit by bit to the new card. I am unfamiliar with the Linux terminal and would like to know what command to use, or how to get the .img file written to it. The .img file is currently residing on the desktop.
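The usual tool for a bit-for-bit write is dd. A sketch, assuming the image is called card.img and the card shows up as /dev/sdX (a placeholder: check the real device name with `lsblk` or the tail of `dmesg` right after inserting the card, because writing to the wrong device destroys its contents):

```shell
# Write the image block-for-block to the whole card device (not a
# partition like /dev/sdX1), then force everything out to the hardware.
sudo dd if=~/Desktop/card.img of=/dev/sdX bs=4M conv=fsync
sync
```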
I'm trying to convert some of my old Pidgin logs to Empathy's format, and I've run into an issue I can't figure out. The Empathy log files contain a "token", which is a SHA-1 hash of the user's avatar at the moment the message was sent. The avatars are stored (in my case) in ~/.cache/telepathy/avatars/butterfly/msn. All filenames contain the SHA-1 hash and have no extension (such as jpg/png). Some avatars use only the hash as the filename, but some have the prefix "_3" before the hash. Does anyone know why some avatars are prefixed with "_3"?
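For matching log tokens to avatar files, you can hash the cached files and compare against the token (this is a sketch of the matching step only, using the cache path from above; it says nothing about where the "_3" prefix comes from):

```shell
# Print "<sha1>  <filename>" for every cached avatar; a token from the
# logs should equal one of the digests, regardless of any "_3" prefix
# on the filename itself.
sha1sum ~/.cache/telepathy/avatars/butterfly/msn/* 2>/dev/null
```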
I've been advised to "clear the Java cache", and to do so under Windows, one would open the Java control panel. Of course, I'm using OpenJDK, and not Sun's Java.
Can anyone tell me how I would clear the java cache?
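With OpenJDK, the applet/Web Start cache is handled by IcedTea rather than Sun's control panel. A sketch, assuming IcedTea-Web's common on-disk cache locations (these vary by version, so check which one actually exists on your system before trusting this):

```shell
# Remove IcedTea's deployment cache (older and newer XDG locations):
rm -rf ~/.icedtea/cache ~/.cache/icedtea-web/cache
# Some javaws builds also honor the Sun-compatible clear-cache switch:
javaws -Xclearcache 2>/dev/null || true
```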
I am running Ubuntu 10.10 Server Edition as an apt-cacher server. However, we have both 10.04 and 10.10 in our office. I have imported the CD cache of Ubuntu 10.04, and it is working fine. Now I want to import the CD cache of Ubuntu 10.10; for that, do I have to set up a different server, or can I import the CD cache into the same server in a different location? If in a different location, how do I import the CD cache?
This is a curiosity of mine, and I expect a technical answer, if someone knows it. Why do systems become so unresponsive when doing hard-disk input and output? This happens even if the writing is done to a secondary disk where neither the system nor the swap is stored.
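There's no single cause (I/O scheduler behaviour, dirty-page writeback stalls, and contention on shared buses and locks all play a part), but one practical mitigation is running the bulk transfer in the idle I/O scheduling class, so interactive processes win the disk. A sketch with example paths (the idle class takes full effect under the CFQ scheduler):

```shell
# Run a heavy copy at idle I/O priority: it only touches the disk
# when nothing else is queued for it.
ionice -c 3 cp /path/to/bigfile /mnt/secondary/
```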
I'm writing a script that unzips an archive, edits it, and zips it back up.
That being said, I've got it all figured out except for the zipping it back up.
Code: unzip file.zip -d /tmp/foobar ##regexes###
Now, when I get to zipping the files back up, I don't want to include the /tmp/foobar directory I created. I want it to have the same tree as before.
I've gotten two commands to come close, but not what I want.
Code: zip -r /tmp/file.zip /tmp/foobar/*
- this zips up the folders and files within /tmp/foobar/; however, it includes the /tmp/foobar tree.
Code: zip -rj /tmp/file.zip /tmp/foobar/*
- this gets rid of the directories; however, it gets rid of ALL the directories, and doesn't preserve any within /tmp/foobar.
Now, by default zip will leave out the /tmp/foobar prefix if you're in that directory and running zip from there. But since this is a script, I don't want to be limited in where I can run it, or lower my standards by including change-directory lines in my script. It seems to me like I should be able to one-liner it.
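A subshell gives exactly that one-liner: the cd happens only inside the parentheses, so the script's own working directory is untouched, while zip sees paths relative to /tmp/foobar:

```shell
# Zip the contents of /tmp/foobar without the /tmp/foobar prefix,
# runnable from anywhere; "." also picks up hidden files that the
# original /tmp/foobar/* glob would miss.
( cd /tmp/foobar && zip -r /tmp/file.zip . )
```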
I've used the ext2ifs drivers to mount my ext3 partition in WinXP, but I don't have write access; it's mounted in read-only mode, and I didn't check the read-only box during the installation of the drivers. [URL] It's a straightforward process, so I don't understand what I did wrong. I'm using a fresh XP install with (more or less) all the updates, and Ubuntu 10.04. Also, the partition is mounted at /home, so I don't know if that makes any difference.