I created a VM disk image with kvm-img, but I forget what the maximum size of that disk image was when I created it. Currently its size is 6.2 GB. I want to install some large packages in that VM, so I want to make sure the disk image can expand to an adequate size.
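A hedged way to check this, assuming the image is called vm.img (a placeholder name) and that kvm-img is the distribution's wrapper around qemu-img:

    # "virtual size" is the maximum the guest can ever use; "disk size" is the current on-disk footprint
    kvm-img info vm.img
    # or, with the upstream tool name:
    qemu-img info vm.img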
I like ordering my images by date modified, but Eye of GNOME only lets you view them in alphabetical order.
Important features for me are:
- Going through items with the arrow keys.
- Zooming in and out with the mouse wheel.
- Being able to sort by modification date, type, or name.
- Being able to right-click and open with either another window of the same viewer or with another viewer.
- Having a simple interface.
So far, I've tried:
Eye of GNOME - I love how simple it is, and if it weren't for this sorting problem I'd keep using it. Well, that and the fact that you can't right-click and open an image in a separate Eye of GNOME window while continuing to scroll through images in the current window.
gThumb - Damn. So close to being a winner. I can't move between images with the arrow keys or zoom in and out with the mouse wheel, but I can sort by modification date, it's simple, and it can open another window of the same viewer. Those first two points are also important for me, though.
F-Spot - A little too cluttered when opening a single image. I don't really need to see a top panel with the other images, even if it's nice. I can go through images with the arrow keys and zoom in and out, but there's no sorting by modification date.
Shotwell - Shotwell's viewer is pretty fast and simple, but it has a lot of flaws for me: it can't sort by modification date, can't zoom in and out with the mouse wheel, and can't open an image in another window while viewing it. At least it's simple and I can navigate with the arrow keys.
I started using gThumb for viewing images and it's perfect except for one thing. Sometimes I have many pictures in a folder and I want to interrupt the slideshow to browse the net or send an email. The problem is that after pressing F5, the slideshow reverts to the first picture in the set and I have to find where I was up to, which can be very time-consuming. Since gThumb is now at version 2.12.0, surely there's a setting for this, but try as I might, I can't find it.
When I am deleting pictures using the gThumb image viewer, it asks "The selected images will be moved to the Trash, are you sure?", and if I press the "Yes" button it moves them to ~/.Trash. Can it be configured to move them into the "real" trash instead? I created a symbolic link, which solved part of the problem, but the "Restore" option is unavailable for files that were moved to the trash this way.
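For context, a hedged note: modern GNOME tools follow the freedesktop.org trash spec, which keeps the trashed files and their restore metadata in separate subdirectories, and that metadata is exactly what the "Restore" option needs, which would explain why files dropped straight into the directory through a symlink can't be restored:

    # Spec-compliant trash location used by Nautilus and friends
    ls ~/.local/share/Trash/files   # the trashed files themselves
    ls ~/.local/share/Trash/info    # per-file .trashinfo records that "Restore" relies on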
When I try to delete a file (move it to the trash), it says "The trash has reached its maximum size! Clean the trash manually." But when I click on the trash icon on the desktop, it is empty. Where is the trash? Where can I delete these files?
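A hedged way to see where the space actually is, since older applications use the legacy ~/.Trash while the desktop trash icon looks at the freedesktop.org location:

    # Compare the legacy and the spec-compliant locations
    du -sh ~/.Trash ~/.local/share/Trash 2>/dev/null
    # Files trashed on other partitions end up in a .Trash-<uid> directory at that
    # partition's root, e.g. under /media/<disk>/.Trash-1000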
This seems like a relatively simple question, but the answer seems to elude everyone: what is the maximum size of a Linux loopback device (not counting any filesystem-specific limitations)? Is it the maximum size of a Linux block device?
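A hedged way to probe this empirically with a sparse backing file, assuming /dev/loop0 is free:

    # Create a deliberately huge sparse file (uses almost no real disk space) and attach it
    dd if=/dev/zero of=huge.img bs=1 count=0 seek=4T
    sudo losetup /dev/loop0 huge.img
    # Ask the block layer how big the resulting device is, in bytes
    sudo blockdev --getsize64 /dev/loop0
    sudo losetup -d /dev/loop0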
Is there any maximum limit to heap memory allocation? My program is in Perl and I am using a Solaris system. When I ran "pmap pid" (pid = my process ID), it showed a number of heap memory mappings, all of them gigabytes in size. This single process is eating up most of the physical memory. Is this normal, and is there any way to get the heap memory size down?
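A hedged look at the numbers, assuming Solaris 10-style tools: Perl generally does not hand freed heap memory back to the OS, so the footprint tends to reflect the process's peak usage rather than a leak.

    # Per-mapping breakdown; the [ heap ] lines are what malloc/Perl have grown
    pmap -x $pid
    # Virtual vs. resident size for the whole process
    ps -o vsz,rss,args -p $pid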
Ubuntu 10.04, xsane 0.996, Brother MFC 240c scanner. I just finished writing a long dissertation on my problem with this scanning environment (which I will spare you). In a nutshell, the resulting image, when printed, is smaller than the original document. In writing my dissertation for this post, I determined that the cause of the issue is that xsane believes I am scanning an 8.5 x 14 inch document when I am in fact scanning an 8.5 x 11 letter. So the question is: can I change the size to 8.5 x 11, and if so, how? I have not found anything in the xsane Preferences.
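As a cross-check outside xsane, a hedged sketch with the SANE command-line front end, assuming the Brother backend exposes the standard geometry options:

    # List the backend's options, including the legal -x/-y scan-area ranges
    scanimage --help
    # Force a US-letter scan area: 8.5 x 11 in = 215.9 x 279.4 mm
    scanimage -x 215.9 -y 279.4 --resolution 300 > letter.pnm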
First of all, I'm booting from a large MEMDISK (900 MB) using PXE. Due to our environment, I cannot decrease the size, nor move files to an NFS/iSCSI/... environment. Everything needs to be in that MEMDISK.
Now, when I try to run the OS, I run out of vmalloc space. How do I increase it to a value that allows such a large image to be mapped? I tried the parameter "vmalloc=1280M", but with that parameter I don't get past the "Booting the kernel" screen.
Memory should not be an issue, since the machine(s) have at least 2 GB of RAM (900 MB MEMDISK + 256 MB for other kernel stuff + 768 MB for user stuff). The machine(s) have a Pentium 4 Extreme Edition processor with hyperthreading and SSE2, but no EM64T.
How can I boot the system and get past that message? Decreasing the MEMDISK size is not possible either; it is already the smallest we can get with our userland + kernel + modules.
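Before tuning further, it may help to confirm how big the vmalloc arena actually is on this 32-bit kernel (on stock 32-bit x86 it is roughly 128 MB, far less than a 900 MB image) and whether the parameter reached the kernel at all; a hedged check:

    # VmallocTotal is the size of the kernel's vmalloc address space;
    # VmallocUsed and VmallocChunk show usage and the largest free block
    grep -i vmalloc /proc/meminfo
    # Confirm the vmalloc= parameter is actually on the running kernel's command line
    cat /proc/cmdline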
How do I change the size of the available shared memory on Linux? Evidently 4 GB is not enough for what I am doing (I need to load a lot of data into shared memory; my machine has 8 GB of RAM).
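Which knob matters depends on the API in use; a hedged sketch covering both the System V and the POSIX cases (the 6 GiB values are examples only):

    # System V (shmget/shmat): raise the per-segment and total limits
    sudo sysctl -w kernel.shmmax=$((6 * 1024 * 1024 * 1024))        # bytes
    sudo sysctl -w kernel.shmall=$((6 * 1024 * 1024 * 1024 / 4096)) # pages
    # Make it permanent by adding kernel.shmmax / kernel.shmall to /etc/sysctl.conf

    # POSIX shm_open / /dev/shm is a tmpfs: remount it with a larger ceiling
    sudo mount -o remount,size=6G /dev/shm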
I'm using FC14 and the SG driver to test some SCSI (SAS) targets. In doing so, I'm bumping up against what appears to be a 512 KB maximum transfer size per command. Transfers up to 4 MB sometimes work, but often they result in ENOMEM or EINVAL returned from the write() function in the SG driver. I could not find any good documentation on how the SCSI subsystem in Linux works, so I've been studying the source for the drivers in drivers/scsi.
I see that there is a scsi_device struct that contains a request_queue struct, which contains a queue_limits struct, which contains an element called max_sectors. The SG driver seems to use this to limit the size of the reserve buffer it is willing to create. I see that there are several constants used to initialize max_sectors to 1024, which would result in the 512 KB limit I see (with targets having 512-byte sectors). At this point I have several questions:
1) When the open() function for the sg driver gets called, who initializes the scsi_device struct with the default values?
2) Can I merely change the limits struct to arbitrary values after initialization and cause the SG ioctls to set the reserve buffer to allow greater values?......
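For experimenting without touching the driver source, the same queue limits are exposed through sysfs (the device names below are placeholders), and the sg driver's reserve-buffer default has its own knob:

    # Current soft limit and the hardware ceiling for the request queue, both in KiB
    cat /sys/block/sdb/queue/max_sectors_kb
    cat /sys/block/sdb/queue/max_hw_sectors_kb
    # Raise the soft limit (it cannot be set above max_hw_sectors_kb)
    echo 4096 | sudo tee /sys/block/sdb/queue/max_sectors_kb
    # Default reserve-buffer size used by the sg driver, in bytes
    cat /proc/scsi/sg/def_reserved_size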
I'm wondering what the maximum heap space a process can use is (not necessarily from a single malloc()) on Ubuntu x86_64. Which parameter determines this size?
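A hedged sketch of the knobs that usually decide this in practice, namely the address-space rlimit and the kernel's overcommit policy:

    # Per-process ceiling on total virtual address space ("unlimited" by default on most installs)
    ulimit -v
    # Overcommit policy (0/1/2) and the ratio used when policy 2 enforces a commit limit
    cat /proc/sys/vm/overcommit_memory /proc/sys/vm/overcommit_ratio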
The aim of this script is that when the folder reaches 20 MB, attributes will be set on that particular folder so that no new files or folders can be created in, or copied to, that sample folder. Whenever I copy a file larger than 20 MB to that folder, it gets copied fully and only then are the attributes applied. I don't want this to happen: when the folder reaches its maximum, the current write operation to that folder should be stopped automatically with an error.
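A minimal polling sketch of what the post describes, assuming the directory lives on an ext3/ext4 filesystem where chattr +i works and that /path/to/samplefolder is a placeholder. Note that any after-the-fact size check can never abort a copy that is already in progress, which is why a quota or a fixed-size filesystem is the usual way to get a hard stop mid-write.

    #!/bin/bash
    # Poll the directory and make it immutable once it exceeds 20 MB (hypothetical paths)
    dir=/path/to/samplefolder
    limit_mb=20
    while sleep 5; do
        used=$(du -sm "$dir" | awk '{print $1}')
        if [ "$used" -ge "$limit_mb" ]; then
            chattr -R +i "$dir"   # block further creation/modification; undo with chattr -R -i
            break
        fi
    done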
I have set up a Squid server. My cache directories have been set up with the following statements:
cache_dir ufs /Cache1/squid 10000 16 256
cache_dir ufs /Cache2/squid 10000 16 256
Now the problem is that the size of /Cache1 and /Cache2 has reached about 8 GB, and in the near future it will reach the maximum limit of 10 GB. I just want to know whether or not I need to delete the contents of these directories.
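For reference, a hedged reading of that directive's fields; in the normal case Squid starts evicting old objects on its own as a cache_dir approaches its configured ceiling (governed by the cache_swap_low/cache_swap_high watermarks), so manual deletion shouldn't be needed:

    # cache_dir <type> <directory> <Mbytes> <L1> <L2>
    # 10000 is the per-directory ceiling in MB; 16 and 256 are the first- and
    # second-level subdirectory counts, not size limits.
    cache_dir ufs /Cache1/squid 10000 16 256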
Slackware includes gqview, a picture viewer that has not been maintained since 2006. Geeqie is a fork of gqview, and its first stable version has been out for about a month; it seems to be stable and good. So what about removing gqview and adding Geeqie in the next Slackware?
I'm using Ubuntu 10.04 and I want to know whether there is any software (or something else) that can resize my images from around 2 MB each down to a few hundred KB. My image folder is 614 MB and I want to put the images from that folder on my mobile, so how can I reduce the size of that folder?
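A hedged sketch using ImageMagick (package imagemagick), writing the shrunken copies into a separate folder so the originals stay untouched; the paths and the 1024-pixel bound are examples only:

    cd ~/Pictures
    mkdir -p small
    # Shrink anything larger than 1024 px on its longest side and recompress to JPEG quality 80
    mogrify -path small -resize '1024x1024>' -quality 80 *.jpg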
After screwing up an update to Ubuntu 11.04, I decided to do a clean install. I tried downloading the AMD64 DVD image of 11.04, but I have found that some of the files cannot be downloaded and appear to have a bad file size. On several mirrors and repositories I found the image size to be only 46.1 MB! (Yes, that's megabytes, not gigabytes. I FTP'd to the repositories/mirror sites and confirmed this.) Yet many of the HTTP pages show it as 4.0 GB. I can't believe that the true size is 46.1 MB, as the i386 DVD image is over 3 GB. 4.0 GB sounds right, but doesn't match the actual file size. So, how long until it gets updated?
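Before blaming the mirror, a hedged check of whether the download itself is truncated, by comparing it against the checksums published alongside the image (the exact filename below is an assumption):

    # Size in bytes of what was actually downloaded
    ls -l ubuntu-11.04-dvd-amd64.iso
    # Compare against the MD5SUMS file published next to the image on the mirror
    md5sum ubuntu-11.04-dvd-amd64.iso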
I need a command that will check the size (pixel dimensions, not file size) of every image in a folder and its subfolders, and make a copy (or even better, a hard link) of that file in a second directory if the image is larger than 1920x1080 pixels (in both dimensions, not just total area). Also, lots of these file names have spaces, so the command needs to be space-tolerant.
I'm guessing I would need to use one of the imagemagick commands and find, but I'm not sure where to start. I'm still reading man pages, but I thought someone here might save me some time.
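A hedged starting point along those lines, using find -print0 to stay space-tolerant and ImageMagick's identify to read pixel dimensions; the directory names are placeholders:

    #!/bin/bash
    # Hard-link images that are at least 1920x1080 into a second directory
    src=/path/to/photos     # placeholder source tree
    dst=/path/to/large      # placeholder destination (same filesystem if hard links are wanted)
    mkdir -p "$dst"
    find "$src" -type f \( -iname '*.jpg' -o -iname '*.png' \) -print0 |
    while IFS= read -r -d '' f; do
        read -r w h < <(identify -format '%w %h' "$f")   # pixel width and height
        if [ "$w" -ge 1920 ] && [ "$h" -ge 1080 ]; then
            ln "$f" "$dst/$(basename "$f")"               # use cp instead if dst is on another filesystem
        fi
    done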
I've been working on getting another OS installed on my computer for one of my classes (OS-specific assembly instructions). To get this OS running, I had to start using GPT rather than an MBR partition table. I backed up my Ubuntu partition (ext4) using the old-fashioned dd command. I've since been able to get everything working again after a dd restore.
The problem is that my original Ubuntu partition was only about 50 GB, and the dd image only takes up 40 GB. After I restored the image to the new drive (146 GB), gparted reports 119 GB used and only 26 GB free. What can I do to reduce the size of my install to 40 GB again?
When looking at the disk in baobab, it says that the filesystem is only 47.2 GB and that only 20.9 GB has been used. This is likely what the old partition's breakdown was. So my new question is: how can I make the filesystem capacity (47.2 GB) equal that of the partition it is on (146 GB)?
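A hedged sketch of the usual fix: the dd restore brought the old 47 GB ext4 filesystem along with it, so the filesystem has to be grown to fill the new partition, e.g. from a live CD with the partition unmounted (/dev/sdXN is a placeholder for the actual partition):

    # Check the filesystem first, then grow it to fill the partition it lives in
    sudo e2fsck -f /dev/sdXN
    sudo resize2fs /dev/sdXN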
I've tried installing the BBC iPlayer desktop, and am currently working my way through the maze of the Adobe AIR installer, etc.
I wondered if it is possible to view the downloaded programs in a different viewer. I've got both Movie Player and VLC media player installed, but they will both only play for half a second before the program closes.
Is it possible, or must I persevere with the BBC Desktop iPlayer?