Is there any scanner that can scan multiple pages at a time, without human intervention, and then put all the scanned pages into one TIFF file or into multiple GIF, PNG, or JPG files? It needs to work with Linux. I need a make and model.
I'm looking for an easy solution (one-click install) to scan multiple pages into one PDF file on openSUSE 11.2 with GNOME. Most tools only support scanning one page to PDF. gscan2pdf looks like a solution, but I can't get it running on 11.2 (missing dependencies, e.g. perl-forks).
Notwithstanding that I might not want to do this because of the large file size, how would I scan a multi-page document into one image file (probably a JPEG)? The only way I know is to make separate JPEGs of every page and then manually paste the images below each other in a photo editor, which of course is a lot of trouble.
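For what it's worth, the SANE command-line tools can usually handle the batch part of all three questions above, with ImageMagick doing the combining. A rough sketch, assuming a SANE-supported scanner with a document feeder (the device name and the --source value are placeholders that vary by backend):

    # Pull every sheet from the feeder into numbered PNM files.
    scanimage -d your-device --source ADF --resolution 300 --batch=page_%03d.pnm

    # Combine into one multi-page TIFF, one PDF, or one tall JPEG:
    convert page_*.pnm pages.tiff
    convert page_*.pnm pages.pdf
    convert page_*.pnm -append pages.jpg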
I installed Squeeze on my Eee PC 1015PED and downloaded the correct firmware-brcm80211 drivers, but every time I scan for my network using iwconfig wlan0 scan or wicd, my computer completely freezes. I previously had a solid install running xmonad, and wicd was working like a charm (using the same Broadcom driver), but I tinkered too much with it and decided to do a fresh install. I haven't quite run into a problem like this before.
My needs require the ability to scan multiple documents each day. Ever since 10.04, my HP J4580 All-in-One scanner has worked progressively worse. With 10.10 it completely quit working. As is so often the case, the HP J4580 All-in-One works flawlessly in Windows, so I know it is strictly an Ubuntu-related problem. Do a search of my threads and you will see that I have received very little help with this issue; that is another problem I have been having. I have reverted back to 9.10 and all my scanning issues are resolved. I have followed several threads offering solutions with no success. I know the issue is with the removal of xsane from Ubuntu; versions after xsane 0.996 are when the problem surfaced. On my system (desktop) I have been greatly disappointed with the last two releases. Firefox has even started crashing randomly in 10.10. I tire of spending so much time trying to correct these issues. I will not quit using Ubuntu, but I will not be using the latest two releases for a while. 9.10 worked great and I will keep it until support ends. You get what you pay for, so I will not blast Canonical or the software developers. I would hope that previously working hardware would not lose functionality with the release of a new OS version.
I have an interesting issue with Simple Scan. When I scan a single-page document, the exposure is not very good: there are dark and light areas scattered over the entire document. Of course, one cannot adjust these settings in Simple Scan as one can with xsane. However, I noticed that if I scanned a multiple-page document, all pages after the first seemed more consistent and the exposure was fairly good (exposure here referring to all the aspects accessible in xsane, such as brightness, contrast, gamma, etc.).
So I tried a little experiment. I scanned a single-page document as if it were a multiple-page document, by scanning it several times in the same session. The same thing occurs as when scanning a multiple-page document: the first page is mottled, but all pages after it are much better scans. What is happening here?
I have a file which contains the data I retrieved through prstat, and an array that contains all the unique process IDs from that file. I want to compare every line in the file with every element of the array, so that I can create a separate file for each value in the array.
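A shell sketch of one way to do that (the array values and the file name prstat.out are placeholders; it assumes the PID is the first whitespace-separated field of each line):

    pids=(1234 5678 9012)    # placeholder PIDs
    for pid in "${pids[@]}"; do
        # Copy every line whose first field matches this PID into its own file.
        awk -v p="$pid" '$1 == p' prstat.out > "prstat_$pid.txt"
    done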
I have a Lenny RAID-5 setup, and I use rsnapshot to occasionally back up the RAID to an old external hard drive that I otherwise leave unplugged. That external drive went bad, so I figured I should set up its replacement with encryption and keep it off-site when not in use.
While googling and coming across LUKS, I came across a post that mentioned the drive should be filled with random data before setting it up for encryption, and that offered the following as a potentially good-enough alternative to the time-intensive dd /dev/random route: badblocks -c 10240 -s -w -t random -v /dev/sdx
I was unfamiliar with badblocks, and after another Google session I came across a post which noted that the duration of the scan is an important factor, as well as the result.
I sshed into the NAS and was about to run badblocks, first in read mode and then in write mode, but then I thought about how long it would take. I was going to use something similar to the following: nohup badblocksstuff &
My question: is there a way I can append a time-to-complete value to the resulting nohup.out?
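One way that should work (a sketch, not something I've timed myself): wrap the badblocks run in a shell that uses the time builtin, and append both output streams to nohup.out explicitly:

    nohup bash -c 'time badblocks -c 10240 -s -w -t random -v /dev/sdx' >> nohup.out 2>&1 &

The time summary goes to that shell's stderr, which the redirection sends into nohup.out along with the badblocks progress output.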
Are there any reliable ways to print LibreOffice or OpenOffice drawings across multiple pages? I have two drawings I designed for 11" x 22" sheets, and I'd like to print one of them onto four, or at most six, 8 1/2" x 11" sheets, without reduction.
So far, when I've tried this using "print to multiple sheets," the results have been erratic, and I have basically wasted paper and ink with nothing usable.
I can't even preview this.
I am using the old trick of creating a duplicate file, creating four duplicate slides within it, making each slide one group, and using the alignment settings to make each slide match one corner of the original image. It works, but it is kinda clunky.
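An alternative worth trying (hedged, since I haven't tested it on these drawings): export the drawing to PDF from Draw, then tile the PDF across letter-size sheets with pdfposter at its original scale; the file names below are placeholders:

    # -m sets the output media size, -s 1 keeps the drawing at 100% scale.
    pdfposter -m letter -s 1 drawing.pdf tiled.pdf

The result should be a multi-page PDF where each page holds one printable slice of the 11" x 22" original.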
I'm looking for a program that will digitally display the time in three different cities - all showing at once. I don't care if it is a panel applet or stand-alone. I'm using Suse 11.3 and Gnome.
I have a large image that I want to print over 4 pages, each page showing 1/4 of the overall image, which I will paste together. I'm doing this from GIMP on an up-to-date FC12 system. Searching around, I find that there is a "scale" field in the print dialog and in the lp command that CUPS supports, and according to the documentation, if I set scale to "200%" it should do what I want. However, when I set scale to 200%, I get only one page containing the upper-left 1/4 of the image, and then nothing. How do I get it to print the remaining 3 pages?
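A workaround I'd try instead of fighting the scale field (a sketch; file names are placeholders): slice the image into four equal tiles with ImageMagick and print each tile as its own page:

    # Cut the image into a 2x2 grid: tile_0.png ... tile_3.png
    convert big_image.png -crop 2x2@ +repage tile_%d.png

    # Print each tile scaled to fill its own page.
    for f in tile_*.png; do lp -o fit-to-page "$f"; done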
Unusual problem here. I'm posting in hardware as I'm not certain where it fits. Our home laptop (mainly my wife's) was upgraded last week from 11.3 x86 to 11.4 x64, and pretty much everything is awesome. However, when she attempts to print a PDF with multiple pages per sheet (study notes),
I've got a bunch of PDF files that have been produced with two "real" pages on a single PDF page; I'd like to chop these in half and put each half on a separate page. Essentially, I need something that does the exact opposite of pdfnup (or psnup). Google and apt-cache search are giving me no love. The platform is Linux, and open source is preferred; as I've got a great pile of these to do, something that can be scripted (as opposed to a GUI) would be nice, so I can just give it a list of them and have it chew away. A pre-existing script isn't the only option, either; if there's sample code to manipulate PDFs in similar ways with a third-party library, I can probably hack it into doing what I want.
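One scriptable possibility, if it's packaged for your distro (a sketch I haven't run against your files): mutool from the mupdf tools has a poster subcommand that cuts each page into tiles, which is effectively the reverse of pdfnup:

    # Cut every page of every PDF vertically in two, doubling the page count.
    for f in *.pdf; do
        mutool poster -x 2 "$f" "split_$f"
    done

If the two "real" pages are stacked top-to-bottom rather than side by side, -y 2 instead of -x 2 would be the thing to try.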
I have FC10 with Firefox 3.0.4, and it takes ages to load any page. The download speed is good, but loading pages takes a lot of time, while on XP everything works fine, even though the Firefox version there is older than the one on FC10.
I am using KVM and have created four guest operating systems on it. The host server is Ubuntu 10.04. I am running 4 websites in a reverse-proxy environment. One of our websites runs on a CentOS VM. Right now there is no traffic on the website's static HTML pages. I do not have any clue why it takes so long to access.
How can I combine multiple single page prints into a single print job? For example, using Firefox on Linux one can print a web page such that each sheet of paper has four pages printed upon it. I would like to combine several separate web pages so that for example, web-page-a, web-page-b and web-page-c (each less than one print page long) are printed on a single sheet of paper.
I would like to do this without having to use some form of image editor to combine and manage manually created temporary files.
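One command-line route that avoids image editors entirely (a sketch; it assumes pdftk and pdfjam are installed, and the file names are placeholders): print each web page to its own PDF from Firefox, then concatenate them and lay them out n-up before sending the result to the printer:

    # Join the individually printed pages into one document...
    pdftk web-page-a.pdf web-page-b.pdf web-page-c.pdf cat output combined.pdf

    # ...then place four of those pages on each sheet and print it.
    pdfjam --nup 2x2 combined.pdf --outfile sheet.pdf
    lp sheet.pdf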
I have CentOS 5.4 32-bit (final) installed on my dedicated server.
I had been running lighttpd with PHP on the server until now, and all was fine. But yesterday I changed my website to one that needs Apache to run, so I installed Apache using the yum install httpd command.
Then I added the virtual host for my domain in the Webmin panel, but when I try to run my PHP script in the browser, it does not open the PHP pages.
Instead, it downloads PHP files such as index.php when I open them in the browser. So I guess Apache is not able to interpret and run PHP pages. Only HTML pages are opening right now.
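That symptom usually just means the PHP module for Apache isn't installed or loaded, so Apache serves .php files as plain downloads. A sketch of what I'd check on CentOS 5 (package names can vary):

    # Install mod_php and restart Apache.
    yum install php
    service httpd restart

    # The package should have dropped /etc/httpd/conf.d/php.conf, which loads the
    # module and maps .php files to the PHP handler; verify and syntax-check:
    grep -r php /etc/httpd/conf.d/
    httpd -t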
I'm in GNOME, but if I log out I can get to a menu where I can choose KDE as well as other window managers. The problem is that I have a program running inside GNOME and I don't want to stop it. Is there some way I can get into KDE without having to stop this program?
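One approach that should leave the GNOME session untouched (hedged; Xephyr has to be installed and the KDE startup command differs between versions): run a nested X server and start KDE inside it:

    # KDE runs on display :1 while GNOME keeps running on :0.
    Xephyr :1 -screen 1280x800 &
    DISPLAY=:1 startkde &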
I've worked out how to get ALSA libasound code working for an application. However, I can't work out how to get multiple sounds playing at once, and I can't seem to find any examples of multiple sounds via Google, e.g. a tune playing while button presses make clicks. I'm using an embedded board, so I need to use ALSA directly rather than an add-on such as SDL_mixer, JACK, etc.
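With plain ALSA, simultaneous playback is normally handled by the dmix plugin rather than in application code: each stream just opens the default device and dmix mixes them in software. A minimal sketch of an ~/.asoundrc (the hw device, rate, and ipc_key are assumptions for an embedded board; on many systems dmix is already the default):

    # Route the default PCM through dmix so several streams can play at once.
    pcm.!default {
        type plug
        slave.pcm "dmixer"
    }
    pcm.dmixer {
        type dmix
        ipc_key 1024
        slave {
            pcm "hw:0,0"
            rate 48000
        }
    }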
I am using a Perl script which uses curl in the background to download files from RapidShare premium, one file at a time. I wanted to know: is it possible to use curl and spawn multiple connections at a time, to download the same file in multiple parts? I can't seem to find an option in curl which does that.
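curl won't split a single transfer by itself, but you can start several curl processes, each asking for a different byte range, and then stitch the parts together, provided the server honours Range requests (RapidShare may not). A rough sketch with a placeholder URL and a 20 MB example size:

    url="http://example.com/file.bin"    # placeholder
    # Fetch two halves of the file in parallel, then join them.
    curl -s -r 0-10485759         -o part1 "$url" &
    curl -s -r 10485760-20971519  -o part2 "$url" &
    wait
    cat part1 part2 > file.bin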
How do I configure a server with two network interfaces? This system is physically moved from one network to another every few days (different buildings, but connected by a VPN). I'd like to be able to control the IP address of the system, with a static setting, depending on which port I plug the network cable into. Right now the system will connect to the local network, but any requests that go beyond the subnet get lost. The only way I can get the system to talk outside of its subnet is to comment out the second interface.
I have 2 LAN ports: one from the motherboard (on-board) and the other from a LAN card I bought a few days back. One is used for browsing the net, the other for a media player.
The problem is I can't connect to both eth0 and eth1 at the same time. I have to disconnect one of them to connect to the other, and this really gets irritating as it doesn't always work as flawlessly as it should. What am I doing wrong?
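When both interfaces are up, the usual culprit is two default routes fighting each other; only one can win, and replies can leave by the wrong card. A sketch of source-based policy routing that gives the second NIC its own table (the addresses, gateway, and table name are placeholders for your actual networks):

    # Create a routing table for eth1 and send its traffic through it.
    echo "100 second" >> /etc/iproute2/rt_tables
    ip route add 192.168.2.0/24 dev eth1 src 192.168.2.10 table second
    ip route add default via 192.168.2.1 dev eth1 table second
    ip rule add from 192.168.2.10 table second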
I run a small home server (Debian 4), which acts as my gateway to the internet (i.e., firewall), runs a web server, DHCP, and DNS, and acts as a file server for the rest of the machines on my home network. Now, I know it's never a smart idea to have all those services running on the same machine that is acting as a firewall, but I don't fancy running multiple servers just for home use, as it's mainly a way for me to learn system administration.
I noticed a few days ago that my internet had become unbearably slow, to the point where I could sometimes not load web pages. I spent a while searching through log files on my gateway, to try and find out what was eating up all of my bandwidth. When I came to apache's access.log file, I was confronted with this:
There are multiple requests to my server for totally random websites. I didn't even know it was possible to make those types of queries to a web server. The only thing on the web server is a browser-based torrent client. I have only shown a small snippet of the log file, but there are around 90k lines for different web addresses, from many different IPs. What I want to know is: what is happening? :S Why is someone querying MY web server for websites totally unrelated to it? And most of all, how can I stop it? My initial idea was to try using iptables to block multiple requests from the same IP within a certain time frame, which I think would work, as the server shouldn't really get many queries from external networks.
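For the rate-limiting idea, the iptables recent match can do roughly that. A sketch (the 20-hits-in-60-seconds threshold is an arbitrary example, and the rules assume the web server listens on port 80):

    # Record each new connection to port 80, and drop a source IP that has
    # opened more than 20 of them within the last 60 seconds.
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 --name HTTP -j DROP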