Ubuntu :: Create An Index In A Pdf File That Has No Security Restrictions On It?
Feb 16, 2011
Is there a program available that would allow me to create an index in a PDF file that has no security restrictions on it? I know people can lock their files, so I am not worried about those, but if I have open permissions on a PDF file, how do I go about creating an index? It seems that by default you get the thumbnail view, but I would like to be able to click on an index list to go to a page.
I need a script that can do this: it searches all directories and subdirectories for .html files. When .html files are found in a folder, it creates an index.html file there, then edits that index.html and inserts links to all of the .html files in that folder into the body. If no .html files are found, it searches for folders instead and creates an index.html file with links to all of the folders.
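A minimal sketch of such a script in POSIX shell, assuming GNU find is available; the emitted HTML is deliberately bare and the function name is just an example:

```shell
#!/bin/sh
# make_indexes DIR: for every directory under DIR, write an index.html
# linking to the .html files in that directory; if there are none,
# link to the immediate subdirectories instead.
make_indexes() {
    find "$1" -type d | while read -r dir; do
        {
            echo "<html><head><title>Index of $dir</title></head><body><ul>"
            found=0
            for f in "$dir"/*.html; do
                [ -e "$f" ] || continue
                name=$(basename "$f")
                [ "$name" = "index.html" ] && continue   # skip the file being written
                printf '<li><a href="%s">%s</a></li>\n' "$name" "$name"
                found=1
            done
            if [ "$found" -eq 0 ]; then
                # no .html files here: link to subfolders instead
                for d in "$dir"/*/; do
                    [ -d "$d" ] || continue
                    name=$(basename "$d")
                    printf '<li><a href="%s/">%s/</a></li>\n' "$name" "$name"
                done
            fi
            echo "</ul></body></html>"
        } > "$dir/index.html"
    done
}
```

Usage would be something like `make_indexes /var/www/docs`; existing index.html files are overwritten on each run.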
I have a user that has already used up a 24-hour demo trial on my website. At present I only check the customer ID and the IP address to search for duplicates. On the whole this works, but it's not foolproof. We now have one user from China who is changing their IP address every day to get access to the free trial. Any ideas on what to do? I thought of setting a cookie on their computer that the website could pick up; again not foolproof, but most people don't disable cookies. Any other options?
I could ban China temporarily until the user gives up, but if they find another proxy to chain, their IP address will be different again.
How do I create a user account on a Linux desktop machine with restrictions on connecting to the LAN, WAN, PCMCIA ports, FireWire, CD-ROM, and generally any user-controllable output options?
I have the task of setting up a machine for users working with sensitive data that should not leave the machine where it is processed. This means disabling access to the Ethernet device, the LAN, all other ports mentioned earlier, and any other way of leaking the data.
In Mac OS X this was achieved using "Parental Controls" in System Preferences; this even allows a selection of the applications that can be used. Under XP, Device Manager offers the option to click various devices and "Disable" them, which has worked just fine so far. Some will point out that the latter OS may be easy to circumvent in other ways, but that has been mitigated with other measures and it's not the point anyway. For the operator users in question, the aforementioned measures proved successful. Using OS X and XP, this was a 10-15 minute job, testing included.
So far all guides and tutorials have pointed to useradd, groups, and facl, but in practical terms they did not help at all; in fact most of the research has not produced any practical results so far. I certainly don't expect to point and click, and would gladly run a set of commands from the CLI, if I had them. I would really like to achieve the same restricted user account configuration in a concise, comprehensive, and practical manner under Linux too, preferably tested on humans before and known to be working, of course. The machines that need to be set up are two laptops running Ubuntu. So how can this be accomplished in Linux?
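One common Linux approach for the device side of this is to blacklist the kernel modules for the hardware that must stay off. A sketch of such a config file; the module names below are examples, and the real ones for a given laptop should be identified with `lspci -k` and `lsmod` first:

```
# /etc/modprobe.d/lockdown.conf  (sketch; module names are examples,
# check your hardware's actual drivers with "lspci -k" and "lsmod")
blacklist firewire_ohci
blacklist firewire_core
blacklist pcmcia
blacklist yenta_socket
blacklist usb_storage
# "install ... /bin/true" blocks even an explicit modprobe request
# (e1000e here is an example ethernet driver):
install e1000e /bin/true
```

Blacklisting does not stop root from loading a module by hand, so this only covers the non-admin operator accounts the post describes; it should be combined with not giving those users sudo or admin group membership.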
I usually use .htaccess to restrict access to directories. But what if I just wanted to secure a single php file? Is there some sort of code that would allow me to say ONLY THIS IP can access this PHP file?
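For a single file this can be done with a `<Files>` block inside the same .htaccess, using Apache 2.2-style access directives; the filename and IP address below are placeholders:

```apache
# .htaccess -- restrict only secret.php (example name) to one IP
<Files "secret.php">
    Order Deny,Allow
    Deny from all
    Allow from 203.0.113.7
</Files>
```

Everything else in the directory stays accessible as before; only requests for the named file from other addresses get a 403.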
I've got a Perl script which generates an HTML file. What I'd really like to do is send people to just one page (the Perl page), which then generates the HTML and shows the HTML page in question. Thus the whole transition will be invisible to the user; they'll just see an HTML page. FYI, the Perl script generates a list of the users who last logged in to the system, and it needs to be up to date (to the second). Let me know if this isn't clear; I understand it sounds a bit confusing.
I noticed that Ubuntu takes some time to build the file search index. It seems to do it overnight, while my PC is powered down. Is there a way to build the index with a command so that I could do searches whenever I wanted, especially immediately after installing a new program?
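The overnight job is normally `updatedb`, which rebuilds the `locate` database from cron; it can be run by hand at any time, for example right after installing a package:

```shell
sudo updatedb     # rebuild the file-name index now
locate firefox    # then search it immediately
```

Note this indexes file names only; desktop search tools that index file contents (Tracker and the like) have their own separate reindexing commands.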
I currently use Nautilus on Ubuntu 10.10 Maverick. I use the grep CLI command to locate text, words, or comments in a folder containing hundreds of files, each having 5 or 6 text documents in Greek and English.
I am looking for a manager that will index all entries and enable me to find a phrase or word within these documents quickly. I seem to recall that some time ago I saw a manager that was capable of this, generating an index as data was added, but I have been unable to locate it with Google.
I have a weird issue with my web server. Every time I enter my domain name, instead of processing my index file it downloads it. What could the problem be? This is the first time this has ever occurred; I recently installed Webmin.
I am running Debian Lenny with Apache (latest) and am trying to set up a webserver. I put the index.html file in my www folder, and it shows up when I go to my site (localhost). I put an index.php file in my www folder, and when I try to go to my site it downloads the index.php file instead of showing it.
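Apache serving the raw .php file usually means no PHP handler is installed or enabled. On Debian Lenny the usual fix is the following (run as root; the package name matches Lenny's PHP 5):

```shell
apt-get install libapache2-mod-php5   # installs mod_php and registers the handler
a2enmod php5                          # make sure the module is enabled
/etc/init.d/apache2 restart
```

After the restart, clear the browser cache (the "download" response may be cached) and reload localhost.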
I have my home network up and running. I can access my index file by the IP I gave to eth0: 192.168.0.2
I put this same IP into /etc/hosts: 192.168.0.2 localhost localhost.localdomain. One space separates the hostnames from the IP in /etc/hosts. The hostname command returns: localhost.localdomain
I have /etc/resolv.conf configured the right way, so I can use my ISP's DNS servers. I know this because yum works and I can ping anywhere by URL. The next step is to learn to access the index file by URL locally on my own network.
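To reach the server by name on a small LAN without running local DNS, an entry can be added to /etc/hosts on each client machine; the hostname below is an example, and note that localhost itself should normally stay mapped to 127.0.0.1 rather than the LAN address:

```
# /etc/hosts on each client ("myserver" is an example name)
127.0.0.1    localhost localhost.localdomain
192.168.0.2  myserver
```

The index page is then reachable as http://myserver/ from any machine carrying that entry.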
I have a set of PDFs I'd like to index and make searchable. Most importantly, I'd like a web interface so I can search and access them remotely. This morning I researched several different options, but all seemed to fail.
Google Desktop Search - no good because the web interface is only available to the local host
Beagle - no good because it is no longer supported and there are execution errors when running on Maverick
Recoll - no web interface
Strigi - no web interface(?)
I recall using one about 2 years ago that was deployed on Apache Tomcat, but I can't remember the name.
My setup is two laptops connected by a crossover cable. One runs Windows XP, the other Fedora 13. Neither is connected to the internet. I'm using a subnet of 192.168.1.x. On Fedora, eth0 is up, and Apache runs because it creates its pidfile. Everything pings fine; the Windows XP IP pings fine from the command line. I gave Windows XP a static IP of 192.168.1.1, mask 255.255.255.0, and gateway 192.168.1.2, the same as eth0. XP says it sees the server.
DirectoryIndex looks at index.html. I created that file with very simple code and put it in the document root. Document root permissions are 755, access_log 770, error_log 644, the Apache user 755, Listen 80. When I type the IP for eth0 (192.168.1.2) into Firefox, Firefox gives me an error message: can't find server. The connection status says it's connected.
The error log includes the line: [warn] ./mod_dnssd.c: No services found to register. I don't know what this means. Apache is not writing to access_log; when I cat the path to access_log I get nothing, just a command prompt. I'm looking for the missing piece that will let Apache serve that index.html file to Firefox so I can see how my code looks as I go.
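When ping works but the browser cannot connect, a common culprit on Fedora of that era is the default firewall dropping TCP port 80 (the mod_dnssd warning is unrelated and harmless). A quick check, run as root, assuming the pre-firewalld iptables service:

```shell
service iptables status                           # is the firewall active?
iptables -I INPUT -p tcp --dport 80 -j ACCEPT     # open port 80 (not persistent)
```

If the page loads after the second command, make the rule permanent via system-config-firewall or /etc/sysconfig/iptables; an empty access_log also fits this picture, since blocked requests never reach Apache.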
I am trying to install flash-plugin-10.0.45.2-release.i386.rpm, so I entered the following command and got the following error output:
Code:
bash-3.1$ rpm -i *.rpm
error: cannot open Basenames index using db3 - No such file or directory (2)
error: cannot open Providename index using db3 - No such file or directory (2)
error: cannot open Conflictname index using db3 - No such file or directory (2)
error: Failed dependencies:
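The "cannot open ... index using db3" errors point at a missing or corrupt RPM database rather than at the package itself; note also that the prompt shows a normal user (`bash-3.1$`), and installing RPMs requires root. The usual repair sequence, as root, is:

```shell
rm -f /var/lib/rpm/__db*   # clear stale Berkeley DB lock/cache files, if any
rpm --rebuilddb            # rebuild the package database indexes
rpm -i flash-plugin-10.0.45.2-release.i386.rpm
```

The "Failed dependencies" line that follows is a separate issue: rpm will list the missing packages, which then need to be installed first (or the plugin installed through the distribution's package manager instead).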
Dear all, I am new to git (and to storing multiple copies of my code).
I had been using git for one month (git status, git add, git commit) to store my files.
Now I am having problems adding more files to my branch:
git add BasisExpansion.R fatal: Unable to write new index file
This was working great until the day my system administrator destroyed my main partition (it was an accident) and installed everything from scratch. My home directory stayed intact (I did not lose any files, nor the git files stored under /home), but from that point on git has always returned that error. My system administrator created the same user and group as my old user, but this did not fix the problem.
I am not sure if I can recover my old git structure, or whether it would be better to wipe out the old git directory and start a new one.
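Two things worth checking before wiping anything, sketched below. First, a recreated account with the same name can still have a different numeric UID, leaving .git owned by a stale UID, which would produce exactly "Unable to write new index file"; compare `ls -ln .git` against `id -u`. Second, the index file is only a cache and can be rebuilt from HEAD without losing any committed history. Paths and the helper name are examples:

```shell
# 1) If the recreated user got a new UID, .git may be owned by the old
#    numeric UID; compare "ls -ln .git" with "id -u" and, if they differ:
#      sudo chown -R "$USER":"$USER" /path/to/repo
# 2) The index is a rebuildable cache: delete it and regenerate from HEAD.
repair_git_index() (
    cd "$1" || return 1
    rm -f .git/index
    git reset       # rewrites .git/index from the current HEAD commit
)
```

After `repair_git_index ~/myrepo`, `git status` should list the working tree normally again; nothing under .git/objects or the commit history is touched.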
Can anyone point me to already-built and mostly working utilities that will catalog the contents of (external) USB and flash media in a way that they can be searched? With cheap (under $100 US) terabyte external drives, it is too easy to get another drive and fill it. I'm looking for utilities or applications that will help me know what I have, cull the duplicates, and avoid the need to spin up a drive just to see if it holds what I seek.
Years ago there were utilities that would read the contents of diskette media and create a printable "index" or "catalog" page. The pages were conveniently sized to match the diskette, so that one could store the page in the diskette sleeve for future reference. Later, the pages were replaced by applications that stored a diskette ID along with each file name; one could then search for a name, or pattern, and discover which diskette held the file(s) of interest. Today we don't have diskette sleeves, but we do have cases for our external USB drives and wallets for our flash media. A printed index would be nice to have.
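A minimal homemade version of those diskette catalogers, as a sketch (the catalog directory and drive label are assumptions): while a drive is mounted, record one plain-text listing per drive; afterwards the listings can be grepped, printed, or de-duplicated without spinning anything up.

```shell
# catalog_drive MOUNTPOINT LABEL: record the size and path of every file
# on the drive into ~/catalogs/LABEL.txt (GNU find's -printf is assumed)
catalog_drive() {
    mkdir -p "$HOME/catalogs"
    find "$1" -type f -printf '%s\t%p\n' > "$HOME/catalogs/$2.txt"
}

# later, search every cataloged drive without mounting any of them, e.g.:
#   grep -i 'holiday.*\.jpg' ~/catalogs/*.txt
```

The size column makes rough duplicate-hunting possible with sort/uniq on sizes; swapping `-printf` for a checksum pass would make it exact, at the cost of a much slower scan.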
I played a bit with the upgrade configuration files to solve a couple of problems I have with Ubuntu 10.04. Now when the upgrade procedure starts I get:
Quote: Failed to fetch Release: Unable to find expected entry multiversdeb/source/Sources in Meta-index file (malformed Release file?) Some index files failed to download, they have been ignored, or old ones used instead.
I'm trying to upgrade from 8.04 to 9.10 via 8.10 etc. When I run update manager, I get this:
W:Failed to fetch [URL] Unable to find expected entry universal/binary-i386/Packages in Meta-index file (malformed Release file?), E:Some index files failed to download, they have been ignored, or old ones used instead.
and then it closes. I've unselected all the third-party packages and tried various servers, but no difference. I can wget the file, but when I look at it I see entries for "universe/binary-i386/Release" and nothing for "universal".
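That mismatch is very likely the whole problem: Ubuntu's component is spelled "universe", and apt is being asked for "universal". A sketch of finding and fixing the typo; check which file actually contains it before editing, since it may live in a file under sources.list.d rather than sources.list itself:

```shell
grep -n universal /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null
# if found, correct it in place (adjust the path to the file grep reported):
sudo sed -i 's/universal/universe/g' /etc/apt/sources.list
```

After the fix, rerun update-manager (or `sudo apt-get update`) and the malformed Release complaint should disappear.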
I am writing a small search program for my class. I have decided to use indexing for my program, and I've researched online about indexing and how search engines do it. If I'm going to do that, I need to create inverted files to associate files with numbers (the numbers being the indices of my paths). Now I was wondering: what would be the best way to create an inverted file? I was going to create SQL tables using the MySQL API in C, but then again there is no array or vector data type to store a few numbers in a single column in MySQL, and it is not advised to use ENUM or SET.
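The standard relational answer to "no array type" is not to store a list in one column at all, but to use a postings table with one (term, doc_id) row per pair, which MySQL handles fine. A small sketch of the same idea in memory, in Python for brevity (term mapped to a sorted list of document ids):

```python
# Minimal in-memory inverted index: each term maps to the ids of the
# documents containing it. In SQL the equivalent is a two-column
# postings table (term, doc_id) with one row per pair -- no array
# column type is needed.
from collections import defaultdict

def build_index(docs):
    """docs: dict of doc_id -> text. Returns term -> sorted list of doc ids."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)       # a set deduplicates repeat terms
    return {term: sorted(ids) for term, ids in index.items()}

def search(index, term):
    """Return the ids of all documents containing the term."""
    return index.get(term.lower(), [])
```

With the postings-table layout, a one-word query becomes `SELECT doc_id FROM postings WHERE term = ?`, and multi-word AND queries are joins on doc_id; an index on the term column keeps lookups fast.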
I backed up .themes from /home, but on trying to install the themes (the files don't show as theme packages) it says "There was an error installing the selected file: index.theme doesn't appear to be a valid theme". Did I back up the right thing?
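Backing up .themes was right; the catch is that it contains unpacked theme directories, while the "install" dialog expects a theme tarball. Copying the folders straight back into ~/.themes is usually enough for them to appear in the Appearance dialog with no install step. Alternatively, a single theme directory can be repacked into the format the installer accepts (a sketch; the helper and theme names are examples):

```shell
# pack_theme /path/to/MyTheme -> creates MyTheme.tar.gz next to it,
# suitable for the Appearance dialog's "Install..." button
pack_theme() (
    cd "$(dirname "$1")" && tar -czf "$(basename "$1").tar.gz" "$(basename "$1")"
)
```

For example, `pack_theme ~/.themes/MyTheme` produces ~/.themes/MyTheme.tar.gz.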
My application Help/Hilfe does not show anything. The following message appears instead:
Unable to load page
The requested URI "file:///fakefile#index" is invalid
Where do I get an index? I have just reinstalled susehelp_en and susehelp_de, but with no success. I normally use openSUSE 11.1, kernel Linux 126.96.36.199-0.1-pae, GNOME 2.24.1.