Under the Windows XP OS, running Windows Internet Explorer 8 (or 7), I can save a web page in three different ways: as an HTM(L) file plus secondary files such as GIFs in a separate directory, as a single XML file, or as a plain text file. More precisely, I'm viewing a web page on the screen. Then I do File > Save As. A dialog box displays and I only have to choose the format and click <OK>.
In Linux, I use one of three web browsers: Konqueror, Firefox, or SeaMonkey, all of them under KDE (I don't have GNOME). So, is there a choice in Linux similar to the one described above?
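If the browsers' Save As dialogs don't offer what you need, one command-line alternative (assuming wget is installed; the URL is a placeholder) is to have wget fetch the page plus the images and CSS it needs and rewrite the links for local viewing:
Code:
# fetch page requisites (-p) and convert links for offline viewing (-k)
wget -p -k http://example.com/some/page.html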
I used wget -r to get all the web pages that were linked from index.html. The pages listed in index.html are all chapters. After using wget -r, all the chapters are now in the same folder on my local hard drive. Is there a way to assemble the chapters, in their proper order, into one "long"/"full" web page, rather than having each chapter reachable only through a next link on the previous page?
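A rough sketch of one way to do this with standard tools, assuming the chapter files sort into the right order by name (ch01.html, ch02.html, ... are placeholder names): strip everything outside each file's <body> and concatenate the results.
Code:
{
  echo '<html><body>'
  for f in ch*.html; do
    # keep only the lines between <body> and </body>, then drop the tags themselves
    sed -n '/<body[^>]*>/,/<\/body>/p' "$f" | sed 's/<\/\?body[^>]*>//g'
  done
  echo '</body></html>'
} > full-book.html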
What is the best way (i.e., a standard way that is supported by all browsers and, ideally, also followed by web crawlers) to include an HTML file, local or external, in another? Of course, I've done the research and I also know there are server-side includes (PHP, ASP... you name it). At the moment, I'm using this:
Quote:
<script type="text/javascript" src="path to file/include-file.js"> </script>
However, I've been warned that this method may not work in some browsers, since some tend to ignore the tag, and that crawlers like your favorite search engine won't bother reading it. So, what is the best and safest way to do the job? And by the way, the reason I've ruled out SSIs from the start is, among other things:
1) the included file is static HTML, and the text is included pretty much everywhere;
2) I'm hoping to reduce load time, since the code (if successfully recognized) would hopefully be treated like any other embedded external file (e.g., an image) and therefore be cached, without the need to download it over and over again for each new page on the site.
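One more option worth naming, since the pages are static anyway: do the include as a build step on your own machine and upload plain HTML, so browsers and crawlers see ordinary markup and nothing extra has to be fetched per page. A minimal sketch, assuming a placeholder comment in each source page and made-up file names:
Code:
# splice header.html into every page where the placeholder comment appears,
# writing the finished pages to public/
for page in src/*.html; do
  sed -e '/<!-- include: header -->/r header.html' \
      -e '/<!-- include: header -->/d' \
      "$page" > "public/$(basename "$page")"
done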
I have put a zip file at /var/www/html/ and am trying to download it on a client machine, but it gives me this error: "You don't have permission to access /db_airarabia_crp.zip on this server." I changed the permissions on the file to 666, but it's still the same.
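Two things worth checking, assuming a stock Apache on a Red Hat-style system (the path is taken from the error message):
Code:
# the file must be readable and every directory above it traversable by Apache
chmod 644 /var/www/html/db_airarabia_crp.zip
chmod 755 /var/www /var/www/html
# on SELinux-enabled systems, a file copied in from elsewhere may carry the
# wrong security context; restore the default label under /var/www/html
restorecon -v /var/www/html/db_airarabia_crp.zip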
Info is the default document format, and I very much like to read documents in this format. But sometimes articles or books are composed in HTML format. I wonder if there is some way to convert the HTML documents to Info ones.
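There is no single standard HTML-to-Info converter that I know of, but one possible pipeline, assuming pandoc and the Texinfo tools are installed (file names are placeholders), goes via Texinfo:
Code:
pandoc -f html -t texinfo article.html -o article.texi   # HTML -> Texinfo
makeinfo article.texi                                     # Texinfo -> article.info
info -f ./article.info                                    # read it in the Info reader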
I created a local user account and tested FTP. This allows me to post files to this directory using FileZilla. I then created a webftpaccount and set its home directory to /var/www/html. Here are the permissions on this directory from ls -l:
drwxrwsr-x 6 webftpaccount webftpaccount 4096 Nov 23 10:32 html
Here are the permissions on the subdirectories:
drwxrwsr-x 2 webftpaccount ftp  4096 Nov 14 07:37 myfinanceguard
drwxrwsr-x 2 webftpaccount root 4096 Nov 14 07:37 mylawguard
drwxrwsr-x 2 webftpaccount root 4096 Nov 14 07:36 xpiinc
I can log into the webftpaccount using the FileZilla client and it lists all the directories. It will not allow me to write a file into the html directory or any of the subdirectories. Can someone help me set appropriate permissions on these directories so that I can get this working? I need to get FTP working so I can use Dreamweaver's FTP to maintain the sites.
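A sketch of one way to set this up, assuming the FTP server honours normal Unix permissions and the login really is webftpaccount: make the tree group-writable by a group that account belongs to, and keep the setgid bit so new files inherit the group.
Code:
chgrp -R webftpaccount /var/www/html               # give the whole tree a common group
chmod -R g+w /var/www/html                         # let that group write
find /var/www/html -type d -exec chmod g+s {} \;   # new files keep the group
# on SELinux systems, FTP writes under /var/www may additionally need
#   setsebool -P allow_ftpd_full_access on   (or adjusted file contexts)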
I want a utility to convert Word and PDF documents to HTML with the same formatting. The utility should also be able to run from the command line, since I want to integrate it into a web page. Which utility is suitable for this?
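Two candidates that run from the command line, offered as suggestions rather than a definitive answer (file names are examples):
Code:
# Word documents -> HTML; needs LibreOffice/OpenOffice installed
libreoffice --headless --convert-to html --outdir /tmp/out report.doc
# PDF -> HTML; pdftohtml ships in the poppler-utils package
pdftohtml -c report.pdf report.html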
I have written a script that checks the load average of the server and, if it is more than 5, sends a mail describing the current load average and the high CPU/RAM processes. The problem is that I want to send this information in HTML form. I have done the necessary coding, but whenever I try to include the output of the following, it doesn't seem to be properly formatted.
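Without seeing the script, one common cause is that the mail lacks HTML MIME headers, and that whitespace-aligned command output collapses unless it is wrapped in <pre>. A minimal sketch, assuming sendmail is available; the address and the process listing are placeholders:
Code:
{
  echo "To: admin@example.com"
  echo "Subject: High load on $(hostname)"
  echo "MIME-Version: 1.0"
  echo "Content-Type: text/html; charset=UTF-8"
  echo
  echo "<html><body>"
  echo "<h3>Load average: $(cut -d' ' -f1-3 /proc/loadavg)</h3>"
  echo "<pre>$(ps aux --sort=-%cpu | head -n 10)</pre>"   # <pre> keeps the columns aligned
  echo "</body></html>"
} | sendmail -t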
I need to convert a very large LaTeX project (made up of many .tex and style files) into .html (or something similarly non-PDF). Can someone recommend a quality converter program? Preferably, one that is:
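Two commonly suggested routes, offered as starting points rather than a definitive pick (main.tex stands in for the top-level file):
Code:
# TeX4ht follows \input/\include across a multi-file project
htlatex main.tex
# pandoc copes well with simpler projects and produces a standalone page
pandoc main.tex -s -o main.html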
I've been playing around with sed but can't find a way to remove the <br> HTML tag and replace it with a newline. sed isn't strictly required; awk or other suggestions would also be welcome.
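With GNU sed, \n in the replacement text is a literal newline, so one sketch (the pattern also catches <br/> and <br />) is:
Code:
sed 's/<br[[:space:]]*\/\?>/\n/g' input.html > output.txt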
I have Fedora 12 and I'm playing a game using Wine. To view a page from inside the game's own browser (it has an internet-browser-style menu), there is a button that says "open in external browser". I used it hundreds of times to save a particular page to a folder when I was on Windows, but now that I'm using Fedora it doesn't do anything. Is this something I have to configure in Firefox, or could it be something else?
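One thing worth checking, on the assumption that Wine hands the URL to xdg-open: whether the desktop actually has a default browser registered.
Code:
xdg-settings get default-web-browser                  # see what is currently set
xdg-settings set default-web-browser firefox.desktop  # point it at Firefox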
I'm a frequent user of grep. I know that I can recursively search a directory using the -r flag:
Code:
// will recursively search all files
grep -r 'some string' *
However, if I want to limit my search to PHP files, the -r flag is suddenly useless:
Code:
// for some reason, this only searches the PHP files in the current dir
grep -r 'some string' *.php
Is there a good way to recursively search a directory and its subdirectories for a string, but ONLY look at PHP or HTML files (and possibly TXT files too)? I'm really hoping for a nice, short command that doesn't involve an exclude file and isn't painful to type. I do this kind of search very frequently and have resorted either to searching EVERY file, which is really slow (TAR and ZIP files really slow it down), or to typing repeated commands to search *.php, */*.php, etc.
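GNU grep's --include option does exactly this when combined with -r, so a sketch would be:
Code:
grep -r --include='*.php' --include='*.html' --include='*.txt' 'some string' .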
I have an indexnew.html file in /var/www/html. I have to view this file in a browser within the network, but without going through the Apache server, because my Apache server passes requests to my application. I used http://localhost/indexnew.html to open the file, but it goes to my application instead.
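One workaround, assuming Python is installed on the server: serve the directory on a spare port with the built-in web server, leaving Apache untouched.
Code:
cd /var/www/html && python -m SimpleHTTPServer 8080   # Python 2
# or: python3 -m http.server 8080                     # Python 3
# then browse to http://<server-ip>:8080/indexnew.html from the network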
How can rich text or HTML source code be obtained from the X clipboard? For example, if you copy some text from a web browser and paste it into kompozer, it pastes as HTML, with links etc. preserved. However, xclip -o for the same selection just outputs plain text. I'd like to pull the HTML out and into a text editor.
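xclip can request a specific clipboard target, so a sketch (assuming the copying application offers a text/html target, which most browsers do) is:
Code:
xclip -selection clipboard -o -t TARGETS                  # list the formats on offer
xclip -selection clipboard -o -t text/html > snippet.html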
I'm looking for an existing Linux distro with a pure HTML(5) client interface. Sort of like m0n0wall, but featuring all the functionality a modern OS includes, implemented in HTML/JavaScript/CSS. Kind of a Skylight clone, but using only free and open-source software.
I'm building a file server using Ubuntu. I want to set up an HTML-based file upload/download system where I can create accounts for a few users and allow each user access to certain folders, so that a user would open it like a webpage in a web browser and upload and download files, with no special setup needed on their computer. Is there a ready-made solution for this purpose, or will I have to code it?
I've run into some trouble while trying to install some applications on my Linux system. It says the files in my /var/www/html/xxx directory, where I put them, are not writeable. I tried chmod 777 xxx to make it work, but the error remains when I open the applications again.
To be specific, I want to install phpFreeChat on my system, so I put the files in the /var/www/html/freechat directory, cd'd there, and ran chmod 777 data/private and chmod 777 data/public in bash. Here's the result of ls -al data:
drwxr-xr-x.  4 root root 4096 Jun 17 15:07 .
drwxr-xr-x. 13 root root 4096 Jun 17 15:22 ..
drwxrwxrwx.  2 root root 4096 Jun 17 15:07 private
drwxrwxrwx.  3 root root 4096 Jun 17 15:07 public
These all seemed all right to me, until I typed http://localhost/freechat in my browser. Here's the result:
phpFreeChat cannot be initialized, please correct these errors: /var/www/html/freechat/src/../data/private is not writeable
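The trailing dot in the ls listing suggests SELinux is active, in which case mode bits alone are not enough: Apache also needs a writable SELinux context on those directories, and ideally ownership by the web server user. A sketch for a Fedora/RHEL-style setup (the apache user and the type name are the usual defaults, not something confirmed by the post):
Code:
chown -R apache:apache /var/www/html/freechat/data
chcon -R -t httpd_sys_rw_content_t /var/www/html/freechat/data
# to make the label survive a relabel:
#   semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/freechat/data(/.*)?'
#   restorecon -R /var/www/html/freechat/data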
I have a website that has a massive list of royalty free stock photos and I want to download all of them. I have bought a membership for [URL] so I am able to download as much as I want from them for the next month.
Instead of going page by page and downloading each set of stock photos manually, I would like to automate this process. Here's my idea:
1. Download the website with the links to hotfile [URL]
2. Use grep to retrieve all the links to [URL]
3. Feed the links I receive from grep into wget and download the lot of them.
The problem I'm having is that when I use grep, it retrieves the entire line of HTML code where "hotfile.com" appears. So here is an example of one link I receive in the output:
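grep's -o option prints only the part of the line that matches instead of the whole line, so steps 2 and 3 could look like this (the pattern is a guess at the link format and may need tweaking):
Code:
grep -o 'http://hotfile\.com/[^"<> ]*' saved-page.html | sort -u > links.txt
wget -i links.txt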
Kernel 2.6.21.5, Slackware 12.0. A command-line HTML reader, or a tool to convert HTML to text, is what I would like to know if any of you knows of. It doesn't have to do a perfect job, and it would be nice if it were a native Unix/Linux program.
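Two native candidates that produce a plain-text rendering from the command line:
Code:
lynx -dump page.html > page.txt
# or
w3m -dump page.html > page.txt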
I have many years of experience with DOS and Windows, but this is my first dabble in Linux, in particular Fedora 13. The OS is great, but my lack of knowledge makes me uneasy. Is there a good book available in HTML or PDF format that covers the basics and is relevant to Fedora 13?