Ubuntu :: Wget Login Form Sets Cookie With JavaScript
May 6, 2010
Is there any way to get wget to work with a login form that sets a cookie with a JavaScript function before submission? I know wget cannot call JavaScript functions, but is there a way to get it to set the cookie itself instead?
Here is the form tag:
<form name="form" method="post" action="https://www.yourfavouritesite.com/login.asp" onsubmit="return SetCookie()">
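Since wget can't execute SetCookie(), one workaround (a sketch, not a guaranteed recipe for this site) is to read the page source, see exactly what the function writes into document.cookie, and send that same cookie by hand. The cookie name jsok and the form fields below are made-up placeholders:

```shell
# Hypothetical: suppose the page source shows SetCookie() just runs
#   document.cookie = "jsok=1";
# Then the same cookie can be sent manually alongside the POST:
JSCOOKIE='Cookie: jsok=1'
wget --header="$JSCOOKIE" \
     --post-data='user=me&pass=secret' \
     --save-cookies=cookies.txt --keep-session-cookies \
     --timeout=5 --tries=1 \
     'https://www.yourfavouritesite.com/login.asp' \
  || echo 'request failed (the URL is the placeholder from the post)'
```

Subsequent wget calls would pass --load-cookies=cookies.txt plus the same --header so the hand-made cookie stays with the session.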
I'm trying to access a site through a Perl script for a project of mine, and I use a system call for wget.
The login form is this
Code:
I mean, should I add all the hidden fields in --post-data? Should I try using Perl's MD5 function for the last two fields? Does anyone have any idea what elements I should be sending along with --post-data?
Is there a way to --load-cookies from Mozilla or something similar, instead of creating new cookies with wget?
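On the hidden fields: yes, every hidden input usually has to go into --post-data, since the server checks them like any other field. They can be scraped out of a saved copy of the login page; the field names below are invented for the demo. As for cookies, wget's --load-cookies expects the old Netscape cookies.txt format, which recent Firefox versions no longer use natively, so the file generally has to be exported first.

```shell
# Demo page standing in for the real login form (field names invented):
cat > login.html <<'EOF'
<input type="hidden" name="token" value="abc123">
<input type="hidden" name="session" value="xyz">
EOF
# Scrape every hidden field into name=value pairs joined with '&':
POST=$(grep -o 'name="[^"]*" value="[^"]*"' login.html \
       | sed 's/name="\([^"]*\)" value="\([^"]*\)"/\1=\2/' \
       | paste -s -d '&' -)
echo "$POST"   # token=abc123&session=xyz
# The result would then be combined with the visible fields, e.g.:
#   wget --post-data="$POST&user=me&pass=pw" ...
```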
I'm looking for a program that will add cookies to my web browser (Firefox or Chrome, preferably). The application would ask me for the cookie's:
Name, Content, Domain, Path, Send For (type of connection), Expiration Date
If no one knows of an application that does this, does anyone know what files I could manually edit in order to change a particular browser's cookie values?
I've already tried SeaMonkey to create a web page, but can find no way to create a web form with form fields. Before moving to Ubuntu I used Microsoft FrontPage to create web pages with form fields, which was easy to do. What is available to do the same in Ubuntu?
I'm trying to adjust my proftpd server's settings so that anonymous users can download what they need smoothly.
A small problem has left me quite bemused:
In the proftpd configuration file, I placed the following settings inside the <anonymous> section:
Code:
After restarting the proftpd server to apply the configuration, I tried downloading a file in the IE browser. Sometimes it prompts a Save As dialog, and everything is okay.
However, it occasionally prompts a login form instead of the Save As dialog. This greatly confuses our customers.
So, how can I prevent the browser from prompting a login form when anonymous users try to download files from our FTP server?
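For what it's worth, IE typically falls back to a login prompt when the anonymous login itself is refused, which can happen intermittently when a connection limit is hit. A hedged sketch of an <Anonymous> block to compare against (the path and the limit are example values, not your config):

```
<Anonymous /home/ftp>
    User                ftp
    Group               ftp
    # Both "anonymous" and "ftp" log in without a real password
    UserAlias           anonymous ftp
    RequireValidShell   off
    # If this is too low, extra clients get rejected and the browser
    # falls back to asking for credentials
    MaxClients          50
    <Limit WRITE>
        DenyAll
    </Limit>
</Anonymous>
```

Checking the proftpd log at the moment the prompt appears should show whether the anonymous login was actually rejected, and why.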
I need a text browser with JavaScript support. I have compiled both the stable 0.11.x and the unstable 0.12.x versions of elinks with JavaScript support (js, moz, dev). Both browsers start and display websites normally, but no JavaScript works; even simple tests like [URL] failed. I started elinks with ./elinks and then typed in the URL.
We are using several printers on our Linux RH network to print customer invoices and receipts. Receipts are short forms of just 21 or 22 lines. Two of the printers (an HP LJ1300 and a Dell 5200) eject the receipt paper automatically; the other two HP (a LJ 4200 and a LJ2420) do not eject. You have to press the green button on the printer. Is there a solution to that? They are all set up with the same PCL settings.
If a wget download is interrupted (like if I have to shut down prematurely), I get a wget.log with the partial download. How can I later resume the download using the data in wget.log? I have searched high and low (including the wget manual) and cannot find how to do this. Is it so obvious that I did not see it? The wget -c option with wget.log as its argument does not work. What I do instead is open wget.log, copy the URL, paste it into the command line, and do another wget. This works, but the download starts from the beginning, which means nothing in wget.log is used.
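For reference, a sketch of the round trip: the resume data lives in the partially downloaded file itself, not in wget.log, so re-running the same URL with -c continues where the partial file ends. The URL can also be scraped out of the log instead of copied by hand (the log line below is a recreated example, not a real transcript):

```shell
# Recreate a transcript like the one "wget -o wget.log URL" leaves:
printf -- '--2010-05-06 12:00:00--  https://example.com/big.iso\n' > wget.log
# Pull the first URL back out of it...
URL=$(grep -oE 'https?://[^ ]+' wget.log | head -n 1)
# ...and resume: -c tells wget to continue the existing partial file
# instead of restarting, and -a appends to the old log (-o truncates it).
echo "would run: wget -c -a wget.log $URL"
```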
I would like to know whether it is possible to pass Firefox a single cookie generated by some other program (in this case wget), doing it from the command line. My concrete use for this is to log in to a site (connect.garmin.com) with wget, saving the cookie with --save-cookies --keep-session-cookies, pass this cookie to curl, do some uploads, and then pass the cookie to Firefox to open the page and have me logged in automatically (everything besides Firefox is sorted out). Google searches and the Firefox man page suggest that Firefox doesn't have this functionality, but would it in that case be possible to link the cookie into Firefox's normal cookie jar?
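One thing that may help: wget and curl already share the same Netscape cookie-file format, so the jar can be handed straight to curl with --cookie/--cookie-jar. Firefox 3+ keeps its cookies in cookies.sqlite instead, so the remaining step would be inserting the name/value pair there by hand (an approach to investigate, not a tested recipe). The jar line below is a fabricated example:

```shell
# A Netscape-format jar line as wget writes it: 7 tab-separated fields
# (domain, include-subdomains flag, path, secure flag, expiry, name, value).
printf 'connect.garmin.com\tFALSE\t/\tTRUE\t0\tSESSIONID\tabc123\n' > jar.txt
# curl accepts the file as-is:
#   curl --cookie jar.txt --cookie-jar jar.txt https://connect.garmin.com/...
# For Firefox, extract the pair that would have to go into cookies.sqlite:
NAME=$(cut -f6 jar.txt)
VALUE=$(cut -f7 jar.txt)
echo "cookie to inject: $NAME=$VALUE"
```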
I would like Chromium to discard all cookies at the end of the session except for specified domains. I thought I had it set up right, but it doesn't remember me on either ubuntuforums or facebook.
Does anyone know if I'm missing something or if this is a bug? Screenshot of configure attached.
Running Meerkat on a Sony Vaio laptop, with Firefox as the browser of choice. The BBC News website has a "My Location" setting, and shares it automatically with the BBC Weather page for a local forecast. Is it possible to clear the browsing history, yet save the one individual cookie that is used for this?
I'm setting a cookie value using RewriteRule statements in .htaccess for checking if my site should not redirect from the main desktop site to the mobile site (when accessed via iphone browser). The problem occurs when the first RewriteRule sets "mobile=desktop" in the cookie (from detecting "?desktop=true" in the querystring), the third RewriteRule fails to detect the new cookie value and redirect. If I reload the page again (without any querystring for "?desktop=true"), then the third RewriteRule redirects successfully.
Can anyone tell me why the third RewriteRule fails to detect the cookie value straight after it is set in the first rule? How can I fix this to work properly?
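A likely explanation, offered as a sketch rather than a verified diagnosis: %{HTTP_COOKIE} only reflects what the browser sent with the current request, while the CO= flag merely adds a Set-Cookie response header, so the new value is not visible to later rules until the next request. Mirroring the value into an environment variable lets a same-request rule see it. Hostnames and variable names below are placeholders:

```
# First rule: set the cookie for future requests AND an env var for
# this request.
RewriteCond %{QUERY_STRING} (^|&)desktop=true(&|$)
RewriteRule ^ - [CO=mobile:desktop:.example.com,E=MOBILE_PREF:desktop]

# Redirect decision: skip the redirect if either the cookie (later
# requests) or the env var (this same request) says "desktop".
RewriteCond %{HTTP_COOKIE} mobile=desktop [OR]
RewriteCond %{ENV:MOBILE_PREF} =desktop
RewriteRule ^ - [S=1]
RewriteRule ^(.*)$ http://m.example.com/$1 [R=302,L]
```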
I can open the stream fine from Firefox, and I have recorded streams using VLC, but this one uses cookies as authentication and VLC will not play it. Apparently it's considered a security issue, so cookie support has not been implemented in VLC.
When I reboot my computer, my iptables sets itself to a policy of dropping everything, adds a bunch of rules, and a bunch of extra chains, to the effect that (due to everything being set to drop) I can't do anything. I know how to fix this from the terminal to the extent of just clearing most of it and changing the policies back. However, what I don't know is how to make it stay that way. I have a file with the iptables rules I want, so every time I start up I just run iptables-restore, but I don't want to have to do this every time, particularly since others use this computer who do not have admin privileges.
I've tried changing /etc/network/interfaces with the added line pre-up iptables-restore < (etc), but that never does anything, or if it does, it just makes things work even less. I've tried changing init.d before based on similar info elsewhere, still with no luck. I don't know how to get it to stick, and I don't know why it is defaulting to the rules it is, other than that I used a firewall app a while ago and this was the result; I uninstalled that app after having no success using it to reverse the damage.
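For comparison, the usual working shape of the pre-up approach, assuming the saved rules live at /etc/iptables.rules (adjust the path and interface to match your setup):

```
# /etc/network/interfaces
auto eth0
iface eth0 inet dhcp
    # Plain shell redirection; runs as root before the interface comes up
    pre-up iptables-restore < /etc/iptables.rules
```

An alternative that survives later edits to interfaces is an executable script in /etc/network/if-pre-up.d/ containing the same iptables-restore line. Either way, whatever is writing the unwanted rules at boot (often a leftover init script from the old firewall app) still needs to be disabled, or it will overwrite the restored rules again.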
I'm looking for a version of, or equivalent to, Predator for Ubuntu 10.10. This program sets up a USB drive as a key to the computer, locking it when the drive is removed. I've had no luck finding an equivalent for Ubuntu, and the site does not have a version for Linux of any kind.
Basically, installing gnome-desktop-environment and then removing empathy (as I don't like it, don't want to see it in my menu, and don't want it taking up space, etc.) results in all of the packages below being queued for autoremove:
I want to know how the computer sets the time. When I turn off my computer, or when it is not connected to the internet, it still shows the correct time. When I log in to my computer after many days, it shows the correct time even though it was turned off the whole time. How is this possible?
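The short answer is a hardware clock: the motherboard carries a small battery-backed real-time clock (RTC) that keeps ticking with the power off, and the kernel copies it into the system clock at boot. The two can be compared directly (a sketch; hwclock needs root, so it is guarded here):

```shell
# System clock: maintained by the kernel while the machine runs
SYSTIME=$(date '+%Y-%m-%d %H:%M')
echo "system clock:   $SYSTIME"
# Hardware clock: the battery-backed RTC the BIOS and kernel read at boot
hwclock --show 2>/dev/null || echo "hardware clock: (needs root to read)"
```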
What I mean is: are instruction sets a way of signaling commands, or optimizations? I ask because we have binary, right? (To do simple things like commands, right?)
I have CentOS 5 and Apache. I've recently had to make a website in Polish, which has some characters that do not seem to be supported on CentOS. Is there a way to install more character sets?
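Polish characters are all covered by UTF-8, so usually nothing extra needs installing; the pages just have to be saved and served as UTF-8. A quick check that the system round-trips Polish text, plus the one Apache directive involved (the sample below is the standard Polish pangram):

```shell
# Round-trip a Polish sample through iconv to confirm UTF-8 handling:
SAMPLE='zażółć gęślą jaźń'
OUT=$(printf '%s' "$SAMPLE" | iconv -f UTF-8 -t UTF-8)
[ "$OUT" = "$SAMPLE" ] && echo "UTF-8 round-trip ok"
# See whether a Polish locale is generated (on CentOS 5 the pl_PL
# locales ship with glibc-common):
locale -a | grep -i '^pl' || echo "no Polish locale listed"
# Apache side: tell browsers the pages are UTF-8 (httpd.conf):
#   AddDefaultCharset UTF-8
```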
I don't start a network connection when my machine boots FC11 64 bit. When I start the connection and the Firefox browser, I always find that Firefox has set itself to work offline. I use File->work offline and uncheck that box. This fixes the problem, but only till the next reboot. Even if I exit Firefox "in an orderly manner", the next time it will have set itself to work offline.
I had this problem in previous Fedora versions. I read about two solutions on the web. One was to write a script that removed a file called extensions.cache before Firefox ran. The other was to type about:config in the address bar of Firefox, acknowledge the warning message, and use the right mouse button to toggle the value of toolkit.networkmanager.disable from "false" to "true". Are either of these good solutions? What exactly does "network manager" do? I think of it as the GUI under System->Administration->Network, but it must be more.
When I installed Slackware, I deselected a few software sets like KDE and TCL, but now I would like to install KDE (by the way, what is TCL for? Is it something important that I should have installed, or is it something like "if you don't know, you don't need it"?). What would be the best way to install the KDE software set and even have a menu to deselect a few packages I know I won't use? The command I tried produces a list of KDE packages (including the KDE language packages), but I don't know if that's everything in the KDE set. It seems like it's not, since it transcends the software-set boundaries to provide options for the internationalization packages too.
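One route worth checking (a sketch; it assumes a mirror is enabled in /etc/slackpkg/mirrors): slackpkg can install a whole disc set by its series name, and it shows exactly the kind of checklist you describe for deselecting packages. The internationalization packages live in their own kdei series, which would explain the list crossing set boundaries; tcl is just the Tcl/Tk scripting runtimes, generally safe to skip unless something depends on them.

```shell
# Dry run: build the commands that would be issued (as root, after a
# "slackpkg update"); "kde" and "kdei" are the disc-set series names.
CMDS=$(for series in kde kdei tcl; do
  echo "slackpkg install $series"
done)
echo "$CMDS"
```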
I download DVD rips, and as you know they have a small resolution. Now what I want is for the subtitles to show in the black bar below the picture, and not over the movie itself.
Have you noticed the space "background set" that has been in Ubuntu since 9.04? It is an XML file (as I discovered on this beautiful forum). But creating it manually is very boring and long; it would be simpler to put wallpapers into a directory and use them all as a background set. So I've created a simple script that generates the XML background file from the images in a directory. You can set the duration of the wallpapers and of the transitions by setting the proper parameters.
Here's the link to the script: generateXMLBackground.sh. How to use it?
Create a directory and put into it all the images you want in the background set.
Open a terminal [Applications->Accessories->Terminal].
Type "cd ", drag the directory you created into the terminal, and press Enter.
Drag the script generateXMLBackground.sh into the terminal and press Enter [you can use the parameters -t and -d; for help use the parameter -h].
The XML file has been created! Now you only have to set it as the background: in the "Change desktop background" window click "Add", navigate to the directory created at the beginning, set the file-type filter to "All files" (at the bottom right; normally it is 'Images'), and choose the .xml file named #DIRNAME#Background.xml.
Note: if the script generateXMLBackground.sh will not run because you don't have the rights, type "chmod +x " into the terminal, drag the script in, and press Enter (to make it executable).
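For anyone curious what such a generator boils down to, here is a minimal independent sketch (not the linked script): it walks the images in a directory and emits the GNOME slideshow XML, with a <static> entry for each wallpaper and <transition> cross-fades between them. The demo directory and durations are made up.

```shell
#!/bin/sh
# Demo input: a directory with two (empty) image files
mkdir -p demo_bg && touch demo_bg/a.jpg demo_bg/b.jpg
DIR=demo_bg
DURATION=1795     # seconds each wallpaper stays
TRANSITION=5      # seconds per cross-fade
{
  echo '<background>'
  echo '  <starttime><year>2010</year><month>01</month><day>01</day><hour>0</hour><minute>0</minute><second>0</second></starttime>'
  prev=""; first=""
  for img in "$DIR"/*.jpg "$DIR"/*.png; do
    [ -e "$img" ] || continue
    [ -z "$first" ] && first="$img"
    [ -n "$prev" ] && echo "  <transition><duration>$TRANSITION</duration><from>$prev</from><to>$img</to></transition>"
    echo "  <static><duration>$DURATION</duration><file>$img</file></static>"
    prev="$img"
  done
  # Loop back from the last image to the first
  [ -n "$first" ] && [ "$prev" != "$first" ] && \
    echo "  <transition><duration>$TRANSITION</duration><from>$prev</from><to>$first</to></transition>"
  echo '</background>'
} > "$DIR/Background.xml"
```

The resulting demo_bg/Background.xml can be picked in "Change desktop background" the same way as described above.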
I have installed 10.04 LTS on an HP g6 notebook from the live CD across multiple partitions, only to find that the partition table is not set up correctly. I placed the following mount points on separate partitions:
Two part question: 1. what tools are recommended for designing web pages? 2. What tool sets are recommended for maintaining them?
I suspect that the first question really addresses the second rather than the literal question because of the source of the request. Here's the environment. A small church wants to post and maintain a website. Various non-tech persons will be responsible for maintaining much of the content of the site. This tells me that they want/need a site that contains the necessary content maintenance tools within the site itself, not a tool on the individual desktops with the only real need for the design/dev tools being for initial construction of the site and a GOOD book on site design to guide construction of the site in the first place.
I have recently installed Slackware 13 (64-bit) (kernel 2.6.29.6) on an Acer Veriton M series with an AMD Athlon CPU. I have started the ntpd server, and from its log file it seems to be synchronising to each of the three stratum 2 servers specified in the ntp.conf file.
25 Mar 18:27:02 ntpd[4542]: time reset +2.163491 s
25 Mar 18:33:08 ntpd[4542]: synchronized to 130.88.200.4, stratum 2
25 Mar 18:37:25 ntpd[4542]: synchronized to 130.88.200.6, stratum 2
25 Mar 18:42:59 ntpd[4542]: time reset +2.282097 s
25 Mar 18:49:21 ntpd[4542]: synchronized to 130.88.200.6, stratum 2
25 Mar 18:50:17 ntpd[4542]: synchronized to 130.159.196.118, stratum 2
25 Mar 18:55:57 ntpd[4542]: synchronized to 130.88.200.6, stratum 2
25 Mar 18:58:57 ntpd[4542]: time reset +2.147508 s
If I invoke ntpdate from another Linux machine, ntpdate reports that the stratum of the ntpd server is too high and won't use it.
transmit(192.168.175.8)
receive(192.168.175.8)
transmit(192.168.175.8)
receive(192.168.175.8)
transmit(192.168.175.8)
192.168.175.8: Server dropped: strata too high
server 192.168.175.8, port 123
stratum 16, precision -20, leap 11, trust 000
refid [192.168.175.8], delay 0.02577, dispersion 0.00000
transmitted 4, in filter 4
reference time:    cf562778.a310c705  Thu, Mar 25 2010 18:18:32.636
originate timestamp: cf562cdd.048cbee3  Thu, Mar 25 2010 18:41:33.017
transmit timestamp:  cf562cd0.56618ce2  Thu, Mar 25 2010 18:41:20.337
filter delay:  0.02579  0.02577  0.02579  0.02579  0.00000  0.00000  0.00000  0.00000
filter offset: 12.68024 12.68024 12.68024 12.68024 0.000000 0.000000 0.000000 0.000000
delay 0.02577, dispersion 0.00000
offset 12.680248
25 Mar 18:41:20 ntpdate[9683]: no server suitable for synchronization found
I have run ntpd on another platform with Slackware 12 (32bit) and never had any problems.
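A note that may help the diagnosis (offered as a sketch, not a certain fix): stratum 16 is ntpd's "unsynchronised" marker, so the server is answering queries while refusing to be used as a time source, which is exactly why the client's ntpdate drops it. The repeated +2-second time resets are the real clue: that much drift is far beyond what ntpd can slew away, so it keeps stepping the clock and never stays synchronised long enough to advertise a real stratum.

```shell
# Watch the peers table: a '*' in the first column marks the selected
# source, and "reach" should climb to 377 and stay there.
PEERS=$(ntpq -p -n 2>/dev/null) || true
[ -n "$PEERS" ] || PEERS="ntpq gave no answer on this machine"
echo "$PEERS"
# If the resets continue, a bad kernel timer source is a common culprit
# on this era of hardware; booting with e.g. clocksource=hpet is an
# assumption worth testing, not a guaranteed cure.
```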