To make an RPC call I need to send an XML file as POST data. I know how to do this with wget, and it works fine when I already have the XML filled in (depending on the node values, the response from the call differs). However, I want to be able to edit part of this file and then send it as POST data using wget. I can edit the file with sed (I don't want to rewrite the files each time this gets used; it gets used a lot, with a lot of different values).
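A minimal sketch of one way to do this, assuming a template file request.xml.tmpl with a placeholder @VALUE@ (both names are made up for illustration):
Code:
# substitute the value, then post the result; nothing permanent is rewritten
sed 's/@VALUE@/42/' request.xml.tmpl > /tmp/request.xml
wget --header='Content-Type: text/xml' --post-file=/tmp/request.xml -O response.xml 'http://example.com/rpc'
--post-file sends the file body as the POST payload; for small documents you could also feed the sed output straight into --post-data with command substitution.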
I've been pulling my hair out trying to get wget to post data to a web page so I can automatically download some files. I've tried many syntax variations, but wget always downloads the HTML of the login page. A snippet of code I found in the login HTML page is below. Some of the characters are Japanese, because it's a Japanese website.
I would like to find out how to use both curl and wget to send an HTTP POST to get the hostnames of a few servers. I know I haven't shown any work of my own, but that's because I'm really lost and don't even know how to start.
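A minimal sketch of both tools, assuming a hypothetical endpoint http://server.example/api that returns the hostname when it receives a POST with a field named action (both the URL and field name are made up):
Code:
# curl version
curl -d 'action=hostname' 'http://server.example/api'
# wget version, -O - prints the response to stdout
wget --post-data='action=hostname' -O - 'http://server.example/api'
Both send an application/x-www-form-urlencoded POST body; the real field names and URL depend on the service being queried.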
I am using Ubuntu 10.04. When I download something using wget, like wget [URL], the page gets downloaded. Second thing: with sudo apt-get install perl-doc I installed the documentation for Perl, and I have the same for PostgreSQL... how do I use this Perl documentation to learn Perl?
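Once perl-doc is installed, the perldoc command is the usual way to read it. A few starting points:
Code:
perldoc perlintro    # short introduction to the language
perldoc perltoc      # table of contents of all the docs
perldoc -f print     # documentation for a single built-in function
These open in a pager like man pages do, so you can read them straight in the terminal.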
I'm trying to download all the data under this directory using wget: [URL] From what I've read this should be possible with the --recursive flag. Unfortunately, I've had no luck so far. The only files that get downloaded are robots.txt and index.html (which doesn't actually exist on the server), and wget does not follow any of the links in the directory listing. The command I've been using is: Code: wget -r *ttp://gd2.mlb.***/components/game/mlb/year_2010/
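One common cause is that the site's robots.txt forbids recursion and wget honours it by default. A sketch of a recursive fetch that ignores it (URL stands for the actual directory address):
Code:
# -np stops the crawl from walking up to parent directories,
# --wait=1 is just politeness toward the server
wget -r -np -e robots=off --wait=1 URL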
I'm trying to use wget to retrieve some data from our tape backup utility (HP Command View 1/8 G2 Autoloader). The URL requires two parameters for the info I want. I have searched for a few hours and tried numerous combinations, but the parameters don't seem to be passed through. I have escaped the URL as well.
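A common gotcha here is the shell treating the & between the two parameters as a background operator, so everything after it never reaches wget. A sketch, with hypothetical parameter names page and view and a placeholder host:
Code:
# single quotes keep the shell from interpreting & and ?
wget -O status.html 'http://autoloader.example/status.cgi?page=1&view=full'
Alternatively, escape the ampersand as \& in an unquoted URL.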
I am trying to connect to the web interface found at [URL] using curl. This first requires login information to be entered at [URL], but I am having an issue with the login process. I am trying to submit the following form via POST:
Code:
<form action="j_security_check" method="post" id="login_form" name="login_form">
  <center>
    <table style="background: #cac1cf;FONT-SIZE: 12px;">
      <tr>
        <td align="center" colspan="2">Please enter your username and password:</td>
      </tr>
      <tr>
        <td align="right">Username</td>
        <td>
          <input name="j_username" style="width: 250px" id="j_username" type="text"/>
        </td>
      </tr>
      <tr>
        <td align="right">Password</td>
        <td>
          <input style="width: 250px" name="j_password" id="j_password" type="password"/>
        </td>
      </tr>
      <tr>
        <td colspan="2" align="center">
          <input value="Enter" name="enter" type="submit"/>
          <input value="Clear" name="Clear" type="reset"/>
        </td>
      </tr>
    </table>
  </center>
</form>
The command that I am using for this is the following:
Code: curl -c cookies -b cookies -L -d "j_username=user%40domain.com&j_password=pass" [URL] The command is properly formatted as far as I can tell. I tested it against another website with a similar authentication scheme (using different POST variables specific to that form) and it worked fine.
When I run the above command with the -v tag, it reveals this:
Code:
* Connected to lcl.uniroma1.it (151.100.4.74) port 80 (#0)
> POST /sso/j_security_check HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: lcl.uniroma1.it
> Accept: */*
> Content-Length: 44
> Content-Type: application/x-www-form-urlencoded
>
} [data not shown]
< HTTP/1.1 408 The time allowed for the login process has been exceeded. If you wish to continue you must either click back twice and re-click the link you requested or close and re-open your browser
< Date: Sat, 29 Jan 2011 15:26:41 GMT
< Server: Apache-Coyote/1.1
< Content-Type: text/html;charset=utf-8
< Content-Length: 1554
< Connection: close
<
{ [data not shown]
103  1554  100  1554    0    52   5081    170 --:--:-- --:--:-- --:--:-- 10223
* Closing connection #0
I cannot tell why the login timeout expires when I try this, and my investigation so far has been fruitless. I saw a brief snippet on Google that vaguely suggested the underscores in the domain name were at fault, but replacing them with their encoded counterparts did nothing to resolve the issue (and underscores should be fine when sent unencoded, according to the standards). I have extensively perused the man pages and come up with nothing that adequately explains this behavior. I also talked to a friend who has worked with curl in his line of work, but he mostly has experience in the context of PHP and has not dealt with this issue before. I am running GNU/Linux 2.6.35-22-generic-pae.
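One thing worth checking: with Tomcat-style form authentication (Apache-Coyote and j_security_check suggest that here), you normally have to request a protected page first so the container creates a session and remembers where to send you after login; POSTing straight to j_security_check tends to produce exactly this 408. A sketch of that two-step flow, with the protected URL left as a placeholder:
Code:
# step 1: hit the page that originally redirects you to the login form,
#         so the server issues a JSESSIONID cookie
curl -c cookies -b cookies -L -o /dev/null 'http://lcl.uniroma1.it/PROTECTED_PAGE'
# step 2: submit the credentials within the same cookie jar
curl -c cookies -b cookies -L -d 'j_username=user%40domain.com&j_password=pass' 'http://lcl.uniroma1.it/sso/j_security_check'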
I edited the .bashrc file incorrectly on my Ubuntu system while trying to add an export line for javac. Now when I type sudo, it says: Command 'sudo' is available in '/usr/bin/sudo' The command could not be located because '/usr/bin' is not included in the PATH environment variable. sudo: command not found
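The usual cause is an export line that replaced PATH instead of appending to it. A sketch of recovering in the current shell and then fixing the file (the JDK path is just a placeholder):
Code:
# restore a sane PATH for this session only
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# now editors and sudo work again
nano ~/.bashrc
In .bashrc the line should read something like export PATH=$PATH:/path/to/jdk/bin, so the existing directories are kept rather than overwritten.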
If a wget download is interrupted (for example if I have to shut down prematurely), I get a wget.log along with the partial download. How can I later resume the download using the data in wget.log? I have searched high and low (including the wget manual) and cannot find how to do this. Is it so obvious that I did not see it? The wget -c option with wget.log as its argument does not work. What I do now is open wget.log, copy the URL, paste it into the command line and run wget again. This works, but the download starts from the beginning, which means nothing in wget.log is used.
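For what it's worth, -c does not take a filename argument; it simply tells wget to continue a partially downloaded file. A sketch of resuming, assuming the partial file is still in the current directory under its original name (the URL is a placeholder):
Code:
# rerun in the same directory; -c continues the partial file, -o writes the log
wget -c -o wget.log 'http://example.com/big.iso'
wget compares the local partial file with the server copy and continues from where it stopped (the server must support range requests); wget.log itself is only the log, not the download data.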
I think what I need to do is update the certificate for Apache2, but I'm not sure how to do this, where to put it, or which of the thousand Apache config lines needs to be changed.
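A minimal sketch, assuming a stock Debian/Ubuntu Apache layout where the SSL virtual host lives in /etc/apache2/sites-available/default-ssl (file names and paths below are placeholders):
Code:
# inside the SSL <VirtualHost> block
SSLCertificateFile    /etc/ssl/certs/mysite.crt
SSLCertificateKeyFile /etc/ssl/private/mysite.key
After copying the new certificate and key to those locations, reload Apache with sudo service apache2 reload (or apache2ctl graceful) so it picks them up.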
I need to mirror a website. However, each of the links on the site's pages is actually a 'submit' to a CGI script that returns the resulting page. AFAIK wget should fail on this, since it needs static links.
For some odd reason, I cannot post on the Ubuntu forum or the Linux Mint forum. Yeah, I know... the irony... I am using Mozilla and have tried Chromium, but that did not fix my problem. When I click "submit" to post a thread, the page just says "loading..." and nothing happens for a really long time. Does anyone know what is up? I tried posting on one other forum that I visit often and it seems to work fine. I haven't tried any other forums though.
I have a computer running Linux with several network cards, for example: eth0, eth1, eth2, eth3. Is there some way to run a downloader, like aria2 or wget, through only one interface, for example eth0?
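Both tools can be told which local address to use. A sketch, assuming eth0 has the address 192.168.1.10 (substitute your own, and note aria2's --interface option only exists in newer versions):
Code:
# wget binds to a local IP address or hostname
wget --bind-address=192.168.1.10 'http://example.com/file.iso'
# newer aria2 accepts an interface name, IP, or hostname
aria2c --interface=eth0 'http://example.com/file.iso'
Keep in mind that binding the source address does not by itself control routing; the kernel's routing table still decides which card the packets leave on, which is where the iptables/policy-routing issue comes in.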
Main problem: for some reason I can't use iptables
I'm doing this wget script called wget-images, which should download images from a website. It looks like this now:
wget -e robots=off -r -l1 --no-parent -A.jpg
The thing is, in the terminal when I run ./wget-images www.randomwebsite.com, it says
wget: missing URL
I know it works if I put the URL into the script file and then run it, but how can I make it work without hard-coding any URLs in the file? I want to pass the link on the command line and have the script understand that I want the pictures from that link, given as a parameter.
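A minimal sketch of the script taking the URL as its first argument ($1 is the first command-line parameter in a shell script):
Code:
#!/bin/sh
# usage: ./wget-images http://www.example.com/
wget -e robots=off -r -l1 --no-parent -A.jpg "$1"
Then ./wget-images www.randomwebsite.com passes the address straight through to wget.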
I use this command to download: wget -m -k -H URL... but if some file can't be downloaded, wget retries it again and again. How can I skip that file and download the other files?
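A sketch that limits retries so a failing file is given up on quickly (URL stands for your own address):
Code:
# --tries sets how many attempts per file, --timeout avoids hanging on stalls
wget -m -k -H --tries=2 --timeout=30 URL
By default wget retries each file up to 20 times, which is why a broken link can stall the whole mirror.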
I need to download about 100 packages, so I'm using a wget list file to make it easier. My question, however, is: once I've made the list (I assume it's a plain .txt file), is there a way I can insert comments into it that wget will ignore? Something like this:
#This is a comment
http://someurl.com
http://anotherurl.com
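As far as I know, wget's -i option treats every line of the file as a URL and has no comment syntax, but you can strip comments on the fly. A sketch, assuming the list is named wget-list.txt:
Code:
# drop every line starting with #, feed the rest to wget via stdin
grep -v '^#' wget-list.txt | wget -i -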
I have a crontab that wgets a PHP page every five minutes (just to run some PHP code), and I want to send the output to /dev/null. I couldn't find how in the wget manual.
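A sketch of a crontab line that throws away both the downloaded page and wget's own messages (the URL is a placeholder):
Code:
# -O /dev/null discards the fetched page, -q silences wget itself
*/5 * * * * wget -q -O /dev/null 'http://example.com/cron.php'
Appending > /dev/null 2>&1 would also stop cron from mailing any remaining output.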
I'm trying to access a site through a Perl script for a project of mine, and I use a system call to run wget.
The login form is this
Code:
I mean, should I add all the hidden fields to --post-data? Should I try using Perl's MD5 function for the last two fields? Does anyone have an idea which elements I should be sending along with --post-data?
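Generally, every input in the form, hidden ones included, needs to appear in --post-data with whatever value the server rendered into the page, since such tokens are usually generated per session. A sketch with made-up field names (user, pass, token are hypothetical, as is the URL):
Code:
# keep the session cookie so the server can match the token to your visit
wget --keep-session-cookies --save-cookies cookies.txt \
     --post-data 'user=me&pass=secret&token=VALUE_COPIED_FROM_PAGE' \
     'http://example.com/login'
If the last two fields are MD5 digests that the site normally computes in JavaScript, you would indeed have to reproduce that computation (Digest::MD5 in Perl) before building the POST string.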
Is there a way to --load-cookies from mozilla or something similar instead of creating new cookies with wget?
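wget's --load-cookies expects a Netscape-format cookies.txt file. Older Mozilla/Firefox versions kept exactly that file in the profile directory, so you can point wget at it directly; newer ones store cookies in cookies.sqlite, and you need to export them to the text format first (browser add-ons exist for this). A sketch, with the profile directory as a placeholder:
Code:
wget --load-cookies ~/.mozilla/firefox/PROFILE.default/cookies.txt 'http://example.com/members/file.zip'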
I would like to use wget to download a file from a Red Hat Linux server to my Windows desktop. I tried some parameters but it still doesn't work. Can anyone advise whether wget can download a file from a Linux server to a Windows desktop, and if yes, how to do it?
I am trying to download data/files from a web server where htpasswd authentication has been set up. I tried with a browser and it works fine, but when I try the same with wget it doesn't work. How do I download the file? Below is the command I am using: [URL]... admin[:]password (the colon is written as [:] so the forum doesn't turn it into a smiley)
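A sketch using wget's authentication options instead of embedding the credentials in the URL (hostname and path below are placeholders):
Code:
# supply the htpasswd credentials explicitly
wget --user=admin --password='secret' 'http://server.example/protected/file.tar.gz'
On older wget versions the equivalent options are --http-user and --http-passwd; the same flags also work if the server uses digest rather than basic authentication.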
When I want to use wget to download a file over HTTP, which conditions have to be fulfilled on the server for that to succeed? I mean things like the httpd service running, and so on.
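A quick way to check those conditions from the client side is a spider request, which asks for the headers without downloading the body (the URL is a placeholder):
Code:
# -S prints the server's response headers, --spider skips the actual download
wget --spider -S 'http://server.example/path/file.iso'
If the web server is running, the file exists, and permissions allow access, this shows an HTTP 200 response; otherwise the error (connection refused, 403, 404) points to the condition that is missing.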
A few months ago I set up a server with three hard disks. The partition mapping of the disks is as follows:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x7ca36fee
[code]....
Now I have the following problem: the LVM file system doesn't mount properly. If I open the mount point I see only a few files of the LVM disk. If I try to unmount the disk I get the following error:
umount /data/
umount: /data/: not mounted
If I try to mount the volume I get the following error:
mount -a
mount: /dev/mapper/gegevens-Data already mounted or /data busy
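A few commands that usually narrow this down: check whether the volume group and logical volume are active, see what is holding the mount point busy, and then try mounting the LV explicitly. The names below follow the /dev/mapper/gegevens-Data shown in the error; adjust them to your setup:
Code:
sudo vgchange -ay gegevens        # activate the volume group
sudo lvdisplay                    # confirm the LV status is "available"
sudo fuser -vm /data              # list processes keeping /data busy
sudo mount /dev/mapper/gegevens-Data /data
The "already mounted or busy" message often means something else (another filesystem or a stale process) is sitting on /data, rather than the logical volume itself being broken.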
I'm trying to download two sites for inclusion on a CD: URL... The problem I'm having is that these are both wikis. So when downloading with e.g.: wget -r -k -np -nv -R jpg,jpeg,gif,png,tif URL... Does somebody know a way to get around this?
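With MediaWiki-style wikis the recursion usually drowns in the dynamic links (edit, history, special pages) rather than the article content. A sketch of trimming those, assuming a reasonably recent wget with --reject-regex support and a MediaWiki-like URL layout:
Code:
# the reject list must be one comma-separated word with no spaces
wget -r -k -np -nv -R 'jpg,jpeg,gif,png,tif' \
     --reject-regex '(action=|Special:|oldid=)' URL
Also note that if the -R list contains spaces, wget treats the pieces after the first space as extra URLs, which alone can derail the download.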