Software :: CURL Post Data Command From Shell - HTTP Error Code 408
Jan 29, 2011
I am trying to connect to the web interface found at [URL] using curl. This first requires login information to be entered at [URL], but I am having an issue with the login process. I am trying to submit the following form via POST:
Code:
<form action="j_security_check" method="post" id="login_form" name="login_form">
  <center>
    <table style="background: #cac1cf; FONT-SIZE: 12px;">
      <tr>
        <td align="center" colspan="2">Please enter your username and password:</td>
      </tr>
      <tr>
        <td align="right">Username</td>
        <td><input name="j_username" style="width: 250px" id="j_username" type="text"/></td>
      </tr>
      <tr>
        <td align="right">Password</td>
        <td><input style="width: 250px" name="j_password" id="j_password" type="password"/></td>
      </tr>
      <tr>
        <td colspan="2" align="center">
          <input value="Enter" name="enter" type="submit"/>
          <input value="Clear" name="Clear" type="reset"/>
        </td>
      </tr>
    </table>
  </center>
</form>
The command that I am using for this is the following:
Code:
curl -c cookies -b cookies -L -d "j_username=user%40domain.com&j_password=pass" [URL]
The command is properly formatted as far as I can tell. I tested it with another website using a similar authentication scheme using different POST variables specific to the form and it worked fine.
When I run the above command with the -v flag, it reveals this:
Code:
* Connected to lcl.uniroma1.it (151.100.4.74) port 80 (#0)
> POST /sso/j_security_check HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: lcl.uniroma1.it
> Accept: */*
> Content-Length: 44
> Content-Type: application/x-www-form-urlencoded
>
} [data not shown]
< HTTP/1.1 408 The time allowed for the login process has been exceeded. If you wish to continue you must either click back twice and re-click the link you requested or close and re-open your browser
< Date: Sat, 29 Jan 2011 15:26:41 GMT
< Server: Apache-Coyote/1.1
< Content-Type: text/html;charset=utf-8
< Content-Length: 1554
< Connection: close
<
{ [data not shown]
103  1554  100  1554    0    52   5081    170 --:--:-- --:--:-- --:--:-- 10223
* Closing connection #0
I cannot tell why the login timeout has expired when I try this, and my investigation has so far been fruitless. I saw a brief snippet on Google vaguely suggesting that the underscores in the domain name were at fault, but replacing them with their percent-encoded counterparts did nothing to resolve the issue (and, per the standards, underscores should be fine sent unencoded). I have extensively perused the man pages and have come up with nothing that adequately explains this behavior. I also talked to a friend who has worked with curl professionally, but his experience is mostly in the context of PHP and he has not dealt with this issue before. I am running GNU/Linux 2.6.35-22-generic-pae.
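One thing worth trying: Tomcat's form authentication typically returns this 408 when the POST to j_security_check arrives without an established session, because the container no longer remembers which protected page was requested. A minimal sketch of a workaround, assuming the login page (the URL below is a placeholder) sets a JSESSIONID cookie on a plain GET first:
Code:
# First GET the protected/login page so the server creates a session
# and stores it in the cookie jar (placeholder URL):
curl -c cookies -b cookies -L http://lcl.uniroma1.it/sso/login.jsp
# Then POST the credentials within that same session:
curl -c cookies -b cookies -L -d "j_username=user%40domain.com&j_password=pass" http://lcl.uniroma1.it/sso/j_security_check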
I would like to find out how to use both curl and wget to send an HTTP POST to get the hostnames of a few servers. I know I have not shown any work of my own, but the reason is that I am really lost and do not even know how to start.
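For the general shape of the request, both tools take the POST body as a simple option; a minimal sketch, where the URL and form field are hypothetical stand-ins for whatever the servers actually expect:
Code:
# POST a form field and print the response (placeholder URL and field):
curl -d "action=hostname" http://server1.example.com/info
# The wget equivalent; -O - writes the response to stdout:
wget --post-data="action=hostname" -O - http://server1.example.com/info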
When I execute the command from the shell command line, it works with no error code. If I do the exact same command from a Perl file, it fails with code 32512. The file is created from the same Perl script that runs the command that fails. File permission is 0664.
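A note on that number: Perl's system() returns the raw 16-bit wait status, not the exit code itself, so the interesting value is the high byte. A quick check in the shell:
Code:
# 32512 is a raw wait status; the real exit code is the high byte:
echo $(( 32512 >> 8 ))   # prints 127, the shell's "command not found" code
Exit code 127 usually means the command isn't on the PATH that Perl's shell sees, so an absolute path to the binary (e.g. /usr/bin/curl) is worth trying.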
I think what I need to do is update the certificate for Apache2, but I'm not sure how to do this, where to put it, or which of the thousand Apache config lines needs to be changed.
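A minimal sketch of the usual procedure, assuming Debian/Ubuntu-style paths (adjust for your layout); the only config lines that matter are the SSLCertificate* directives in the SSL virtual host:
Code:
# Copy the new certificate and key into place (paths are assumptions):
sudo cp new-server.crt /etc/ssl/certs/server.crt
sudo cp new-server.key /etc/ssl/private/server.key
# Find which vhost file references the certificate:
grep -r SSLCertificateFile /etc/apache2/
# Check the config and reload:
sudo apache2ctl configtest && sudo /etc/init.d/apache2 reload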
We're having an issue with HTTP POST file uploads on our two Ubuntu PCs. For some reason, whenever one of our users attempts to submit a file in an HTML form, the request times out, usually with a 500 Internal Server Error message. This problem is not limited to one site, but occurs on all sites that use file uploads. Also, the problem does not appear to be with our network, as a Windows 7 PC on the same network can upload files to the same sites without any difficulties. The problem is not browser-specific; we have tested with Firefox, Epiphany, and Google Chrome and all produce the same results. The issue is relatively new, and was first observed within the last month; before this time, both machines had no problems uploading files.
Does anyone have ANY idea what could be causing this? I've tried a number of things, including rebooting the PCs, rebooting the network, disabling IPv6, etc. I'm not very experienced in Linux system administration, but I can use the terminal and am familiar with some terminal-based diagnostic tools, so if you need any additional info or want me to try something, please let me know! I've exhausted my own computer knowledge with regards to finding a solution to this problem.
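One way to narrow it down is to take the browser out of the loop entirely and try a multipart upload from the terminal; if this also stalls, the problem lies below the browser (at the network or system level). The URL and field name here are placeholders:
Code:
# Verbose multipart POST from the command line (placeholder URL/field):
curl -v -F "upload=@/path/to/test.jpg" http://example.com/upload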
I'm trying to send files from a Unix server using HTTP/curl to a Linux web server running Apache. I get the following error message and the file does not send:
<title>405 Method Not Allowed</title>
</head><body>
<h1>Method Not Allowed</h1>
<p>The requested method PUT is not allowed for the URL
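Apache refuses PUT by default; something on the server has to be configured to accept it. One common approach, assuming you control the Apache config, is to enable WebDAV for the target directory, after which the upload is a plain curl -T:
Code:
# Server side (sketch): in the Apache config, enable DAV for the upload dir:
#   <Directory /var/www/uploads>
#       Dav On
#   </Directory>
# (requires mod_dav; "a2enmod dav dav_fs" on Debian-style systems)
# Client side, from the Unix server:
curl -T file.tar.gz -u user:pass http://webserver.example.com/uploads/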
There are a few web databases (including my own PHP-based PDF manipulator) where I need to fill in an HTML form and upload file attachments.
About one year ago, these sites stopped working correctly in Firefox (they still work from Internet Explorer). The problem concerns file upload. Other users here have experienced this too, and no Firefox update has corrected the problem in the past year (I am using Firefox 3.6.9 now, and the problem is still there).
When debugging my PDF creator, I found that the attachment type of any file uploaded by Firefox is "text/html", regardless of the actual type of the uploaded file, whereas files uploaded by IE have the correct attachment type.
I am using the following software stack: Linux version 2.6.32-21-generic (buildd@yellow) (gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)) #32-Ubuntu SMP Fri Apr 16 08:09:38 UTC 2010 (Ubuntu 2.6.32-21.32-generic 2.6.32.11+drm33.2) (Kubuntu); Cisco AnyConnect VPN client 2.3.2016; Mozilla Firefox 3.6.8.
The problem I have is that once I join my company VPN, I have full access to corporate services (Confluence, JIRA, servers, etc.). However, when I use Firefox to try to resolve a JIRA issue and post a body of text, the connection times out.
If I use any other browser it works fine (if slowly), if I transition workflows it works fine, and if I use Windows and Firefox with the same Cisco client it works fine.
This appears to be an issue specific to Firefox. I have noticed that in general Firefox is slower on Ubuntu than on any other platform.
Simple Scan error as follows: Failed to save file. ImageMagick returned error code 11. Command line: convert -adjoin /tmp/simple-scan-DA9MBV.jpg /tmp/simple-scan-XCK4BV.jpg /tmp/simple-scan-NZVYBV.pdf. Stdout and stderr were empty. Using Karmic. Note: I have the AppArmor extra profiles installed, but didn't notice one relating to Simple Scan or ImageMagick. Red herring or not?
I ran across the above article, which described a DoS attack in which requests are sent very slowly to the Web server. I'm running lighttpd 1.4.28 on a Gentoo Linux server, and I'm wondering if there is anything I could do in preparation to defend against such an attack.
A bug report [url] seems to indicate that there was a patch in place already against this sort of attack, but I wanted to be sure that was the same thing and if there was anything else I needed to do.
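For what it's worth, lighttpd does expose idle timeouts that bound how long a slow client can keep a connection open mid-request; a sketch, with illustrative values rather than tuned recommendations:
Code:
# In lighttpd.conf (values are examples only):
server.max-read-idle  = 30   # seconds a client may stall while sending a request
server.max-write-idle = 60   # seconds a client may stall while receiving a response
Whether this covers the same attack as the patch mentioned in the bug report is worth confirming with the lighttpd developers.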
I have a really weird (but consistent) problem with my Kubuntu 10.10 install: I cannot post some HTTP forms.
First off, this is a client PC problem. My SquirrelMail on the server works fine; I just use SquirrelMail 1.4.17 to troubleshoot the desktop problem.
I used an old Ubuntu install (7.04), which worked fine. Then I wiped the disk and installed Kubuntu 10.10 on the same hardware. Everything works, but **some** HTTP POSTs do not (I can log in but not send mail or save a draft). I noticed I cannot log in to Yahoo, for example.
My web hosting account can display the Apache access_log. When I hit the <Send> button, the POST request never arrives at the web server.
I use a router (D-Link DL-604) behind a DSL modem and an Ooma box. There is a Windows 7 PC and a Kubuntu PC connected to the router. I can use SquirrelMail just fine from the Windows PC.
I tried several steps: reinstalled Kubuntu; installed Firefox and Chromium (on top of rekonq); ran from a live CD on my other (Windows 7) PC; installed Wireshark and compared the traffic (but was unable to pinpoint a problem).
The result was the same: the <Send> button just keeps waiting; the POST request never makes it to the web server.
This sounds (and is) scary and suspect. The fact that the "demo" Kubuntu install (from the CD on my other, Windows PC) using rekonq exhibits the same problem on totally different hardware leads me to believe this may be related to Kubuntu. For example, I had to type this very message on the Windows PC, as I could not post it on the forum from my Kubuntu box.
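One classic cause of exactly this pattern (small requests go through, larger POSTs silently vanish) is a path-MTU problem, which can differ between operating systems on the same LAN. A couple of hedged experiments from the Kubuntu box (host and interface names are placeholders):
Code:
# Test whether full-size packets survive with the don't-fragment bit set
# (1472 bytes of payload + 28 bytes of headers = 1500):
ping -M do -s 1472 www.example.com
# As an experiment, lower the interface MTU and retry the POST:
sudo ip link set dev eth0 mtu 1400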
I'm having a few problems coding a post-install script for my custom RPM package. I'm putting the script directly in the %post section of the spec file. For example, if I wanted to add a user after the package is installed, I would add code along the lines sketched below. The problem is capturing the output of the commands, which should be stored in the variable.
The problem seems to be that myuser is always null, even if bob exists in /etc/passwd. What's wrong with this code, and should I use an external script instead?
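Without the original snippet it's hard to say exactly, but a sketch of a %post section that captures output robustly might look like this (the user name and test are assumptions; note that %post runs under /bin/sh, where quoting and backtick mistakes commonly yield an empty variable):
Code:
%post
# Capture the lookup; $() nests more safely than backticks:
myuser=$(getent passwd bob | cut -d: -f1)
if [ -z "$myuser" ]; then
    useradd -r bob
fi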
Is there any curl API to enable only the required protocols? If I have a proper OpenSSL installed, the installed curl will support all the protocols (HTTP, HTTPS, FTP, FILE, etc.) by default. Is there any way to allow or disallow only some of the protocols at runtime? Say I need to support only HTTPS and FILE and I don't want to allow HTTP. Is there any way to do this?
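Yes, both the tool and the library can restrict protocols at runtime. On the command line (the --proto flag exists since curl 7.20.2):
Code:
# Disable everything, then re-enable just HTTPS and FILE:
curl --proto -all,https,file https://example.com/
# In libcurl the equivalent is setting CURLOPT_PROTOCOLS to
# CURLPROTO_HTTPS | CURLPROTO_FILE (available since 7.19.4).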
I've got a CGI that I'm trying to debug. Apache gives me an ambiguous 500 error; it would be nice to see the raw output via the shell. I've got the POST request with headers as follows. What's the best way to troubleshoot this?
POST /cgi/packBoxes.cgi HTTP/1.1
Host: 70.87.60.214
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
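A CGI program is just a process that reads environment variables and stdin, so it can be run directly from the shell to see the raw output that Apache hides behind the 500. The body and its length below are placeholders; Apache's error_log usually carries the real reason as well:
Code:
# Simulate the POST the server would deliver (placeholder body):
export REQUEST_METHOD=POST
export CONTENT_TYPE=application/x-www-form-urlencoded
export CONTENT_LENGTH=12
echo -n "box=1&item=2" | ./packBoxes.cgi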
I've been pulling my hair out trying to get wget to POST data to a web page to automatically download some files. I've tried many variations of the syntax, but wget always downloads the HTML of the login page. A snippet of code I found in the login HTML page is below; some of the characters are Japanese, because it's a Japanese website.
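The usual reason for getting the login page back is that the session cookie from the login step isn't carried into the download step. A sketch of the two-step pattern (field names and URLs are placeholders; read the real ones from the form's <input name=...> attributes):
Code:
# Step 1: log in, saving the session cookie:
wget --save-cookies cookies.txt --keep-session-cookies \
     --post-data 'username=USER&password=PASS' \
     http://example.jp/login
# Step 2: fetch the files with that cookie:
wget --load-cookies cookies.txt http://example.jp/files/target.zip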
I have two SUSE Linux servers. Recently we got the license key. On one server I was able to register successfully; the other server throws an error. I am using YaST -> Software -> Novell Customer Center Configuration. After selecting "Configure Now", I get the following error: Execute curl command failed with 7: curl: (7) couldn't connect to host.
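curl exit code 7 simply means the TCP connection to the registration server could not be established, so this is a network-level problem rather than a license problem. A couple of quick checks from the failing server (the hostname below is a placeholder for whatever the registration tool actually contacts; proxy settings matter too):
Code:
# Can we reach the registration host at all? (placeholder hostname)
curl -v https://secure-www.novell.com/
# Is a proxy configured / required on this network?
env | grep -i proxy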
I need to download a file from a website which has a URL formatted like:
[URL]
This redirects to a .zip file which has to be saved. There is also a need to authenticate based on username and password.
I tried to use wget, curl and lynx with no luck.
UPDATE:
wget doesn't handle the redirection; it simply downloads the web page instead of the zip file. curl gives the error "Maximum redirection exceeded > 50", and lynx gives the same error.
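A redirect loop like that usually means the site sets a session cookie on the first response and redirects back whenever the next request arrives without it, so following redirects only helps if cookies are kept too. A sketch combining redirects, cookies, and authentication (the URL is a placeholder):
Code:
# -L follows redirects, the cookie jar keeps the session, -u sends credentials:
curl -L -c cookies.txt -b cookies.txt -u user:pass \
     -o file.zip "http://example.com/getfile?id=123"
# The wget equivalent:
wget --user=user --password=pass \
     --save-cookies cookies.txt --keep-session-cookies \
     -O file.zip "http://example.com/getfile?id=123"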
I want to prevent code from making http connections to other, specific hosts. My understanding is this can be done in /etc/hosts.deny. What would that look like?
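A note first: /etc/hosts.deny (TCP wrappers) controls *incoming* connections to wrapped services on your machine; it does not block outgoing HTTP. Two common ways to stop outbound connections to specific hosts (the name and address below are placeholders):
Code:
# Blackhole a hostname by resolving it to localhost:
echo "127.0.0.1 ads.example.com" | sudo tee -a /etc/hosts
# Or block an IP outright with iptables:
sudo iptables -A OUTPUT -d 203.0.113.7 -p tcp --dport 80 -j REJECT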
I am very new to shell scripting. How does one pass a command-line parameter to a shell script? For the below program:
Code:
#/bin/bash
mount -t cifs -o user=ramkannan,password=Linux123@ //10.200.1.125/ramkannan /MT
cd /MT/test
date=`/bin/date "+\%Y-\%m-\%d-\%H-\%M-\%S"`
mysqldump -uroot -pram2@ employeedb > $date.sql
gzip $date.sql
I want to pass parameters for everything. I searched Google and tried it, but I am getting an error when passing the parameters:
Code:
#/bin/bash
mount -t cifs -o user=$1,password=$2 //10.200.1.125/ramkannan /MT
cd /MT/test
date=`/bin/date "+\%Y-\%m-\%d-\%H-\%M-\%S"`
mysqldump -uroot -pram2@ employeedb > $date.sql
gzip $date.sql
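Two things stand out: the shebang is missing its "!" (it should be "#!/bin/bash"), and the positional parameters are unquoted, so special characters such as "@" in a password break the mount options. (The "\%" escapes are only needed when the date command sits directly in a crontab line, not inside a script.) A sketch with everything parameterized; the argument order is an assumption:
Code:
#!/bin/bash
# Usage: ./backup.sh <cifs-user> <cifs-password> <mysql-user> <mysql-password>
mount -t cifs -o user="$1",password="$2" //10.200.1.125/ramkannan /MT
cd /MT/test || exit 1
date=$(/bin/date "+%Y-%m-%d-%H-%M-%S")
mysqldump -u"$3" -p"$4" employeedb > "$date.sql"
gzip "$date.sql"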
I have written the following script on my Linux server to add users to the LDAP database, but I can't get it to run.
The script is as follows:
Code:
#!/bin/bash
echo "Mention the username which you want to convert LDIF format"
read username
if ["$username" -e "/ldiffile/passwd"]; then
    echo "Username already exists"
else
    cat /etc/passwd | grep -i "$username" >> /ldiffile/passwd
fi
The output which I got:
Code:
. ldapadd.sh
Mention the username which you want to convert LDIF format
yal2361
-bash: [yal2361: command not found
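The "-bash: [yal2361: command not found" error comes from the missing spaces around the brackets: "[" is itself a command and must be separated from its arguments. Also, -e tests whether a file exists; it cannot check whether a string appears inside a file. grep -q does what was intended. A corrected sketch:
Code:
#!/bin/bash
echo "Mention the username which you want to convert to LDIF format"
read username
# grep -q is silent and just sets the exit status:
if grep -qi "^$username:" /ldiffile/passwd; then
    echo "Username already exists"
else
    grep -i "^$username:" /etc/passwd >> /ldiffile/passwd
fi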
To make an RPC call I need to send an XML file as POST data. I know how to do this with wget; it works fine when I have the XML already filled in (depending on the node values, the response from the call is different). However, I want to be able to edit part of this file and then send that as POST data using wget. I can edit the file using sed (I don't want to rewrite the files each time this gets used, and it gets used a lot, with a lot of different values).
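One way to chain the two is to write sed's output to a temporary file and post that (the node name, URL, and content type are placeholders for whatever the RPC endpoint expects):
Code:
# Substitute the node value into a fresh copy of the template:
sed 's|<value>.*</value>|<value>42</value>|' template.xml > /tmp/request.xml
# POST it as the request body:
wget --post-file=/tmp/request.xml \
     --header='Content-Type: text/xml' \
     -O response.xml http://example.com/rpc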
I'm on Ubuntu 11.04. I have read around about how to use curl to download a list of URLs from a text file, and everyone says to use
Code:
curl -K URLlist.txt
This is what the curl man page says as well. However, for even a simple file with one URL, this command outputs a bunch of weird symbols for me instead of downloading the file. For example, I have a text file "test.txt" with one line in the following format:
Code:
url = "http://www.example.com/image.jpg"
I then use the curl command above to download this file.
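The "weird symbols" are the downloaded image bytes themselves: with no output option, curl writes the body to stdout, and -K only supplies the options listed in the file. Giving each url its own output in the config file fixes it:
Code:
# test.txt -- each url gets its own output file:
url = "http://www.example.com/image.jpg"
output = "image.jpg"
Then "curl -K test.txt" saves the file instead of dumping it to the terminal.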
I have a strange problem with using the curl function in PHP on Slackware machines. So far I have tested this on 2 PCs. I didn't test it on any other distributions, so I don't know if it's a Slackware-only problem.
The problem is that I can't use PHP curl in the normal way. It can be tested with simple code:
Code:
$ch = curl_init("url");
$content = curl_exec($ch);
if (curl_error($ch)) {
    echo(curl_error($ch) . "<br>");
}
echo($content);
When I open this PHP script in a browser, I get the error message:
Code:
Couldn't resolve host 'google.lv'
but I can use the curl command in a terminal, and it also works when I run the PHP script in a terminal like:
Code:
php ./curl_test.php
So the problem is not in curl or PHP itself, but in Apache, because this happens only when the script runs under Apache.
Searching Google a lot, I came to the conclusion that Apache can't read the /etc/resolv.conf file during startup. The strange thing is that it happens only when httpd starts during system startup; if I stop httpd and start it again manually, it works as it should until I reboot the PC. Restarting httpd does not work either; I need to do a separate stop, then start.
I think Apache reads /etc/resolv.conf only when it starts up, and as I use DHCP, maybe the network is still not ready at the moment of httpd startup.
I haven't tried configuring PHP as php-cgi instead of an Apache module. I think it would then work normally, because each PHP script would be a separate process and would read the DNS information each time, the same as starting the script in a terminal with:
Code:
php ./curl_test.php
I think there are ways to work around it, e.g. somehow delaying the start of httpd during system startup, but I want to find the reason for this problem and make it work without any workaround.
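The behavior is consistent with the resolver being initialized from an empty /etc/resolv.conf: glibc reads the file when a process first resolves a name, and if DHCP hasn't written it yet at boot, the httpd workers keep that empty state for their lifetime. A workaround sketch for Slackware's init scripts (the path is an assumption; the cleaner fix is ordering httpd after the network is configured):
Code:
# At the top of the httpd start stanza (e.g. in /etc/rc.d/rc.httpd),
# wait until DHCP has populated the resolver config:
while [ ! -s /etc/resolv.conf ]; do
    sleep 1
done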