Networking :: Craft A Valid HTTP/1.1 Request For Getting HTTP Headers (Not The HTML File Itself)
Sep 27, 2010
Using netcat, nc(1), craft a valid HTTP/1.1 request for getting the HTTP headers (not the HTML file itself!) for the main index page of www dot aalto dot fi. What request method did you use? Which headers did you need to send to the server? What was the status code for the request? Which headers did the server return? Explain the purpose of each header.
nc -v www dot aalto dot fi 8080
HEAD / HTTP/1.1
Host: www dot aalto dot fi
(followed by an empty line to end the request headers)
And it returns:
200 OK
Content-Length: 858
Content-Type: text/html
Last-Modified: Thu, 02 Sep 2010 12:46:01 GMT
[Code]....
I really don't know what this means. Question 2: Using netcat, nc(1), start a bogus web server listening on the loopback interface, port 8080. Verify with netstat(8) that the server really is listening where it should be. Direct your browser to the bogus server and capture the User-Agent: header. I don't understand this question.
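A minimal sketch of how Question 2 could look; the loopback address and port 8080 come from the assignment, the exact nc flags depend on which netcat variant is installed:
Code:
# start a bogus web server listening on the loopback interface, port 8080
# (traditional netcat syntax; BSD netcat uses: nc -l 127.0.0.1 8080)
nc -l -p 8080 -s 127.0.0.1
# in a second terminal, verify that something is really listening on 127.0.0.1:8080
netstat -tln | grep 8080
# finally, point a browser at http://127.0.0.1:8080/ and read the request lines
# that nc prints; one of them is the User-Agent: header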
Hi, in Squid I have blocked some sites like Facebook and others. I want to know: is there any way that, when a user types www.facebook.com into his browser, instead of showing something like the following, it automatically redirects to www.google.com?
Error: The requested URL could not be retrieved. The following error was encountered: Access Denied.
Basically, I want to redirect the HTTP request so that the user does not see the error page; instead, the www.google.com page should open automatically.
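One way this can be done inside Squid itself, sketched below; the ACL name blockedsites and the domain list are assumptions, and deny_info with a URL makes Squid answer with a redirect instead of the error page:
Code:
# squid.conf sketch (ACL name and domains are placeholders)
acl blockedsites dstdomain .facebook.com
http_access deny blockedsites
# when the ACL above denies a request, send a redirect to Google
# instead of the "Access Denied" error page
deny_info http://www.google.com/ blockedsites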
I want to redirect the packets to a proxy server. Can you help me?
Present network:
My internal network ==> switch ==> proxy server ==> router ==> internet. (For internet access I connect through the proxy: web browser ==> LAN settings ==> proxy server IP address.)
What I want is:
My internal network ==> gateway or firewall ==> switch ==> proxy server ==> router ==> internet. (This gateway or firewall is where I can configure forwarding of HTTP requests to the proxy server.)
This is so that I can separate my internal network from the intranet but still be able to access the internet.
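A sketch of what the gateway box could do with iptables, assuming it runs Linux, its LAN-facing interface is eth1, and the proxy listens on 192.168.1.10:3128 (all of these are assumptions); the proxy itself would also have to run in transparent/intercept mode:
Code:
# enable routing on the gateway
echo 1 > /proc/sys/net/ipv4/ip_forward
# send all outbound HTTP from the LAN to the proxy
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:3128
# masquerade the redirected traffic so the proxy's replies flow back through the gateway
iptables -t nat -A POSTROUTING -d 192.168.1.10 -p tcp --dport 3128 -j MASQUERADE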
I have to retrieve an HTTP request from a particular port using libcurl. I'm using localhost. I am done with retrieving the HTTP request using socket programming; how do I start integrating libcurl into simple socket-programming code?
I am forwarding HTTP requests to an internal server. It is quite successful, but the access logs do not show the IP of the external machine; rather, they show the IP of the machine on which I have enabled port forwarding.
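If the forwarding is done with an iptables DNAT rule plus SNAT/MASQUERADE, the SNAT step is what replaces the external IP; a hedged sketch, where the interface name and addresses are assumptions:
Code:
# forward port 80 to the internal server while keeping the original source address
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.20:80
# no SNAT/MASQUERADE rule is added for this traffic, so the internal server's access logs
# keep the real client IP; the internal server must then route its replies back through
# this machine (for example by using it as the default gateway)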
I am trying to install LTIB for the P2020DS. I got the following error:
[hwtesting@HWLSRV1 ~]$ cd /home/hwtesting/ltib-p2020ds-20091119
[hwtesting@HWLSRV1 ltib-p2020ds-20091119]$ ./ltib
Don't have HTTP::Request::Common
Don't have LWP::UserAgent
Cannot test proxies, or remote file availability without both HTTP::Request::Common and LWP::UserAgent
Add the following line to the User Privilege section:
hwtesting ALL = NOPASSWD: /bin/rpm, /opt/freescale/ltib/usr/bin/rpm
I edited the sudoers file with the visudo command and inserted this line just under the following line:
root ALL=(ALL) ALL
But still I am getting the following:
Don't have HTTP::Request::Common
Don't have LWP::UserAgent
Cannot test proxies, or remote file availability without both HTTP::Request::Common and LWP::UserAgent
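The two "Don't have ..." lines refer to missing Perl modules rather than to sudo rights, so the sudoers edit alone won't clear them; a sketch of how they could be installed (package names are assumptions and vary by distribution):
Code:
# Red Hat / CentOS style; perl-libwww-perl provides LWP::UserAgent and, in distributions
# of that era, HTTP::Request::Common as well
sudo yum install perl-libwww-perl
# or install straight from CPAN
sudo cpan LWP::UserAgent HTTP::Request::Common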
My server gets DDoS attacks. I dug into the access logs and saw that the attacker IPs don't send valid request headers, such as browser (User-Agent) information or the requested URL. I want to close those connections immediately and, if possible, block those IPs for a period of time. Can I do that with Apache and iptables? I searched the internet but couldn't find useful results; probably I wasn't searching for the right words.
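Both tools can help here, sketched below: Apache (with mod_rewrite enabled) can refuse requests that arrive without a User-Agent header, and the iptables "recent" match can temporarily drop sources that open too many new connections; the thresholds and the assumption that mod_rewrite is available are mine:
Code:
# Apache, e.g. in the vhost config or .htaccess: reject requests with an empty User-Agent
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ^$
RewriteRule .* - [F]

# iptables: drop sources opening more than 50 new connections to port 80 within 60 seconds
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 50 -j DROP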
I can ssh to my server, which is on a LAN that accesses the 'Net through a Linksys modem/router. I want to configure the router through its web interface, but the server only has a command-line interface and I can only run text browsers like Lynx, which, although I can log onto the router, can't cope with the Javascript routines, so I can't configure the router. I can't access the router's web interface from the 'Net because the router is set up to pass any requests on port 80 to the server. Is there any way I can communicate with the router by sending HTTP requests from my browser outside the LAN, having these relayed to the router by the server, and then having the server relay the responses back to my browser?
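SSH port forwarding can do exactly this relaying; a sketch, assuming the router answers on 192.168.1.1 inside the LAN and the server is reachable as server.example.com (both are assumptions):
Code:
# run on the external machine: forward local port 8080 through the server to the router's web interface
ssh -L 8080:192.168.1.1:80 user@server.example.com
# then browse to http://localhost:8080/ on the external machine; the server relays the
# requests to the router and the responses back through the SSH tunnel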
My application has to listen for HTTP requests, and it must be able to read the HTTP headers and then forward the request through a proxy. All of this must be done in C/C++. Please help me; I'm awaiting your reply.
I've been running a SuseStudio-built VM for a few weeks with no issues. I built a new one recently, and now I can't configure a new vhost in Apache using the YaST http-server module; it gives me an error (attachment: Screen shot 2010-11-06 at 11.42.24 AM.png). Why has YaST suddenly decided that my hostnames aren't valid?
Cannot get VMware Server to work properly running on Ubuntu Server 9.04.
When trying to access the web interface, I have to highlight the URL and keep hitting Enter several times to get to the login, and after logging in it is really slow and nothing works; I cannot create virtual machines.
I'm trying to see regular HTTP responses from my wireless iPad (victim) on my wired PC (attacker). Everything's working great, but I can only see the HTTP requests, not the responses.
I've done a lot of reading and googling and tried registering on more relevant forums, but some of them were shut down, so I've come here.
Code:
# setup ip forwarding
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
# use ettercap to do the mitm using only mitm
sudo ettercap --iface eth0 --text --plugin autoadd --only-mitm --mitm arp:remote /192.168.0.1/ /192.168.0.155/
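One quick way to check whether traffic in both directions is actually passing through the attacking machine is to watch the interface directly; a sketch using the addresses from the ettercap command above:
Code:
# on the attacker, dump HTTP traffic to and from the iPad (192.168.0.155)
sudo tcpdump -ni eth0 host 192.168.0.155 and tcp port 80
# if only the request direction shows up here, the gateway side of the ARP poisoning
# (192.168.0.1) is probably not being spoofed successfully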
I installed Nagios on my Ubuntu 10.04 server using apt-get, and when I accessed the web console everything was OK. I made some changes to Apache (creating some new virtual sites), and since then Nagios gives me a warning for the HTTP service with the message "HTTP WARNING: HTTP/1.1 404 Not Found". The sites that I created are working perfectly. I noticed that the attempts are at 4/4. Does this need to be reset, or does Nagios automatically reset it once it detects the issue is resolved?
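The 404 usually means the probe is now being answered by a different virtual host than before; check_http can be told which vhost and URL to test, sketched below with placeholder names, and the attempt counter resets by itself once a check comes back OK:
Code:
# run the same plugin by hand against the intended vhost (hostname and path are placeholders)
/usr/lib/nagios/plugins/check_http -I 127.0.0.1 -H www.example.com -u /
# once the Nagios command definition passes the right -H/-u values, the warning clears
# and the 4/4 attempts go back to 1/4 on their own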
I would like to execute an already-written C program running on my embedded Linux, but from afar, through an HTML page. I am running embedded Linux on my FPGA prototype board with a MicroBlaze soft processor. On this Linux I am running an httpd web server, so I can serve HTML pages to the outside over an Ethernet connection. I have a C program on this embedded Linux, /bin/gpio-test, that does some stuff with my I/O devices. Now I would like to control these I/O devices through an HTML web page, so I would like to be able to run this gpio-test program from the web page and possibly pass it some parameters.
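If the embedded web server supports CGI (BusyBox httpd does, from a cgi-bin directory), a small shell wrapper is enough to run the program from a link or form on the page; the script path and the way the parameter is passed are assumptions:
Code:
#!/bin/sh
# /www/cgi-bin/gpio.cgi (sketch): called from e.g. http://<board>/cgi-bin/gpio.cgi?led1
echo "Content-Type: text/plain"
echo ""
# QUERY_STRING holds whatever follows the '?' in the URL; hand it to the program,
# whose output becomes the body of the HTTP response
/bin/gpio-test "$QUERY_STRING"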
I installed WordPress 3.x on my localhost Apache server, but I can neither install plugins nor update anything. This happens with both the stable WP 3.0 version and the 3.1 beta. When I try to search the Plugin Directory from the WP dashboard, I get this message: "An Unexpected HTTP Error occurred during the API request." When I run an update, I get a page asking for the login credentials of the ftp user ("To perform the requested action, WordPress needs to access your web server. Please enter your FTP credentials to proceed. If you do not remember your credentials, you should contact your web host."). Since I'm part of the 'ftp' group on the system, I enter my system login information, click Proceed, and get a blank page that does nothing.
I've gone to YaST and see that the system ftp user has a 6-character password (which may or may not be mine). I'm afraid to change it and risk breaking other ftp-related functions. I'm running openSUSE 11.3 and am obsessive about updating. I will note that I have an old 2WIRE router that often requires me to enter IP addresses instead of DNS-based URLs (including for Zypper repos) to successfully download things. Not sure if this is related, but just in case...
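WordPress falls back to asking for FTP credentials when the web-server user cannot write to the WordPress tree itself, so one hedged workaround is to give that user ownership of the directory; the openSUSE Apache user/group wwwrun:www and the install path below are assumptions:
Code:
# let Apache's own user update WordPress directly instead of going through FTP
sudo chown -R wwwrun:www /srv/www/htdocs/wordpress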
We're having an issue with HTTP POST file uploads on our two Ubuntu PCs. For some reason, whenever one of our users attempts to submit a file in an HTML form, the request times out, usually with a 500 Internal Server Error message. This problem is not limited to one site, but occurs on all sites that use file uploads. Also, the problem does not appear to be with our network, as a Windows 7 PC on the same network can upload files to the same sites without any difficulties. The problem is not browser-specific; we have tested with Firefox, Epiphany, and Google Chrome and all produce the same results. The issue is relatively new, and was first observed within the last month; before this time, both machines had no problems uploading files.
Does anyone have ANY idea what could be causing this? I've tried a number of things, including rebooting the PCs, rebooting the network, disabling IPv6, etc. I'm not very experienced in Linux system administration, but I can use the terminal and am familiar with some terminal-based diagnostic tools, so if you need any additional info or want me to try something, please let me know! I've exhausted my own computer knowledge with regards to finding a solution to this problem.
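One way to narrow this down from one of the affected PCs, sketched below: curl reproduces the upload outside any browser, and if large POSTs stall while small ones succeed, an MTU problem on the Linux boxes is one common culprit (the URL, test file, and interface name are placeholders):
Code:
# reproduce a multipart file upload without a browser
curl -v -F "file=@/tmp/test.bin" http://example.com/upload.php
# if only large uploads hang, temporarily lower the MTU and repeat the test
sudo ip link set dev eth0 mtu 1400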
When I try to access any page, even small HTML pages, it stays for about 3 seconds in the "HTTP request sent; waiting for response." state, even when I use Lynx locally on the server, bypassing any possible network issues. The logs don't show a thing. The server itself is a high-end server with nothing running on it apart from Apache, which is not serving any clients right now; the firewall is disabled and HostnameLookups is set to Off.
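Two quick measurements, as a sketch, that can show where those 3 seconds go: timing a local request, and tracing a running Apache worker for slow network calls such as reverse DNS or ident lookups (the worker PID is a placeholder):
Code:
# time a request against the local server
time curl -s -o /dev/null http://localhost/
# watch one Apache worker for slow system calls while a request is served
sudo strace -T -p <pid-of-an-apache-worker> -e trace=network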
I have a web page that has links to a PHP script that sends PDF files to the browser when clicked. The links are like this:
Code: <a href="getfile.php?id=1201234">
The files sent are shown embedded in the browser, which is what I want.
The problem is that the title of the browser window or tab in which the PDF file is opened is "getfile.php?id=1201234", and not the actual file name of the PDF.
Is it possible to send the file from a PHP script in such a way that the window/tab title becomes the filename rather than the link by which it was accessed?
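One thing the script can try, sketched as the raw response headers it would emit before the PDF data (the filename is a placeholder): Content-Disposition with "inline" keeps the file embedded while suggesting a real name, although some viewers still take the tab title from the PDF's own metadata rather than from this header.
Code:
Content-Type: application/pdf
Content-Disposition: inline; filename="annual-report-2010.pdf"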
I'm running a webserver and I've uploaded several .txt files. I want them to be downloadable. For example, if someone opens [URL], the file should start downloading rather than just opening in the browser.
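A hedged sketch of how Apache can be told to offer .txt files as downloads, either by serving them as a generic binary type or, with mod_headers enabled, by adding a Content-Disposition header:
Code:
# in .htaccess or the vhost configuration
# option 1: serve .txt as a generic binary type so browsers download it
AddType application/octet-stream .txt
# option 2: keep the text type but ask the browser to save the file (needs mod_headers)
<FilesMatch "\.txt$">
    Header set Content-Disposition attachment
</FilesMatch>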
What's the point of checking whether my file was tampered with (as described on the Debian page [URL] ....) if the signatures are downloaded over HTTP? Didn't the recent incident with Linux Mint clear our minds about these things?
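For what it's worth, the checksum files for Debian images are themselves GPG-signed, so even when they travel over plain HTTP the signature can be checked against a key obtained through a trusted path; a sketch using the usual file names from the ISO directory (the exact names depend on the release being checked):
Code:
# verify the signed checksum list, then check the downloaded image against it
gpg --verify SHA512SUMS.sign SHA512SUMS
sha512sum -c SHA512SUMS 2>/dev/null | grep OK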