Ubuntu Servers :: Nagios On 10.04 Server Using Apt-get - HTTP WARNING: HTTP/1.1 404 Not Found
Aug 4, 2010
I installed Nagios on my Ubuntu 10.04 server using apt-get, and when I accessed the web console everything was OK. I made some changes to Apache (creating some new virtual sites) and since then Nagios gives me a warning for the HTTP service with the message HTTP WARNING: HTTP/1.1 404 Not Found. The sites that I created are working perfectly. I noticed that the attempts are 4/4. Does this need to be reset, or does Nagios automatically reset it once it detects the issue is resolved?
Using netcat, nc(1), craft a valid HTTP/1.1 request for getting the HTTP headers (not the HTML file itself!) for the main index page of www dot aalto dot fi. What request method did you use? Which headers did you need to send to the server? What was the status code for the request? Which headers did the server return? Explain the purpose of each header.
I tried:
nc -v www dot aalto dot fi 8080
HEAD / HTML/1.1
host: www dot aalto dot fi
And it returns:
200 OK
Content-Length: 858
Content-Type: text/html
Last-Modified: Thu, 02 Sep 2010 12:46:01 GMT
[Code]....
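For reference, a minimal sketch of what a well-formed request could look like; it assumes the site answers on the standard port 80 and that www.aalto.fi is the intended hostname. HTTP/1.1 (not HTML/1.1) is the protocol name, the Host: header is mandatory in HTTP/1.1, and Connection: close makes the server drop the connection so nc exits:
Code:
printf 'HEAD / HTTP/1.1\r\nHost: www.aalto.fi\r\nConnection: close\r\n\r\n' | nc -v www.aalto.fi 80
The reply should start with a status line such as HTTP/1.1 200 OK followed only by headers, since HEAD asks for the headers without the body.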
I really don't know what it means. Question 2: Using netcat, nc(1), start a bogus web server listening on the loopback interface, port 8080. Verify with netstat(1) that the server really is listening where it should be. Direct your browser to the bogus server and capture the User-Agent: header. I don't understand this question.
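A minimal sketch of one way to approach it (flag syntax differs between netcat flavors; the traditional version wants -l -p 8080 instead):
Code:
# terminal 1: pretend to be a web server on the loopback interface, port 8080
nc -l 127.0.0.1 8080
# terminal 2: confirm something is really listening on 127.0.0.1:8080
netstat -tln | grep 8080
# then point a browser at http://127.0.0.1:8080/ ; the request the browser sends
# is printed by nc, and one of its lines is the User-Agent: header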
I'd like to report an issue I've had with the Ubuntu server ISO. I downloaded ubuntu-9.10-server-i386.iso by HTTP from Ubuntu's website and burned it to a CD. It doesn't work well: I got an error from udevadm about sys/devices/pci0000, etc. At first I thought it was a problem with the hardware, but it seems that it's the ISO that is corrupted. I checked the MD5 checksum and it doesn't match. Then I downloaded the same ISO a second time (by HTTP) and got the same problem.
So it seems to me that the ubuntu-9.10-server-i386.iso that we can download by HTTP is not the same as the torrent one. Maybe I'm wrong. Anyway, if I'm right I hope this information will be useful for administrators.
How can I set my server to listen on a different port for HTTP access? I would like to use port 8080 (to circumvent ISP blocks). Also, can I do the same thing for SFTP connections?
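A minimal sketch, assuming Apache on Ubuntu and the stock OpenSSH server (file paths vary by distribution). SFTP rides on SSH, so moving the SSH port moves SFTP with it:
Code:
# /etc/apache2/ports.conf -- serve HTTP on 8080 instead of 80
Listen 8080

# /etc/ssh/sshd_config -- sftp is carried over ssh, so this also moves sftp
Port 2222
After editing, restart apache2 and ssh, and connect with sftp -oPort=2222 user@host.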
I'm thinking about some ways to limit access to my web-server. It runs Nginx and php in FCGI. The server contains a large amount of information. The data is freely available and no authentication is required but other companies might like to mirror it and use on their own servers.
The requests could be limited on different levels: IP, TCP, HTTP (by nginx) or by the php application. I found some solutions (like Nginx's limit_req_zone directive), but they do not solve the second part of the problem: there's no way to define a whitelist of clients who are allowed to use the data.
I thought about an intelligent firewall that would limit the requests on an IP basis, but I have yet to find such a device. Another way was to hack some scripts that would parse the log file every minute and modify iptables to ban suspicious IPs. It would take days, and I doubt this system would survive, say, 1000 requests per second.
Perhaps some HTTP proxy, like Squid, could do this?
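Nginx itself can combine the two requirements: the geo/map modules can mark whitelisted clients, and limit_req_zone skips any request whose key is empty. A rough sketch (the addresses and rate are placeholders):
Code:
geo $limited {
    default        1;
    203.0.113.0/24 0;   # example whitelist: partner networks, mirrors, etc.
    198.51.100.7   0;
}
map $limited $limit_key {
    0 "";                   # whitelisted: empty key, so no limit is applied
    1 $binary_remote_addr;  # everyone else is limited per source IP
}
limit_req_zone $limit_key zone=perip:10m rate=10r/s;

server {
    location / {
        limit_req zone=perip burst=20 nodelay;
    }
}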
Friends, I tried installing Amanda like this: yum -y install amanda*
Gathering header information file(s) from server(s)
Server: Fedora Linux / stable for Red Hat Linux 3ES (i386)
[code].....
I tried yum because when I did a simple download of the newest Amanda RPM and tried to install it, I got the following dependency problem:
[root@Mixer amanda-dwnload]# rpm -ivh amanda-backup_server-3.1.0-2.rhel4.i386.rpm
error: Failed dependencies:
libcurl.so.3 is needed by amanda-backup_server-3.1.0-2.rhel4
libidn.so.11 is needed by amanda-backup_server-3.1.0-2.rhel4
tar >= 1.14 is needed by amanda-backup_server-3.1.0-2.rhel4
I do have yum installed -- when I do yum --version I see 2.0.3. It is a Red Hat system:
uname -a
Linux Mixer 2.4.21-58.EL #1 Tue Nov 4 11:55:15 EST 2008 i686 i686 i386 GNU/Linux
cat /etc/issue
Red Hat Enterprise Linux ES release 3 (Taroon Update 9)
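Note that the package name ends in .rhel4 while the system above is RHEL 3, so dependency mismatches are not surprising. A sketch of a quick check for whether anything on the system or in the repositories can satisfy the missing libraries (the yum subcommand name may differ on a yum this old):
Code:
# is anything installed that already provides the missing sonames?
rpm -q --whatprovides libcurl.so.3 libidn.so.11
rpm -q tar
# ask the configured repositories
yum provides libcurl.so.3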
When I try to add software sources (specifically those for Scratchbox, but I get the same error with everything), I get an error message: "http://http not found". Obviously that is not a valid APT line and I have no idea what it is doing in my software sources. How do I take it out?
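The bogus line has to be sitting in one of the APT source files, so finding and deleting it should clear the error. A sketch; the entry may be in /etc/apt/sources.list or in a file under /etc/apt/sources.list.d/:
Code:
# locate the malformed entry
grep -rn "http://http" /etc/apt/sources.list /etc/apt/sources.list.d/
# remove the offending line from whichever file it is in, then refresh
sudo sed -i '/http:\/\/http/d' /etc/apt/sources.list
sudo apt-get update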
I have a problem with a GPG key. When I try to run yum update I get this error:
warning: rpmts_HdrFromFdno: Header V3 DSA signature: NOKEY, key ID d05c057c
GPG key retrieval failed: [Errno 14] HTTP Error 404: Not Found
I am using CentOS 5.5.
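The 404 means the gpgkey= URL in whichever .repo file owns that package points at a key file that no longer exists. A sketch of the usual checks; which key file applies depends on which repository the package came from:
Code:
# see which repo files reference which key URLs
grep -H gpgkey /etc/yum.repos.d/*.repo
# the base CentOS 5 key ships with the system and can be imported locally
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
# for a third-party repo, fix its gpgkey= line to point at a reachable copy of the key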
I am trying to edit my HTTP configuration (menu System -> Administration -> Server Settings -> HTTP) and it seems to be impossible. My Server Name comes up empty, and I want to change the default webmaster email address root@localhost to something else, but I can't change anything. I enter my new server name and e-mail address, but when I click on the OK button I get a popup box which asks me if I want to save and exit. I click on the Yes button and the box disappears. HTTP Server Configuration does not exit, and my changes are not saved.
I have Ubuntu Server (x64) installed on my box with Apache2 and Squid. For a while port 80 (HTTP) was fine: I could update packages and use wget. Then one random day port 80 became blocked for incoming traffic. I couldn't use apt-get and had to change to an FTP mirror to update. Also, wget is not working.
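Worth noting that apt-get and wget open outbound connections to port 80 on remote mirrors, so the break is on the outgoing side (local firewall, Squid interception, or the ISP). A few hedged checks:
Code:
# does plain outbound HTTP work at all?
wget -S -O /dev/null http://archive.ubuntu.com/
# any DROP/REJECT rules that were added along the way?
sudo iptables -L -n -v
# is an iptables REDIRECT sending port 80 into Squid?
sudo iptables -t nat -L -n -v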
I will be setting up a web server at my house. It will be a simple page for my family to keep in touch and maybe some other stuff. Here is the problem: I believe my ISP blocks port 80. So when setting up the firewall, where it lists the normal port 80, am I able to edit it to say 8080? I have DDNS already set up for my router, and I am waiting for an email back from DynDNS.com about setting up a new domain to forward to my already set-up hostname. I just need to get everything redirected to another port besides 80.
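A rough sketch of the moving parts, assuming Apache and an iptables firewall: the web server listens on 8080, the firewall allows it, and the router forwards external 8080 to the box. Visitors then use http://yourname.dyndns.org:8080/ unless a DynDNS "webhop"-style redirect hides the port for them:
Code:
# Apache (httpd.conf or ports.conf): listen on 8080 instead of 80
Listen 8080

# firewall: accept the new port (the router must forward external 8080 here too)
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT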
I have a server (Fedora 11, LAMP). I want to know if I can upload something to my server via HTTP (I mean from the WAN) and have that data go directly into the MySQL database. Do I need to write some special code on my web page, or just change Apache's configuration file?
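Apache by itself won't write anything into MySQL; a small server-side script (PHP, CGI, whatever the page already uses) has to receive the upload and do the INSERT. As an illustration only, with upload.php being a hypothetical receiver script, the client side of such an upload could look like:
Code:
# POST a file from the WAN side to a (hypothetical) receiver script on the LAMP box
curl -F "data=@measurements.csv" http://your-server.example.com/upload.php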
I've got an old laptop with F15 installed. I want to use it together with two USB webcams to monitor my wood boiler in the basement, and I want to stream it via HTTP. I tried ZoneMinder and it didn't find my cheap cams. I tried VLC but it didn't work that well. Are there any other options to put the streams out on a webpage?
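Two things that might be worth a try, assuming the cams show up as V4L2 devices (/dev/video0, /dev/video1): the motion daemon, which handles multiple cameras and serves each one as an MJPEG stream on its own port, or a command-line VLC pipeline roughly like the sketch below (options will likely need tuning for your VLC version):
Code:
# stream the first webcam as MJPEG over HTTP on port 8080
cvlc v4l2:///dev/video0 \
     --sout '#transcode{vcodec=MJPG}:standard{access=http,mux=mpjpeg,dst=:8080/}'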
What is the best way to go about setting up multiple virtual hosts on the same box, one using http and one using https/ssl? I'd like to serve them from the same ip address if possible; I know it's possible in apache 1.3.
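A minimal sketch for Apache 2.x on one IP address; names and certificate paths are placeholders. One plain vhost on port 80 and one SSL vhost on port 443 coexist fine; the usual limitation is that without SNI you get only one SSL vhost per IP:port pair:
Code:
Listen 80
Listen 443
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName   www.example.com
    DocumentRoot /var/www/site
</VirtualHost>

<VirtualHost *:443>
    ServerName   secure.example.com
    DocumentRoot /var/www/secure
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/server.crt
    SSLCertificateKeyFile /etc/pki/tls/private/server.key
</VirtualHost>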
A church I've been working with has a CCTV system that has a web interface for viewing the camera feeds. We need to see the page from the outside, but it is just an HTTP page, no encryption, and the box itself does not accept any sort of SSL encryption. How can I get this on the net in a secure way? At worst I could set up a remote desktop type solution, but I was really hoping I could use some Apache magic and just republish the page over HTTPS, SSL encrypted.
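Apache can do exactly that as a reverse proxy: it terminates SSL on the public side and talks plain HTTP to the camera box on the LAN. A hedged sketch (the LAN address, hostname and certificate paths are placeholders, and mod_ssl, mod_proxy and mod_proxy_http must be loaded); adding some form of authentication on this vhost is advisable since the camera page itself has none:
Code:
<VirtualHost *:443>
    ServerName cctv.example.org
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/cctv.crt
    SSLCertificateKeyFile /etc/pki/tls/private/cctv.key

    # pass requests through to the CCTV box on the LAN
    ProxyRequests Off
    ProxyPass        / http://192.168.1.50/
    ProxyPassReverse / http://192.168.1.50/
</VirtualHost>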
I am new to web server support. I have a request from my management to modify the logging slightly. Effectively, I need to record a custom string from our HTTP response in the Apache access logs. When a user navigates to our site they receive a "dye" number that is associated with them. This number follows them to whatever cluster they are directed to. The string is formatted as such: com-company-dye: d0a2#6dfce. I need that dye header to appear in the access logs so we can use the dye number as a key for troubleshooting issues throughout our various monitoring systems.
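Apache's LogFormat can log arbitrary headers: %{Name}o picks a header out of the response and %{Name}i out of the request. A sketch that appends the dye header to the standard combined format (the format name and log path are placeholders):
Code:
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{com-company-dye}o\"" combined_dye
CustomLog logs/access_log combined_dye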
I just did a brand new install of Fedora 12 and did all the yum updates. Apache seems to start OK, and I have always liked the HTTP config tool, but it won't run on Fedora 12. I downloaded and installed system-config-httpd.noarch 5:1.4.6-1.fc12 and it all went fine, but when I try to start it I get the usual box asking for my root password. I type it in and press enter, the box disappears and then... nothing. If I run system-config-httpd in a terminal I get the same box asking for the root password, but when I enter it I get a long scroll of text which ends with:
line 4: 2137 Aborted (core dumped) /usr/bin/python /usr/share/system-config-httpd/ApacheConf.py
I don't know what causes this. Is there any way to get the http config tool working?
I have Fedora 13 and installed Asterisk. Before, I had CentOS and had Asterisk running to test and learn, but in Fedora I see there is an HTTP mini-server for administering Asterisk. I edited the enable, port and IP settings in the file http.conf, but when I try the URL I get a 404 Page Not Found from the Asterisk server.
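For comparison, a typical /etc/asterisk/http.conf looks roughly like the sketch below; enablestatic is needed if static pages are expected, and the built-in status page lives under /httpstatus, so a sensible first test URL is http://<server-ip>:8088/httpstatus:
Code:
[general]
enabled=yes
bindaddr=0.0.0.0
bindport=8088
enablestatic=yes

; after editing, restart/reload Asterisk and check from the CLI:
;   asterisk -rx "http show status"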
It appears that my ISP is blocking port 80, so I can't set up a proper website on my home computer. I'd like to choose a different port to use (they block 443 also), and I'm not sure how to do this with Fedora (or any Linux flavor, for that matter).
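On Fedora there are usually three pieces: tell Apache to listen on the new port, tell SELinux that httpd may bind it, and open it in the firewall. A sketch using 8080 as the example port (if SELinux already knows the port, semanage will say so and that line can be skipped):
Code:
# httpd.conf: listen on an unblocked port
Listen 8080

# allow httpd to bind the port under SELinux, then open the firewall
semanage port -a -t http_port_t -p tcp 8080
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT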
I need to install a program by using the address http://255.255.255.255. However, when I type this address in my browser, I get the following error: "Failed to connect. Firefox can't establish a connection to the server at 255.255.255.255. Though the site seems valid, the browser was unable to establish a connection." Is there an easy way to get this site up?
I have 2 web servers in my office: HTTP and HTTPS. You will find attached the httpd.conf and ssl.conf. I can access the HTTPS server from home, but not the HTTP one.
What I did: configured the router to forward port 80 to my Fedora 11 machine, opened port 80 with system-config-network, and created a virtual host.
The same exact steps have been done for port 443
I can access both servers locally, but only the HTTPS server remotely.
Here are my iptables:
Code:
You can try to access my servers using [url].
I made httpd listen on port 8080, did all the port forwarding/opening stuff, and it works. So is it a bug?
Finally found my error: it seems that turning UseCanonicalName off did the trick.
I really think it's a bug now. It was definitely working last week; I just added content to the main host of my website, and now I can't access it from port 80. Let me know if you think it's not a bug or find something missing/wrong in my conf file.
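For reference, the fix mentioned above is a single directive in httpd.conf. With UseCanonicalName On, Apache builds self-referential URLs (for example in redirects) from ServerName and the canonical port, which can send clients back to the wrong place when external port 80 is forwarded to a different internal port; Off makes it use the hostname and port the client supplied:
Code:
# httpd.conf
UseCanonicalName Off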
So we have DNS round robin set up for 4 servers. If we ping the DNS name (basically an alias) server_connect, it resolves to a different IP address in round robin fashion, i.e. x.x.x.1, x.x.x.2, and so on for the 4 different server IP addresses. When we do nslookup server_connect, it comes back first as server1_connect, then server2_connect through server4_connect, so the server is able to resolve the initial DNS name (alias) to the DNS names in the round robin with both ping and nslookup. The problem is that when we try to connect with HTTP or telnet, it comes back "host unrecognized". I can put one of the 4 round robin servers in /etc/hosts and it connects fine, so I'm thinking it is one of three things: 1) TTL; 2) it does a double connection, first to identify itself to the round robin server and then the handshake, but the second time it hits for the handshake the IP and DNS name are different from what it expected, so it fails; 3) since we are trying to telnet to the DNS alias and it returns a different DNS name, it fails.
2 and 3 seem most promising, but now I'm at a standstill. Has anyone else come across this issue, and if so, how did you resolve it?
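One way to narrow it down is to take DNS out of the loop and hit each real server directly while still presenting the alias, which separates a name-resolution problem from an HTTP/virtual-host problem. The addresses below are the placeholders from the description above:
Code:
# does the alias resolve at the moment the client asks?
nslookup server_connect
# can we reach a specific member directly?
telnet x.x.x.1 80
# does the web server object to being addressed by the alias name?
curl -v -H "Host: server_connect" http://x.x.x.1/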
I am trying to set up my web server and make a website run under suexec, but somehow I cannot start Apache: it fails right away, SELinux is giving me errors, and I don't really know what to do with them. It gives me a command to type, but I'm not sure whether that will make my server less secure. The SELinux error is as follows:
Code: Summary: SELinux prevented httpd reading and writing access to http files.
Detailed Description: SELinux prevented httpd reading and writing access to http files. Ordinarily httpd is allowed full access to all files labeled with an http file context. This machine has a tightened security policy with httpd_unified turned off; this requires explicit labeling of all files. If a file is a cgi script it needs to be labeled httpd_TYPE_script_exec_t in order to be executed. If it is read-only content, it needs to be labeled httpd_TYPE_content_t; if it is writable content, it needs to be labeled httpd_TYPE_script_rw_t or httpd_TYPE_script_ra_t. You can use the chcon command to change these contexts. Please refer to the man page "man httpd_selinux" or FAQ [URL]. "TYPE" refers to one of "sys", "user" or "staff", or potentially other script types.
Allowing Access: Changing the "httpd_unified" boolean to true will allow this access: "setsebool -P httpd_unified=1"
Fix Command: setsebool -P httpd_unified=1
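The setsebool fix relaxes the policy for everything httpd serves; the tighter alternative the message describes is to label the content explicitly. A sketch with placeholder paths, matching the types named above:
Code:
# read-only pages
chcon -R -t httpd_sys_content_t     /var/www/html
# CGI / suexec scripts
chcon -R -t httpd_sys_script_exec_t /var/www/cgi-bin
# content the scripts must be able to write
chcon -R -t httpd_sys_script_rw_t   /var/www/html/uploads
# see what SELinux is still denying after relabeling
grep httpd /var/log/audit/audit.log | audit2why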
I will write down how I set up my server, so maybe you can see a mistake I made. First I changed my Apache httpd.conf; I added the following to it:
Code:
NameVirtualHost 192.168.1.2:80
<VirtualHost 192.168.1.2:80>
    ServerName localhost
    DocumentRoot /var/www/html
    DirectoryIndex index.html index.html index.shtml index.php
</VirtualHost>
Then I created the username "ulyaoth" with the group "ulyaoth" as I specified for my suexec, then I created all the directories as specified in my httpd.conf and ran "chown ulyaoth:ulyaoth (dirname)" on them to set the right user and group.
I have some photos posted at [URL]. In each caption I've added a link to my server to let friends download them in larger sizes. tail -f access_log only displays some of those accesses; I don't understand why. If I reload the large image page, an entry is recorded and displayed by access_log. What could be happening?
I'm working on an application that requires a large amount of storage space, and I want to handle storage in-house (much cheaper than, say, S3), so we will have multiple servers (initially 4) with large amounts of storage (6TB each). The storage will need to be very flexible and configurable: each piece of data should be replicated on at least 2 servers, and it must be easily readable/writable from either an API or a UNIX device/file/folder like a normal drive; I don't mind which. We must also be able to easily offload content to our HTTP CDN (Edgecast). It doesn't need to have built-in HTTP support, but if it doesn't, I'm going to have to write something to get the files onto HTTP so they can be pulled by the CDN.
I've looked at a lot of solutions, including Eucalyptus Walrus, OpenStack Object Storage, MogileFS, MongoDB GridFS (I'm not sure why, it just sounded cool =) ), and some others which I can't remember.
All the servers will be running RHEL 6; they have 4x1.5TB drives which will be RAID1'd into a single partition. All the servers have 1GB/s connections between them and 100MB/s connections to the internet with unlimited bandwidth. They have 2x2.66GHz processors. I understand there isn't a single, perfect answer, but it would be nice to get some pointers.