I run a dedicated openSUSE 11.1 server with Apache (2.2.10-2.9.1) installed. This box has been running for nearly two years hosting several low-traffic websites. I must admit that I did not give the machine much love or attention over that period; it ran rock solid. The different websites each have their own associated user account and are stored in their own /home/name/public_html/ folder. It was set up through Webmin. Each domain name is also linked to a unique IP (the box has multiple IPs available), but this is configured at my domain provider. All in all a very simple and straightforward setup that never let me down.
Recently the sites on the machine stopped responding. This happens every other year or so, since I write my logfiles to a separate partition. I ran df -h and indeed, the partition was full. I removed the logfiles manually, and while I was at it I decided it was time to run an online update (YaST2 / Online Update, that is). I rebooted the machine after YaST told me to do so. Now the sites are no longer working and I can no longer log in to Webmin. The only thing that works is the 'root' webpage (the /srv/www/htdocs folder), which leaves me rather clueless, as the other sites are just not responding at all, not even with a timeout or error message. While I know that deleting logfiles manually is quite stupid, I've done it before and never really ended up in trouble.
Hence my questions: does this sound familiar to anyone, and would you mind giving me a clue about where exactly I should start to look? It's been ages since I actually administered Apache, so I might be overlooking the obvious. Long story short: any tips are very welcome about what I should check first, what config files might have been changed with the update, etc.
I currently have an Apache web server running on Ubuntu 10.04, and I use a DynDNS service to make its sites accessible to the outside world via a domain and/or subdomains.
This works fine when accessed from outside the network, and all subdomains resolve to the correct directory.
The problem I am having is with accessing a subdomain over my internal network.
I can access the web server using the server's IP address, http://192.168.1.123/, but this always takes me to the same virtual host, and I don't know how to distinguish between the different virtual hosts (different subdomains).
Ideally I would like to access the same subdomains using http://<subdomain>/ where <subdomain> is the same as the subdomain attached to the external domain name.
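A common workaround (a sketch; a local DNS server such as dnsmasq, or the router's DNS, can do the same thing more centrally) is to point the subdomain names at the server's LAN address on each internal client. The browser then sends the right Host header and Apache's name-based virtual hosts can match it. The names and LAN IP below are placeholders:

Code:
# /etc/hosts on an internal client machine
192.168.1.123   sub1.example.com
192.168.1.123   sub2.example.com

As long as each <VirtualHost> block carries a matching ServerName (or ServerAlias), Apache should then select the right site whether the request arrives from inside or outside the network.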
Is it possible to limit the number of concurrent connections per Apache virtual host? Taking into account that each virtual user's home directory can contain more than one subdirectory, the restriction should ideally apply per subdirectory, since what runs in those subdirectories is beyond my control. Is it best to apply such restrictions locally (per subdirectory), or as a global limit?
When I converted to openSUSE 11.2 and went through the YaST HTTP Server Configuration, creating my virtual hosts under the Hosts tab, YaST combined them all into one file, "/etc/apache2/vhosts.d/ip-based_vhosts.conf". I did google and read [URL] for further assistance. I'd like each virtual host to have its own file under vhosts.d, and I am wondering why YaST did not do that. The file /etc/apache2/httpd.conf lays out the file structure, and all vhosts.d/*.conf files are included. Is there a way to tell YaST to create separate files for each vhost, or does the user have to do it manually?
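For what it's worth, since httpd.conf already includes every vhosts.d/*.conf, splitting the combined file by hand into one file per host works; I'm not aware of a YaST option for it. A sketch of one such per-host file, with placeholder names and paths:

Code:
# /etc/apache2/vhosts.d/site1.example.com.conf
<VirtualHost 192.168.0.10:80>
    ServerName   site1.example.com
    DocumentRoot /srv/www/site1
    ErrorLog     /var/log/apache2/site1-error_log
    CustomLog    /var/log/apache2/site1-access_log combined
</VirtualHost>

Be aware that re-running the YaST HTTP Server module may merge or rewrite these files again.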
I'm wondering if this is even possible before I start the learning curve with Ubuntu and apache virtual hosts.
I have a static external IP address that resolves to the various domain names I will be using. I have a web server inside my network with a private IP address and any http request to the firewall is forwarded to the webserver on the appropriate port. This setup works well when using the same web page/configuration for all of the domains.
Will it be possible to use named virtual hosts in this configuration, or will the NAT'ing interfere?
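For what it's worth, name-based virtual hosting is driven by the Host: header inside the HTTP request itself, which NAT does not rewrite, so port-forwarding to a private address should not interfere. A minimal sketch with placeholder domains:

Code:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName   www.domain-a.com
    DocumentRoot /var/www/domain-a
</VirtualHost>

<VirtualHost *:80>
    ServerName   www.domain-b.com
    DocumentRoot /var/www/domain-b
</VirtualHost>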
What is the (officially) proper way to configure Apache so that a given IP address can have two or more virtual host names, each going to different distinct configurations (e.g. with different DocumentRoot, Alias, etc), and also do this for the IP address so that it goes to a designated configuration rather than defaulting to the first or a random host name?
Apache's documentation does not appear to address this; if it does, it is hidden in a non-obvious place.
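One common pattern (a sketch, with placeholder addresses and paths) is to give the bare-IP configuration its own vhost listed first, since for a given address Apache falls back to the first listed vhost when the Host header matches no ServerName or ServerAlias:

Code:
NameVirtualHost 203.0.113.10:80

# First vhost for this address = the default; it catches requests
# addressed to the bare IP and any unmatched Host header.
<VirtualHost 203.0.113.10:80>
    ServerName   203.0.113.10
    DocumentRoot /srv/www/default
</VirtualHost>

<VirtualHost 203.0.113.10:80>
    ServerName   one.example.com
    DocumentRoot /srv/www/one
    Alias /static /srv/static/one
</VirtualHost>

<VirtualHost 203.0.113.10:80>
    ServerName   two.example.com
    DocumentRoot /srv/www/two
</VirtualHost>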
I run a couple of sites in a virtual hosting environment and I need to add an additional SSL certificate for a different domain name. Some forum topics I have read indicate that each SSL cert requires a different IP address, meaning one cert per IP. Is this true? If so, I'm having difficulty understanding the benefit of running virtual hosts if a server can't host multiple secured sites on a single IP. Is there any way to run multiple SSL sites within a virtual host environment? I'm hoping for a possible workaround.
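The one-cert-per-IP restriction is largely historical: with SNI (Server Name Indication), supported by Apache 2.2.12+ when built against OpenSSL 0.9.8f or later, multiple SSL vhosts can share one IP, because the client sends the hostname before the certificate is chosen. Very old clients that don't send SNI (e.g. IE on Windows XP) will just get the first certificate. A sketch with placeholder names and paths:

Code:
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName secure1.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/secure1.crt
    SSLCertificateKeyFile /etc/apache2/ssl/secure1.key
</VirtualHost>

<VirtualHost *:443>
    ServerName secure2.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/secure2.crt
    SSLCertificateKeyFile /etc/apache2/ssl/secure2.key
</VirtualHost>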
I am trying to configure Apache to handle virtual hosts. For this I un-commented the line in httpd.conf that says:

Code:
Include /etc/httpd/extra/httpd-vhosts.conf

Then I included the following in httpd-vhosts.conf:

Code:
<VirtualHost *:80>
    <Directory /var/www/git.localhost/gitorious/public>
        Options FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from All
.....
I'm having trouble implementing SSL on an AvantFax login screen. I've created the certificates and keys and have them stored in /etc/apache2/ssl, and I'm sort of stuck now. I've been following a guide, but any changes to the conf files lead to errors. The system I'm using is Debian 5.0.
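On Debian's Apache the usual sequence is to enable mod_ssl and an SSL site (a2enmod ssl, a2ensite default-ssl), then point the SSL vhost at the certificate pair. A sketch; the hostname and the AvantFax document root are assumptions, so adjust them to wherever the guide installed it:

Code:
<VirtualHost *:443>
    ServerName fax.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/server.crt
    SSLCertificateKeyFile /etc/apache2/ssl/server.key
    DocumentRoot /var/www/avantfax        # assumed install path
</VirtualHost>

Running apache2ctl configtest after each edit, before restarting, usually names the exact line a change broke.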
First of all, I've looked at "similar threads" without finding an answer. I'm setting up name-based virtual hosts using an IP as the base. The OS is CentOS 5.4.
My config looks like:
NameVirtualHost 12.345.678.90:80
#
# NOTE: NameVirtualHost cannot be used without a port specifier
# (e.g. :80) if mod_ssl is being used, due to the nature of the
# SSL protocol.
[Code]...
The names are of course not in DNS, so to access the server from my local Windows machine I had to use the hosts file: 12.345.678.90 dev1. Entering dev1 in my browser *does* take me to the server, but it takes me to the default VH (DocumentRoot /var/www/html). What am I overlooking?
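The usual culprit in this situation is that no <VirtualHost> carries a ServerName matching the name typed into the browser, so Apache falls back to the first (default) vhost for that address. A sketch matching the hosts-file name used above (the DocumentRoot is a placeholder):

Code:
NameVirtualHost 12.345.678.90:80

<VirtualHost 12.345.678.90:80>
    ServerName   dev1
    DocumentRoot /var/www/dev1
</VirtualHost>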
I have just setup two Apache virtual hosts and I was wondering how I could link them to different domains so that they could be accessed from another machine.
This really has me baffled! I'm running Apache 2.2 on Windows Server 2003 (I know, my first blunder). I can edit existing virtual hosts without a problem; for example, I can change this existing vhost to point to a different document root and it works fine:
But the moment I try to add a new virtual host, it doesn't get recognized! When I try to browse to it in a browser, I get a "Server not found: Firefox can't find the server at newsubdomain.mellemallc.net."
I have Apache up and running and have a few virtual sites enabled. All these sites belong to the same user and group and the directory root for each site is in /home/{same-user}/www/{site-name}/htdocs/
I use Samba to connect from Windows to these directories, and by default, files and directories are saved as the {same-user} and {same-group}. My question is: would it cause a problem if I changed the user and group in the virtual server directives in the /etc/apache2/sites-available/site.conf files, giving Apache permission to write to these files and directories? In the past I have changed the user and group to www-data (the default), but this seems inefficient and cumbersome compared to what I intend to do.
I use the server mostly for development, although at times I have a small site or two available to the public. Before I do this I want to be sure I'm not leaving a gaping security hole by changing these things. If this is all wrong, what is the standard way of running virtual hosts from apache and what is the standard document root for virtual sites?
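One thing worth knowing: with the stock prefork/worker MPMs, the User and Group directives are global and are not honoured per <VirtualHost>, so all vhosts run as the same user. The route people usually take for per-vhost users is the third-party mpm-itk module, which adds an AssignUserID directive. A sketch, with the names from above as placeholders:

Code:
<VirtualHost *:80>
    ServerName   site-name.example.com
    DocumentRoot /home/same-user/www/site-name/htdocs
    # mpm-itk only: run this vhost's requests as the site owner
    AssignUserID same-user same-group
</VirtualHost>

Otherwise, leaving Apache as www-data and managing write access through group membership and setgid directories is the more conventional setup, and avoids giving the web server the ability to rewrite your source files.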
I am trying to run two web servers (virtual hosts) on a single Linux CentOS 5.5 box with a single IP address, 192.168.0.182. I did all the pre-installation requirements, such as yum install mysql, yum install mysqladmin, service httpd start, service mysqld start, etc. In the /var/www/html directory, I have two folders called server1 and server2. These two folders contain the necessary web server PHP script files and folders. I opened the browser and managed to install the script on one web server successfully. When I put the IP address 192.168.0.182 in the browser address bar, the page loads without any problem. Now I would like to install the other web server script, and I don't know how. Here is my httpd configuration;
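With a single IP, the standard approach is name-based virtual hosts, one per folder. A sketch with placeholder hostnames; each name must resolve to 192.168.0.182, e.g. via entries in the client machines' hosts files or local DNS:

Code:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName   server1.example.lan
    DocumentRoot /var/www/html/server1
</VirtualHost>

<VirtualHost *:80>
    ServerName   server2.example.lan
    DocumentRoot /var/www/html/server2
</VirtualHost>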
I don't know if this should be a followup to my prior topic [URL] ....
Each of the pieces I've installed has an "Alias" directive in its conf file to make the directory where it lives present on my server. For instance, DotClear lives in /usr/share/dotclear/web/ and there is a directive
Code:
Alias /dotclear /usr/share/dotclear/web

that directs http://myserver/dotclear to that site.
Now, I've set up VirtualHost entries for my DotClear and Owncloud with their own hostnames.
The problem is that when I go to [URL] ...., I get my mythweb site.
This is not so good. So, for the sites that have their own hostnames, I removed the "Alias ..." directive. Of course, now I can't reach those hosts by going to the primary site, which is probably fine, but I also still get my mythweb site, since that doesn't have its own VirtualHost entry.
This doesn't seem like correct behavior. Is there a better place to put the "Alias ..." directive so that it only works from one site and not all of them?
I am also thinking I should just link the directories into /var/www/html, but I'm not sure that's a better solution.
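An Alias placed in the global server config applies to every vhost, which would explain the behavior above. Moving each Alias inside the one <VirtualHost> that should serve it confines it to that site; a sketch (hostname and DocumentRoot are placeholders):

Code:
<VirtualHost *:80>
    ServerName   myserver
    DocumentRoot /srv/www/htdocs
    # only this vhost answers http://myserver/dotclear
    Alias /dotclear /usr/share/dotclear/web
</VirtualHost>

That keeps the packaged directories where they are, without symlinking them into /var/www/html.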
How do I best manage both HTTP and HTTPS pages on the same Apache server without conflicts? For example, if I have both 000-default.conf and 000-default-ssl.conf pointing to mydomain.com, and I don't want users who visit mydomain.com without specifically typing the https prefix to be redirected to the HTTPS page, how do I handle users with browser plugins such as HTTPS Everywhere?
Another option would be to create a subdomain, ssl.mydomain.com, and have users who want to reach the SSL site type the ssl prefix. I have tested several things with HTTPS Everywhere enabled in my own browser, and it seems really hard to make this work the way I want; one way or another, I always end up getting redirected to the SSL site automatically.
The reason I need this to work is that I run one site where I don't care much about SSL, which is the "official" part of that site, and I also host some things for friends and family on the SSL part. This would not have been a problem except that I use self-signed certificates for my SSL site, and most users become afraid when a certificate warning pops up in their browser and therefore leave the site.
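One way to get this separation (a sketch, with placeholder paths) is to define the plain-HTTP site with no Redirect or RewriteRule toward :443 at all, and give the SSL material its own subdomain vhost. Plugins like HTTPS Everywhere act purely client-side, so the server can't fully stop them from trying https://, but it can at least avoid issuing redirects of its own:

Code:
<VirtualHost *:80>
    ServerName   mydomain.com
    DocumentRoot /var/www/official
    # intentionally no redirect to https here
</VirtualHost>

<VirtualHost *:443>
    ServerName ssl.mydomain.com
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/ssl.mydomain.com.crt
    SSLCertificateKeyFile /etc/apache2/ssl/ssl.mydomain.com.key
    DocumentRoot /var/www/private
</VirtualHost>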
I use virtual hosts to develop several web applications. These are located in my home folder under /home/user/projects/project. After a fresh installation, I always get a 403 Forbidden error. After googling and reading on this forum, I found several solutions mentioned for this problem, but I can hardly believe that a chmod 755 on my home folder is the correct solution. What is the correct way of doing things in this situation?
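The usual cause is that the Apache user cannot traverse /home/user. Execute (search) permission on each path component is enough, so chmod 711 /home/user is a narrower fix than 755, combined with a Directory grant in the vhost. A sketch in Apache 2.2 syntax (on 2.4 the last two lines become "Require all granted"); the ServerName is a placeholder:

Code:
<VirtualHost *:80>
    ServerName   project.localhost
    DocumentRoot /home/user/projects/project
    <Directory /home/user/projects/project>
        Options FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>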
I have a Debian Squeeze box with dual NICs that I'm trying to configure with two virtual machines. I'd like one of these machines to act as a router for the 2nd NIC so I can plug in a switch and have a separate subnet.
Something like:
- OpenWrt router 192.168.1.1 (firewall/VPN/stats for the 192.168.1.0/24 domain)
- KVM machine with 2 NICs (192.168.1.2)
- Virtual machine #1 with a fixed IP of 192.168.1.3 (virtual NIC)
- Virtual machine #2 with a fixed IP of 192.168.1.4 (virtual NIC), which also controls the 2nd NIC and routes 192.168.80.0/24
I'd like to use the 192.168.80.0/24 network for testing equipment without poisoning my existing network.
I'm having an issue with setting up the virtual hosts on my web server. I have 2 virtual hosts (example1.com, example2.com). example1.com works but example2.com is sent to the index file of example1.com. I did some searching on google and it seems the problem might be with my /etc/hosts file.
The first virtual host, which the second is also directed to, in sites-available/sites-enabled (note: port 80 is blocked by my ISP, so I use 8080):
Code:
Second virtual host file
Code:
And my hosts file
Code:
# The following lines are desirable for IPv6 capable hosts
Also, I'm using a dyndns.org address... would that make a difference?
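For name-based hosts on a nonstandard port, the port in NameVirtualHost and in each <VirtualHost> must match; DynDNS itself shouldn't matter, since matching is done on the Host header once the request arrives. A sketch with the two example names (DocumentRoots are placeholders):

Code:
NameVirtualHost *:8080

<VirtualHost *:8080>
    ServerName   example1.com
    DocumentRoot /var/www/example1
</VirtualHost>

<VirtualHost *:8080>
    ServerName   example2.com
    DocumentRoot /var/www/example2
</VirtualHost>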
I'm trying to setup an Apache server on my computer which will allow browsing of files in a specific directory and subdirectories, without needing any sort of authentication.
I've got the Apache2 server up and running through yast, and everything works fine as long as I try to point it to the /www/htdocs folder. However, I want to point it at another folder, which is on another partition. This partition is formatted as NTFS, if that matters at all (here's some background on some permissions issues I had with the NTFS partitions recently).
When I change the "Directory" setting in the Yast http server configuration utility to the directory on the NTFS partition I wish to use, attempting to access the server results in the following error:
Code: Access Forbidden: You don't have permission to access the requested directory. There is either no index document or the directory is read-protected. If you think this is a server error, please contact the webmaster.
Error 403 192.168.1.100 Mon Jun 13 23:43:29 2011 Apache/2.2.17 (Linux/SUSE)
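Two separate layers have to allow the access: the Apache configuration for the new directory, and filesystem permissions, which on NTFS are fixed at mount time (e.g. via the uid/gid/fmask/dmask mount options) rather than by chmod. A sketch of the Apache 2.2 side, with a placeholder mount point:

Code:
<Directory /mnt/ntfs/share>
    # Indexes lets Apache list the directory when no index file exists
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>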
I just installed openSUSE 11.3, and I cannot SSH to my school. Upon further investigation, I could not even ping any machines outside my local area network. Ironically, I could nmap machines outside my local area network.
For a few weeks now, I have had a problem connecting to other hosts when I'm using another wireless network, which has a different DNS server IP than the one on my own network. I have to edit /etc/resolv.conf to change the nameserver. Can NetworkManager control the nameserver? If yes, how?
I have my MacBook 5,2 hooked up to an Ubuntu 10.10 'server' via ethernet.
I've installed netatalk and avahi-daemon on the server. I've disallowed Spotlight (OS X's search utility) from indexing the networked drives.
Everything works as it should; however, from time to time when I delete files that reside on the Ubuntu server from my MacBook, Finder (OS X's file manager) will freeze. I can kill the Finder process, but when I then try to restart it from the Terminal, I get the following error message:
Code: LSOpenURLsWithRole() failed with error -10810 for the file /System/Library/CoreServices/Finder.app.
The only solution I know of at present is to reboot the MacBook.
So I guess I need to know what my options are here. Is it better for me to use SMB? Or is there a config file I can alter? I have a feeling this bug may be linked to certain disallowed characters in file or pathnames as it doesn't happen all the time, although this is just a hunch.
Can I copy my VirtualBox Windows XP virtual-machine files to another Linux computer and run the machine on that computer while I keep running it on the original computer?
This question is about technical possibilities, not licences.
I got an external enclosure to use an old hard drive as a USB external hard drive for backing up files to. I tried deleting all the files, but there are quite a lot that cannot be deleted, with Nautilus saying "permission denied".
The hard drive had/has Ubuntu 10.04 on it.
Do I need to delete all the files and folders from the terminal? I'm running openSUSE 11.3 in GNOME, so I've got GNOME Terminal.
Since upgrading to SUSE 11.3, every time I reboot the PC the file /etc/hosts is reset to its default value. I am a web developer, so I need to put my aliases for 127.0.0.1 in there. It is annoying to do it again and again. Luckily, I don't restart my system very often, but I would still like to avoid it. What should I do to stop this resetting? Or is there another place in 11.3 where I should put my entries?
I have some settings in the hosts file of my Windows Vista machine. They help me bypass some limitations and get online better. I would like to migrate some of those settings to openSUSE 11.4. Does anyone know how I can tune my openSUSE this way? FYI, the hosts file settings are lines of <IP Address> <Spaces OR Tabs> <URL OR Alias>.
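The format is the same on Linux, so the lines can usually be copied straight into /etc/hosts (editing as root). A sketch with placeholder values:

Code:
# /etc/hosts on openSUSE -- same "<IP> <name>" layout as on Windows
93.184.216.34   www.example.com
127.0.0.1       blocked-ad-server.example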
I have a strange issue when I run the build for my modules: GNU make keeps deleting the source *.c files from all of my modules. It says that they are intermediate files and deletes them. I have the source files declared as such in the Makefile.
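GNU make deletes files it considers intermediate, i.e. produced by a chain of implicit rules and not mentioned as an explicit target or prerequisite. If the *.c files are themselves generated, marking them with .SECONDARY (or .PRECIOUS) tells make to keep them. A sketch, where GENERATED_SRCS is a placeholder for your own list:

Code:
# keep chain-generated sources instead of letting make delete them
.SECONDARY: $(GENERATED_SRCS)

# or, with no prerequisites, treat every intermediate file as secondary:
.SECONDARY: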