Ubuntu Servers :: How To Redirect Web Access Into Different Web Apps
Sep 30, 2010
I have two web apps on a single Linux box: a wiki and Mantis. The web apps directory is shown in the screenshot below. I created a DNS CNAME record for this server which points to the main server SV6, but somehow I get an error after editing it and restarting the Apache server.
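A common way to hang two apps off one box is an Alias per application inside the vhost. This is only a sketch: the /var/www paths and the apps.example.com name are assumptions, not taken from the post.
Code:
<VirtualHost *:80>
    ServerName apps.example.com
    Alias /wiki   /var/www/wiki
    Alias /mantis /var/www/mantis
    <Directory /var/www>
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>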
I have an old FC2 box running Squid version 2.5. It has been running since 2003 so I am in the process of replacing it. I have a new machine with FC11, iptables, and Squid 3.0 installed.
On the old machine I use iptables to intercept Port 80 traffic and send it to Squid. By default I block all internet access and allow only sites that are in an Allowed_Sites.txt file. Within Squid I also have statements to allow certain users to bypass Squid based on their IP address.
I have set up the same thing on the new box. I have iptables intercepting the Port 80 traffic and sending it to Squid. That is working because if I remove the redirect statement from iptables all internet access is blocked.
The problem I am having is that Squid is not blocking any websites. It acts like the ACL is set to http_access allow all. I have worked on this for several hours and am stumped.
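In Squid 3.0 a frequent cause of "nothing gets blocked" is the intercepted port not being flagged transparent, or the http_access rules being ordered so that an allow matches first. A sketch of the relevant pieces, with the port number and LAN interface assumed rather than taken from the post:
Code:
# /etc/squid/squid.conf
# Squid 3.0 marks the intercept port with "transparent"
http_port 3128 transparent
acl allowed_sites dstdomain "/etc/squid/Allowed_Sites.txt"
# rule order matters: the first matching rule wins
http_access allow allowed_sites
http_access deny all

# on the gateway: push LAN port-80 traffic into Squid
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128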
I have two servers on my network, one with Ubuntu 9.10 Server and one with openSUSE 11.2. The Ubuntu server is my web server and runs phpSysInfo and my website. On the openSUSE server I have a web-based application and some files that I want people to be able to reach via mydomain.com, which points to my Ubuntu server. Is there any way to do this?
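One way to do this is to let the Ubuntu box reverse-proxy a path through to the openSUSE machine with mod_proxy. A sketch, assuming mod_proxy and mod_proxy_http are enabled and that the openSUSE server is 192.168.0.20 (both assumptions):
Code:
# inside the mydomain.com vhost on the Ubuntu server
ProxyPass        /app http://192.168.0.20/app
ProxyPassReverse /app http://192.168.0.20/app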
I've been having a hard time googling and trying to get ALL network connections redirected to the Squid proxy. I couldn't find a proper configuration for ufw or iptables. The ideas are:
1. The redirection rule should NOT depend on a specific network interface, but should work with any connection type, e.g. ppp0 or eth0 (see the sketch after this list).
2. Firewall rules can be for FireHOL, iptables, or ufw (the same as iptables, just tell me where to place them). Preferably ufw or gufw.
3. It should not interfere with the CUPS web interface or the lighttpd server.
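For traffic that originates on the machine itself, the OUTPUT chain of the nat table applies no matter which interface the packet leaves on, and excluding the proxy's own user avoids a redirect loop. A sketch, assuming Squid runs locally as user proxy on port 3128 (both assumptions); with ufw, custom nat rules can be placed in the *nat section of /etc/ufw/before.rules:
Code:
# no -i/-o match, so this covers ppp0, eth0, and anything else
iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner ! --uid-owner proxy -j REDIRECT --to-ports 3128
# only outbound port 80 is matched, so inbound CUPS (631) and lighttpd are untouched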
Here's what I'm trying to do to complete my rocking development server.
I would like all outgoing email on my Ubuntu server to be redirected to one email address (internal or external). I don't have any mail server installed yet (I'll probably use postfix unless you have another suggestion).
The reason I would like this to work is because I'm a web developer working on multiple projects. When I start working on a new project I would like to be able to test some of the forms and features in the web application (PHP) without having emails sent to the email address configured in the application. I can always change configurations but having my development server forward the emails would save me lots of trouble.
Example: if one of my PHP applications sends an email to user1@domain.com, user3@domain4.com, ... I would like all of them forwarded to myemail@domain.com.
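Postfix can rewrite every recipient with a regexp virtual alias map. A sketch, under the assumption that Postfix ends up being the MTA (the post hasn't picked one yet):
Code:
# /etc/postfix/main.cf
virtual_alias_maps = regexp:/etc/postfix/virtual_regexp

# /etc/postfix/virtual_regexp
# every recipient, whatever the address, becomes myemail@domain.com
/.*/  myemail@domain.com

# then: sudo /etc/init.d/postfix reload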
I'm hosting my own dedicated server with Ubuntu Server 10.10. I have it set up with a static local IP, and I've configured DynDNS to link up with my router and allow my server to go live to the internet. I have all the appropriate ports unlocked, with the exception of port 80. This port is blocked by my ISP (Charter) and I can't use it. Due to this, I configured my router to listen on port 81, and direct it to my server.
So, in order to view it, you need to go to XXX.xxx.XXX.xxx:81. Today I registered www.online-self.com in hopes of getting around my current mask provided by DynDNS.com (omegame.selfip.com). So here is my dilemma: when I go to the host of my domain name, I want to redirect my DNS to my server IP.
I can't seem to do it, though. They want a strict IP address, with no port extension. How do I get around this so that my domain name and IP address link up? I'm thinking I may be missing a step, or maybe I needed to register a domain name that simply redirects. I'm starting to get confused about what I should do next. Can I even do this?
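A plain A record cannot carry a port; what a registrar's "URL forwarding" feature actually does is serve an HTTP redirect from a machine that can listen on port 80. A sketch of that redirect, using the hostnames from the post; this vhost would live at the forwarding service, not on the port-80-blocked home connection:
Code:
<VirtualHost *:80>
    ServerName www.online-self.com
    Redirect permanent / http://omegame.selfip.com:81/
</VirtualHost>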
Our company owns multiple TLDs for our corporate domain (e.g. company.com, company.net, etc.). Currently, we operate the main website at [URL]. To have company.net et al. forward/redirect to company.com, should we use a 301 redirect or set up a ServerAlias in Apache's virtual host directive (we use name-based virtual hosting on Ubuntu Server)? Are there any SEO penalties for one approach vs. the other (e.g. Google thinking you have multiple sites with the same content and flagging it as spam)?
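For search engines, a 301 makes the canonical host explicit, while a ServerAlias serves identical content under every name. A sketch of the 301 approach, with the domain names taken from the post:
Code:
<VirtualHost *:80>
    ServerName company.net
    ServerAlias www.company.net
    Redirect permanent / http://company.com/
</VirtualHost>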
How can I redirect my URLs after a site move? I have phpBB forum software installed on a 10.04 server, and I recently moved the forums from mysite.com/forums/ to mysite.com/.
So, a thread that looked like mysite.com/forums/viewtopic=... now looks like mysite.com/viewtopic=...
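mod_alias can map all the old paths onto the new ones with a single rule; a sketch, assuming it goes in the site's vhost or .htaccess:
Code:
# permanently redirect any old /forums/ URL to the same path at the root
RedirectMatch 301 ^/forums/(.*)$ http://mysite.com/$1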
I operate a home network with Ubuntu Server 10.04 with services including DHCP3, Bind9, Apache, and so on. Since I host several dozen websites from home, I have to run Bind DNS. All Ubuntu boxes on my network operate fine. However, all Windows boxes on the network seem to forget to look internally for DNS after a couple of page loads on my internal sites. The network settings still indicate that my internal domain name server is the first lookup and everything seems normal.
In my office I want to set up a Linux machine for public usage. On this machine I want to restrict/deny access to certain applications (e.g. k3b, xterm, a PDF reader) for certain users/groups of users, as per office policy.
1) By what method/procedure can I achieve this objective? (See the sketch below.)
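One standard approach is to make each binary executable only by members of a dedicated group. A sketch, where the group name k3busers and the user name alice are made-up placeholders:
Code:
# only members of k3busers may run k3b
sudo groupadd k3busers
sudo chgrp k3busers /usr/bin/k3b
sudo chmod 750 /usr/bin/k3b
sudo usermod -aG k3busers alice    # whitelist a user; everyone else is denied
This controls the binary itself, so the same three lines would be repeated for xterm, the PDF reader, and each other restricted application.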
On my server I have an application running. I have the external IP address of the server registered in DNS, so users requiring access from outside the office can enter a full URL rather than an IP address.
How do I change my Apache config so that all traffic that comes into the server via that URL is put over HTTPS?
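The usual pattern is a port-80 vhost whose only job is to bounce everything to HTTPS. A sketch; app.example.com is a placeholder, since the post doesn't give the hostname:
Code:
<VirtualHost *:80>
    ServerName app.example.com
    Redirect permanent / https://app.example.com/
</VirtualHost>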
My goal is a testing server with an Apache virtual host for each site that I'm working on, with fairly painless setup for each new job. For example, I want http://site-a.mydomain to serve the document root /home/client-a/site-a/public_html (or something to that effect). Ideally, DNS will use a wildcard to point http://anything-i-type.mydomain to the testing server, and Apache will have a dynamic virtual host definition that does a little magic, so I won't have to mess with DNS records or add a new virtual host each time I add a site for a client. I'll worry about that when I get there; I just put it out there in case anyone has tips! For now I have one little problem that's hurting my mood.
It looks like I've got my DNS server working just fine, so yay there. BUT my first attempt at adding a virtual host isn't working quite how I expected: site-a.mydomain now serves up the correct document root, but when you put http://site-a.mydomain into the browser's address bar, the address bar is then updated to http://10.0.1.100/site-a/public_html. Bogus!! I must be missing an option like "FunnyBusiness Off".
root@ubuntuvm:/etc/apache2/sites-available# vim client-a.mydomain
Code:
<VirtualHost *:80>
    UseCanonicalName Off
    ServerName site-a.mydomain
    DocumentRoot /home/client-a/site-a/public_html
</VirtualHost>
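Two things worth checking for the address-bar symptom: that this vhost is actually enabled (a2ensite client-a.mydomain, then reload) and that NameVirtualHost *:80 is set, otherwise requests fall through to the default site. For the wildcard goal described above, mod_vhost_alias can derive the document root from the hostname; a sketch, assuming hostnames of the form site.client.mydomain (an assumption, since the post only shows site-a.mydomain):
Code:
# sketch; needs "a2enmod vhost_alias"
NameVirtualHost *:80
<VirtualHost *:80>
    UseCanonicalName Off
    # %1 = first hostname part (site), %2 = second part (client)
    VirtualDocumentRoot /home/%2/%1/public_html
</VirtualHost>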
Is there a way I can redirect messages from the kernel ring buffer to a log file, e.g. with rsyslogd? By "redirect" I mean that the messages no longer appear in dmesg, but only in the log file. In my case these would be iptables log messages.
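rsyslog can match on a log prefix and divert those messages away from the other log files (dmesg itself always reads the ring buffer directly, so this controls the log files rather than dmesg output). A sketch, assuming the iptables rules use --log-prefix "[iptables] ", which is an assumption:
Code:
# /etc/rsyslog.d/10-iptables.conf
# write matching messages to their own file, then discard them
# so they stay out of /var/log/syslog
:msg, contains, "[iptables]" /var/log/iptables.log
& ~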
I have several web servers running Apache on my LAN. Each internal server hosts a number of domains. I would like to make these available to the Internet and make sure they all get to use ports 80 and 443. My idea was to put Apache on the firewall and have all HTTP(S) traffic from the Internet to my firewall be redirected by Apache to the different internal Apache servers. This, in theory, would allow me to keep the standard HTTP(S) ports.
Can this be done? I was thinking of mod_rewrite and mod_redirect, but in all honesty I'm a little at a loss about where to start. Can someone point me to relevant documentation or give me the basic idea of how to start?
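This is what a name-based reverse proxy does: one vhost per public hostname on the firewall, each handing requests to an internal server via mod_proxy (mod_rewrite isn't needed for the basic case). A sketch; the hostnames and internal IPs are placeholders, and port 443 would additionally need mod_ssl and the certificates on the firewall:
Code:
# on the firewall: a2enmod proxy proxy_http
<VirtualHost *:80>
    ServerName www.firstdomain.com
    ProxyPreserveHost On
    ProxyPass        / http://192.168.1.10/
    ProxyPassReverse / http://192.168.1.10/
</VirtualHost>

<VirtualHost *:80>
    ServerName www.seconddomain.com
    ProxyPreserveHost On
    ProxyPass        / http://192.168.1.11/
    ProxyPassReverse / http://192.168.1.11/
</VirtualHost>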
I'm trying to set up a framework where people connected to the same wifi connection can reach a local site for development purposes. I want them to be taken to a local development copy of the site when they type www.development.loc into the browser. I don't want to connect the world, only people connected to my wifi. What do I need to edit to:
1. allow me to access the local copy of the site on my computer (located at /var/www/developmentsitename/) using www.development.loc in the browser; and
2a. make the server accessible to other computers connected to the same wifi connection (a sketch of 1 and 2a follows);
2b. if another computer can connect to this site, can we create a virtual desktop setting in which workers can work as if they have their own partition on the server to work on and upload work onto the development server?
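A sketch for points 1 and 2a: a vhost for the .loc name, plus a hosts-file entry on every machine that should resolve it. The 192.168.1.5 address is an assumption:
Code:
# /etc/apache2/sites-available/development.loc
<VirtualHost *:80>
    ServerName www.development.loc
    DocumentRoot /var/www/developmentsitename
</VirtualHost>

# on the dev box and in each client's hosts file (/etc/hosts on Linux,
# C:\Windows\System32\drivers\etc\hosts on Windows):
192.168.1.5  www.development.loc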
I've created a virtual host, and when I try to access it, it displays the root of the default server. I'm running Fedora 11. The same configuration works fine on our Fedora 8 box.
The box is at 192.168.0.200; the default server is set to Listen 80, and the virtual server uses the same address and port.
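On Fedora, name-based vhosts silently fall back to the default server when there is no NameVirtualHost line matching the <VirtualHost> address, which produces exactly this symptom. A sketch using the 192.168.0.200 address from the post; the ServerName and DocumentRoot are placeholders:
Code:
# /etc/httpd/conf/httpd.conf (sketch)
NameVirtualHost 192.168.0.200:80
<VirtualHost 192.168.0.200:80>
    ServerName vhost.example.com
    DocumentRoot /var/www/vhost
</VirtualHost>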
I just upgraded from v11.0 to v11.2; in fact, it's still applying updates. I had a number of web apps in folders in my htdocs which I cannot access at the moment. They are still there, I just cannot access them. Has anyone struck the same problem? I know I had this problem about 12 years ago, but being older and senile I cannot remember exactly what it was. It's really important, as the htdocs contain PHP scripts which create XML data files for upload to web pages, so I would like a quick fix if possible.
I have just downloaded and installed the 32-bit server version on a couple of machines. These machines are remote displays, with no keyboards or mice, requiring a kick-off from ssh to start the app. It's just a bare server install with TWM, X, and XDM. The problem is that they don't seem to read my /etc/X0.hosts file until there has been a keyboard login; after that, running xhost locally returns the names from the file. I removed the line
Code:
exec /usr/bin/X -nolisten tcp "$@"
from /etc/X11/xinit/xserverrc, but that didn't solve it either. In the user's .profile I have these two lines:
Code:
DISPLAY=:0.0
xhost +
This is a real chicken-and-egg thing: no auto-login, no keyboard for login, and no ssh X access before a local login. And there is no easy way to make it auto-login without installing GDM.
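One thing worth checking, as a sketch rather than a confirmed fix: under XDM the server's command line comes from /etc/X11/xdm/Xservers, not from xserverrc (which only applies to startx/xinit), so -nolisten tcp may still be in force there:
Code:
# /etc/X11/xdm/Xservers -- drop -nolisten tcp so the /etc/X0.hosts
# entries take effect from server start, before any local login
:0 local /usr/bin/X :0 vt7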
I acquired an old HP ProLiant ML370 server, and I'm attempting to find a way to control the fan speed, as all three noisy fans are running at full blast, and I had planned to keep this thing in the office because it has no wireless capability or support.
I know the following: HP has a suite of programs variously called Insight Control Manager, Server Health driver, hp-health, hpasm, and a few more I can't remember. Obviously these are different iterations of the same program, but I have been unable to determine which one I need, and I've been completely unable to find a way to install any of them or to find the repository that contains them.
The website (Download Drivers and Software) lists a lot of different enterprise-class server OSes, but nothing about Ubuntu or any home server OSes. I've only been at this for a week, so I don't know which of these would work with Ubuntu Server, or how to make them work if the packages aren't .deb files. I'm currently running Server version 10.10, as 11.04 gave me monitor troubles.
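If HP only ships the tools as RPMs, alien can usually convert them to .deb packages; a sketch, where the file name is a placeholder since the post doesn't identify the exact package:
Code:
sudo apt-get install alien
sudo alien --to-deb hp-health-*.rpm    # converts the RPM into a .deb
sudo dpkg -i hp-health_*.deb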
I'm having a problem with my WebDAV share. I have a secure WebDAV folder that gets accessed via a non-standard port and requires basic authentication. I can connect and interact with it fine via cadaver. However, when I try to connect from Nautilus, it says "Access was denied." To make it even stranger, sometimes I can click on the folder in Nautilus (it still mounts) and access it; sometimes not (it just repeats the error message and won't show me the contents). I may not even un-mount it, but just look at another folder, then click it again and be able to access it, but again, only rarely.
I asked a friend to try connecting from his Windows Vista computer and it would not work. It would not work from my Windows XP virtual machine either. However, it mounts and works just fine from my work computer (also Windows XP).
So it seems to be a 50/50 chance that the share will mount and work on any given computer/system. Does anyone know what the problem may be? I'm guessing user permissions, but I can't figure out what.
I've made sure the webdav folder is owned by www-data and www-data has read access to the password file as well.
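For reference, a sketch of the kind of DAV vhost described; the port and paths here are assumptions, not taken from the post. It may also be relevant that Windows' built-in WebDAV client is notoriously picky about Basic authentication and non-standard ports, which could account for the machine-to-machine variation:
Code:
# sketch only -- port 8080 and all paths are assumptions
Listen 8080
<VirtualHost *:8080>
    DavLockDB /var/lock/apache2/DavLock
    Alias /dav /var/www/webdav
    <Location /dav>
        Dav On
        AuthType Basic
        AuthName "webdav"
        AuthUserFile /etc/apache2/webdav.passwd
        Require valid-user
    </Location>
</VirtualHost>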
When I try connecting from nautilus, I get this in the log file:
Code:
Here is one of the (many) sites I've tried looking at: [url]
I am trying to grep for multiple numbers from a file; grep does have the -f option for that.
Code:
grep -f <(seq 500 520) /etc/passwd
I know this could be done with
Code:
for i in `seq 500 520`; do grep "$i" /etc/passwd; done
But my question goes far beyond this example: is it possible to redirect one command's output so that it is treated as the contents of a file for another command?
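Yes. In bash this is process substitution: <(command) expands to a file name (a /dev/fd entry) whose contents are the command's output, so any program expecting a file can read another command's output. Two sketches:
Code:
grep -f <(seq 500 520) /etc/passwd    # seq's output is read as grep's pattern file
diff <(sort a.txt) <(sort b.txt)      # compare two command outputs without temp files
One caveat for this particular example: bare numbers as patterns match anywhere on the line, so UID 1500 would also match 500; anchoring the patterns (e.g. ":500:") tightens it.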
Background: I need some help installing an X server/apps on a headless (i.e. no monitor, no graphics device) server. The server is in fact a virtual one on Amazon EC2, so there's no monitor nor any graphics hardware. Fedora (release 6, I believe) comes pre-packaged. The problem is that I want to run a server app with a GUI. The app won't start without the GUI (and I'll probably need to tweak a few things through its GUI too).
I plan on setting up a very bare, minimal X on the server and then using NX for remote access. Can somebody shed some light here?
Simply put, my questions are:
1) What would be the minimal list of packages I need to install? (A sketch follows below.)
2) Where can I find docs about installing and setting up NX? I could only find very fragmented/outdated docs about it.
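For question 1, a sketch of one minimal route: Xvfb provides an X display with no graphics hardware at all, and NX or VNC can then attach to it. The package names assume Fedora's yum repositories and may differ by release:
Code:
yum install xorg-x11-server-Xvfb xorg-x11-xauth twm xterm
Xvfb :1 -screen 0 1024x768x16 &     # virtual framebuffer on display :1
DISPLAY=:1 ./your-server-app &      # placeholder for the GUI server app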
Some minimized apps no longer appear in the top menu and are therefore no longer accessible, for example Firefox with the minimize add-on, or the Jungle Disk backup service. How can I reach apps that have minimized themselves and are not shown in the top menu?