I have read just about every page about the UEC project, and I now know everything about it, except... WHAT IT DOES! I can't seem to find a coherent example of what it does. To the best of my understanding, it runs a virtual machine across multiple servers, allowing the VM to use the RAM, CPU, and HDD resources on all the servers involved.
I downloaded the latest 10.04 server CD with the intention of running a small Ubuntu Enterprise Cloud. I am following the directions here: [URL] I've got 2 laptops that are capable of the tasks assigned to them. Both have dual-core Intel chips that are VT enabled, 4GB of RAM, and 250GB hard drives. I'll use one ("server") as the front-end server running the CC, CLC, Walrus, and SC. The other one ("node1") will be the only node controller on my little network. I've also got another laptop as a client, running euca commands to make instances and whatnot. These three laptops are connected to a switch. server is 192.168.1.100, node1 is going to be 192.168.1.110, and the client laptop is 192.168.1.120.
The server seems to install fine: I select Install Ubuntu Enterprise Cloud, use it as the Cluster, give the cluster a name and 10 IPs to assign, 192.168.1.150-192.168.1.160. After the server is done installing and reboots, I boot the node machine off the CD and again select Ubuntu Enterprise Cloud. It's at this point the install craps out, because it does not detect a cloud controller on the network.
Indeed, as I go to the server I run
ps aux | grep euca
and see nothing running. So I start the eucalyptus service, and run
sudo euca_conf --list-clusters
and nothing shows up. I've done some googling and run some more euca_conf commands, registering the cluster and enabling Walrus, the cloud controller, and the SC. I can access the web GUI from the client laptop, so I restart the node install on the node laptop. This time it does see the server as a cluster controller, but when it tries to fetch the preseed file, it seems not to know the cluster's IP: the red box that complains about the lack of a preseed file lists the URL as [URL] (or whatever the file is called, I don't have the error in front of me).
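For reference, the registration steps on the front end looked roughly like this. This is a sketch assuming the euca_conf syntax shipped with Eucalyptus on 10.04; the IP and cluster name are from my setup above:

```shell
# Start the Eucalyptus services on the front end
sudo service eucalyptus start

# Register Walrus, the cluster, and the storage controller
# (192.168.1.100 is my front-end "server"; cluster1 is the name I chose)
sudo euca_conf --register-walrus 192.168.1.100
sudo euca_conf --register-cluster cluster1 192.168.1.100
sudo euca_conf --register-sc cluster1 192.168.1.100

# Verify the cluster now shows up
sudo euca_conf --list-clusters
```

Only after the cluster appeared in --list-clusters did the node installer detect a cluster controller on the network.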
What kind of network between the cloud controller and nodes is required for a proper cloud installation? Do all machines need to be on the same network, in the same LAN, or can they be in a MAN or WAN? How much network throughput is needed: 1Mbit/sec, 10Mbit/sec, or 1Gbit/sec? I ask because I need to know whether it's possible to run nodes in different locations.
It's best if you take a quick look at this image, which describes the network topology: [URL]
I am behind a firewall in the university dorm and many ports are blocked. Pretty much everything besides 80, 8080, 110, 21, 22 and the most basic ones. So I'd like to get around that.
I have a home server that is connected and reachable on the internet. So if you type in 18.104.22.168:80 into a browser it's reachable.
The task would be to set up port forwarding (or whatever it's called) so that if I access my home server from the dorm or anywhere else, it acts as a forwarder and forwards that packet or connection to the 22.214.171.124 server on a specific port, say 2083, so that I can even access my hosting company's admin user interface.
To sum it up: I'd like to access the 126.96.36.199 server from the dorm on port 2083, which is blocked by a firewall, but I have a home server that is reachable on non-blocked ports. The home server has no ports blocked.
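Since port 22 is open from the dorm, an SSH local port forward through the home server is the usual way to do this. A sketch, assuming the home server runs sshd; the hostnames here are placeholders:

```shell
# From the dorm machine: forward local port 2083 through the home
# server to the hosting company's server (placeholder names).
ssh -L 2083:hosting.example.com:2083 user@homeserver.example.com

# While the tunnel is up, point the browser at:
#   https://localhost:2083/
# The dorm firewall only sees an outbound connection on port 22;
# the home server makes the actual connection to port 2083.
```

The same pattern works for any blocked port: change the two `2083`s to whatever the destination service listens on.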
I get an error during install when searching for and trying to add a node from the Cloud Controller:

Code: New node found on 192.168.1.182: add it? [Yn] y Connecting to 127.0.0:8774...failed: Connection refused. Error: you need to be on the CC host and the CC needs to be running.
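The "Connection refused" on 8774 suggests the cluster controller itself isn't running (or is registered under a bad address). On UEC 10.04 the CC is an upstart job, so checks along these lines are a reasonable first step; the exact job names are an assumption based on that release:

```shell
# On the front end, check that the cluster controller is actually up
sudo status eucalyptus-cc
sudo start eucalyptus-cc       # start it if it isn't running

# The CC listens on 8774; confirm something is bound there
sudo netstat -tlnp | grep 8774
```

If nothing is listening on 8774, registering or adding nodes will fail exactly as shown above.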
We have a problem running the images on the cloud server. How can we run the Eucalyptus cloud images using ElasticFox? Or is there another simple way to activate the images provided by Eucalyptus? Thanks in advance. We're only trying to activate the private cloud.
What's the difference in terms of scalability? We would be hosting videos and FOSS collaboration tools (wiki, forums, etc.) on 4 separate servers. If I install the cloud server, I will need to install the GUI anyway. The servers are all brand new.
<embed ..... var="rtmp://site1.my_domain.com" >
How will I make sure this rtmp request is mapped to a port different from 1935, given that there are three other streaming servers which also have to respond to their respective requests?
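If each streaming server can be configured to listen on its own port, the usual approach is to put that port in the rtmp URL itself, since 1935 is only the default. A sketch, with made-up port numbers and a hypothetical application path `/app` (the `.....` stands for the rest of the embed attributes, as in the original):

```
<embed ..... var="rtmp://site1.my_domain.com:1936/app" >
<embed ..... var="rtmp://site2.my_domain.com:1937/app" >
```

Each embed then reaches its own server directly, with no port collision on 1935.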
I have collected a number of computers over the years, and now I would like to put them to good use. I considered UEC, but many do not support hardware virtualization, and all I really need is storage. Across all the machines, I estimate that I have 4-5 terabytes of storage, all going to waste because each one has relatively little storage space on its own. Is there any way I could set up a redundant storage solution that utilizes these machines in a networked system?
I've been reading about what cloud computing is, and I think it can help me with some of my clients. I want to switch my clients from a normal Ubuntu server to an Ubuntu cloud. As of right now, I have to send out a bill to them, and if they don't pay, I have to shut down their service until they pay. What I would like to do is have a cloud where I can charge them based on what they use, not a set price like it is now, and have them be able to pay their bill on the cloud; if they miss the bill, the cloud can shut off their service until it's paid.
I don't know if this is possible. I have looked everywhere, and all I can find is info on other businesses' billing, not how to set up a cloud to do this. I wish there was some kind of tutorial for this. If anyone can direct me to some good notes/tutorials, that would be very helpful. This could be a big turning point in my business if I can do this; it would save a lot of time and cash.
Except for one, all websites are running properly and being redirected to their respective domains. Following is the configuration I used: for each site I define on server A a vhost file which contains the following
Code:
ProxyPass / http://<Ip of Server>
ProxyPassReverse / http://<Ip of Server>
So if I have 5 websites, then I have 5 vhost files on the gateway (A in the above diagram), and in each of those files, as above, the root of the site is redirected to the internal IP. 4 of them are running properly. The fifth website runs on port 8080 under /keyword, so in its vhost file on the gateway I defined
Code:
ProxyPass / http://<Ip of Server>:8080/keyword
ProxyPassReverse / http://<Ip of Server>:8080/keyword

On the LAN I can see http://<Ip of Server>:8080/keyword, but when I try to visit http://site5.abc.com from the internet, I get redirected to https://site5.abc.com:8443/ and it says
Code:
The webpage at https://site5.abc.com:8443/ might be temporarily down or it may have moved permanently to a new web address.

site5.abc.com has a requirement to run on port 8080 internally, and it is not an Ubuntu server (it's Red Hat based), while all the rest are Ubuntu servers, including gateway A.
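For comparison, a gateway vhost for the 8080/keyword site might look like the sketch below. ProxyPreserveHost is the usual first thing to try when a backend bounces the client to its own internal port, as the 8443 redirect suggests; `<Ip of Server>` is a placeholder as in the original, and the trailing slashes on both ProxyPass arguments should match each other:

```
<VirtualHost *:80>
    ServerName site5.abc.com

    # Pass the original Host: header through so the backend
    # doesn't build redirects against its internal name/port
    ProxyPreserveHost On

    ProxyPass        / http://<Ip of Server>:8080/keyword/
    ProxyPassReverse / http://<Ip of Server>:8080/keyword/
</VirtualHost>
```

One caveat: ProxyPassReverse only rewrites Location headers that match the configured URL, so if the backend application itself forces a redirect to https on 8443, that has to be disabled (or proxied too) on the backend side.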
I am a novice in the world of cloud computing and recently managed to configure Ubuntu 9.04 Cloud (using KVM, Eucalyptus and other packages) successfully at my college for my project work. The problem is that I can only manage to view the running instance using rdesktop from a remote machine. Is there any way to do this other than rdesktop/logs? Secondly, I want to develop an application along the lines of Google Docs as part of my project. Is it possible to install Apache server on this virtual instance and host a website? How will the client access this website? Which frameworks would be required, or do I have to develop one?
I am in the process of setting up a couple of virtual servers in a cloud environment. I am currently working on my application server (Server 1) and am stuck on the creation of my ruleset for this server.
I need to allow SSH, FTP, HTTP, HTTPS, and PING on this server. This server will also need to be able to talk with a couple of database servers as well as a memcache server (all internally within my cloud environment).
I have been reading up on iptables, since I have never messed with it before, and have come up with the ruleset I will paste below. I have taken other steps to secure my server: changing the SSH port, not allowing root to log in via SSH (you have to log in as a regular user first), turning off unnecessary daemons, and editing my hosts allow/deny files, just to name a few.
I am a newbie to iptables, so I would love a bit of helpful advice, criticism, and even a good explanation why I should add or remove or edit something. I really want to know the how AND the why!
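For readers of the thread, here is a minimal sketch of the kind of ruleset described. The SSH port 2222 and the 10.0.0.0/24 internal cloud range are assumptions; substitute whatever your environment actually uses:

```shell
#!/bin/sh
# Flush existing rules and set a default-deny policy on input
iptables -F
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Loopback, plus replies to connections we initiated
# (this also covers traffic back from the db/memcache servers)
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# SSH (2222 assumed as the moved port), FTP, HTTP, HTTPS
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
iptables -A INPUT -p tcp --dport 21   -j ACCEPT
iptables -A INPUT -p tcp --dport 80   -j ACCEPT
iptables -A INPUT -p tcp --dport 443  -j ACCEPT

# PING
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT

# Trust the internal cloud network (assumed range)
iptables -A INPUT -s 10.0.0.0/24 -j ACCEPT
```

Two notes on the design: with OUTPUT left at ACCEPT, the server can always initiate connections to the database and memcache boxes, and ESTABLISHED,RELATED lets the replies back in. Passive FTP will also need the connection-tracking helper (`modprobe nf_conntrack_ftp`) or a defined passive port range to work through a default-deny INPUT chain.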
But when you go to 'elliesdev.com' it shows you the EHCP page. I have followed this thread to uninstall EHCP as well as deleting the index.php and its image files from /var/www, but I still see the "default web site page" from EHCP. So I'm pretty sure it's something with Apache that is causing the site to display that page? What should I do to get rid of the EHCP default page?

I also have a Magento ecommerce site running at elliesdev.com/magento, and I would like Apache to display it as the default page you see when you go to elliesdev.com. I have tried changing the DocumentRoot in Apache to /var/www/magento, but that makes Magento display wrong. How do I make Apache refer to /var/www/magento as the default for elliesdev.com without it displaying wrong?

Also, when you go to elliesdev.com/magento, the URL turns into IP-ADDRESS/magento. How do you make Apache keep my domain name in the URL and not change it to my IP address?

I have a feeling that what I am doing is correct, but there is some config file from EHCP that I have not deleted yet that is causing all of this.
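A vhost along these lines is the usual way to serve /var/www/magento at the domain root (the paths and domain match the post; the www alias is an assumption). Note that the IP appearing in the URL, and the "displays wrong" styling, are typically caused by Magento's own stored base URL rather than by Apache, so that needs checking too:

```
<VirtualHost *:80>
    ServerName elliesdev.com
    ServerAlias www.elliesdev.com
    DocumentRoot /var/www/magento

    <Directory /var/www/magento>
        # Magento's .htaccess rewrites need this
        AllowOverride All
    </Directory>
</VirtualHost>
```

In the Magento admin, the unsecure/secure base URLs (System > Configuration > Web) should be set to http://elliesdev.com/; if they hold the IP address, Magento will redirect to the IP and load its CSS/JS from the wrong place. Any leftover EHCP vhost with a matching ServerName should also be disabled so it doesn't win over this one.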
I'm trying to set up a RADIUS server to use WPA2-Enterprise on a Linksys wireless router. I have so far done the following from this link: [URL]... I'm having trouble understanding/finding information on how to set up the configuration files so my RADIUS server will work when somebody tries to authenticate.
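Assuming FreeRADIUS, the two files that usually matter for a first WPA2-Enterprise test are clients.conf (the access point, not the laptop, is the RADIUS "client") and users. A minimal sketch with placeholder addresses and secrets:

```
# /etc/freeradius/clients.conf -- the Linksys AP; the secret must
# match the RADIUS shared secret entered on the router's admin page
client 192.168.1.1 {
    secret    = SharedSecretWithTheAP
    shortname = linksys-ap
}

# /etc/freeradius/users -- a test account for PEAP/MSCHAPv2
testuser  Cleartext-Password := "testpass"
```

Before involving the router at all, the account can be checked locally with radtest (using the default localhost secret from clients.conf): `radtest testuser testpass localhost 0 testing123`. Once that succeeds, point the Linksys at the server's IP on ports 1812/1813 with the shared secret above.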
Having been a long-time Linux supporter, it was a no-brainer for me to push Linux as the OS when it was time to move my company's website(s) from an external host to an internal server. My company ponied up the bucks for an HP Proliant DL180 G5 server (this was about a year ago) upon which I installed Slackware 12.1 (I kinda default to Slackware, although in the past I've also used RedHat on the server and openSUSE on the desktop).
Well, it's now time to upgrade the OS on the server. Perhaps I'm a little late to the game on this, but I just recently found out about Oracle Enterprise Linux 5.5. Bad Oracle ju-ju aside, it looked to be a very good and stable server distro, especially when you consider the availability of Oracle support for it. So I've since installed OEL on a non-production server and have been testing it out. It looks to be a good match for my needs.
However, since installing OEL on my test server I've installed Ubuntu Desktop 10.10 on my home desktop (a 1.86GHz Intel Core 2 Duo HP xw4400 workstation) and in a VirtualBox VM on my work desktop (a 3.2GHz Intel Core i3 iMac) and... holy crap, it's the best desktop Linux I've ever had the pleasure of using. It will certainly replace Slackware as my default "go-to" Linux distro. Which has made me wonder about the Ubuntu Server offering, and whether it might be a better fit for my webserver in the same way that Ubuntu Desktop is a better fit for my workstation.
Unfortunately, I don't have the time to pull down another server distro, install it, and test it unless I'm reasonably certain that it's going to give OEL a run for its money. So I turn to the forums and the knowledge of the users who frequent them: is Ubuntu Server a strong enough contender to OEL that I should take a more serious look at it? Or, like I'm hearing about the latest Ubuntu Netbook edition, would it not be worth my time if I'm happy with OEL so far?
I have (had) most of my pictures backed up on Ubuntu One. I have several computers and everything was in sync. The first sync took weeks, as I have around 20GB of pictures and a mediocre internet connection. Then I installed Ubuntu 11 on one PC, and as I wanted to repartition things, I took a copy of the Ubuntu One folder and reinstalled from fresh. When that was done, I copied the Ubuntu One folder back. Ubuntu One didn't seem to like this and has basically decided to upload everything again, but my used space is now showing up as only 1.8GB. Where have my online files gone?
I have a number of questions related to Ubuntu and private clouds; I will be working on cloud computing for my final year project soon. 1. What needs to be done after installing Ubuntu to deploy a private cloud? 2. My computer has a 32-bit Intel CPU; is there any way I could install a 64-bit Ubuntu/Fedora OS in a VM on this computer? 3. What other software applications do I need to install on Ubuntu to enable me to create a private cloud?
In our school I was thinking of moving to virtual desktops for the students, which would revert to a pristine state upon every logout, with no changes preserved after a session. We use Google Docs for office work, so that part is covered. But where can we store stuff like Inkscape drawings? As far as I can tell, Google storage can't be reliably mounted yet on Ubuntu. Is there a good cloud storage option that we can quickly log into and that will give us a folder for online storage?
I have read several articles online trying to get a grasp on a simple question: what are the main differences and functions of cloud server infrastructure versus cluster server infrastructure?
To my understanding, the basic setup is sort of identical in a way. They both have a master server (3GHz, 2GB RAM, 50GB HDD); from there it connects to a switch, and then a number of nodes (each 2GHz, 4GB RAM, 100GB HDD) connect to the switch.
The cluster combines node resources so it looks like one computer (5 nodes = 2GHz x 5, 4GB x 5, 100GB x 5 = 10GHz, 20GB RAM, 500GB HDD). If this is correct, it sounds good for something like a database or file server, since the resources on the cluster would be constant. Example: a database server starts to slow down because of all the data it is trying to store and retrieve; if you need more resources, just add another node.
The cloud uses only the resources it needs to run (same numbers as the cluster). This is good for a database or web server. Example: a web server is hosting 10 domains, each with 20 pages. One hour it uses 3GHz and 8GB RAM; the next hour it is getting heavy traffic, so it uses 9GHz and 15GB RAM.
But that is how I am interpreting and understanding the information I have gathered and read. Plain and simple.