How can I make sure this RTMP request is mapped to a port other than 1935? There are three other streaming servers that also need to respond to their respective requests.
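For what it's worth, the usual fix is to give each streaming server its own listen port rather than remapping 1935 from outside. A minimal sketch, assuming one of the servers happens to be nginx with the RTMP module (the port is a placeholder; Red5, Wowza, and FMS have equivalent bind-port settings in their own configs):

Code:
rtmp {
    server {
        listen 1936;            # answer RTMP here instead of the default 1935
        application live {
            live on;
        }
    }
}

Clients would then connect with rtmp://host:1936/live/streamname.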
What kind of network between the cloud controller and the nodes is required for a proper cloud installation? Do all machines need to be on the same network, in the same LAN, or can they be in a MAN or WAN? How much network throughput is needed: 1 Mbit/s, 10 Mbit/s, or 1 Gbit/s? I ask because I need to know whether it is possible to run nodes in different locations.
What's the difference in terms of scalability? We would be hosting videos and FOSS collaboration tools (wiki, forums, etc.) on 4 separate servers. If I install the cloud server, I will need to install the GUI anyway. The servers are all brand new.
I have been reading about what cloud computing is, and I think it can help me with some of my clients. I want to switch my clients from a normal Ubuntu server to an Ubuntu cloud. Right now I have to send out a bill to them, and if they don't pay I have to shut down their service until they pay. What I would like to do is have a cloud where I can bill them based on what they use, not a set price like it is now, and have them be able to pay their bill on the cloud; if they miss the bill, the cloud can shut off their service until it's paid.
I don't know if this is possible. I have looked everywhere, and all I can find is info on other businesses' billing, not how to set up a cloud to do this. I wish there were some kind of tutorial for this. If anyone can direct me to some good notes/tutorials, that would be very helpful. This could be a big turning point in my business if I can do this; it would save a lot of time and cash.
I am in the process of setting up a couple of virtual servers in a cloud environment. I am currently working on my application server (Server 1) and am stuck on the creation of my ruleset for this server.
I need to allow SSH, FTP, HTTP, HTTPS, and PING on this server. The server will also need to talk to a couple of database servers as well as a memcached server (all internally within my cloud environment).

I have been reading up on iptables, since I have never messed with it before, and have come up with the ruleset I will paste below. I have taken other steps to secure my server: changing the SSH port, not allowing root to log in via SSH without first logging in as a user, turning off unnecessary daemons, and editing my hosts allow/deny files, just to name a few.

I am a newbie to iptables, so I would love a bit of helpful advice, criticism, and even a good explanation of why I should add, remove, or edit something. I really want to know the how AND the why!
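For context, here is a rough sketch of the shape of ruleset I mean; it is not my actual ruleset, and the custom SSH port (2222) and internal subnet (10.0.0.0/24) are placeholders:

Code:
# default-deny inbound, allow loopback, replies, and all outbound
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# public services: SSH (custom port), FTP, HTTP, HTTPS
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
iptables -A INPUT -p tcp --dport 21 -j ACCEPT   # passive FTP also needs ip_conntrack_ftp
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# allow ping
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
# database and memcached, internal subnet only
iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 3306 -j ACCEPT
iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 11211 -j ACCEPT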
I'm looking at setting up a couple of automated systems. Here are a few examples:

* An internal accounting system to download and process emails
* A public web server for people to visit
I could put each system on its own separate box; for example, it's generally good practice to separate anything that external users have access to (such as a web server) from internal processes such as accounting. Now, rather than dishing out the money for two separate servers, could I get away with just running each system in its own VMware instance on the same box?
To give you an idea, these are not large-scale, computationally sensitive systems. The accounting one simply downloads and tallies emails, and the latter is just a web server with maybe 5 hits per day on a good day. I could definitely pick up a new box for, say, $50, but I wanted to know the general practice of using VMware on one box versus two separate boxes.
I successfully installed Darwin Streaming Server. I can stream audio over the internet fine, but video I can only stream locally on my network; when I am connected to the internet outside my network, it doesn't stream. I think ports must need to be opened for that. Does anyone have any ideas? The audio is streamed on port 8000; the video is streamed on port 7070, but locally only. I opened those two ports in my router and only the audio works. I also opened ports 554 and 7170 and disabled the router's firewall. Is it a problem with ports, or something else?
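For anyone hitting the same wall: RTSP video negotiates extra UDP ports for the actual media, so opening the RTSP ports alone is not enough. A quick check, assuming a default DSS install:

Code:
# confirm what the server is actually listening on
sudo netstat -tulpn | grep -i darwin
# DSS typically needs TCP 554 and 7070 (RTSP) plus UDP 6970-6999 (RTP/RTCP)
# forwarded on the router; audio on 8000 works without any of that because
# 8000 is the HTTP/MP3 broadcast port.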
So it's best if you take a quick look at this image, which describes the network topology: [URL]
I am behind a firewall in the university dorm and many ports are banned; pretty much everything is blocked besides 80, 8080, 110, 21, 22, and the most basic ones. So I'd like to get around that.
I have a home server that is connected and reachable on the internet; if you type 22.214.171.124:80 into a browser, it's reachable.
The task would be to set up port forwarding (or whatever it's called) so that if I access my home server from the dorm or anywhere else, it acts as a forwarder and forwards the packet or connection to the 126.96.36.199 server on a specific port, say 2083, so that I can even access my hosting company's admin user interface.
To sum it up: I'd like to access the 188.8.131.52 server from the dorm on port 2083, which is blocked by a firewall, but I have a home server that is reachable on non-blocked ports. The home server has no ports blocked.
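One way to do this, assuming the home server runs sshd on one of the non-blocked ports, is a plain SSH local port forward from the dorm machine:

Code:
# dorm -> home server (port 22 is open) -> hosting server's port 2083
ssh -p 22 -L 2083:188.8.131.52:2083 user@<home-server>
# then point the browser at https://localhost:2083/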
I have read just about every page about the UEC project, and I now know everything about it except... WHAT IT DOES! I can't seem to find a coherent example of what it does. To the best of my understanding, it runs a virtual machine across multiple servers, allowing the VM to use the RAM, CPU, and HDD resources of all the servers involved.
I'm curious if anybody can shed some light for me in this department. We're in a large environment with a Windows DHCP server. We have been tinkering with LTSP on Edubuntu as thin and fat clients. It works great, but right now we just have one server handling the lab, which works fine unless we want to expand, which may very well happen.
These are the instructions I received: log in to your Windows server, load the DHCP configuration screen, create a DHCP reservation for the MAC address you obtained, and add the configuration options below to enable the machine to boot from the LTSP server:

Code:
017 Root Path: /opt/ltsp/i386
066 Boot Server Host Name: <ip address>
067 Bootfile Name: ltsp/arch/pxelinux.0   # specify the CPU architecture in place of 'arch', for instance 'i386'
I'm curious: what if I want to have multiple bootable Ubuntu servers on the network? For example, let's say I have 3 labs and 3 servers: Server A for Lab A, Server B for Lab B, and Server C for Lab C. I want all of Lab C's computers to boot to Server C, B to B, A to A, etc.
1 - How would I add multiple entries on the Windows DHCP Server to allow all 3 (A B C) servers to boot?
2 - How would I be able to isolate the clients so ONLY Lab A clients boot to Server A, etc.? (One possible approach is sketched below.)
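One possible answer to both questions, assuming each lab sits in its own subnet and therefore its own DHCP scope, is to set options 017/066/067 at the scope level instead of the server level, so each scope hands out its own LTSP server. A sketch with netsh (scope and server addresses are placeholders):

Code:
netsh dhcp server scope 10.1.1.0 set optionvalue 066 STRING "10.1.1.10"
netsh dhcp server scope 10.1.1.0 set optionvalue 067 STRING "ltsp/i386/pxelinux.0"
netsh dhcp server scope 10.1.1.0 set optionvalue 017 STRING "/opt/ltsp/i386"
rem repeat for scope 10.1.2.0 -> Server B and scope 10.1.3.0 -> Server C

If all three labs share one subnet, the same options would instead have to go on each client's reservation.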
I get an error during install when searching for and trying to add a node from the Cloud Controller:

Code:
New node found on 192.168.1.182: add it? [Yn] y
Connecting to 127.0.0:8774... failed: Connection refused.
Error: you need to be on the CC host and the CC needs to be running.
We have a problem running the images on the cloud server. How can we run the Eucalyptus cloud images using ElasticFox? Or is there another simple way to activate the images provided by Eucalyptus? Thanks in advance. We're trying to activate the private cloud only.
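Besides ElasticFox, euca2ools can start an image from the command line; a minimal sketch, assuming credentials have been downloaded from the web UI to ~/.euca and that the image ID (emi-XXXXXXXX) is a placeholder:

Code:
. ~/.euca/eucarc                        # load EC2_URL and credentials
euca-describe-images                    # list the registered images
euca-add-keypair mykey > mykey.priv     # 'mykey' is an arbitrary name
chmod 600 mykey.priv
euca-run-instances emi-XXXXXXXX -k mykey -t m1.small
euca-describe-instances                 # wait for pending -> running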
I have collected a number of computers over the years, and now I would like to put them to good use. I considered UEC, but many of them do not support hardware virtualization, and all I really need is storage. Across all the machines, I estimate I have 4-5 terabytes of storage, all going to waste because each one has relatively little storage on its own. Is there any way I could set up a redundant storage solution that utilizes these machines in a networked system?
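One option that needs no hardware virtualization at all is a distributed filesystem such as GlusterFS; a rough sketch, assuming two machines named node1 and node2 each exporting /export/brick (names and paths are placeholders):

Code:
# on every machine
sudo apt-get install glusterfs-server
# from one node, build a replicated volume
sudo gluster peer probe node2
sudo gluster volume create vol0 replica 2 node1:/export/brick node2:/export/brick
sudo gluster volume start vol0
# any client can then mount the pooled, redundant storage
sudo mount -t glusterfs node1:/vol0 /mnt/storage

(The gluster command-line tool above is from the newer GlusterFS releases; older ones used generated config files instead.)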
I downloaded the latest 10.04 server CD with the intention of running a small Ubuntu Enterprise Cloud. I am following the directions here: [URL]. I've got 2 laptops that are capable of the tasks assigned to them. Both have dual-core Intel chips that are VT-enabled, 4GB of RAM, and 250GB hard drives. I'll use one ("server") as the front-end server running the CC, CLC, Walrus, and SC. The other one ("node1") will be the only node controller on my little network. I've also got another laptop as a client, running euca commands to make instances and whatnot. These three laptops are connected to a switch. server is 192.168.1.100, node1 is going to be 192.168.1.110, and the client laptop is 192.168.1.120.
The server seems to install fine: I select Install Ubuntu Enterprise Cloud, use it as the cluster, give the cluster a name, and assign 10 IPs, 192.168.1.150-192.168.1.160. After the server is done installing and reboots, I boot the node machine off the CD and again select Ubuntu Enterprise Cloud. It's at this point the install craps out, because it does not detect a cloud controller on the network.
Indeed, as I go to the server I run
ps aux | grep euca
and see nothing running. So I start the eucalyptus service, and run
sudo euca_conf --list-clusters
and nothing shows up. I've done some googling and ran some more euca_conf commands, registering the cluster and enabling Walrus, cloud, and SC. I can access the web GUI from the client laptop, then restart the node install on the node laptop. This time it does see the server as a cluster controller, but when it tries to fetch the preseed file, it seems not to know the cluster's IP: the red box that complains about the missing preseed file lists the URL as [URL] (or whatever the file is called; I don't have the error in front of me).
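For reference, the registration commands I ran were along these lines, reconstructed from memory, so the exact syntax may differ by version (the IPs are from my setup above):

Code:
sudo euca_conf --register-walrus 192.168.1.100
sudo euca_conf --register-cluster cluster1 192.168.1.100
sudo euca_conf --register-sc cluster1 192.168.1.100
sudo euca_conf --register-nodes "192.168.1.110"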
I have installed Apache, PHP, and MySQL in the Rackspace cloud server environment. Can anyone please guide me on how to configure an email server there, with Postfix or something else, for multiple domains?
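A minimal starting point for multiple domains with Postfix virtual aliases (the domains and local users are placeholders):

Code:
# /etc/postfix/main.cf
virtual_alias_domains = example.com example.net
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual
info@example.com    user1
info@example.net    user2

# apply the changes:
# sudo postmap /etc/postfix/virtual && sudo postfix reload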
Except for one, all the websites are running properly and being redirected to their respective domains. The following is the configuration I used for each site: on server A (the gateway) I define a vhost file which contains the following
Code:
ProxyPass / http://<Ip of Server>
ProxyPassReverse / http://<Ip of Server>
So if I have 5 websites, then I have 5 vhost files on the gateway (A in the diagram above), and in each of those files the root of the site is redirected to an internal IP as above. 4 of them are running properly. The fifth website runs on port 8080 under /keyword, so in its vhost file on the gateway I defined
Code:
ProxyPass / http://<Ip of Server>:8080/keyword
ProxyPassReverse / http://<Ip of Server>:8080/keyword

On the LAN I can see http://<Ip of Server>:8080/keyword, but when I try to visit http://site5.abc.com from the internet, I get redirected to https://site5.abc.com:8443/ and it says

Code:
The webpage at https://site5.abc.com:8443/ might be temporarily down or it may have moved permanently to a new web address.

site5.abc.com is required to run on port 8080 internally, and it is not an Ubuntu server (it is Red Hat based), while all the rest, including gateway A, are Ubuntu servers.
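The :8443 redirect suggests the backend itself (it sounds like a control panel on the Red Hat box) is issuing redirects that the gateway never rewrites. A fuller vhost sketch to experiment with, keeping the <Ip of Server> placeholder from above:

Code:
<VirtualHost *:80>
    ServerName site5.abc.com
    ProxyRequests Off
    ProxyPreserveHost On
    ProxyPass        / http://<Ip of Server>:8080/keyword/
    ProxyPassReverse / http://<Ip of Server>:8080/keyword/
    # If responses still carry Location: https://...:8443/, the redirect is
    # generated by the backend application and has to be changed there;
    # ProxyPassReverse only rewrites headers that match the mapped URL.
</VirtualHost>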
I am a novice in the world of cloud computing and recently managed to configure an Ubuntu 9.04 cloud (using KVM, Eucalyptus, and other packages) successfully at my college for my project work. The problem is that I can only manage to view the running instance using rdesktop from a remote machine. Is there any way to do this other than rdesktop/logs? Secondly, I want to develop an application along the lines of Google Docs as part of my project. Is it possible to install the Apache server on this virtual instance and host a website? How will the client access this website? Which frameworks would be required, or do I have to develop one?
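On the hosting question: yes, Apache can be installed inside the instance like on any Ubuntu server, and clients reach it via the instance's public IP once the security group allows port 80. A sketch, assuming euca2ools and the default security group:

Code:
# open HTTP to the world for instances in the default group
euca-authorize default -P tcp -p 80 -s 0.0.0.0/0
# inside the running instance:
sudo apt-get install apache2
# clients then browse to http://<instance-public-ip>/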
I am currently managing multiple Linux servers in remote locations, used particularly for web hosting. I need to back up data to a backup server, but rsync, which I am currently using, doesn't help. Is there any tool to back up every server without modifying it? Because there are hundreds of servers, installing a tool on every server is a time-consuming process.
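Note that rsync can run agentless if the backup server pulls over SSH; nothing has to be installed on the web servers beyond sshd and rsync, which are usually already there. A sketch, assuming a hosts.txt with one hostname per line and key-based SSH logins:

Code:
#!/bin/bash
# pull-backup.sh, run from the backup server
while read -r host; do
    rsync -az --delete "root@${host}:/var/www/" "/backups/${host}/"
done < hosts.txt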
At the moment I have 10 servers for HPC, oriented toward power computing. My users need to launch several processes using qmake. The users are used to working with Ubuntu 9.10, and the software from the repositories is suitable for them. I've deployed Ubuntu 9.10 to all 10 servers (PXE rocks). For now we work with parallel-ssh and cluster-ssh, which allow us to launch the same process on all servers. With these tools the servers remain independent, but with the same software and the same launched command. Now we would like to go to the next step and see all the servers as a single one, with the resources of the other 9 as if they were its own. The difference would be substantial in processing time and also in the time to design the command to launch.
To be short and to the point: I want to set up a service for my clients and allow them to back up to one of my servers. What I would like to do is allow client connections via WinSCP to an SSH server that has home directories for each user to back up data to. If users want their data, I would like them to be able to connect to my Apache web server (on the same machine as SSH) and download from anywhere.
Is there a way for Apache to link web directories on the server to the actual /home/user accounts and use the same login information/authentication? Does this make sense? I really appreciate the help. I am not a developer, or else I would simply develop a user-friendly front end to an SSH server. Since that is the case, I think this is the best solution for me, as well as the easiest for the client.
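One way that reuses the same system accounts, assuming Apache's mod_userdir plus mod_authnz_external with pwauth (package names and the pwauth path vary by distro):

Code:
# serve /home/<user>/public_html at http://server/~user/
sudo a2enmod userdir

# Apache auth sketch checking against the system password database
DefineExternalAuth pwauth pipe /usr/sbin/pwauth
<Directory /home/*/public_html>
    AuthType Basic
    AuthName "Backup area"
    AuthBasicProvider external
    AuthExternal pwauth
    Require valid-user
</Directory>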
I'm curious how I can use a Windows client with two separate accounts to connect, at the same time, to an SMB server hosting two shares (provided permissions and accounts are all in order).
Scenario: User1 is always logged onto a Windows client mapped to a Public share on a Linux SMB server. I need a way to keep User1 connected to the Public share and then, when needed, allow User1 to provide User2's credentials to connect to a Restricted share. The only way I've been able to do this is to disconnect from the Public share and then reconnect to the Restricted share using User2's credentials (this is the issue, because I need to keep User1 connected to the Public share). Is this a limitation of SMB, or am I missing a configuration? Please point me in the right direction.
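A common workaround, since Windows allows only one credential set per server name per session, is to address the same server by a second name (its IP, or a DNS alias), so each name gets its own credentials. Sketch (share names and the IP are placeholders):

Code:
rem existing connection as User1
net use P: \\fileserver\public
rem reach the restricted share via the IP so Windows treats it as a different server
net use R: \\192.168.1.50\restricted /user:User2 *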
I want to create an Apache web server with multiple user accounts, for work (a rough setup sketch follows the list):

* Each user needs to be able to upload his/her files via SSH.
* Each user needs his/her own web directory (preferably in their home directory, for ease with permissions).
* These web directories need to be password protected.
* Only one user account (mine) should be allowed remote SSH control.
* It needs to be easy to add new users to the system.
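A rough sketch of the per-user setup, assuming mod_userdir with AllowOverride AuthConfig enabled and SFTP-only uploads for ordinary users (all names are placeholders):

Code:
sudo adduser alice
sudo a2enmod userdir                            # serves /home/alice/public_html at /~alice/
sudo -u alice mkdir /home/alice/public_html
sudo htpasswd -c /home/alice/.htpasswd alice    # web password for the directory
sudo -u alice tee /home/alice/public_html/.htaccess <<'EOF'
AuthType Basic
AuthName "Private"
AuthUserFile /home/alice/.htpasswd
Require valid-user
EOF
# in /etc/ssh/sshd_config, keep shell access for the admin account only:
#   Match Group sftponly
#       ForceCommand internal-sftp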
I hope this post is in the right area of the forum. I am searching for the right operating system and application(s) to build a cloud data-storage server business like Backblaze. Backblaze runs Debian but uses its own custom application to manage the data, uploads, accounts, encryption, and so on. So my real question is: does anyone have any recommendations for existing application(s) I could use on top of Debian to handle this?
I am trying to record streaming Flash video to multiple files, each limited to 10 minutes. I'm thinking of two possible ways to script it: 1) save the files with a 10-minute time limit on the fly as I record, or 2) record the whole video as a single file and split it into multiple time-limited files afterwards.
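If the stream is RTMP, both approaches can be scripted; newer ffmpeg builds include a segment muxer that does the 10-minute split in one pass (the stream URL is a placeholder):

Code:
# approach 1: capture and split on the fly
rtmpdump -r "rtmp://example.com/live/stream" -o - |
  ffmpeg -i - -c copy -f segment -segment_time 600 part_%03d.flv
# approach 2: record first, split afterwards
ffmpeg -i whole.flv -c copy -f segment -segment_time 600 part_%03d.flv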