Ubuntu Servers :: How To Decide Which Virtualization Platform To Use
Mar 12, 2010
I want to run 4 virtual machine instances on a Dell PowerEdge 7401 and would like to know which virtualization platform will help. Is there a performance or compatibility test somewhere that I can use to understand the options?
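Before choosing, it is worth checking whether the server's CPUs expose hardware virtualization, since KVM and Xen HVM need it while OpenVZ and VirtualBox do not (a quick sketch):

  egrep -c '(vmx|svm)' /proc/cpuinfo   # a result >0 means Intel VT-x or AMD-V is present

For performance comparisons there is no single standard test; a common approach is to run the same workload (e.g. a kernel compile) inside a guest on each candidate platform and time it.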
I am reading 'Deploying Rails Applications' and want to create a Rails clustered deployment setup as described on page 141: Host: Ubuntu Server; Guests: 2 Ubuntu VMs, each with Nginx on port 80, a few Mongrels (app server for Rails), and MySQL. What open source virtualization software do you recommend for this purpose? I am trying VirtualBox but have heard good things about KVM. I have some questions regarding VirtualBox and posted them on the VirtualBox forums - [URL].
There are lots of technologies used for virtualizing, such as OpenVZ, Xen, VMware, and KVM.
I have tried all except KVM. Is there any web GUI control panel that can be used for KVM virtualization? For example, HyperVM is used for OpenVZ and Xen, and VMware has its own web GUI control panel.
Being on a low budget, I can't afford to buy Red Hat; would you recommend using Fedora for setting up dedicated servers? I know Fedora is known to be "bleeding edge" in technology, which concerns me with regard to the stability of the server. Would you recommend a more stable Linux distro? I was also wondering if there is any way to know what these web hosting companies are using on their servers: [url]
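For the last question, the HTTP Server header often gives a hint about a host's stack, though many hosts hide or fake it (a sketch; the domain is a placeholder):

  curl -sI http://example.com/ | grep -i '^Server:'
  # e.g.  Server: Apache/2.2.3 (CentOS)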
I am leading a project at work that will require at least one new server. There will be a development server and a production server, onto which changes from development will be rolled. Unfortunately, I am more of a web programmer than a Linux guru, and I really don't know whether it is better to have two physical servers or two virtual servers on one machine. I don't think there will be a huge toll on the machine, as there will probably be around 1000 total users and fewer than 100 on at any given time. We are also able to spend quite a bit on the server, so I'm sure one could handle it, but I just wanted to check and see what the advantages and disadvantages would be in this situation.
I have set up UEC and have installed the store images. I have seen that we can create and run instances which are similar to virtual machines. We can utilize virtualization and create virtual machines and thereby fully utilize the server. Not sure what extra benefits or features can be achieved using cloud (say UEC). I suppose I am missing something. Kindly let me know how cloud adds more value than server virtualization.
I've got a small web hosting service for some of my friends, where I want them to be able to have dedicated virtual servers running just a simple LAMP stack and an FTP server.
What I would like to know is how to create the virtual servers in a truly optimal way. Should I use KVM or VirtualBox? How can I make requests to mydomain.org go to one of the virtual servers and requests to anotherdomain.com go to another virtual server?
If a virtual server is the best solution:
* How much memory should be sufficient for a simple LAMP/FTP server?
* What would be the optimal settings for my limited hardware capacity?
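A common setup for the domain routing (a sketch; the domains and the guest IPs 192.168.100.11/.12 are placeholders) is a reverse proxy on the host that dispatches by Host header:

  # /etc/nginx/sites-available/proxy (assumed path)
  server {
      listen 80;
      server_name mydomain.org;
      location / { proxy_pass http://192.168.100.11; }
  }
  server {
      listen 80;
      server_name anotherdomain.com;
      location / { proxy_pass http://192.168.100.12; }
  }

Both domains' DNS A records point at the host's public IP, and nginx hands each request to the right guest. For a small LAMP/FTP guest of that era, 256-512 MB of RAM is generally plenty.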
I am fairly new to Ubuntu Server. I want to set up a TFTP server to load a new kernel onto a DaVinci platform. I was following the guide on this page [URL]..docs/linux_tftp, but I accidentally removed the xinetd.conf file. I thought that removing and reinstalling xinetd would solve the problem, but instead I can't completely remove xinetd, and the following message is printed in the terminal:
Removing xinetd ...
invoke-rc.d: unknown initscript, /etc/init.d/xinetd not found.
dpkg: error processing xinetd (--remove):
 subprocess installed pre-removal script returned error exit status 100
invoke-rc.d: unknown initscript, /etc/init.d/xinetd not found.
dpkg: error while cleaning up:
 subprocess installed post-installation script returned error exit status 100
Errors were encountered while processing:
 xinetd
E: Sub-process /usr/bin/dpkg returned an error code (1)
Because of this problem I can't start or stop the service, and I am stuck on the TFTP server configuration.
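A common way out of this dpkg state (a sketch, assuming the removal only fails because the init script itself is missing) is to give the pre-removal script a stub to call, then purge and reinstall:

  # recreate a do-nothing init script so invoke-rc.d stops failing
  printf '#!/bin/sh\nexit 0\n' | sudo tee /etc/init.d/xinetd
  sudo chmod +x /etc/init.d/xinetd
  sudo dpkg --purge xinetd      # should now complete
  sudo apt-get install xinetd   # brings back a fresh xinetd.conf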
I'm running Ubuntu Server 9.10 on a dual-socket X5550 platform with hyperthreading disabled. There should be 8 cores available for running processes: dual socket, quad core. When I start 8 single-threaded, CPU-intensive processes, top shows six processes running at 100% and two at 50% (presumably sharing the seventh core). No other processes are reported as having >1% CPU, and yet the 1-minute load average is listed as over 11.0?!
1) Why is it not running each process on its own core? 2) Why is the load average so much greater than 8? 3) Is there anything I can change in the configuration to fix this?
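One experiment worth trying (a sketch; ./worker stands in for your actual processes) is pinning each process to its own core with taskset and watching per-core usage in top (press 1):

  taskset -c 0 ./worker &    # pin the first process to core 0
  taskset -c 1 ./worker &    # pin the next to core 1, and so on
  taskset -cp 3 12345        # or re-pin an already-running PID to core 3

If pinned processes all hit 100%, the scheduler (not the hardware) was the issue. Note also that Linux load average counts tasks in uninterruptible I/O wait as well as runnable ones, which can push it above the core count.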
I've had Ubuntu (8.10) on my netbook in the past and I really liked it. I'm currently running Fedora and feeling like I should "change it up" again. I've played around with Ubuntu 10.04 Lucid a little, and so far I'm very impressed. I've always wanted to try Arch, but I'm worried I won't have the driver support I need for all the non-standard hardware in a netbook.
Does anybody have a suggestion for a new distro to try? I'm preferably looking for something feature-rich over light-weight, and something that I can have up and running with a minimum of configuration (at least partially working).
I've been trying out various variants of Ubuntu (Ubuntu, Kubuntu, Xubuntu, Linux Mint, and Ultimate Ubuntu, to be specific) as well as the latest Fedora. The only things I can distinguish between the various distributions are the desktop environment each uses (though some distributions, like Fedora, have multiple versions) and the software packages it comes with. But software can always be installed afterwards, and so can desktop environments, so what varies between the various distribution branches on a deeper level, in the things that a newbie user like me can't directly see? And is there any easy way to compile my own version of Linux?
In Ubuntu 10.04 LTS, I have downloaded and installed TeX Live (2011). It comes with the following warnings:
1. "To the best of our knowledge, the core TEX programs themselves are (and always have been) extremely robust. However, the contributed programs in TEX Live may not reach the same level, despite everyone�s best efforts. As always, you should be careful when running programs on untrusted input; for maximum safety, use a new subdirectory."
What does this mean exactly? The installed program has already created its own directories and subdirectories (e.g. /usr/local/texlive/2011/bin/i386-linux). Am I supposed to create a new subdirectory in my home directory to write files and run the latex program? And exactly how do I know that the downloaded and installed program is not malicious?
2. "Finally, TEX (and its companion programs) are able to write files when processing documents, a feature that can also be abused in a wide variety of ways. Again, processing unknown documents in a new subdirectory is the safest bet."
What is implied by "a feature that can also be abused in a wide variety of ways"?
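The warning is about documents, not the installation itself: a malicious .tex file can use \write, \openout, or shell escape to create or overwrite files wherever the engine runs. A cautious workflow (a sketch; the directory name is a placeholder) is to compile unknown documents in a fresh directory with shell escape disabled:

  mkdir ~/tex-sandbox && cd ~/tex-sandbox
  pdflatex -no-shell-escape untrusted.tex

Any files the document writes then land inside that subdirectory, which is exactly what the quoted advice is aiming for.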
I checked my kernel version (uname -r) and see I'm on "2.6.34.8-0.2-default", and I noticed that 2.6.39 was just released. I'm assuming (perhaps incorrectly) that at least versions 2.6.35/6/7/8 were released in between. Why isn't my openSUSE 11.3 using anything more recent than .34? How does this updating work? Is 11.4 on a more recent one?
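You can see exactly which kernel versions your enabled repositories offer (a sketch; output varies by release and repos):

  uname -r                          # the kernel currently running
  zypper search -s kernel-default   # kernel versions available for installation

Distribution releases generally stay on one kernel series and only ship bug and security fixes for it, which is why 11.3 stays on 2.6.34 while 11.4 ships a newer series.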
How does Compiz automatically decide which windows should be sticky (i.e. should be visible on all workspaces)? Windows such as gnome-panel and cairo-dock always stay on the visible workspace, without requiring additional configuration. How does Compiz figure this out?
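Such windows advertise themselves through EWMH hints, which the window manager honors. You can inspect them by running xprop and clicking on a panel or dock (a sketch; exact output varies):

  xprop _NET_WM_WINDOW_TYPE _NET_WM_DESKTOP
  # a dock typically reports  _NET_WM_WINDOW_TYPE(ATOM) = _NET_WM_WINDOW_TYPE_DOCK
  # and/or  _NET_WM_DESKTOP(CARDINAL) = 4294967295  (0xFFFFFFFF = all desktops)

Compiz treats windows of type DOCK (and those carrying the all-desktops hint) as sticky without further configuration.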
I have no RAID experience on Linux, so I've read dozens of pages on the net about software RAID, hardware RAID, and fake RAID. The situation is not clear to me at all. I'm considering buying one of these cards to run in my Ubuntu server (currently 9.04, but it will be upgraded to 10.10): 1. Promise FastTrak TX2300. I believe this is fake RAID, as it has a RAID BIOS. It handles SATA II drives and has a PCI interface (which is important to me because I don't have PCI-X or PCI-e). 2. Promise SATA 300 TX2 Plus. I believe this would mean software RAID, because it has no built-in RAID support at all.
I don't need to install my system on the future RAID array; I just want to add those disks as a storage mirror to my existing system. So which card is better (I believe both are supported by current Ubuntu)? Is a card with built-in RAID and BIOS settings better? What would the setup of the card be? I mean, should I use the BIOS RAID options, or should I disable BIOS RAID and use Linux dmraid? In that case, maybe the better choice is the card without any RAID in the BIOS? Sorry if this question is too beginner-level, but I'm lost with all the information. The main thing I want to know is whether I should use the BIOS RAID features (fake RAID) or disable them anyway.
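If you disable the card's BIOS RAID and let Linux handle the mirror, the usual tool is mdadm rather than dmraid (a sketch, assuming the two disks appear as /dev/sdb and /dev/sdc; adjust device names and mount point to your system):

  sudo apt-get install mdadm
  sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  sudo mkfs.ext4 /dev/md0
  sudo mount /dev/md0 /mnt/storage

For a data-only mirror like this, plain md software RAID is generally easier to monitor and recover than the card's fake RAID.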
After I install a Linux OS (for example, SuSE 10 or Red Hat 5), the root parameter of the kernel line in the generated grub.conf is sometimes defined as a device name, sometimes as a label, and sometimes as a UUID. I want to know what that depends on: the hard disk type, the OS version, or both?
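The three forms look like this in grub.conf (a sketch; the kernel version, device, label, and UUID values are placeholders):

  kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/sda2
  kernel /vmlinuz-2.6.18-128.el5 ro root=LABEL=/
  kernel /vmlinuz-2.6.18-128.el5 ro root=UUID=3e6be9de-8139-4a18-9106-a43f08d823a6

All three identify the same filesystem; which one the installer writes depends mainly on the distribution release and its installer defaults, since labels and UUIDs survive device renumbering while /dev names may not.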
I am only getting 4.7 kB/s, despite there being 31 or so seeders. The port just seems to be opening and closing, and I have no idea why. The port was opened both with Firestarter (which isn't supposed to be firewalling at the moment) and with "sudo iptables -A INPUT -p tcp --dport 6884 -j ACCEPT". It was also opened under the 'Application Sharing' menu of my router.
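Two quick checks (a sketch; 6884 is the port from the rule above, and your.public.ip is a placeholder):

  sudo iptables -L INPUT -n --line-numbers | grep 6884   # is the ACCEPT rule actually in place?
  nc -vz your.public.ip 6884   # run from outside your LAN while the client is listening

If the nc test fails from outside but succeeds locally, the router's port forward (not the Linux firewall) is the likely culprit.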
I was puzzling over the nss dependency problem that people are having in upgrading 5.2 to 5.3. The issue is clear (the mirror people are using for [updates] is not pointing to the latest set, while an up-to-date mirror may be being used for [base]). My question, though, is how the mirror list decides whether a mirror is fresh. According to [URL], this mirror [URL] is "green", which presumably means that the system thinks it is up to date (the last probe was 1 hour ago). However, if you look at the files on the mirror now (12:54 BST, Apr 1 2009), you see that the date of the 5/ branch is 24-Jun-2008. Thus this host is not ready to give you updates to 5.3. Is this a bug?
I have read a couple of articles on how dynamic linking works (the stuff about the GOT, the PLT, and lazy binding), and I am still not sure why dynamic linking needs to be done in such a complicated way. Suppose your program uses a function in a shared library that needs to be linked dynamically at run time (like printf). Why can't you statically decide the virtual address of the function at compile time? After all, all you need to do is enter the page table entry corresponding to the address of the function once the library has been loaded into a physical page frame.
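One concrete reason: with address space layout randomization, the library's base address is not even stable between two runs of the same binary (a sketch; the addresses shown will differ on your system):

  ldd /bin/ls | grep libc   # run this twice; with ASLR enabled, libc's
                            # load address changes on every invocation

Since the base moves per process, a fixed virtual address compiled into the call site would be wrong most of the time; the GOT/PLT indirection lets the loader patch in the real address at load time (or lazily, on first call), and it also tolerates library updates that move symbols around.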
In my college, many proxy:port combinations (like 144.16.192.245:8080) are used to get an Internet connection, and the performance of each proxy changes over time. How can I decide which one is working well at a particular time? Is there any way to switch between them automatically?
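A simple way to rank them by response time (a sketch; the proxy list and the test URL are placeholders):

  for p in 144.16.192.245:8080 144.16.192.246:8080; do
    t=$(curl -x "$p" -o /dev/null -s -m 10 -w '%{time_total}' http://example.com/)
    echo "$p $t"
  done | sort -k2 -n   # fastest proxy first

Run from cron, the winner could be written into your proxy environment variable or a PAC file to get rough automatic switching.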
I was trying to install Mac OS X Snow Leopard in VirtualBox, but when I searched for what I need to have, I found this link. When I entered the code it returned nothing, so apparently I don't have it. I'm asking if there's a way I could get it, or otherwise get Mac OS X to work in VirtualBox.
I would like to try putting some kind of free "bare metal" virtualization for desktop usage on my laptop. I've been googling about the possibilities, but I'm still not sure which would actually work in my case. I've seen VMware ESXi, which looks OK, but unfortunately it is meant for servers and I can't have ESXi and the vSphere Client on the same laptop. Another candidate I found is KVM, but as far as I've seen it requires VT-x/VT-d support from the hardware, which my laptop can't provide. The same requirements must be met for Citrix XenClient, which is meant for desktop virtualization but, because of the lack of VT-x and VT-d, can't be used in my case. Is there any other possibility? Currently I'm using VirtualBox and VMware Player for virtualization purposes, but I would like to pull more performance out of it, and a heavy OS on top of another heavy OS just isn't the best way.
I messed up my KVM virtualization, I think, and I want to reload it. I think I'm supposed to type yum install aptName. Is that right? Also, I don't know what to put in for aptName; I can't find the package names on the Fedora site. Where did they hide them? Also, where do I download packages? I searched with Google and found all kinds of sites I don't trust.
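On Fedora the KVM stack normally comes straight from the distribution repositories, so there is nothing to download from third-party sites (a sketch; group and package names can vary slightly by release):

  yum groupinstall 'Virtualization'
  # or the individual pieces:
  yum install qemu-kvm libvirt virt-manager

yum resolves the names against the official repos, which is also the answer to 'where do I download': you don't, yum fetches them for you.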
I would like to run Fedora 12 in virtualization mode on my PC. My primary OS is Windows 7 Pro x64. Can I use Windows Virtual PC for this, or is third-party VM software better? Any recommendations? Which version of Fedora 12 should I use? Can I use the Live CD version?
I would like to find a guide with all the steps to follow to do hardware virtualization on a 1.5 TB HDD, on which I want to have 2 VMs with VT-d (hardware virtualization): one VM with openSUSE 11.4 x64 and the other with Windows 7 x64. The fact is that I want to run both VMs at the same time, and I need help and tips with Xen 4.1, because all the tutorials and guides that I've seen are for outdated versions (3.0, 2.2, ...). Do I need 3 partitions to virtualize by hardware (one for the host and the other 2 for the VMs)? Which host OS do you recommend?
Is it hard to configure Xen, and either way, how do I do it? Basically I'm a novice Linux user, and my doubts concern Xen configuration, installation, and hardware virtualization selection (graphics card virtualization above all): if I have to touch the BIOS settings, tell me how, or does the application set them by itself? What I want to virtualize is Windows 7 Professional 64-bit with most of the disk space (at least 1 TB) for gaming and multimedia, plus openSUSE 11.4 64-bit, which I'll use to work and test software.
Motherboard: Gigabyte GA-H67A-UD3H
Processor: Intel Socket 1155 - Intel Core i5 2400 3.1 GHz
OS: openSUSE 11.4 (default DVD kernel version)
RAM: Kingston ValueRAM 8 GB DDR3 1333, in modules of 2
Hard drive: 3.5" SATA - Samsung HD154UI 1.5 TB SATA2 32 MB MAESTRO
I'm not looking for software virtualization / full (software) virtualization / paravirtualization; I'm looking for VT-x, VT-d, and graphics card optimization in an emulated Windows environment. And would you recommend another Xen version, or different software, to achieve this goal?
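For reference, an HVM guest with a PCI device handed through looks roughly like this in a Xen 4.1 domU config file (a sketch; the file name, disk path, and PCI address 01:00.0 are placeholders, and the device must first be bound to pciback on the host):

  # /etc/xen/win7.cfg (assumed path)
  builder = 'hvm'
  name    = 'win7'
  memory  = 4096
  vcpus   = 2
  disk    = ['phy:/dev/vg0/win7,hda,w']
  pci     = ['01:00.0']   # VT-d passthrough of the graphics card
  boot    = 'c'

VT-x and VT-d themselves are switched on in the motherboard BIOS (usually entries named 'Intel Virtualization Technology' and 'VT-d'); Xen cannot enable them by itself.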
I am using Ubuntu 11.04, and I also like Fedora, openSUSE, and Mandriva. How can I run these distributions inside Ubuntu with virtualization? Which virtualization software is the best, and where can I download it?
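Both of the usual options live in the Ubuntu repositories, so nothing needs downloading from elsewhere (a sketch; package names as of the 11.04 era):

  sudo apt-get install virtualbox                           # simple desktop GUI
  sudo apt-get install qemu-kvm libvirt-bin virt-manager    # KVM stack with a management GUI

With either one, you create a VM, attach the Fedora/openSUSE/Mandriva installation ISO, and install it as you would on real hardware.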
I'm familiar with the Linux environment (Fedora 10 user), and I got a project in a training course where I have to create a cluster with two nodes, on which I have to set up a number of VMs that will run applications such as Samba, LDAP, Zimbra, etc. But I don't know how to virtualize on top of a cluster! I would like to know how that can be done, and how it is possible to let the VMs get resources (RAM & CPU) from the two nodes.
I chose virtualization when installing CentOS 5.3, and the kernel I got is 2.6.18-128.el5xen. My plan is to use KVM, so I have disabled the xend service; I don't need a Xen-enabled kernel. How do I update the kernel to a non-Xen one?
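The usual switch on CentOS 5 (a sketch; package names from the stock repos):

  yum install kernel      # the ordinary, non-Xen kernel
  # after booting into it successfully, optionally:
  yum remove kernel-xen

Then check /boot/grub/grub.conf and make sure the 'default' line points at the non-Xen entry before rebooting.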
I installed Ubuntu using Wubi, and so far I have been impressed. I quickly filled up the small size I allocated for the Ubuntu installation in Wubi, and now find myself in quite a predicament. I was thinking of performing a clean install of Ubuntu and removing the existing Windows installation. Rather than dual booting, I was using VirtualBox to run a Windows XP machine so that I could use common Windows applications. However, I was having some problems running some applications in VirtualBox.
Is it possible to install Ubuntu and Windows on the same hard drive with the ability to boot into either, but also be able to run that same Windows installation inside a virtualization program in Ubuntu? The majority of the Windows programs worked fine in VirtualBox, but some of the applications didn't. Is there any software out there that can do this?
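VirtualBox can boot an existing physical Windows partition through a raw-disk VMDK (a sketch; /dev/sda and the partition number are placeholders, your user needs read/write access to the disk, and Windows activation may complain when the 'hardware' changes between bare metal and the VM):

  sudo VBoxManage internalcommands createrawvmdk \
      -filename ~/win-raw.vmdk -rawdisk /dev/sda -partitions 1
  # then attach win-raw.vmdk as the hard disk of a new VM in the VirtualBox GUI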