General Information
This is the information for the cloud project from previous semesters.
Project Goal
Fall 2010 - Spring 2011
At the end of this project, we will have a solid understanding of how different cloud solutions operate, along with a written report/presentation of our findings, including a final recommendation for the cloud solution we believe should be deployed in the college.
Roadmap
Fall 2010 | Goal |
---|---|
Early Oct | Attempt to find 2-3 viable cloud solutions and get the hardware we need to build out small versions of these clouds. |
Late Oct-Early Nov | Build out each of the cloud systems that we have opted to use in turn. |
Late Nov | Pick our preferred cloud solution. We'd like to scale this solution out if possible (double the number of computers in it). |
December | Present our findings. |
Since Eucalyptus proved to be buggy and unreliable, we have a new list of cloud solutions to test. Ideally, we will find a solution that works reliably and potentially begin its full-scale implementation.
Spring 2011 | Goal |
---|---|
February | Get Nimbula to run smoothly (hardware permitting). |
February | Work out (hopefully simple) kinks with OpenQRM. |
March (First Half) | Test Ganeti. |
March -> End of Year | Begin serious implementation. |
Possible Solutions
- Eucalyptus (using OpenNebula?) http://www.eucalyptus.com/
- Ganeti
- OpenStack
- OpenQRM
Information on How It Works
Eucalyptus:
Basic cloud setup:
http://upload.wikimedia.org/wikipedia/commons/7/79/CloudComputingSampleArchitecture.svg
Eucalyptus clouds are organized into five components: the Cloud Controller, the Walrus storage, the Storage Controller, the Cluster Controller, and the Node Controller.
The Cloud Controller is the primary control device. It is one logical device through which all interfacing with the cloud itself goes, and all of the other components are registered with it. It interfaces with the Walrus storage, the Cluster Controllers, and the Storage Controllers.
The Walrus storage is one logical device where all images for booting VMs are stored. It only directly interfaces with the Cloud Controller.
The Cluster Controllers are devices which organize nodes into clusters. Each cluster usually has one or more associated Storage Controllers. They interface with the Cloud Controller and with some Node Controllers.
The Storage Controllers are devices that store data for VMs. They are used for persistence when destroying instances, and for saving space on the RAM disks of the VMs. They interface with the Cloud Controller.
The Node Controllers are devices which host the VMs in use. They are organized into clusters according to which Cluster Controller they are registered with. They interface with one Cluster Controller only.
The Cloud Controller, Walrus storage, Cluster Controller, and Storage Controller can all be on one physical machine.
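As a rough sketch of how that registration looks in practice (assuming a UEC-era euca_conf tool; the cluster name and IP addresses below are placeholders), the components are attached to the Cloud Controller like this:
# Run on the Cloud Controller; every other component is registered with it.
sudo euca_conf --register-walrus 10.0.0.2
sudo euca_conf --register-cluster cluster1 10.0.0.3
sudo euca_conf --register-sc cluster1 10.0.0.4
sudo euca_conf --register-nodes "10.0.0.5 10.0.0.6"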
In the following image, the organization of a Eucalyptus cloud is described. SOAP/REST refers to the APIs exposed by the Cloud Controller.
http://www.ibm.com/developerworks/opensource/library/os-cloud-virtual1/index.html
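Since the Cloud Controller speaks an EC2-compatible SOAP/REST API, it can be exercised from any machine with euca2ools once credentials are downloaded. A hedged example (the eucarc path, key name, and image ID are placeholders):
source ~/.euca/eucarc                       # load EC2_URL and the access/secret keys
euca-describe-availability-zones verbose    # sanity check: clusters and node capacity show up here
euca-run-instances -k mykey -t m1.small emi-12345678   # boot one small VM from a registered image
euca-describe-instances                     # watch it go from pending to running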
Install instructions for Eucalyptus can be found here: https://help.ubuntu.com/community/UEC/CDInstall
We have deviated from these instructions by installing each component on its own box. We will have only one Storage Controller, one Cluster Controller, and two Node Controllers.
Given our available equipment, there will only be one Node Controller that is capable of virtualizing.
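To check which box can act as the virtualizing Node Controller, we look for hardware virtualization support (a quick sketch; kvm-ok comes from Ubuntu's cpu-checker package):
egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means the CPU advertises VT-x/AMD-V
sudo apt-get install cpu-checker     # provides the kvm-ok helper
sudo kvm-ok                          # reports whether KVM acceleration can actually be used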
OpenQRM:
User guide: http://www.openqrm.com/?q=node/33
Architecture:
Ganeti:
Installation Documentation:
Here is some very thorough documentation on creating the LVM on Ubuntu 10.10:
https://wiki.ccs.neu.edu/download/attachments/11567826/LVMInstructions.txt
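In outline, the linked instructions come down to creating a volume group for Ganeti to allocate from (a sketch only; /dev/sdb1 is a placeholder partition and xenvg is Ganeti's default volume group name):
sudo pvcreate /dev/sdb1         # mark the spare partition as an LVM physical volume
sudo vgcreate xenvg /dev/sdb1   # create the volume group Ganeti expects at cluster init
sudo vgs                        # confirm xenvg shows up with free space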
Notes: To get SSH working on a VM, use these commands in the VM (without /dev/pts mounted, sshd cannot allocate a terminal for interactive logins):
mkdir /dev/pts
mount -t devpts /dev/pts /dev/pts
Alternatively, you can install udev: apt-get install udev
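To make the devpts mount survive VM reboots, one option (a sketch using the standard mount options) is to add a line like this to /etc/fstab inside the VM:
devpts  /dev/pts  devpts  gid=5,mode=620  0  0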
To create a VM:
gnt-instance add -n bedrock -t file -o debootstrap+default -s 10000M -B memory=1024M instance
where instance is the name of the instance being created. The instance name must be in the hosts file with an FQDN.
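gnt-instance add normally boots the new instance as well; if it ends up stopped (or after a shutdown), these commands are handy, using the same placeholder name instance:
gnt-instance list               # show all instances and their run state
gnt-instance startup instance   # boot a stopped instance
gnt-instance shutdown instance  # cleanly stop it again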
Sometimes debootstrap installs its OS definition in the wrong directory (/usr/local/share or something like that).
Copy it to /srv/ganeti/os
To start a console view, use gnt-instance console instancename
To exit the console view, hit Ctrl+]