Motivation
There are many configuration management tools available for automatically deploying an Openstack cloud: Chef, Puppet, and Juju, to name a few. My recent project involved evaluating community-developed Chef cookbooks for deploying Openstack in a data center. During the evaluation, I encountered a number of issues that were not addressed by any of the open source community projects:
- The development environments used by the community do not resemble how Openstack is deployed in production. A typical production environment deploys multiple compute nodes, DNS services, LDAP, clustered databases, and message queues, whereas the development environments widely used by the community rely on either an all-in-one or a simple two-node configuration.
- Simulating a production-like environment with VirtualBox or VMware is too resource-intensive to be practical.
- In a collaborative team environment, we want to easily build and share common infrastructure services such as LDAP, databases, and message queues across the team. This is difficult to achieve using VM images or snapshots.
- Once Openstack is installed, there is no automated way to validate the functionality of its services.
- Existing tools do not allow developers to test only the components they are working on. Even a single line of code change requires rebuilding the entire cluster, a process that takes a significant toll on development productivity.
This post describes how we addressed the aforementioned issues by building a continuous integration framework using Docker containers.
Why Docker?
Docker containers offer a very lightweight alternative to virtual machines and make it easy to package and deploy applications and services. First, since containers are lightweight, we can run many of them on a laptop without running into resource constraints. Second, using the Docker registry service (Docker Hub), it is easy to build certain components, for example the Jenkins server and Tempest, and share the container images with other developers. Using these images, developers can quickly bring up the common infrastructure services in their development environment and focus on the components they are working on. Third, once a service is installed in a container, we can take a snapshot of it using Docker's image facility. With such a snapshot, we can avoid rebuilding a service, for example MySQL, when there are no code changes for that component.
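For example, the snapshot workflow comes down to two Docker commands. This is a minimal sketch; the container and image names below are placeholders, not the ones used in the project:

```
# Save the state of a provisioned MySQL container as a reusable image
# (container and image names are placeholders).
docker commit mysql_container mysql-snapshot:latest

# On later runs, start from the snapshot instead of re-provisioning MySQL.
docker run -d --name mysql mysql-snapshot:latest
```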
Deployment model
For our setup, we used a single Vagrant box running Fedora 20 and ran five containers inside it:
- Chef server
- Openstack controller
- Openstack compute
- Tempest (for testing Openstack)
- Jenkins
The Vagrant box ran the Docker engine and a DNS service. Since Docker assigns IP addresses to its containers randomly, we registered every container with this DNS service and wrote a script to automate the hostname registration. As a result, any container could reach any other container by its hostname. For example, the Chef server is reachable from any other container as chef.mydomain.com.
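The registration script itself is not reproduced here, but the idea can be sketched in a few lines of shell. The sketch assumes a BIND-style DNS server that accepts dynamic updates via nsupdate; the zone name, DNS server address, and key file are placeholders:

```
#!/bin/bash
# For every running container, look up its hostname and IP address and
# push an A record to the local DNS server.
# Placeholders: mydomain.com, dns.mydomain.com, /etc/rndc.key.
for id in $(docker ps -q); do
    name=$(docker inspect -f '{{ .Name }}' "$id" | tr -d '/')
    ip=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' "$id")
    nsupdate -k /etc/rndc.key <<EOF
server dns.mydomain.com
zone mydomain.com
update delete ${name}.mydomain.com A
update add ${name}.mydomain.com 300 A ${ip}
send
EOF
done
```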
The Openstack controller and compute nodes are provisioned using the Chef server. For these nodes, we created a custom Docker image with the Chef client pre-installed. This allowed us to launch a container and run a single command to provision it for its desired role.
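Provisioning a node therefore looks roughly like the following. This is a sketch, not the exact commands from the project: the image name, container name, and Chef role are placeholders, and the image is assumed to ship a client.rb pointing at chef.mydomain.com plus a long-running default command that keeps the container alive:

```
# Launch a container from the pre-baked Chef client image
# (image, container and role names are placeholders).
docker run -d --name os-controller -h controller.mydomain.com imtiaz/chef-client

# Converge it to its role with a single Chef run.
docker exec os-controller chef-client -o 'role[openstack-controller]'
```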
All Docker images used for this project (Chef server, Chef client, Tempest, and Jenkins) are available on Docker Hub (https://hub.docker.com/u/imtiaz/).
Running continuous integration jobs
To allow developers to quickly test their code changes in a local environment, we set up a Jenkins server. While running Jenkins locally may seem like overkill, it provides the following benefits over simple automation scripts:
- Nice user interface.
- A central place for storing and viewing logs. One can inspect any Chef run failure without having to log into the servers.
- The JUnit plugin for Jenkins offers a great way to compare test results and execution times from one build to the next.
To bring up the entire CI framework, we perform the following steps:
1. Launch Jenkins.
2. Launch the Chef server.
3. Launch a container from the Chef client image.
4. Provision it to become the Openstack controller.
5. Repeat steps 3-4 with a different Chef role to create an Openstack compute node.
6. Repeat steps 3-4 with a different Chef role to create a Tempest node.
7. Run the Tempest test cases and publish the results (a sketch of this step follows the list).
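The final step can be sketched as follows, assuming the Tempest container is named tempest and uses the testrepository/subunit tooling that was standard for Tempest at the time; the container name, test filter, and paths are placeholders:

```
# Run a slice of the Tempest suite inside the Tempest container and
# convert the results to JUnit XML for the Jenkins JUnit plugin
# (container name, test filter and paths are placeholders).
docker exec tempest bash -c '
    cd /opt/tempest
    testr run tempest.api.compute
    testr last --subunit | subunit2junitxml -o /tmp/tempest-results.xml
'

# Copy the report out so Jenkins can archive and publish it.
docker cp tempest:/tmp/tempest-results.xml .
```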
Demo
Here is a video demo showing how the CI framework works. I apologize in advance for failing to capture the entire screencast.
Challenges
While Docker allowed us to set up a continuous integration framework with only one virtual machine, getting there was not painless. In order to build a stable and repeatable environment, I had to troubleshoot many complex issues and, at one point, even recompile the Linux kernel. In my next post, I will cover the issues I faced in getting a fully functional Openstack running on Docker.