russell.cardullo

Setting up a Multi-VM environment in Vagrant

Sep 08 2012

Running a complete development environment (web servers, databases, etc.) has typically meant installing everything on the same box. But this doesn’t accurately model how these services get deployed to production, where each one runs on a separate box, often with different dependencies. Vagrant provides an easy way to set up a group of related development VMs.

To demonstrate this, I’ll set up a simple environment containing:

  • Two node.js application servers
  • An nginx load balancer

I’m going to use a trivial example of a node application taken from the node.js homepage, but the basic principle is the same for other applications.

Besides setting up all the Vagrant stuff, I’m also going to want some way of automatically configuring the VMs. For that I’ll use Chef.
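
Here’s the rough directory layout everything below assumes, with the Vagrantfile at the top level and the Chef pieces under chef/ (the data_bags directory can stay empty; it’s only there because the Vagrantfile later points chef-solo at it):

Vagrantfile
chef/cookbooks/
chef/data_bags/
chef/roles/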

Single node application server

To start off let’s create a single node.js application server running our app. First, create a cookbook for our node application:

knife cookbook create nodeapp

This will create a default cookbook structure for me to work with. I’ll add my node application as a cookbook file:

vi chef/cookbooks/nodeapp/files/default/app.js
var http = require('http');
var os   = require('os');
var hostname = os.hostname();

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello from ' + hostname + '\n');
}).listen(1337);
console.log('Server running at http://127.0.0.1:1337/');

And reference this in the default recipe:

vi chef/cookbooks/nodeapp/recipes/default.rb 
include_recipe 'node'

cookbook_file '/usr/local/bin/app.js' do
  action :create
  source 'app.js'
end

node_server 'nodeapp' do
  script '/usr/local/bin/app.js'
  action :start
end

This uses the ‘node’ cookbook, which takes care of installing node and npm, and lets you set up an app to start on boot.

I found that the node cookbook obtained via knife cookbook site install node as of 9/8/2012 has a few issues that prevent it from working on my Ubuntu VM. I’ve created a fork here that works around these issues.

Besides the node cookbook we’ll need to satisfy some other dependencies:

knife cookbook site install apt
knife cookbook site install build-essential
knife cookbook site install git
knife cookbook site install ntp
knife cookbook site install ubuntu

So altogether you should have the following cookbooks:

$ ls chef/cookbooks/
apt
build-essential
git
node
nodeapp
ntp
ubuntu

To use this in a Vagrant box we’re going to include this recipe in an ‘application_server’ role:

vi chef/roles/application_server.rb
name "application_server"
description "node.js application server"
run_list "recipe[ntp]", "recipe[ubuntu]", "recipe[nodeapp]"

Before we get too far along let’s make sure we can set up a single application server running our node app.

Create a simple Vagrantfile that uses the application_server role we created above. We’ll also need to forward port 1337 so we can reach the app from the host:

Vagrant::Config.run do |config|
  config.vm.box = "precise32"
  config.vm.forward_port 1337, 1337
  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = "chef/cookbooks"
    chef.roles_path = "chef/roles"
    chef.data_bags_path = "chef/data_bags"
    chef.add_role "application_server"
  end
end
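
If you haven’t already added a precise32 base box, Vagrant can also download one for you if you set config.vm.box_url next to config.vm.box. The URL below was the commonly used Ubuntu 12.04 32-bit box at the time of writing; treat it as an example and substitute whichever box you’re actually using:

config.vm.box_url = "http://files.vagrantup.com/precise32.box"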

Run vagrant up and in a few minutes your VM should be ready to use. The recipe is set up to build node.js from source, so provisioning can take a while. If you get any errors running vagrant, check the output to figure out what went wrong.

Once this is up you should be able to connect from the host machine to localhost:1337 and get a response.

$ curl localhost:1337
Hello from precise32

Multiple application servers

So far so good, but now let’s extend this to create multiple application servers. To do so we’ll need to change the Vagrantfile to set up multiple VMs, one for each application server we want to run.

Let’s also set the hostnames for these so we can distinguish between them, as well as configure host-only networking with static IP addresses.

Since the Vagrantfile is written in Ruby I can define these values in a hash to avoid duplicating config settings for each VM I need to set up:

Vagrant::Config.run do |config|
  app_servers = { :app1 => '192.168.1.44',
                  :app2 => '192.168.1.45'
                }

  app_servers.each do |app_server_name, app_server_ip|
    config.vm.define app_server_name do |app_config|
      app_config.vm.box = "precise32"
      app_config.vm.host_name = app_server_name.to_s
      app_config.vm.network :hostonly, app_server_ip
      app_config.vm.provision :chef_solo do |chef|
        chef.cookbooks_path = "chef/cookbooks"
        chef.roles_path = "chef/roles"
        chef.data_bags_path = "chef/data_bags"
        chef.add_role "application_server"
      end
    end
  end
end

Now when you run vagrant up, both VMs will start. If you still have the VM running from earlier you may need to vagrant destroy it first. Once they’re up you can test each box individually like so:

$ curl 192.168.1.44:1337
Hello from app1
$ curl 192.168.1.45:1337
Hello from app2

Since we switched to host-only networking we no longer need to worry about forwarding individual ports, because every port on the VM is now reachable from the host. This may not match what we’d see in production, in which case we’d want to set up some firewall rules to only allow traffic through specific ports. But for now we’ll just proceed with all ports open.
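
If you did want to lock things down, one lightweight option is a small firewall recipe built on ufw, which ships with Ubuntu. The recipe below is only a sketch and isn’t used in the rest of this post (the nodeapp::firewall recipe name is made up for illustration); it allows SSH and the node app port, then enables the firewall:

vi chef/cookbooks/nodeapp/recipes/firewall.rb
# Sketch only: allow SSH and the node app port, then turn ufw on.
execute 'ufw allow 22/tcp'

execute 'ufw allow 1337/tcp'

execute 'ufw --force enable' do
  not_if "ufw status | grep -q 'Status: active'"
end

You’d also need to add recipe[nodeapp::firewall] to the application_server run_list for it to take effect.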

Load balancer

Having multiple application servers is great, but we still need a load balancer in front of them. There are several options we could use for this, but let’s go with nginx.

We’ll need to grab some more Chef cookbooks:

knife cookbook site install nginx
knife cookbook site install bluepill
knife cookbook site install runit
knife cookbook site install yum
knife cookbook site install ohai

And create a new cookbook containing our configuration for the load balancer:

knife cookbook create loadbalancer

Edit the default recipe in the loadbalancer cookbook so that it includes nginx and creates the default site config file from a template. We also notify the nginx service to restart whenever the config file changes:

vi chef/cookbooks/loadbalancer/recipes/default.rb 
include_recipe "nginx"

template '/etc/nginx/sites-available/default' do
  source 'loadbalancer.conf.erb'
  variables({
    :upstream_servers => node[:loadbalancer][:upstream_servers]
  })
  notifies :restart, resources(:service => "nginx")
end

Put the loadbalancer.conf.erb template in the templates/default directory. The IP addresses of the upstream app servers will come from attributes that we define in our Vagrantfile:

vi chef/cookbooks/loadbalancer/templates/default/loadbalancer.conf.erb 
upstream appcluster {
  <% @upstream_servers.each do |ip_address| -%>
  server <%= ip_address %>;
  <% end -%>
}

server {
  listen 80;
  server_name load_balancer_test;

  location / {
    proxy_pass http://appcluster;
  }
}
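
One optional safety net: since the recipe reads node[:loadbalancer][:upstream_servers], you can give that attribute an empty default in the cookbook so chef-solo doesn’t fail outright if nothing gets passed in (this file isn’t required for the setup in this post):

vi chef/cookbooks/loadbalancer/attributes/default.rb
default[:loadbalancer][:upstream_servers] = []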

With this all in place we can create a role using our recipes:

vi chef/roles/load_balancer.rb
name "load_balancer"
description "load balancer using nginx"
run_list "recipe[ntp]", "recipe[ubuntu]", "recipe[loadbalancer]"
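
As an aside, the upstream list doesn’t have to come from the Vagrantfile; Chef roles can also carry default attributes. If you’d rather keep everything in the role, something like the following works too (the rest of this post passes the attribute in from the Vagrantfile instead):

name "load_balancer"
description "load balancer using nginx"
run_list "recipe[ntp]", "recipe[ubuntu]", "recipe[loadbalancer]"
default_attributes "loadbalancer" => {
  "upstream_servers" => ["192.168.1.44:1337", "192.168.1.45:1337"]
}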

And update the Vagrantfile to create a new VM from this role as well as pass in the upstream server IP addresses:

Vagrant::Config.run do |config|
  # Define and configure application servers
  app_servers = { :app1 => '192.168.1.44',
                  :app2 => '192.168.1.45'
                }

  app_servers.each do |app_server_name, app_server_ip|
    config.vm.define app_server_name do |app_config|
      app_config.vm.box = "precise32"
      app_config.vm.host_name = app_server_name.to_s
      app_config.vm.network :hostonly, app_server_ip
      app_config.vm.provision :chef_solo do |chef|
        chef.cookbooks_path = "chef/cookbooks"
        chef.roles_path = "chef/roles"
        chef.data_bags_path = "chef/data_bags"
        chef.add_role "application_server"
      end
    end
  end

  # Configure load balancer
  config.vm.define :load_balancer do |load_balancer_config|
    load_balancer_config.vm.box = "precise32"
    load_balancer_config.vm.host_name = "loadbalancer"
    load_balancer_config.vm.network :hostonly, "192.168.1.43"
    load_balancer_config.vm.provision :chef_solo do |chef|
      chef.cookbooks_path = "chef/cookbooks"
      chef.roles_path = "chef/roles"
      chef.data_bags_path = "chef/data_bags"
      chef.add_role "load_balancer"
      chef.json = {
        'loadbalancer' => {
          'upstream_servers' => ['192.168.1.44:1337','192.168.1.45:1337']
        }
      }
    end
  end

end
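
One small refinement to consider: app_servers is just a Ruby hash that’s still in scope inside the load balancer block, so the upstream list could be derived from it instead of repeating the IP addresses:

chef.json = {
  'loadbalancer' => {
    'upstream_servers' => app_servers.values.map { |ip| "#{ip}:1337" }
  }
}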

If you still have your app servers running you only need to run vagrant up load_balancer to bring up the new node. Or you can just run vagrant up to bring up everything.

Now you should be able to test against the new load balancer node and observe that requests rotate between the application servers:

$ curl 192.168.1.43:80
Hello from app1
$ curl 192.168.1.43:80
Hello from app2

If you bring down an app server with vagrant destroy app1 and test the load balancer again, you should still get a response from the remaining app server (the first response may take longer, depending on the timeout values configured in nginx).

You can check out all the configuration used in this post in my GitHub repository.