# Friday, March 6, 2015

For future reference, I’ve collected a few notes into a working Ruby script that shows a little of what the Chef API can do.

The Chef API provides a route into Chef data, perhaps for an environment and node status report, and it can be used to integrate Chef into your CI/CD process. But before spending too much time on the API it is worth understanding what out-of-the-box tools like Knife can do.

Knife is a Chef command line tool that wraps the API in a complete and well documented set of commands. The advantage of Knife is that it stays in sync with the API; writing your own code against the API runs the risk that a breaking API change will stop your code working.
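
For example, listing environments and nodes, which the script below does via the API, is a one-liner each in Knife (the environment name in the show command is a placeholder):

knife environment list
knife node list
knife environment show my_env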

It is important to version control changes to the CI/CD process, and changes to Chef artefacts such as attributes and cookbooks should be version controlled too; JSON files containing attribute values, stored in SCM and uploaded into Chef after each change, are a good way to go.
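
Knife supports that workflow directly. Assuming the environment JSON is kept in an environments folder under the repo (the file name here is illustrative), the upload is a one-liner:

knife environment from file environments/dev.json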

So with those warnings and advice out of the way, here’s the Chef API in a Ruby script…

I’ll assume you have Chef all nicely installed and Ruby is in your executable path. The following Ruby script should be placed in the chef-repo folder so that it can make use of an existing setup and configuration.

The first step is connecting to the Chef server. These three lines of Ruby script will do that…

require 'chef'
Chef::Config.from_file("./.chef/knife.rb")
rest = Chef::REST.new(Chef::Config[:chef_server_url])


The configuration file will need to define the correct pem file for accessing the Chef server.  To keep it simple I’ve used the knife.rb file located in the .chef folder of chef-repo.
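
For reference, a minimal knife.rb might look something like the sketch below; the node name, key path and server URL are placeholders that must match your own Chef server and credentials.

node_name       "my_user"
client_key      "./.chef/my_user.pem"
chef_server_url "https://my-chef-server/organizations/my_org"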


The Ruby script file is put into the chef-repo folder so that the knife.rb file can be referenced relative to that folder. This makes it clear which Chef server and which credentials are being used.


So let’s see the script run.

ruby chef_api_notes_demo.rb

It simply lists the environments and updates some attribute data in an environment [1]. It also lists the nodes.

[1] I’ve commented out the section of the script that does the updates. A quick warning though: it is unlikely that you have an environment with version attributes for component1 and component2, but if you do, enabling that section of the script will update those attributes.
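
For the update section to have any effect, the target environment’s default attributes would need to contain entries shaped like the sketch below (the version values shown are illustrative):

"default_attributes": {
  "component1": { "version": "3.7.0" },
  "component2": { "version": "5.12.0" }
}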


Here’s the script, which I put into a file called chef_api_notes_demo.rb in the chef-repo folder…

require 'chef'

# Add a bit of colour by extending the String class
class String
  def g; c(self, "\e[0m\e[32"); end
  def r; c(self, "\e[0m\e[31"); end
  def c(t, c_c, m = 'm') "#{c_c}#{m}#{t}\e[0m" end
end

# Get an ***appropriate*** Chef configuration file.
# This defines what we connect to and the key to use...
# ***the key needs to relate to the URL***
Chef::Config.from_file("./.chef/knife.rb")

# With the config loaded, establish a RESTful connection
# according to the loaded config.
rest = Chef::REST.new(Chef::Config[:chef_server_url])
puts "=== Connected to [#{Chef::Config[:chef_server_url]}] ==="

# Get a list of the environments.
# This is a hash - a collection of key/value pairs:
# environment name => environment URL.
environments = rest.get_rest("/environments")
puts "\n-----ENVIRONMENTS-----"
environments.each do |env_name, env_url|
  puts "----------------------".g
  puts "Environment name : #{env_name}".g
  puts " URL : #{env_url}"
  # Get the environment instance
  environment = rest.get_rest(env_url)
  #puts environment.inspect # .inspect is useful to view the whole object instance
  d_attributes = environment.default_attributes
  unless d_attributes.nil?
    d_attributes.each do |attr_key, attr_value|
      puts "----------------------".g
      puts "Attribute key : [#{attr_key}]"
      unless attr_key == "tags"
        puts "Current source value : [#{attr_value["version"]}]"
      end
    end
  end

  # The attributes can be accessed directly and set to a new value.
  # Commented out - see the warning above. Enable with care.
  #begin
  #  d_attributes["component1"]["version"] = "3.7.1"
  #  d_attributes["component2"]["version"] = "5.12.1"
  #rescue
  #end
  #begin
  #  rest.put_rest(env_url, environment)
  #rescue
  #  puts "Did not update environment [#{environment.name}]".r
  #else
  #  puts "Updated environment [#{environment.name}] OK"
  #end
  puts "----------------------".g
end

puts "\n-----NODES------------"
# Get a list of the nodes.
# Again a hash: node name => node URL.
nodes = rest.get_rest("/nodes")
nodes.each do |node_name, node_url|
  puts "----------------------".g
  puts "Node name : [#{node_name}]".g
  puts " URL : [#{node_url}]"
  puts "----------------------".g
  # Now get the node object using the URL; we can then read the
  # node data and, if required, update some of it.
  node = rest.get_rest(node_url)
  # Show the environment that this node is under
  puts "Environment...".g
  puts "#{node.environment}"
  puts "----------------------".g
end



Tags: Chef | Chef API | Ruby



# Wednesday, February 25, 2015

Back at the beginning of February I showed how Microsoft Release Management can be used to get CI/CD up and running.

I’m now going to show you Microsoft Release Management with Chef integration.

Integrating Microsoft RM with Chef provides additional capabilities on top of the out-of-the-box RM experience.

Chef uses recipes (written in Ruby) that allow a box to be fully defined as a Chef node; not only in terms of the software and files required to be on the box, but also how the software stack on the box is configured. A Chef node has a one-to-one relationship with an actual box (or machine), e.g. a particular MAC address.

Once a Chef node has been set up, Chef can read metadata from the box that it represents. So even though in this example we are only putting a couple of components onto a box, there is a benefit in the additional contextual data describing the box, which is not otherwise available centrally. Pre-requisites or third party components may also be defined in a Chef node; Chef can be used to manage all the software required on a box. Chef provides the tools and processes to implement ‘Infrastructure as Code’.

I will use the term Manifest to represent a list of components each at a particular version. There is nothing called Manifest in Chef but the Chef node Run List combined with the Chef node Attributes are in effect just that; a Manifest that sits between the CI build and the CD deploy.
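
Pulling together values that appear later in this post, that Manifest view of a node would look something like the following (a hypothetical rendering; the cookbook name, attribute name and drop path are taken from the example further down):

"run_list": [ "recipe[volcanic_bye]" ],
"volcanic_bye": {
  "source": "\\\\volcanic-1\\drop\\Bye\\Bye_20150225.1"
}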


Let’s take a look at what happens when a CI build of a component triggers a release through RM with Chef. The build definition for Bye has been configured to make a call to Microsoft Release Management; see http://blogs.msdn.com/b/visualstudioalm/archive/2014/10/10/trigger-release-from-build-with-release-management-for-visual-studio-2013-update-3.aspx

Check-in of a change to the Bye application source code will trigger a build. Once the drop of the build has completed, the post-drop script InitiateRelease.ps1 will be run. The arguments specified are the RM server, the port number that RM is using, the Team Project that contains the build definition, and the target stage; in my example: volcanic-8 1000 Team1 Dev.

The Release Template that maps to Team1, Dev and Bye (the build definition) is the one that gets used to run the deployment.


The Release Template RT2 has been configured to update attributes in a Chef node called VOLCANICLAB-5.rock.volcanic.com with the drop location of a build of Bye; the result is that the drop path is made available as a Chef node attribute that can then be consumed in a Chef recipe.

The appropriate NodeName and AttributeName are set up in the “Deploy Using Chef” action of the Release Template; in this example, RT2.


Microsoft Release Management now has all it needs to update the attributes of the VOLCANICLAB-5.rock.volcanic.com node for a particular build of Bye.

Microsoft Release Management then updates that attribute on the node with the new drop path.

When Microsoft Release Management runs the chef-client on the box named VOLCANICLAB-5.rock.volcanic.com, the node’s run list is executed; the cookbook volcanic_bye is on that run list.


volcanic_bye is a Chef cookbook with a recipe that I created to deploy Bye.

Notice the source attribute, node['volcanic_bye']['source']. When the recipe runs it picks up the path \\volcanic-1\drop\Bye\Bye_20150225.1 from that attribute, as previously updated by Microsoft Release Management.
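
The recipe itself is not reproduced here, but a minimal sketch of the idea might look like the following; the target directory and the use of robocopy are my assumptions, only the node['volcanic_bye']['source'] attribute comes from the cookbook:

# Hypothetical sketch of the volcanic_bye recipe
source = node['volcanic_bye']['source'] # e.g. \\volcanic-1\drop\Bye\Bye_20150225.1

batch 'deploy_bye' do
  # Copy the dropped build output onto the box (target path is illustrative)
  code "robocopy \"#{source}\" \"C:\\apps\\Bye\" /E"
  returns [0, 1] # robocopy exit codes 0 and 1 both indicate success
end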


Running the chef-client on the box VOLCANICLAB-5.rock.volcanic.com completes the deployment of the new build of Bye.


 

Summary

Chef provides a useful abstraction layer between CI (i.e. the building of binaries and files) and CD (i.e. the deployment of binaries and files).

That abstraction layer could be referred to as ‘Infrastructure as Code’.

Microsoft Release Management can be configured to update attributes in that Chef abstraction layer.



Tags: ALM | Chef | DEVOPS | Release Management



# Thursday, January 22, 2015

Docker and containers are causing something of a stir in the Linux ALM DEVOPS world. A bit of a revolution is taking place.

Microsoft has been quick to respond by announcing that it will include Docker capabilities in the next release of Windows Server.

So what is this revolution all about? In a word… “containers”. And I think there is going to be a positive impact on DEVOPS costs.

Docker containers are similar to VMs, but they do not contain an OS kernel and they make good use of a union file system. The result is that containers are layered and have a very small footprint. They “boot” in seconds rather than the many minutes (or longer) a VM can take.

Docker is a toolset and a cloud service repository that can be used to collect an application and all its dependencies together into a container for deployment onto host machines. A development team and individual developers can benefit from previously constructed and quality-controlled base images pulled from a repository. A base image could be the single fixed foundation upon which all development creates higher level images. If the application in the container works on a developer desktop, it will work everywhere. How often do you hear that?


A developer working on an app can use Docker to create a container for the app, from a base image plus all its dependencies, which can then be shipped as a runnable unit. The definition of what goes into the container is written in a Dockerfile.
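
As a minimal illustration (the base image, package and folder names here are hypothetical, not taken from this post), a Dockerfile for a small web app might look like:

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y apache2
COPY ./test1 /var/www/html/
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]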

A container is portable and has everything needed to run the application on any host that has a Docker Engine. By default a container runs isolated from the other containers on a host. By pulling a container image from a repository an app can be quickly used in the development process, or distributed to many hosts and run. Tools such as Chef could be used to manage Docker hosts as nodes. Also worth considering would be Microsoft Release Management, in particular if there is already an investment in TFS. I think a complete CI/CD process to work with Docker would be an essential ingredient of success.

 

Docker is about

- Speed to market, by bringing live or customer application stacks back to the developer desktop.
- Repeatable application consistency, through containerization.
- Faster testing, because containers are very fast to set up and run.
- Cost savings: multiple VMs carrying different versions or flavours of applications for testing can be replaced by a single VM running multiple containers.


There are also going to be cost savings around the boot times of VMs versus the boot times of containers. Time is money, particularly in the cloud. A shipping container analogy is reasonable. Shipping containers conform to a single template: the size is defined, the attachment points are exactly the same and the weight of the contents can be constrained. Containerization dramatically reduced the shipping time of products and materials. Not so long ago a ship’s cargo of boxes and bags of all sorts had to be handled one item at a time onto the next mode of transport or into a warehouse. The analogy does not end there: a container can be sealed, and in this way be known to conform to a particular shipping manifest. Although the generation and management of a “manifest” will most likely require more than a Dockerfile.

Development without containers

Development Teams are busy writing code that gets built into lots of applications that need testing and releasing. Applications depend on other applications and perhaps on third party applications; an application stack.

From any single developer’s desktop via testing labs to live machines or customers machines there is a need to repeatedly bring together appropriate dependencies as a working stack; so that a known quality in the context of those dependencies can be proven prior to final shipment.

Bringing together dependencies and ensuring that prerequisites are installed has evolved from manual installation and configuration driven by an often incomplete document, through to today’s improved toolsets and processes that offer a fully automated, repeatable capability. But even with that improved capability there can be issues.

In order to ensure a clean start before testing applications, machines are sometimes created from the ground up. Creating an entire machine even when it is a VM and is fully automated takes time. VM base images are large and cumbersome. The process of spinning up a machine involves amongst other things booting up an OS. In order to save time and money applications are often installed onto an existing up and running machine. This may even be required; to allow the emulation of what may or may not happen when that application is installed on live machines or is installed by customers.

When applications are installed or re-installed onto existing machines over and over again, it is possible for the dependencies shared between applications to conflict. This conflict needs to be identified and resolved, which can take time and necessitate heroic efforts on the part of DEVOPS teams. It is also possible that an uninstall will not be a tidy and complete operation. Sometimes the install is a manual process that may involve copying files or configuring. The problem is that conflict can happen over and over, on different hosts and in different ways, when an application and its stack of dependencies is not fixed.

Docker containers “fix” the dependency stack

Below is a schematic of a typical machine: some loose, un-fixed applications on top of an OS.

[Diagram: a host machine with an OS and several loose, un-fixed applications]

Docker gives us a way of containing and “fixing” the complete stack required for any particular application; including the base operating system files, the file system and the application itself. Putting a Docker container onto a Docker host machine or removing a container from a host is as clean as the shipping container analogy suggests; it’s either all there or it’s gone.

Docker, Inc. provide the tools to set up and manage the full lifecycle of a container and its construction from a Docker image. In a typical scenario the contained applications will have dependencies on one another and there may be some contained configuration of those applications.

The un-fixed apps from the schematic above could be put into containers as follows:

[Diagram: the same applications, each fixed into its own container on the host]

Containers are sealed units. However, access can be provided via various routes, one of which involves the use of ports. Docker provides functionality for mapping ports used by applications across container walls. For example, perhaps an Apache web server application is running as the Web Server (WS) inside container 1. If container port 8080 is mapped to a host port, the application is exposed via the host machine URL, as shown in the diagram below.

[Diagram: container port 8080 mapped to a host port, exposing the web server via the host machine URL]
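
With the Docker command line that mapping is a single flag on docker run; assuming an image called ws_image (an illustrative name), the following maps host port 80 to container port 8080:

docker run -d -p 80:8080 ws_image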

A full service

Docker, Inc. recognised an opportunity to pull together a few threads at the right time and has put together an IT offering that is more than a set of commands to create containers; it is a fully loaded, end-to-end container service. Docker, Inc. provides a set of tools to support image creation and a cloud based repository service for storing images. Below is a simple lifecycle example showing how files can be pulled from any suitable source control system (in this case TFS) to build an image based on a Dockerfile, which is then run as a container. If the developer decides the test1 app works OK on his desktop, the container can be committed as a new image with a label and pushed into the Docker Hub cloud based repository for further distribution onto other hosts.

[Diagram: source pulled from TFS, an image built from a Dockerfile, run as a container, then committed and pushed to Docker Hub]
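
In Docker command line terms that lifecycle is roughly the following sequence; the image and repository names are illustrative:

docker build -t test1 .
docker run -d test1
docker commit <container_id> myrepo/test1:tested
docker push myrepo/test1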

Availability

Docker is new; at the time of writing it is at version 1.4, and the Docker Engine is only available for Linux.

Microsoft has a partnership with Docker and has announced its intention to incorporate Docker Engine capability into the next Windows Server operating system; I expect this capability to prove significant to Microsoft stack development teams, testers and Windows DEVOPS teams.

Microsoft has pushed an ASP.NET 5 image into the Docker Hub cloud repository.


Links

https://www.docker.com/

http://www.infoworld.com/article/2834122/application-virtualization/windows-server-is-getting-docker-says-microsoft-and-docker.html

http://blogs.msdn.com/b/webdev/archive/2015/01/14/running-asp-net-5-applications-in-linux-containers-with-docker.aspx


Tags: ALM | ASP.NET 5 | Chef | Containers | DEVOPS | Docker | Linux | Quality | Release Management | Windows
