# Monday, May 18, 2015

 

The function GetProjectNodeInfoFromReportFolder() is typically used to identify the top data node for an SSRS report:

  • the project GUID and/or the collection GUID.

The table that holds this data is DimTeamProject, found in the TFS warehouse.

Given a ‘report path’, this table can be used to identify the appropriate Team Project.

DimTeamProject is what GetProjectNodeInfoFromReportFolder() uses to get the project node info for a path.
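As a rough sketch of the idea (the column names and the report path below are assumptions for illustration only; verify them against your own Tfs_Warehouse schema before relying on this):

-- Hypothetical sketch: find the Team Project whose Reporting Services folder
-- contains a given report. Column names are assumptions based on the notes above.
SELECT tp.ProjectNodeGUID    -- the project GUID
    ,tp.ProjectNodeName      -- the project name
    ,tp.ParentNodeSK         -- the SK of the parent (collection) node
FROM dbo.DimTeamProject tp
WHERE '/TfsReports/DefaultCollection/Team1/MyReport' LIKE tp.ReportPath + '%'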

Here are a few notes around the use of GetProjectNodeInfoFromReportFolder().

 

Collecting the report path from the SSRS Built-in Field called ReportFolder.

A report path is required as a parameter to GetProjectNodeInfoFromReportFolder(). We need to get this from somewhere and set it into a parameter (@vReportPath in this example) to use in the call.

Conveniently, SSRS provides a way to do this using one of its Built-in Fields: ReportFolder.

Note that in this example the report author has provided an alternative, the ExplicitProject parameter, which supplies the report path when the Reporting Services Built-in Field ReportFolder has no value; this is the case when working on a report locally.

image

 

The call to GetProjectNodeInfoFromReportFolder() with the report path parameter.

This call is made after the parameter (@vReportPath in this example) has been set. ProjectGuid, ProjectNodeName and ParentNodeSK (which, usefully, is the SK of the collection) are returned. These returned values can then be used to update SSRS parameters for use in subsequent SQL queries for a report.

image
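A minimal sketch of the call, assuming GetProjectNodeInfoFromReportFolder() is exposed as a table-valued function in the warehouse database and that the returned columns are named as described above (check how it is defined in your own Tfs_Warehouse):

-- Sketch only: @vReportPath would normally be set from the ReportFolder Built-in Field
-- (or from the ExplicitProject parameter when the report is run locally).
DECLARE @vReportPath NVARCHAR(260) = '/TfsReports/DefaultCollection/Team1'  -- hypothetical path

SELECT pni.ProjectGuid       -- GUID of the Team Project
    ,pni.ProjectNodeName     -- name of the Team Project
    ,pni.ParentNodeSK        -- SK of the collection
FROM dbo.GetProjectNodeInfoFromReportFolder(@vReportPath) pni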

The DimTeamProject table.

Whenever a Team Project is created it is allocated a named folder in Reporting Services; the ReportPath.

Reporting Services makes this available to a running report as the global Built-in Field called ReportFolder.

 image

The Report Folder.

The value of the SSRS global Built-in Field called ReportFolder is the path/location of the uploaded .rdl files in Reporting Services, as below.

image

Tags: Reports | SSRS | TFS

Monday, May 18, 2015 10:06:05 AM (GMT Daylight Time, UTC+01:00)


# Friday, March 6, 2015

For future reference I’ve collected a few notes together into a working Ruby script that shows just a little of what the Chef API can do.

The Chef API provides a route into collecting Chef data, perhaps for an environments and nodes status report. The Chef API can also be used to integrate Chef into your CI/CD process. But before spending too much time on the API it is worth understanding what the out of the box tools like Knife can do.

Knife is a Chef command line tool that wraps the API in a very complete and well documented set of commands. The advantage of Knife is that it will stay in sync with the API; writing your own code against the API runs the risk that breaking changes in the API will stop your code working.

It is important to version control changes to the CI/CD process. Changes to Chef artefacts such as attributes and cookbooks should be version controlled; storing JSON files containing attribute values in SCM and uploading them from SCM into Chef after each change is a good way to go.

So with those warnings and advice out of the way here’s the Chef API in a Ruby script… 

I’ll assume you have Chef all nicely installed and Ruby is in your executable path. The following Ruby script should be placed in the chef-repo folder so that it can make use of an existing setup and configuration.

The first step is connecting to the Chef server. These three lines of Ruby script will do that…

require 'chef'
Chef::Config.from_file("./.chef/knife.rb")
rest = Chef::REST.new(Chef::Config[:chef_server_url])

image

The configuration file will need to define the correct pem file for accessing the Chef server.  To keep it simple I’ve used the knife.rb file located in the .chef folder of chef-repo.

image

The Ruby script file is put into the chef-repo folder so that the knife.rb file can be referenced relative to that folder. This makes it clear which chef server and what credentials are being used.

image

So let’s see the script run.

ruby chef_api_notes_demo.rb

It simply lists the environments and updates some attribute data in an environment [1]. It also lists the nodes.

[1] I’ve commented out the section in the script that does the updates. A quick warning though: although it is unlikely that you have an environment with version attributes for component1 and component2, please be aware that if you do, the script will update those attributes if you enable that section of the script.

image

Here’s the script, which I put into a file called chef_api_notes_demo.rb in the chef-repo folder…

require 'chef'

# add a bit of colour by extending the String class
class String
  def g; c(self, "\e[0m\e[32"); end
  def r; c(self, "\e[0m\e[31"); end
  def c(t, c_c, m = 'm'); "#{c_c}#{m}#{t}\e[0m"; end
end

# get an ***appropriate*** chef configuration file
# this defines what we connect to and the key to use...
# ***key needs to relate to the url***
Chef::Config.from_file("./.chef/knife.rb")

# with the config loaded
# establish a restful connection according to the loaded config.
rest = Chef::REST.new(Chef::Config[:chef_server_url])
puts "=== Connected to [#{Chef::Config[:chef_server_url]}] ==="

# get a list of the environments
# this is a hash - a hash is a collection of key value pairs
environments = rest.get_rest("/environments")
puts "\n-----ENVIRONMENTS-----"

environments.each do |key, value|
  # key is the environment name, value is its URL
  puts "----------------------".g
  puts "Environment name : #{key}".g
  env_url = value
  puts " URL : #{env_url}"

  # get the environment instance
  environment = rest.get_rest(env_url)
  # puts environment.inspect # .inspect is useful to view the whole object instance

  d_attributes = environment.default_attributes
  unless d_attributes.nil?
    d_attributes.each do |attr_key, attr_value|
      puts "----------------------".g
      puts "Attribute key : [#{attr_key}]"
      unless attr_key == "tags"
        puts "Current source value : [#{attr_value["version"]}]"
      end
    end
  end

  # The attributes can be accessed directly and set to a new value
  #begin
  #  d_attributes["component1"]["version"] = "3.7.1"
  #  d_attributes["component2"]["version"] = "5.12.1"
  #rescue
  #end
  #begin
  #  rest.put_rest(env_url, environment)
  #rescue
  #  puts "Did not update environment [#{environment.name}]".r
  #else
  #  puts "Updated environment [#{environment.name}] OK"
  #end

  puts "----------------------".g
end

puts "\n-----NODES------------"
# get a list of the nodes
# this is a hash - a hash is a collection of key value pairs
nodes = rest.get_rest("/nodes")

# iterate through the nodes hash
nodes.each do |key, value|
  # key is the node name...
  # value is the URL in this case...
  puts "----------------------".g
  puts "Node name : [#{key}]".g
  node_url = value
  puts " URL : [#{node_url}]"
  puts "----------------------".g

  # now get the node object using the URL
  # can then read the node data
  # and if required can update some data
  node = rest.get_rest(node_url)

  # show the environment that this node is under
  puts "Environment...".g
  puts "#{node.environment}"
  puts "----------------------".g
end



Tags: Chef | Chef API | Ruby

Friday, March 6, 2015 9:34:49 AM (GMT Standard Time, UTC+00:00)


# Friday, February 27, 2015

TFS update 4 (and possibly previous updates): turning on code coverage for a build works one way and not the other.

image

I’ve got a couple of unit tests set up and I want TFS build code coverage.

I turned on code coverage as below and got no results with the TFS build.

image

The following way, using the Add/Edit dialog opened from the ellipsis (…) as below, does work.

Edit the build definition in this specific way: Process –> 1. Automated tests –> 1. Test source. Then use the ellipsis (…) to pop up the Add/Edit dialog and, under the Options dropdown, change the setting from None to Enable Code Coverage.

image

Now I get code coverage results. Why the first way doesn’t work, I don’t know.

image

And now I can see code coverage in the reports.

image



Tags: Code Coverage | Reports | TFS | TFS Issues

Friday, February 27, 2015 3:07:11 PM (GMT Standard Time, UTC+00:00)


# Wednesday, February 25, 2015

Back at the beginning of February I showed how Microsoft Release Management can be used to get CI/CD up and running.

I’m now going to show you Microsoft Release Management with Chef integration.

Integrating Microsoft RM with Chef provides additional capabilities on top of the out of the box RM experience.

image

Chef uses recipes (written in Ruby) that allow a box to be fully defined as a Chef node; not only in terms of the software and files required to be on the box but also how the software stack on the box is configured. A Chef node has a one to one relationship with an actual box (or machine); e.g. a particular MAC address.

Once a Chef node has been set up it can read meta data from the box that it represents. So even though in this example we are only putting a couple of components onto a box there is going to be benefit from additional contextual data describing the box that is not otherwise available centrally. It may also be the case that pre-requisites or third party components are defined in a Chef node. Chef can be used to manage all the software required on a box. Chef provides the tools and processes to implement ‘Infrastructure as Code’.

I will use the term Manifest to represent a list of components each at a particular version. There is nothing called Manifest in Chef but the Chef node Run List combined with the Chef node Attributes are in effect just that; a Manifest that sits between the CI build and the CD deploy.

image

Let’s take a look at what happens when a CI build of a component triggers a release through RM with Chef. The build definition (shown below) for Bye has been configured to make a call to Microsoft Release Management. See http://blogs.msdn.com/b/visualstudioalm/archive/2014/10/10/trigger-release-from-build-with-release-management-for-visual-studio-2013-update-3.aspx

image

Check-in of a change to the Bye application source code will trigger a build. Once the drop of the build has completed, the post drop script InitiateRelease.ps1 will be run. The arguments specified are the RM server, the port number that RM is using, the Team Project that contains the build definition and the target stage. In my example: volcanic-8 1000 Team1 Dev.

The Release Template that maps to Team1, Dev and  Bye (the build definition) is the one that gets used to run the deployment.

image

The Release Template RT2 (shown above) has been configured to update attributes in a Chef node called VOLCANICLAB-5.rock.volcanic.com with the drop location of a build of Bye; the result is that the drop path is made available as a Chef node attribute that can then be consumed in a Chef recipe.

The appropriate NodeName and AttributeName are set up in “Deploy Using Chef” in the Release Template. The example below is from RT2.

image

Microsoft Release Management now has all it needs to update the attributes of the VOLCANICLAB-5.rock.volcanic.com node for a particular build of Bye.

The attribute, having been updated by Microsoft Release Management, is shown below.

image

When Microsoft Release Management runs the chef client on the box named VOLCANICLAB-5.rock.volcanic.com the run list below will execute. The cookbook volcanic_bye is listed.

image

volcanic_bye is a Chef cookbook with a recipe that I created to deploy Bye (see below).

Notice the source attribute [‘volcanic_bye’][‘source’]. When this recipe is run it will pick up path \\volcanic-1\drop\Bye\Bye_20150225.1 from the attribute as it was previously updated by Microsoft Release Management.

image

Here is the log from the chef client that Microsoft Release Management ran on the box VOLCANICLAB-5.rock.volcanic.com .

image

 

Summary

Chef provides a useful abstraction layer between CI (i.e. the building of binaries and files) and CD (i.e. the deployment of binaries and files).

That abstraction layer could be referred to as ‘Infrastructure as Code’.

Microsoft Release Management can be configured to update attributes in that Chef abstraction layer.



Tags: ALM | Chef | DEVOPS | Release Management

Wednesday, February 25, 2015 7:27:41 PM (GMT Standard Time, UTC+00:00)


# Wednesday, February 4, 2015

Using Microsoft TFS with Microsoft Release Management you can successfully automate from code check-in to deployed and ready to test.

image

You might already have some of this happening; at least you’ve got the first part.

image

Microsoft Release Management bridges that chasm.

Release Management install instructions are here; https://msdn.microsoft.com/en-us/library/dn593704.aspx

With Release Management Server and Client installed and Release Management Agent installed on a test lab you are ready to go. Let’s look at an example.
I’m going to have two cheap and cheerful components: Bye and Hello. Eventually it will be useful to see what happens with more than one component; I’ll start this specific blog with just one though. Both these components produce an executable which is then packaged using WiX into an MSI. The objective is to make a change to any component and have the result installed automatically onto a test lab VM.

Here are the two components in TFS source control. It makes good sense to keep components under a single folder; $/Team1/Src in this particular case. This folder could end up containing hundreds of components. While on the subject of folders, there is a path name length limit of 256 characters, so keep folder and file names short.

image
For each component there will be a build definition. Note that they will all use the same build template though. The build definition I created for Bye is ByeX.  I’ve highlighted the settings used in ByeX. The template is ReleaseTfvcTemplate.12.xaml. The “Source Settings” are scoped to the source code; a change made to the code for Bye will trigger a build and then a deploy via Release Management.

Build Definition ByeX looks as follows:

image

Release Management setup to deploy the result of ByeX looks as follows:

image

image

So let’s see it working end to end. Currently, on the lab VM VolcanicLab-1, Bye is orange/yellow.

image

This matches the source code. Let’s change orange/yellow to blue/green and check in the change.

image

Checking in the change will trigger a CI build of the build definition ByeX, which will result in the production of a new executable of Bye coloured blue/green. WiX will pack the executable into a new Bye.msi installer.

image

If all goes to plan the result of the code change will end up installed on VolcanicLab-1.

image

And of course this is logged in Release Management.

image

So we have crossed the chasm with check-in through to a lab box automatically updated using Release Management.

image

Tags:

Wednesday, February 4, 2015 6:43:10 PM (GMT Standard Time, UTC+00:00)


# Thursday, January 22, 2015

Docker and containers are causing something of a stir in the Linux ALM DEVOPS world. A bit of a revolution is taking place.

Microsoft has been quick to respond by announcing that it will include Docker capabilities in the next release of Windows Server.

So what is this revolution all about? In a word… “containers” and I think there is going to be a positive impact on DEVOPS costs.

Docker containers are similar to VMs but the difference is they do not contain an OS kernel and make good use of a union file system. The result is that containers are layered and have a very small footprint. They “boot” quickly; in seconds rather than what can be many minutes (or longer) for VMs.

Docker is a toolset and a cloud service repository that can be used to collect an application and all its dependencies together into a container for deployment onto host machines. A development team and individual developers can benefit through the use of previously constructed and quality controlled base images pulled from a repository. A base image could be the single fixed foundation upon which all development would create higher level images. If the application in the container works on a developer desktop; it will work everywhere. How often do you hear that?

image 

A developer working on an App can use Docker to create a container for the App from a base image and all its dependencies, which can then be shipped as a runnable unit. The definition of what to put in the container can be defined in a Dockerfile.

A container is portable and has everything needed to run the application on any host that has a Docker Engine. By default a container runs isolated from the other containers on a host. By pulling a container image from a repository an app can be quickly used in the development process or distributed to many hosts and run. Tools such as Chef could be used to manage Docker hosts as nodes. Also worth considering would be Microsoft Release Management, in particular if there was already an investment in TFS. I think that a complete CI/CD process to work with Docker would be an essential ingredient to success.

image

 

Docker is about:

  • Speed to market, by bringing live or customer application stacks back to the developer desktop.
  • Achieving repeatable application consistency by containerization.
  • Getting faster testing, by being very fast to set up and run.

It is also quite possible to envisage cost savings where multiple VMs with different versions or flavours of applications for testing are replaced by a single VM running multiple containers with different versions.

image

There are also going to be cost savings around the boot times of VMs versus the boot times of containers. Time is money; particularly in the cloud.  A shipping container analogy is reasonable. Shipping containers conform to a single template; the size is defined, the attachment points are exactly the same and the weight of the contents can be constrained. Containerization has dramatically reduced products and materials shipping time. Not so long ago a ship’s cargo of boxes and bags of all sorts had to be handled one at a time onto the next mode of transport or into a warehouse. The analogy does not end there; a container can be sealed and in this way be known to conform to a particular shipping manifest. Although the generation and management of a “manifest” will most likely require more than a Dockerfile.

Development without containers

Development Teams are busy writing code that gets built into lots of applications that need testing and releasing. Applications depend on other applications and perhaps on third party applications; an application stack.

From any single developer’s desktop via testing labs to live machines or customers machines there is a need to repeatedly bring together appropriate dependencies as a working stack; so that a known quality in the context of those dependencies can be proven prior to final shipment.

Bringing together dependencies and ensuring that prerequisites are installed has evolved somewhat from manual installation and configuration using an often incomplete document through to today’s improved toolsets and processes that can offer a fully automated capability and do repeat runs. But even with that improved capability there can be issues.

In order to ensure a clean start before testing applications, machines are sometimes created from the ground up. Creating an entire machine even when it is a VM and is fully automated takes time. VM base images are large and cumbersome. The process of spinning up a machine involves amongst other things booting up an OS. In order to save time and money applications are often installed onto an existing up and running machine. This may even be required; to allow the emulation of what may or may not happen when that application is installed on live machines or is installed by customers.

When applications are installed or re-installed onto existing machines over and over again it is possible for there to be conflict in the dependencies shared between applications. This conflict needs to be identified and resolved, which can take time and necessitate heroic efforts on the part of DEVOPS teams. It is also possible that uninstall will not be a tidy and complete operation. Sometimes the install is a manual process that may involve copying files or configuration by hand. The problem is that conflict can happen over and over on different hosts and in different ways when an application and its stack of dependencies is not fixed.

Docker containers “fix” the dependency stack

Below is a schematic typical of a machine representing some loose un-fixed applications and an OS.

image

Docker gives us a way of containing and “fixing” the complete stack required for any particular application; including the base operating system files, the file system and the application itself. Putting a Docker container onto a Docker host machine or removing a container from a host is as clean as the shipping container analogy suggests; it’s either all there or it’s gone.

Docker, Inc. provides the tools to set up and manage the full lifecycle of a container and its construction from a Docker image. In a typical scenario the contained applications will have dependencies on one another and there may be some contained configuration of those applications.

The un-fixed apps from the schematic above could be put into containers as follows:

image

Containers are sealed units. However access can be provided via various routes, one of which involves the use of ports. Docker provides functionality for mapping of ports used by applications across container walls. For example perhaps an Apache web server application is running as the Web Server (WS) inside container 1. If the container port 8080 is mapped to a host port the application is exposed via the Host Machine URL; shown in the diagram below.

image

A full service

Docker, Inc. recognised an opportunity to pull together a few threads at the right time and has put together an IT offering that is more than a set of commands to create containers; it is a fully loaded end to end container service. Docker, Inc. provides a set of tools to support image creation and a cloud based repository service for storing images; below is a simple lifecycle example showing how files can be pulled from any suitable source control system (in this case TFS) to build an image based on a Dockerfile and then run as a container. So in the example below, if the developer decides the test1 app works OK on his desktop, the container can then be committed as a new image with a label and pushed into the Docker Hub cloud based repository for further distribution onto other hosts.

image

Availability

Docker is new, currently at version 1.4, and the Docker Engine is only available for Linux.

Microsoft has a partnership with Docker and has announced its intention to incorporate Docker Engine capability into its latest Windows Server operating system; I expect this capability may prove significant to Microsoft stack development teams, testers and Windows DEVOPS teams.

Microsoft has pushed an ASP.NET 5 image into the Docker cloud repository.

image

Links

https://www.docker.com/

http://www.infoworld.com/article/2834122/application-virtualization/windows-server-is-getting-docker-says-microsoft-and-docker.html

http://blogs.msdn.com/b/webdev/archive/2015/01/14/running-asp-net-5-applications-in-linux-containers-with-docker.aspx

Tags: ALM | ASP.NET 5 | Chef | Containers | DEVOPS | Docker | Linux | Quality | Release Management | Windows

Thursday, January 22, 2015 3:34:15 PM (GMT Standard Time, UTC+00:00)


# Tuesday, March 12, 2013

 

I don’t want to get into the whys and wherefores just yet; I’d like to start with the assumption that branches exist in your TFS ALM implementation.

You may not be comfortable with those branches though.

You may feel that too much cost is involved in ‘using’ those branches! The cost of merging, the cost of creating new branches, the cost of updating build definitions and the cost of increased complexity.

Any two branches have a relationship and this relationship has a cost.

The simple reason is that branches split up code. Code that represents a single product. Code that is the product.

Your live service and website is at any point in time derived from a particular singular unique set of code. Same goes for the DVD that has your latest product on it or the product download site.

However, there are no two ways about this: split code is a copy of code that can then be changed and will then be different from where it was copied. That difference should only exist for a particular clearly defined purpose. So a purposeful change would be to check in code related to a Task. Hopefully that Task would be related to a Product Backlog Item which has been added to a Sprint. These work items are in the MS Scrum 2.0/2.1 template, which is the default process template in TFS 2012.

Branches serve a purpose; but they need to be thought about very carefully.

A set of code will exist that represents a live product and this live code is usually separated from any changes that will be taking place; such as a piece of development work or a spike of R&D. It seems sensible to keep a copy of that live code safe so that it can be used for any patches that need to be made to the live product to keep it going before the next release.

We could get into the detail of what a main branch is and if a live branch exists or not. But rather than dwell on that too much I’d rather have us say for the moment that there are branches involved and you are merging code between them.

I’m going to list a few topics as points that I think can improve branch quality and throughput efficiency; in future editions of this blog I’ll try to take these topics a step further.

Not sure yet which would be a good topic to start on, but I’m open to suggestions!

Some topics are related and so will get bundled together.

Branching topics (In no particular order):

1 Reduce the amount of merging required to the minimum possible; have fewer branches.
It’s all about speed to market; fewer steps to get you there will help…
However of course this will depend on the quality of the code that goes to market.
Do fewer branches impact that quality?
2 Reduce or eliminate complexity of merging; merge often.
Be ready and prepared to merge, know the branch quality all the time; auto builds, auto deployment, auto testing.
No excuses here.
3 Understand what is being merged. I’ve seen some really good custom approaches to this topic.
4 Validate the code quality on all branches; aim to use Integration builds. Some sort of buddy or manual build and Nightly is the very minimum.  Continuous is best if you can achieve it.
Use the same builds and same tests on all the branches for the same product. So that the same files, the same binaries, the same installer and the testing is consistent.  There can be an issue here around configuration and database environment specific data.
Builds and tests are part of the product code and get  merged around with the product.
5 Speed up the builds. Good to know sooner rather than later that a change does not build.
So look at ways of doing incremental builds. That is incremental Continuous Integration builds.
The ‘Clean Workspace’ option can be set to none via the basic section in a build definition’s process parameters; no code change no build.
Consider using a NuGet enterprise repository to store built versions of components; so that they can be pulled into an integration build and do not have to be rebuilt. http://inedo.com/proget/overview
I know of an organisation that has a very clever bespoke way of understanding what needs to be built via changesets and a component dependency tree. The thing is, if you build solution files you need a way of calculating dependency.
I suspect that NuGet would be a reasonable alternative.
6 Understand what is waiting to be merged and how long that list is.
Report on this. Heads up of the release.
7 Communicate the ‘purpose’ of a branch; update this all the time.
Communicate and document any test environments that may be specific to the branch.
Understand, manage and ensure that the correct work is undertaken on branches with a view to reducing  or eliminating conflict.
Take work from the Product Backlog.
Check in work against Tasks.
Make use of Sprints.
Iterations. Releases. Teams.
Areas.
Make sure that development teams are fully aware of the purpose of the existing branches.
8 Divide up the product into components to help scope the merge.
Components will simplify build definition workspace mapping; reduce the build process get latest overhead. Faster builds.
Have build definitions that are scoped to build the individual component parts of the product.
Product sliced up into components will reduce the likelihood for teams to be working on the same code.
Have a folder structure that is clear and unambiguous about which component lies under a folder.
Contain all code and artefacts that relate to building a particular component under that folder.
Consider putting build definitions into version control.
9 Having the product divided up into components does not mean that you stop feature based development.
Features involve creating new components or updating and changing existing components.
10 Facilitate visibility of work on a particular component that is taking place on different branches.
Clarify the merge overhead by helping provide a component based view of the merge prior to release and merge of updated product.
11 Refactoring branches.
Do this actively all the time.
Prune and cut back irrelevant branches.
Update the branching documentation.
12 Keep development branches absolutely up to date with live product code as it is delivered.
So using the familiar RI and FI paradigm in whatever guise that takes.
Prevent development branches merging into main if they are out of sync with the latest RIs into main.
13 If development takes place on child of main; merge into the main branch should overwrite.
The resolution of conflict should be dealt with on the child branch first by integrating changes from the main branch; through FI.
The result is no conflict when merging into the parent main branch; through RI.
14 Do not allow branches to exist for too long.
Once they have achieved their purpose get rid of them.
Re-branch for new work.
Main is an exception and will exist for the lifetime of the product. This would be the same for development taking place on main.
15 Consider using main branch only for development work; rather than development work on children of main branch.
This may sound rather extreme but it is a very agile way of working.
Release branches would be taken from main; the hope would be to avoid having to make any significant changes on that branch.
e.g.

image
16 Consider Team Projects as a way of encapsulating build definitions for a particular branch.
Do you have a very long rather messy list of build definitions?
That long list incidentally may not be version controlled; unless you have put together some custom code to handle it.
Yes, Team Projects can sit over the branch structure!
Useful way of controlling access to a branch; per team. So Team Project contributors only.
17 Speeding up the availability of code for testing by way of automated deployment  should be an objective.
To achieve this there needs to be a way of defining which server types components are installed onto.
There needs to be a definition of test environments and what server types exist in those environments.
Contained in these definitions in combination with an understanding of dependency, services and app pools can be automatically stopped and started.
Quite often organisations write their own deployment processes and systems.
There are products available to help out with automated deployment such as Nolio: http://www.noliosoft.com/ and InRelease: http://www.incyclesoftware.com/inrelease/
18 It should be an objective to build once and install that same packaged build all the way to the live deployment; be that a website or DVD. NuGet could help here too.
19 Automatically version stamp AssemblyInfo files and installer packages where used so that version and build info can be seen in the file properties and in Add/Remove Programs. It’s great to be able to look in Add/Remove Programs to see what is installed. This is a simple and well documented modification to the TFS Build workflow.
20 Last, but with the clear honour of being number 20, are database deployments.
Yes the database schema needs to be versioned and it needs to be automatically deployed.
Some write custom code and scripts to do this; others use tools in combination with scripts.
It is often a very complex area; but will be a showstopper unless fully addressed.
This has to work seamlessly with code component deployments. There needs to be an understanding of dependency of code component on database ‘components’.
Changes to schema and objects such as stored procedures need to be incorporated into a live database baseline  that can then be used to build a fully functioning version of live. Changes that are not yet live are then ‘appended’ onto the baseline for testing.


Tags:

Tuesday, March 12, 2013 10:52:18 PM (GMT Standard Time, UTC+00:00)


# Monday, March 11, 2013

 

Introduction

In many organisations a TFS installation will comprise more than one project collection. The data from all project collections is stored in the same TFS warehouse.

This is useful in that it means cross organisation reports can be written; views of whole organisation products and backlogs are feasible.

The data from multiple project collections sits in the same tables, so keys such as WorkItemSK, System_ID and System_Rev are not unique across collections.

When multiple collections were introduced with TFS 2010, GUIDs were introduced into the TFS tables. Where SQL is not scoped to a particular collection, these GUID keys must be used in joins rather than integer keys such as WorkItemSK.

Updating SQL to use the GUID key; WorkItemBK

So looking back at the ‘How to create a Day by Day based SQL Report’ example from my previous blog (ReportsHowToCreateADayByDayBasedSQLReport), the following steps can be followed to update that SQL query for a multi collection installation.

The WorkItemBK must be used in the join between DimWorkItem and FactWorkitemHistory.

A couple of changes need to be made for this to happen. This change would need to be made to any SQL that makes joins using WorkItemSK; where the join is not scoped to the project collection.

Basically we replace WorkItemSK with WorkItemBK.

WorkItemBK is a GUID; so it provides the uniqueness that is needed to make joins between tables in the multiple collection scenario.

image

 

Here is the original SQL that we used to get the last revision of workitems for each day date; collecting WorkitemSK as a key to be used later in the join to FactWorkItemHistory.

 

image
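For reference, that original fragment (taken from the previous post’s complete SQL) is:

-- Original version: WorkItemSK collected, sub query not scoped to a collection
SELECT datesArray.DayDate
    ,dwi.System_ID
    ,dwi.System_Rev
    ,dwi.WorkItemSK
    ,dwi.System_State
FROM @tblDates datesArray
    ,dbo.DimWorkItem dwi
WHERE dwi.System_Rev IN (
        SELECT MAX(System_Rev) -- the last revision of each workitem under the DayDate
        FROM dbo.DimWorkItem wil
        WHERE wil.System_ID = dwi.System_ID
            AND (CONVERT(DATE, wil.System_ChangedDate, 112) <= datesArray.DayDate)
        )
    AND dwi.System_WorkItemType = 'Product Backlog Item'
ORDER BY DayDate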

Here is the same SQL updated to collect the GUID; replacing WorkItemSK with WorkItemBK.

Note that we still need to use the System_ID and System_Rev to get the latest revision; to ensure that this is unique we scope the sub query to the project collection by adding the TeamProjectCollectionSK into the sub query join criteria.
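The updated fragment, which appears in full in the SQL at the end of this post, is:

-- Updated version: WorkItemBK collected, sub query scoped to the collection
SELECT datesArray.DayDate
    ,dwi.System_ID
    ,dwi.System_Rev
    ,dwi.WorkItemBK
    ,dwi.System_State
FROM @tblDates datesArray
    ,dbo.DimWorkItem dwi
WHERE dwi.System_Rev IN (
        SELECT MAX(System_Rev) -- the last revision of each workitem under the DayDate
        FROM dbo.DimWorkItem wil
        WHERE wil.System_ID = dwi.System_ID
            AND wil.TeamProjectCollectionSK = dwi.TeamProjectCollectionSK
            AND (CONVERT(DATE, wil.System_ChangedDate, 112) <= datesArray.DayDate)
        )
    AND dwi.System_WorkItemType = 'Product Backlog Item'
ORDER BY DayDate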

 

Lastly the SQL that sums up the Effort for Product Backlog must be updated to use the collected GUID WorkItemBK in the join to FactWorkItemHistory.

There is an interesting trick involved in using the GUID as can be seen below.

Notice that previously I used fwih.RecordCount > –1 in the join.

This is no longer required as the record count has been incorporated into the key WorkItemHistoryBK.

The key fwih.WorkItemHistoryBK is the WorkItemBK GUID with the RecordCount column appended onto the end.

 

 

image

 

Here is the original SQL using WorkItemSK.

 

 

 

 

 

image
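For reference, that original join (taken from the previous post’s complete SQL) is:

-- Original version: integer key join plus the RecordCount filter
SELECT datesArray.DayDate
    ,ISNULL(SUM(fwih.Microsoft_VSTS_Scheduling_Effort), 0) AS Effort
FROM @tblDatesWithWorkitems datesArray
INNER JOIN dbo.FactWorkItemHistory fwih ON datesArray.WorkItemSK = fwih.WorkItemSK
    AND fwih.RecordCount > - 1
WHERE (
        datesArray.System_State <> 'Done'
        AND datesArray.System_State <> 'Removed'
        AND datesArray.System_State <> 'Resolved'
        )
GROUP BY DayDate
ORDER BY DayDate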

 

Here is the SQL updated to use WorkItemBK.

Note how ‘|1’ has been appended onto WorkItemBK in the join.
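The updated join, which appears in full in the SQL at the end of this post, is:

-- Updated version: GUID key join; '|1' replaces the RecordCount filter
SELECT datesArray.DayDate
    ,ISNULL(SUM(fwih.Microsoft_VSTS_Scheduling_Effort), 0) AS Effort
FROM @tblDatesWithWorkitems datesArray
INNER JOIN dbo.FactWorkItemHistory fwih ON datesArray.WorkItemBK + '|1' = fwih.WorkItemHistoryBK
WHERE (
        datesArray.System_State <> 'Done'
        AND datesArray.System_State <> 'Removed'
        AND datesArray.System_State <> 'Resolved'
        )
GROUP BY DayDate
ORDER BY DayDate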

 

 

 

Summary

Use the GUID keys (e.g. WorkItemBK) in joins rather than the integer keys (e.g. WorkItemSK).

You may want to review existing reports where WorkItemSK is used.

The example SQL will now list an organisation product backlog (for Product Backlog Items) where multiple project collections are used.

Product Backlog SQL updated to use GUIDs

SET ANSI_WARNINGS ON -- Specifies SQL-92 standard behaviour for several error conditions
SET XACT_ABORT ON -- If a Transact-SQL statement raises a run-time error, the entire transaction is terminated
SET NOCOUNT ON -- Stops the message that shows the count of the number of rows affected by a Transact-SQL statement
SET DATEFIRST 7 -- Sunday is first day of week -- this is actually the default

-------------------------------------------------------------------------------------------
-- Some basic Burndown Report Data
-- Demonstrates array of dates and attaching data into that array
-- Version March 2013 - RippleRock
-- Updated to use GUID keys in joins
-------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------
-- Get the minimum date and maximum date from the data
-------------------------------------------------------------------------------------------
DECLARE @vDataSetStartDate AS DATE
DECLARE @vDataSetEndDate AS DATE

SELECT @vDataSetStartDate = CONVERT(DATE, MIN(dwi.System_ChangedDate), 112)
    ,@vDataSetEndDate = CONVERT(DATE, MAX(dwi.System_ChangedDate), 112)
FROM dbo.DimWorkItem dwi
WHERE dwi.System_WorkItemType = 'Product Backlog Item'

-------------------------------------------------------------------------------------------
-- Create an array of dates; this will be the basis of the report x axis
-- into @tblDates
-------------------------------------------------------------------------------------------
DECLARE @tblDates AS TABLE (DayDate DATE NOT NULL);

WITH ReleaseDates (ReleaseDate)
AS (
    SELECT ISNULL(@vDataSetStartDate, GETDATE()) AS ReleaseDate
   
    UNION ALL
   
    SELECT DATEADD(d, 1, ReleaseDate) AS ReleaseDate
    FROM ReleaseDates
    WHERE DATEADD(d, 1, ReleaseDate) <= ISNULL(@vDataSetEndDate, GETDATE())
    )
INSERT INTO @tblDates (DayDate)
SELECT CONVERT(DATE, rd.[ReleaseDate], 112)
FROM ReleaseDates rd
OPTION (MAXRECURSION 0)

-------------------------------------------------------------------------------------------   
-- Join the last revision of each workitem under the DayDate into the dates
-------------------------------------------------------------------------------------------
DECLARE @tblDatesWithWorkitems AS TABLE (
    DayDate DATE NOT NULL
    ,System_ID INT
    ,System_Rev INT
    ,WorkItemBK NVARCHAR(128)
    ,System_State NVARCHAR(256)
    )

INSERT INTO @tblDatesWithWorkitems (
    DayDate
    ,System_ID
    ,System_Rev
    ,WorkItemBK
    ,System_State
    )
SELECT datesArray.DayDate
    ,dwi.System_ID
    ,dwi.System_Rev
    ,dwi.WorkItemBK
    ,dwi.System_State
FROM @tblDates datesArray
    ,dbo.DimWorkItem dwi
WHERE dwi.System_Rev IN (
        SELECT MAX(System_Rev) -- we want the last revision of each workitem under the DayDate
        FROM dbo.DimWorkItem wil
        WHERE wil.System_ID = dwi.System_ID AND wil.TeamProjectCollectionSK = dwi.TeamProjectCollectionSK
            AND (CONVERT(DATE, wil.System_ChangedDate, 112) <= datesArray.DayDate)
        )
    -- capture max rev
    -- so the last change up to that day for each of those Product Backlog Items
    -- keep any data that we will be filtering by in the next stage; so in this case System_State
    AND dwi.System_WorkItemType = 'Product Backlog Item'
ORDER BY DayDate

-------------------------------------------------------------------------------------------   
-- Return the data we are interested in back to the report
-- Summed effort of Product Backlog Items for each day (which we filtered out previously)
-- that are in a 'not done state'
-------------------------------------------------------------------------------------------
SELECT datesArray.DayDate
    ,ISNULL(SUM(fwih.Microsoft_VSTS_Scheduling_Effort), 0) AS Effort
FROM @tblDatesWithWorkitems datesArray
INNER JOIN dbo.FactWorkItemHistory fwih ON datesArray.WorkItemBK + '|1' = fwih.WorkItemHistoryBK
WHERE
    -- filter by system state to get the 'not done' effort for that date
    (
        datesArray.System_State <> 'Done'
        AND datesArray.System_State <> 'Removed'
        AND datesArray.System_State <> 'Resolved'
        )
GROUP BY DayDate -- we sum up the filtered effort for each day
ORDER BY DayDate



Tags:

Monday, March 11, 2013 10:38:36 AM (GMT Standard Time, UTC+00:00)


# Friday, March 8, 2013

 

Introduction

This is a description of how to put together a report that tracks change to some data over time. This example will be a basic Product Backlog report.

image

For this work we will use Report Builder 3.0 and use the query editor in MS SQL Management Studio to write the SQL.

You need to have a way of writing, running, tuning, and testing the SQL for your reports. One way to do this is with SQL Management Studio; it provides good visibility of the structure of TFS databases and provides good tools for creating the SQL that will ultimately be used in the report. 

image

You could use the graphical UI or SQL text editor found in Report Builder; but to tweak and develop the SQL you will most likely discover that you need more control and feedback than the Report Builder editor provides. The editor in SQL Server Management Studio has IntelliSense, which is useful. Another good alternative would be to use Visual Studio.

For this example report we will need to read data from a couple of  tables in the TFS Warehouse; specifically DimWorkitem and FactWorkitemHistory.

So we need to set up SQL Management Studio to connect to TFS SQL server; which when connected looks like the image on the right.

We can then select DimWorkItem and click ‘New Query’.

We’ll write some SQL that will generate an array of dates for the X axis and for each of those dates sum up the effort outstanding on that date. The full SQL is appended at the end of this article.

Writing the SQL for the report.

We need a start date and end date for the array of dates. We’ll simply get the minimum and maximum dates from the workitem data; so selecting from DimWorkitem we’ll set @vDataSetStartDate and @vDataSetEndDate as below. Also we’ll limit this example to the ‘Product Backlog Item’ workitem type. No effort is made here to restrict the data to a particular Project Collection or Team Project; so be aware that the volume of data returned could be quite large.

image
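Here is that fragment, as it appears in the complete SQL at the end of this article:

DECLARE @vDataSetStartDate AS DATE
DECLARE @vDataSetEndDate AS DATE

SELECT @vDataSetStartDate = CONVERT(DATE, MIN(dwi.System_ChangedDate), 112)
    ,@vDataSetEndDate = CONVERT(DATE, MAX(dwi.System_ChangedDate), 112)
FROM dbo.DimWorkItem dwi
WHERE dwi.System_WorkItemType = 'Product Backlog Item'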

With the start and end date handy we can then populate an array of dates. We’ll put the dates into a temporary table called @tblDates. You could use select * from @tblDates to see the dates in this table (see the inset to the right); remember to remove the select * before using the SQL in a report.

image
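Here is the fragment that builds the date array, again taken from the complete SQL at the end of this article:

DECLARE @tblDates AS TABLE (DayDate DATE NOT NULL);

WITH ReleaseDates (ReleaseDate)
AS (
    SELECT ISNULL(@vDataSetStartDate, GETDATE()) AS ReleaseDate

    UNION ALL

    SELECT DATEADD(d, 1, ReleaseDate) AS ReleaseDate
    FROM ReleaseDates
    WHERE DATEADD(d, 1, ReleaseDate) <= ISNULL(@vDataSetEndDate, GETDATE())
    )
INSERT INTO @tblDates (DayDate)
SELECT CONVERT(DATE, rd.[ReleaseDate], 112)
FROM ReleaseDates rd
OPTION (MAXRECURSION 0)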

For each of those dates in @tblDates we’ll get all the latest revisions of workitems up to and including the DayDate. For each date we store the data that will be used for filtering in the last step where we will sum up remaining effort based on workitems that are in the ‘not done’ state. We will need the WorkItemSK (which is the unique id for a revision) to join into the FactWorkitemHistory. The ID and the Rev columns are for reference and debug. Use select * from @tblDatesWithWorkitems to see the dates and data in this table; remember to remove it before using the SQL in the report.

image
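Here is that fragment from the complete SQL:

DECLARE @tblDatesWithWorkitems AS TABLE (
    DayDate DATE NOT NULL
    ,System_ID INT
    ,System_Rev INT
    ,WorkitemSK INT
    ,System_State NVARCHAR(256)
    )

INSERT INTO @tblDatesWithWorkitems (DayDate, System_ID, System_Rev, WorkItemSK, System_State)
SELECT datesArray.DayDate
    ,dwi.System_ID
    ,dwi.System_Rev
    ,dwi.WorkItemSK
    ,dwi.System_State
FROM @tblDates datesArray
    ,dbo.DimWorkItem dwi
WHERE dwi.System_Rev IN (
        SELECT MAX(System_Rev) -- the last revision of each workitem under the DayDate
        FROM dbo.DimWorkItem wil
        WHERE wil.System_Id = dwi.System_Id
            AND (CONVERT(DATE, wil.System_ChangedDate, 112) <= datesArray.DayDate)
        )
    AND dwi.System_WorkItemType = 'Product Backlog Item'
ORDER BY DayDate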

The data we want requires joining DimWorkitem with FactWorkitemHistory; INNER JOIN on the WorkitemSK. There are rows in FactWorkitemHistory that are irrelevant; so filter them out using fwih.RecordCount > -1.

Finally filter by system state for the ‘not done’ workitems from the temporary table @tblDatesWithWorkitems and SUM up Effort for each day by grouping on DayDate. So for example on 2012-10-02 join each row that matches our state criteria to FactWorkitemHistory and sum up.

image
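And here is the final fragment from the complete SQL:

SELECT datesArray.DayDate
    ,ISNULL(SUM(fwih.Microsoft_VSTS_Scheduling_Effort), 0) AS Effort
FROM @tblDatesWithWorkitems datesArray
INNER JOIN dbo.FactWorkItemHistory fwih ON datesArray.WorkitemSK = fwih.WorkItemSK
    AND fwih.RecordCount > - 1
WHERE
    -- filter by system state to get the 'not done' effort for that date
    (
        datesArray.System_State <> 'Done'
        AND datesArray.System_State <> 'Removed'
        AND datesArray.System_State <> 'Resolved'
        )
GROUP BY DayDate -- we sum up the filtered effort for each day
ORDER BY DayDate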

 

So that’s the data ready to go.

Importantly it is in the exact shape and format that we need for our report. An array of Dates with associated backlog of effort.

We could enhance this data with additional information about sprints, teams and velocity.

We could add filtering for Project Collection, Team Project, Iteration, Areas and the like. 

That will be a subject for another article!

Getting the data into a report. Creating a DataSet.

Open up MS SQL Server Report Builder. Close the wizard and go straight into the application. Create a Data Source pointing at the TFS SQL Server.

image

Add a data source that points at the TFS warehouse. Here I use the ‘Use a connection embedded in my report’ option with connection type ‘Microsoft SQL Server’ and name the connection TFS2012.

image

Create a dataset using the previously created data source.

image

In the Dataset Properties dialog open the Query Designer, select ‘Edit as Text’ and paste in the SQL that was previously created in SQL Server Management Studio. Run the SQL to check that it works ok. Although of course if you are using SQL Server Management Studio to develop this type of report SQL you will have already seen that it works; so you would not normally need to do this.

image

Add a table to the report so that you can view the data that is going to be charted. This is a useful way to verify that the data looks shipshape! The table can be deleted later. Left click and drag DayDate and Effort into the table.

image

Add a chart.

image

Select the first Line type chart.

Then left click and drag DayDate onto the chart as a category group.

image

Then left click and drag Effort onto the chart as a value.

image

Use the expression editor to edit ‘Effort’ and remove the Sum(), then click OK to save the modified expression.

image

Run the chart.

That’s it. You have day by day data in a SQL Report.

image

 

Summary.

  • The approach taken here is a pattern used in many SQL Reports that require data to be charted against days.
  • Use SQL to generate an array of dates into a temporary table. These dates should be limited by the date range of the data being charted.
  • For each date, get the maximum revision of the workitems with a change date falling before or on that date. Use additional filters where appropriate to define the workitems; but remember the aim is to get the latest revision of each workitem for the date. For that date we can then look at things as they were on the day, as if we were looking at them through a query run on that day. The latest revision may not have taken place that day; it will most likely be from an earlier day.
  • Finally, SUM up by grouping the data by date.

 

The complete SQL.

SET ANSI_WARNINGS ON -- Specifies SQL-92 standard behaviour for several error conditions
SET XACT_ABORT ON -- If error, the entire transaction is terminated
SET NOCOUNT ON -- Stops the message count of the number of rows affected SQL statement
SET DATEFIRST 7 -- Sunday is first day of week -- this is actually the default

-------------------------------------------------------------------------------------------
-- Some basic Product Backlog Report Data
-- Demonstrates array of dates and attaching data into that array
-- Version March 2013 - RippleRock
-------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------
-- Get the minimum date and maximum date from the data
-------------------------------------------------------------------------------------------
DECLARE @vDataSetStartDate AS DATE
DECLARE @vDataSetEndDate AS DATE

SELECT @vDataSetStartDate = CONVERT(DATE, MIN(dwi.System_ChangedDate), 112)
    ,@vDataSetEndDate = CONVERT(DATE, MAX(dwi.System_ChangedDate), 112)
FROM dbo.DimWorkItem dwi
WHERE dwi.System_WorkItemType = 'Product Backlog Item'

-------------------------------------------------------------------------------------------
-- Create an array of dates; this will be the basis of the report x axis
-- into @tblDates
-------------------------------------------------------------------------------------------
DECLARE @tblDates AS TABLE (DayDate DATE NOT NULL);

WITH ReleaseDates (ReleaseDate)
AS (
    SELECT ISNULL(@vDataSetStartDate, GETDATE()) AS ReleaseDate

    UNION ALL

    SELECT DATEADD(d, 1, ReleaseDate) AS ReleaseDate
    FROM ReleaseDates
    WHERE DATEADD(d, 1, ReleaseDate) <= ISNULL(@vDataSetEndDate, GETDATE())
    )
INSERT INTO @tblDates (DayDate)
SELECT CONVERT(DATE, rd.[ReleaseDate], 112)
FROM ReleaseDates rd
OPTION (MAXRECURSION 0)

-------------------------------------------------------------------------------------------
-- Join the last revision of each workitem under the DayDate into the dates
-------------------------------------------------------------------------------------------
DECLARE @tblDatesWithWorkitems AS TABLE (
    DayDate DATE NOT NULL
    ,System_ID INT
    ,System_Rev INT
    ,WorkitemSK INT
    ,System_State NVARCHAR(256)
    )

INSERT INTO @tblDatesWithWorkitems (
    DayDate
    ,System_ID
    ,System_Rev
    ,WorkItemSK
    ,System_State
    )
SELECT datesArray.DayDate
    ,dwi.System_ID
    ,dwi.System_Rev
    ,dwi.WorkItemSK
    ,dwi.System_State
FROM @tblDates datesArray
    ,dbo.DimWorkItem dwi
WHERE dwi.System_Rev IN (
        SELECT MAX(System_Rev) -- we want the last revision of each workitem under the DayDate
        FROM dbo.DimWorkItem wil
        WHERE wil.System_Id = dwi.System_Id
            AND (CONVERT(DATE, wil.System_ChangedDate, 112) <= datesArray.DayDate)
        )
    -- capture max rev
    -- so the last change up to that day for each of those Product Backlog Items
    -- keep any data that we will be filtering by in the next stage; so in this case System_State
    AND dwi.System_WorkItemType = 'Product Backlog Item'
ORDER BY DayDate

-------------------------------------------------------------------------------------------
-- Return the data we are interested in back to the report
-- Summed effort of Product Backlog Items for each day (which we filtered out previously)
-- that are in a 'not done state'
-------------------------------------------------------------------------------------------
SELECT datesArray.DayDate
    ,ISNULL(SUM(fwih.Microsoft_VSTS_Scheduling_Effort), 0) AS Effort
FROM @tblDatesWithWorkitems datesArray
INNER JOIN dbo.FactWorkItemHistory fwih ON datesArray.WorkitemSK = fwih.WorkItemSK
    AND fwih.RecordCount > - 1
WHERE
    -- filter by system state to get the 'not done' effort for that date
    (
        datesArray.System_State <> 'Done'
        AND datesArray.System_State <> 'Removed'
        AND datesArray.System_State <> 'Resolved'
        )
GROUP BY DayDate -- we sum up the filtered effort for each day
ORDER BY DayDate



Tags:

Friday, March 8, 2013 10:33:32 PM (GMT Standard Time, UTC+00:00)