# Sunday, 22 January 2017

Yesterday I migrated one of our TFS collections to VSTS using Microsoft's migration guide for moving from TFS to VSTS. I won’t lie, it was a pretty long process, and it took a lot of going back and forth to make sure I fully understood the guide, which is a 58-page PDF. The guide comes with several checklists of things you need to check and prep before your migration.

A very rough outline of what happens is this: you run a check against your TFS using the tool provided to ensure everything is exportable; if there are problems, you fix them following suggestions from the tool, then run the check again until you are ready to go. Next you run a prep that generates some files you will need to map your users across, followed by making a database backup as a DACPAC package and entering your import invite codes (provided by Microsoft). These are then uploaded to an Azure storage account, and you kick off the migration process, which uses these assets to import your data into a brand new VSTS instance.

I won’t go into the details of how to do the migration as this is covered in the guide; however, I will highlight some things you should take into account before you migrate from TFS to VSTS. The migration itself is done using a tool provided with the guide called the TFSMigrator.

Azure Active Directory

You are going to have to make sure you have this in place or have at least thought about it. If you use Active Directory in your organisation, a good thing to look at is replicating this to Azure; your migration is going to need it. If you are not using Active Directory but just accounts on the box, as I did for this migration, you can easily map these across to Azure Active Directory accounts. If you have Office 365, then you already have access to an Azure Active Directory setup (depending on your subscription) and you can make use of this. The reason Azure Active Directory is important is that this is how VSTS will authenticate your users once you have migrated across.

Plan for some downtime to make backups

Even when doing a test migration, as I did, you need to plan for some downtime. One of the reasons for this is that you will need to generate a DACPAC of your TFS Collection. In order to do this you have to take the TFS Collection offline and then detach it from TFS. If you have not done this before, you may be put off by the ominous warnings from the TFS Admin Console asking you to tick a box stating you have made a backup of your TFS databases.

After you have detached your TFS Collection and made a DACPAC of it, you can then reattach your collection so your team can continue working as usual.

Learn what a DACPAC is

Yes, I had never used one before. The guide gives you some details along with a sample command line to create one. DACPAC is short for Data-tier Application Package. These are generated from SQL Server itself, and they are basically a way of exporting your whole TFS Collection database with everything it needs to be re-created: “tables, views, and instance objects, including logins – associated with a user’s database”. The DACPAC package will be uploaded to an Azure storage blob that the migration tool uses.
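
For reference, a DACPAC extract with SqlPackage.exe looks something like this (a minimal sketch, not the exact command from the guide; the server, database and file names here are made up, and the guide's version includes extra properties specific to the import):

SqlPackage.exe /Action:Extract /SourceServerName:localhost /SourceDatabaseName:Tfs_DefaultCollection /TargetFile:"C:\DACPAC\Tfs_DefaultCollection.dacpac"

The collection database stays attached to SQL Server for this; it only needs to be detached from TFS itself.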

Learn about Azure Storage Accounts and SAS

While I have used Azure Storage Accounts before, I found this part quite complicated and it took me a while to get it right. Basically the DACPAC package you create from your TFS Collection database gets uploaded to an Azure Storage account along with a mapping file for user accounts. The hardest part I found was working out how to create an SAS token URL to where I had stored these in the Azure storage account. The guide provides a link to some PowerShell you can use that will generate this URL for you. I am not sure why Azure couldn’t create this link for you (I did try), but I eventually used the PowerShell provided, which worked first time.
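
For the curious, the PowerShell the guide provides does something along these lines (a rough sketch from memory using the classic Azure PowerShell cmdlets; the account name, key and container are placeholders):

$context = New-AzureStorageContext -StorageAccountName "mymigrationstorage" -StorageAccountKey "<storage account key>"
$sasToken = New-AzureStorageContainerSASToken -Name "import" -Permission rl -ExpiryTime (Get-Date).AddDays(7) -Context $context
Write-Output ("https://mymigrationstorage.blob.core.windows.net/import" + $sasToken)

The resulting URL (the container URL plus the SAS token) is what you hand to the import service.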

Azure PowerShell tools

Make sure you have the Azure PowerShell tools installed; you will need them to run the PowerShell that generates an SAS token URL to your Azure Storage account (see above).

Final Notes

I would recommend reading the guide fully before getting started. Also note that currently you have to request import codes in order to use the service. You will get two of these: one is for a dry run to ensure it all works, and the other is for your production import, for when you are fully committed and confident that everything went to plan in the dry run.



Tags: TFS | VSTS

Sunday, 22 January 2017 11:18:16 (GMT Standard Time, UTC+00:00)  #    Comments [0]


# Wednesday, 25 November 2015

We’ve been on the preview of “Release” for a while now and have been using it for several deployments. It’s a great product once you have figured out how it works, and as the documentation improves this should become easier.

Currently we are using Release to publish the same ASP.NET MVC web application to 3 websites with different parameters in each config (replaced with tokenisation) and to drop a packaged version of our web app for download from an external website.

Sounds pretty cool, doesn’t it? Well, it took a lot of working around, and I am keen to hear back from anyone who may have better ways of handling the configuration file part. Much of what we have done so far has been trial and error, mainly between my colleague Richard Erwin and myself.

In this article I will cover deployment to an Azure website. In a later article I will cover how we deployed to an IIS web server hosted on an Azure virtual machine, followed by wrapping up software for download from your website. Just as a warning, we use a self-hosted Release Agent to do our deployments, and this example will probably only work with a self-hosted Release Agent.

Update note:
I forgot to add that for the purposes of this article we made use of the Custom VSO-Tasks for Zip, Unzip and Tokenisation, which you will need to install beforehand.

Creating a parameterised deployment of an ASP.NET MVC application with Release to an Azure website

image

In the image above you can see the steps involved in Release.

Setting up a vNext Build with parameters

The first step we need to do before we get to Release is to create a Build in VSO that:

  • Uses a parameterised XML file with tokens to replace the parameters in your web.config (or other configuration files).
  • Creates an MSDeploy zip file as its output.

If you have done all of the above and just want to get to the Release bit, scroll down to Setting up Release.

Custom Parameterized XML file with Tokens
As you can see in the image below, I have a parameters.xml file in the root of my MVC application. If you have not used this type of file before, you can find out more about it here. It’s a pretty standard part of MSDeploy, which is what you are using behind the scenes.

image

The only difference in our file is that we have replaced the default values with tokens. Tokens are represented with the following syntax: __MYTOKEN__. These tokens will be replaced later by a step in our Release workflow. We are basically telling this file to replace the parameters (represented by the match statements) in our web.config and visualisation.config files above with the defaultValue, which in this case contains our token for replacement later by Release.
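
To make that concrete, a tokenised entry in parameters.xml looks something like this (a sketch following the standard MSDeploy parameters.xml schema; the parameter name and match expression are invented for illustration):

<parameters>
  <parameter name="Authentication" defaultValue="__AUTHENTICATION__">
    <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/appSettings/add[@key='Authentication']/@value" />
  </parameter>
</parameters>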

Create an MS Deploy zip file as the output of our build.
In VSO create a new vNext Build using the Visual Studio template

image

On the “Visual Studio Build” step, select your solution (you will be prompted to locate this in TFS version control (or Git) when clicking the button with the 3 dots next to the option).

image

In your MSBuild arguments you will need to add the following:

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true

The DeployOnBuild and WebPublishMethod arguments tell the build to create the MSDeploy package for us. This is simply a zip file that MSDeploy can use to deploy to an IIS box or an Azure website.

Further down, in Publish Build Artifacts, we tell the build what we want as its output. In our case all we want is the MSDeploy zip file, which we represent by asking the publish step to find a zip file for us in the build output using the following syntax: “**\*.zip” (see image below)

image

Setting up Release
For the purposes of this article I am doing an Azure Web deployment first but will follow up with another article that will go through IIS deployments.

NOTE
You will need to install the Custom VSO-Tasks for Zip, Unzip and Tokenisation before continuing.

A note about Release Agents
This article assumes that you have at least some familiarity with Release. In this section we are using our own release agent installed on a virtual machine. A release agent is basically the same concept as a build agent: just as a build in VSO can have its own agent that you host yourself instead of using the hosted build agent, so can Release. If you do not have your own Release Agent set up, there is a guide here on how to do so. You basically run a PowerShell script on a machine you wish to act as your release agent. If you are experimenting, you can even use your own desktop or laptop as a release agent. Release Agents will need Internet access and must be located where they can see the target environment you are deploying to.

Go into your VSO project and select the Release tab. Create a new Release; in this case we are doing an Azure Website Deployment.

image

The default release you will see only has two tasks inside it. For the purposes of our setup we had no need for the Visual Studio Test task, which can be deleted.

image

In our release we added the following tasks (see the image below), which I will go into in more detail below.

It’s a good idea now to set your environment to use your self-hosted Release Agent if you haven’t done so already. You can do so by clicking on the 3 dots next to the environment name, selecting Agent Options, and setting the Default queue to your Release Agent.

image

“Artifacts”
To select the contents of the vNext build you created previously, select the Artifacts tab and click the “Link an artifact source” button. Basically you are releasing the contents of a build.

image

Unzip Task
Select the Environments tab again and add a task; this task is under Utilities and is called UnZip. You can drag this task to the top of the list by holding down on it with your mouse.

In your Unzip Task you can select the zip file that is provided by your build output (this is what we set up previously when creating the build above). The output of our build is a zip file used by MSDeploy; we are just telling the Unzip task to unzip this. Note you will have to run at least one successful build to be able to browse the contents of your build via the 3 dots button next to the option.

image

The target folder is a folder on our build agent. The above path comes from clicking on the 3 dots next to the target folder. In this case I have only gone one folder deep and placed my own folder name in there called “VSO”, which the Unzip task will create and unzip the contents of the package into.

Tokenisation: Transform file
Add a tokenisation task in the same way you added an Unzip task above.

Remember the previous step we did above, “Custom Parameterized XML file with Tokens”? All we are doing is telling our Tokenisation task to find the parameters.xml file we created in that step, in the folder that was created by the Unzip task above. This task will replace the tokens in our parameters.xml file with our custom variables (you can read more about where to set these further down).

image

Batch Script
This is probably the least elegant part of my solution and I am open to any suggestions people might have for improving it. In order to make the MSDeploy package work again we need to zip it up. Unfortunately we can’t just use the Zip task that is available to us, as MSDeploy will for some reason ignore any zip file that was not created with MSDeploy! To get around this we had to install MSDeploy on the build agent box (this is why I am using our own build agent).

image

This batch script task basically tells Release to execute the batch file located on my build agent server with three parameters. The script is listed below and lives in a path on the build agent indicated in the image above.

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"  -verb:sync -source:archiveDir=%1 -dest:package=%2 -declareParam:name="IIS Web Application Name",defaultValue=%3,tags="IisApp" ^
-declareParam:name="IIS Web Application Name",type="ProviderPath",scope="IisApp",match="^.*PackageTmp$" ^
-declareParam:name="IIS Web Application Name",type="ProviderPath",scope="setAcl",match="^.*PackageTmp$"

All the batch file does is take the 3 arguments, which are:

  • The working folder we unzipped our website to and ran the tokenisation task on
  • Where we would like to place our new MSDeploy package and what to call it; in this case it’s VSO.zip, which we are placing in the working directory of our agent. In the example above ours is: $(Agent.ReleaseDirectory)\VisualisationBoard2015\VSO.zip
  • The name of our IIS website.

It uses these to recreate the MSDeploy package for us again.

Note arguments are separated by a space and each argument is placed inside quotes.
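
Putting that together, the Arguments box on the Batch Script task ends up looking something like this (the paths and site name are examples from our setup, not required values):

"$(Agent.ReleaseDirectory)\VisualisationBoard2015\VSO" "$(Agent.ReleaseDirectory)\VisualisationBoard2015\VSO.zip" "MyWebsite"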

Azure Web App Deployment Task
Finally we get to the last task. If all went well in the batch file task above, we should be able to tell this task to use the MS Deploy package we just created in the previous step. In this case it is

$(Agent.ReleaseDirectory)\VisualisationBoard2015\VSO.zip

image

Note: if you are unsure how to set up your Azure subscription and website on the Azure deployment task, you can find out how to do so here. If the Azure Web App does not exist, Release will create it for you.

Where do I put my parameters for tokenization?
While still in your Release, click on the Configuration tab. This is where you enter the tokens you wish to replace in your parameters.xml file; for example, the parameter __AUTHENTICATION__ is simply represented by the name AUTHENTICATION without the underscores. The tokenisation task will look for these here.

image

The tokenisation task will also check your environment-specific variables, which can be found on each configured environment.

image

The beauty of this is that you can have a different set of variables per environment.

Once you are done you can now kick off a Release and see if it works!

Troubleshooting
We have found when troubleshooting Release that it helps to have access to the Release Agent machine you are using, so you can see what is happening in its working directory. Usually an issue comes down to a mistyped name or a wrong directory path.

I am keen to hear back from anyone who has a better way of using release for tokenized website deployments.



Tags: TFS

Wednesday, 25 November 2015 14:31:53 (GMT Standard Time, UTC+00:00)  #    Comments [4]


# Wednesday, 24 June 2015

I wrote this article more as a reminder to myself of the process I need to go through to make a web application written in ASP.NET (MVC) that uses the TFS API actually work. I have done this several times now but keep forgetting some of the key information. Some of the errors you may get if you haven’t set this up correctly are:

Error HRESULT E_FAIL has been returned from a call to a COM component.

Microsoft.TeamFoundation.WorkItemTracking.Client.DataStore.DataStoreNative

There are two things you need to set correctly: your Web.config and IIS.

Web.Config
The first port of call is to set up the following in your web.config. Basically we are saying we want to use Windows authentication in our app and to turn on impersonation.

<system.web>
  <authentication mode="Windows" />
  <identity impersonate="true" />

  <authorization>
    <deny users="?" />
  </authorization>
</system.web>
.
.
.
<system.webServer>
  <validation validateIntegratedModeConfiguration="false"/>
  .
  .
</system.webServer>

IIS Settings
The rest of the settings are dealt with in IIS.

Authentication
In IIS click on your website and then select Authentication from the Features menu. Set these as per the image: basically, ASP.NET Impersonation and Windows Authentication are set to Enabled, and Anonymous Authentication should be set to Disabled.

image

App Pool Settings
Go to Advanced Settings on your App Pool. One thing you may need to set here is “Enable 32-Bit Applications” if you are working with the TFS Client API (this can be found under General).

Scroll down to Process Model and find the Identity section. For a newly created app this is usually set to the App Pool Identity account. It needs to be set either to a domain account that has access on the box, or I have seen the Local System and Local Service accounts also work here; however, I believe this is only the case if you have set TFS to run under one of these as a service. In my case I have used an AD account that has access to the box. The next important step here is to set “Load User Profile” to true. Setting this appears to be critical, especially when working with the WorkItem Tracking Client; I believe it needs to create a cache on disk, and not setting Load User Profile may prevent it from doing this.

image
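
If you would rather script these app pool settings than click through the console, something along these lines with appcmd should do it (the app pool name and account are placeholders; test on a non-production box first):

%windir%\system32\inetsrv\appcmd.exe set apppool "MyTfsWebApp" /processModel.identityType:SpecificUser /processModel.userName:"MYDOMAIN\svc-tfsweb" /processModel.password:"*****" /processModel.loadUserProfile:true /enable32BitAppOnWin64:true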



Tags: TFS

Wednesday, 24 June 2015 13:04:47 (GMT Daylight Time, UTC+01:00)  #    Comments [0]


# Thursday, 21 May 2015

This post is more for my own reference, as I spent a bit of time trying to figure this one out. For a long time now you have been able to choose your own Team field in TFS and let TFS know this using the following method: https://msdn.microsoft.com/en-us/library/vstudio/dn144940.aspx. I was recently working on an application where I needed to know the Team field’s name programmatically. Below is the code I used to do this.

For the code below to work you will need to include:

  • Microsoft.TeamFoundation.Client
  • Microsoft.TeamFoundation.Framework.Client
  • Microsoft.TeamFoundation.Framework.Common

In the example below we pass in the Guid of the TFS Collection we are after, followed by the TFS project name. We then use the Process Configuration Service to get the details about the project.

var _tfsTeamProjectCollection = new TfsTeamProjectCollection(new Uri(_tfsUrl),
    new System.Net.NetworkCredential(_tfsUserName, _tfsPassword));

_tfsTeamProjectCollection.EnsureAuthenticated();

var teamProjectCollection =
    _tfsTeamProjectCollection.ConfigurationServer.GetTeamProjectCollection(tfsCollectionGuid);

var settings = teamProjectCollection.GetService<ProjectProcessConfigurationService>();

ICommonStructureService4 cssService = teamProjectCollection.GetService<ICommonStructureService4>();

var projectInfo = cssService.GetProjectFromName(projectName);

var proc = settings.GetProcessConfiguration(projectInfo.Uri);

var sets = proc.TypeFields;

string teamField = null;

foreach (var typeField in sets)
{
    if (typeField.Type == FieldTypeEnum.Team)
    {
        teamField = typeField.Name;
    }
}

I am sure the above code example can be tidied up and adapted a bit more.



Tags: TFS

Thursday, 21 May 2015 09:19:40 (GMT Daylight Time, UTC+01:00)  #    Comments [0]


# Tuesday, 20 January 2015

This was one that passed me by over the festive period. If you use the online version of TFS, you can now edit code and add new files to the repository entirely from the web interface.

http://www.visualstudio.com/en-us/news/2014-dec-17-vso

The above functionality does not appear to be included in Update 4 of the standalone version of TFS.



Tags: TFS | TFS Tools

Tuesday, 20 January 2015 18:25:21 (GMT Standard Time, UTC+00:00)  #    Comments [0]


# Thursday, 08 May 2014

This is more for my own reference. If you use ISubscriber in your applications, you may have noticed they’ve stopped working in Update 2 of TFS 2013 because they can’t find WorkItemChangedEvent.

This is now located in Microsoft.TeamFoundation.WorkItemTracking.Server which can be found here.

C:\Program Files\Microsoft Team Foundation Server 12.0\Application Tier\Web Services\bin\Microsoft.TeamFoundation.WorkItemTracking.Server.dll



Tags: TFS

Thursday, 08 May 2014 13:11:15 (GMT Daylight Time, UTC+01:00)  #    Comments [0]


# Tuesday, 09 July 2013

LeanKit, if you haven’t heard of it, is a popular Kanban board available as a service over the Internet. I’ve used it myself and seen it used on several client sites in the past. However, the one question I seem to be hearing lately is “can’t we use LeanKit for our tasks and TFS for code?”.

It’s a reasonable question. Team Foundation Server, depending on what version you are using, gives you quite a lot of functionality out of the box. You could argue that if you have TFS 2012 you wouldn’t need to use a separate Kanban board. However, just because you have a Kanban board doesn’t necessarily mean you’re writing code, and an organisation may have been set on LeanKit way before TFS was introduced.

So can we have both?
You certainly can, and one sleepy afternoon I decided to see if I could get the two to talk to each other. The initial integration between the two was a lot easier than I thought it would be; anyone who is familiar with REST and the TFS API would find it a breeze! And yes, it answered the question of whether it could be done, but it also raised several more questions!

  • Why?
  • What about conflicts?
  • Does this add value?

Now the “why” is probably self-explanatory from the paragraphs above. However, conflicts and “does this type of integration actually add value?” weren’t that straightforward. My conundrum wasn’t getting them to work together, but more what to do about someone changing something in TFS while someone else changes the same thing in LeanKit.

My Solution
So I came up with the following scenario.

“If you are storing tasks and editing them in LeanKit in the first place you’re probably not that likely to want to edit them in TFS? However when you check in your code you will probably want to associate your check in with a task you have been working on.”

While I appreciate the above probably doesn’t apply to everyone, this is the least-effort way I came up with of integrating the two that could still add value. There are several more that I will go into in more detail later.

Firstly I created a program that polls LeanKit for updates (I would have preferred an event service but couldn’t find one in the LeanKit API). I’ve called my simple polling app “LeanKit Kanban Caller”; it basically goes over the tasks in LeanKit and ensures they exist in TFS.

image

If a task doesn’t exist in TFS, the program creates one; otherwise it just updates any existing tasks if they are different.

image

The program knows which task to associate with LeanKit because I altered the default Product Backlog Item WIT from the Scrum 2.0 template in TFS 2012 to store the LeanKit ID (note this type of integration should also work with TFS 2010 on a different template).

image
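
As a rough sketch of how the polling app can decide whether a card already exists in TFS (the field reference name MyCompany.LeanKitId and the leanKitCardId variable are made up for illustration; your WIT customisation determines the real field), a WIQL query against the work item store does the job:

// Sketch only: look up a PBI by a custom LeanKit ID field.
var existing = workItemStore.Query(
    "SELECT [System.Id] FROM WorkItems " +
    "WHERE [System.WorkItemType] = 'Product Backlog Item' " +
    "AND [MyCompany.LeanKitId] = '" + leanKitCardId + "'");

if (existing.Count == 0)
{
    // Card not seen before: create a new PBI and stamp it with the LeanKit ID.
}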

For the purposes of my experiment I only mapped the title, description and size fields across, but mapping others is easy. I didn’t bother with the State of the item in TFS, partly because the state in LeanKit can be changed so easily by adding more columns, while in TFS it’s a question of editing the workflow on the WIT. While it is possible to map these to each other, it would end up making your board inflexible, and for the purposes of this scenario we’re not actually using that field.

I have assumed that in TFS everything is a PBI and child tasks are redundant for the purpose of this scenario.

Pros and Cons of my approach

Pros
  • The polling app can sit on any machine that has the Team Explorer client installed and has an Internet connection
  • You can associate code check-ins with tasks from your LeanKit board as they now exist in TFS
  • Using LeanKit is business as usual as no changes are made to your LeanKit board

Cons

  • Changing a task in TFS will not update the task in LeanKit
  • The states in TFS are not mapped to those in LeanKit
  • Analysis tools in TFS for the backlog have no use with this scenario.

How else could we have worked?
There are several other ways in which I could have made this solution work.

  • Treat TFS as the master and overwrite anything in LeanKit (no real value)
  • Allow updates to go both ways and make use of the datetime stamp to determine if TFS updates LeanKit or if LeanKit updates TFS (could lead to conflicts).
  • Make TFS and LeanKit responsible for different fields. This way both can update each other safe in the knowledge that no data will be overwritten.

Are there any other scenarios? If you have any suggestions I’d love to hear them.



Tags: LeanKit | TFS | TFS Tools

Tuesday, 09 July 2013 14:31:00 (GMT Daylight Time, UTC+01:00)  #    Comments [0]


# Thursday, 13 June 2013

A while ago I had to do a bit of work with the TFS Subscription service. The annoying problem was I needed to find a particular subscription, and the tools that came out of the box weren’t very helpful. I also wanted to keep the original subscription details but just add to them. So I created this little app.

image

This little app will enable you to

  • View TFS Event Subscriptions from a machine that has the TFS 2010 or TFS 2012 client installed
  • Create a new event subscription from an existing one.
  • Unsubscribe from an event

Usage

  • Type in your TFS URL. In most cases this would be http://tfs:8080/tfs; if you have several instances, ensure you place the correct instance onto the URL, as the above will usually give you the default instance. If you have an instance called “test”, the URL would be http://tfs:8080/tfs/test .
  • Type in your TFS username and password and hit the “List Subscriptions” button.
  • Select the subscription you’d like to look at and this will be shown in more detail in the bottom right hand section of the app called “Selected Event”
  • You can now unsubscribe or create a new event from the event you just selected. Ensure you change the event first or you may get an error from having duplicate events.

In the sample picture you can see I am using the app to make a change to the Scrum For Team System event service by adding more WITs to it.

Most of the credit for the example code for this app should go to Rene van Osnabrugge’s article found here; all I have done is throw a GUI on the front and add a few more functions that enable me to copy events.
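
If you want to skip the GUI entirely, the underlying API usage is quite small. A minimal sketch of listing subscriptions, based on the same API that article uses (written from memory, so verify the member names against your TFS client assemblies):

var tfs = new TfsTeamProjectCollection(new Uri("http://tfs:8080/tfs"));
tfs.EnsureAuthenticated();

//IEventService lives in Microsoft.TeamFoundation.Framework.Client
var eventService = tfs.GetService<IEventService>();

//Dump every subscription the server knows about
foreach (EventSubscription subscription in eventService.GetAllEventSubscriptions())
{
    Console.WriteLine("{0}: {1}", subscription.ID, subscription.EventType);
}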

Before using this app I would like to point out the following (pretty standard stuff)

  • Use of this app is completely at your own risk; neither I nor my employer accepts any responsibility whatsoever for any unintended effects as a result of its usage. If you are worried, I would recommend downloading the code and stepping through it.
  • The code is rough and ready (I created it in a hurry) and is provided as is; by all means look at the code before running the executable.
  • Please let me know if it was at all helpful

Download the Code

Download the TFS 2010 Executable Only 

Download the TFS 2012 Executable Only



Tags: TFS | TFS Tools

Thursday, 13 June 2013 14:21:06 (GMT Daylight Time, UTC+01:00)  #    Comments [2]


# Tuesday, 23 April 2013

A while ago I wrote an article on how I managed to get the test results from a build that was using PSAKE to surface in TFS’s summary view. I provided one solution to this problem in that article that would “just get you by” and have since been reminded that I said I would write another article on my second solution to the problem, which was the PSAKE Incubator.

It’s been a long time, so it’s taken me a while to recollect what I did, and I no longer have the old dev TFS server I originally got this working on. I will therefore provide what I have done below in order to help anyone else who is looking for a similar solution and needs a steer in the right direction.

Basically I had removed the need for an InvokeProcess in the build XAML workflow and created a new custom activity called PSAKE Incubator. This custom activity invokes the PSAKE PowerShell script natively, allowing you to make use of the TFS code activity context directly in your PSAKE PowerShell scripts. In addition, the PSAKE Incubator takes its parameters as an array and passes them through to PSAKE.

Below is the code I used to create the custom workflow activity; to compile it you will need to create a custom workflow activity project, and Ewald Hofman has a great example here. Please also note that if you do not want to modify PSAKE, if memory serves correctly, you may need to create a custom activity host to handle the console output from PSAKE. I hope this will be of use to someone; when I get some more time I’ll try to get a TFS box set up to test this out again.

Please note the code below is used at your own risk and I would advise testing on a dev environment before using on your production system.

If you do find the code below useful, please drop me a line and let me know how it went.

 
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Collections.ObjectModel;
using System.Management.Automation;
using System.Management.Automation.Runspaces;
using System.IO;
 
 
namespace TfsWorkflowActivities
{
    using System.Activities;
    using System.Collections;
    using System.Globalization;
 
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Build.Workflow.Activities;
    using Microsoft.TeamFoundation.Build.Workflow.Services;
 
    [BuildActivity(HostEnvironmentOption.All)]
    public sealed class PSAKEIncubator : CodeActivity
    {
 
        public InArgument<string> TFSURL { get; set; }
        public InArgument<string> TFSTeamProject { get; set; }
        public InArgument<string> TFSBuildURI { get; set; }
        public InArgument<Hashtable> LogTargets { get; set; }
        public InArgument<string> SourceDirectory { get; set; }
 
        public InArgument<string> BuildFile { get; set; }
        public InArgument<string[]> TaskList { get; set; }
        public InArgument<Hashtable> Parameters { get; set; }
        public InArgument<Hashtable> Properties { get; set; }
        public InArgument<string> PSAKEModuleDIR { get; set; }
        public InArgument<string> LogName { get; set; }
        public InArgument<string> OutPutDir { get; set; }
 
        public OutArgument<int> ExitCode { get; set; }
 
 
        protected override void Execute(CodeActivityContext context)
        {
            IActivityTracking activityTracking = context.GetExtension<IBuildLoggingExtension>().GetActivityTracking((ActivityContext) context);
 
            context.TrackBuildMessage("PSAKEIncubator Started", BuildMessageImportance.Low);
 
            string tfsURL = context.GetValue(this.TFSURL);
            string tfsTeamProject = context.GetValue(this.TFSTeamProject);
            string tfsBuildURI = context.GetValue(this.TFSBuildURI);
            string outPutDir = context.GetValue(this.OutPutDir);
            string sourceDirectory = context.GetValue(this.SourceDirectory);
            string buildFile = context.GetValue(this.BuildFile);
            string[] taskList = context.GetValue(this.TaskList);
 
 
            Hashtable properties = context.GetValue(this.Properties);
            Hashtable parameters = context.GetValue(this.Parameters);
 
            string pSAKEModuleDIR = context.GetValue(this.PSAKEModuleDIR);
            string logName = context.GetValue(this.LogName);
 
            
 
            string arguments = String.Format("$global:tfsUrl = '{0}'\n",tfsURL);
                   arguments+= String.Format("$global:tfsTeamProject = '{0}'\n", tfsTeamProject);
                   arguments+= String.Format("$global:TfsBuildUri = '{0}'\n", tfsBuildURI);
                   arguments+= String.Format("$global:logTargets = @{{}}\n");
                   arguments+= String.Format("$global:logTargets.Add('TfsBuild', @{{verbosity='Progress'}})\n");
                   arguments += String.Format("$global:logTargets.Add('LogFile', @{{verbosity='Debug'; logDir='{0}\\_Logs'; logFilename='build-output.log'}});\n", outPutDir);
                            
 
            context.TrackBuildMessage("Source Directory:" + sourceDirectory, BuildMessageImportance.Low);
            
            // Replace the line below with the location and name of the powershell script you use to kick off your PSAKE Build
            string scriptPath = string.Format("{0}\\run-psake.ps1", Path.Combine(sourceDirectory, "BuildAndDeploy\\buildframework"));
 
            context.TrackBuildMessage("Script Path:" + scriptPath, BuildMessageImportance.Low);
            List<CommandParameter> parametersArgument = new List<CommandParameter>();
 
 
            Hashtable parametersCleaned = new Hashtable();
 
            foreach (DictionaryEntry valHash in parameters)
            {
                if (valHash.Value == null)
                {
                    parametersCleaned.Add(valHash.Key, string.Empty);
                }
                else
                {
                    parametersCleaned.Add(valHash.Key, valHash.Value);
                }
            }
 
            Hashtable propertiesCleaned = new Hashtable();
 
            foreach (DictionaryEntry valHash in properties)
            {
                if (valHash.Value == null)
                {
                    propertiesCleaned.Add(valHash.Key, string.Empty);
                }
                else
                {
                    propertiesCleaned.Add(valHash.Key, valHash.Value);
                }
            }
 
 
            parametersArgument.Add(new CommandParameter("buildFile",buildFile));
            parametersArgument.Add(new CommandParameter("tasklist",taskList));
            parametersArgument.Add(new CommandParameter("outputDir",outPutDir));
            parametersArgument.Add(new CommandParameter("parameters", parametersCleaned));
            parametersArgument.Add(new CommandParameter("properties", propertiesCleaned));
            parametersArgument.Add(new CommandParameter("psakeModuleDir",pSAKEModuleDIR));
            parametersArgument.Add(new CommandParameter("logName",logName));
 
 
            context.TrackBuildMessage("PSAKEIncubator passing arguments to RunScript", BuildMessageImportance.Low);
            int exitResult = RunScript(scriptPath,arguments,context,parametersArgument.ToArray());
 
            context.TrackBuildMessage("PSAKEIncubator exit code " + exitResult, BuildMessageImportance.Low);
            context.SetValue<int>(this.ExitCode, exitResult);
        }
 
 
        public string InformationNodeId { get; set; }
 
 
 
        private int RunScript(string scriptFile, string scriptText, CodeActivityContext context, CommandParameter[] parameters)
        {
 
            // create Powershell runspace
            string encodedCommand = scriptText;
 
            //BespokePSHost bespokePSHost = new BespokePSHost();
        
            Runspace runspace = RunspaceFactory.CreateRunspace();
 
            IActivityTracking activityTracking = context.GetExtension<IBuildLoggingExtension>().GetActivityTracking((ActivityContext)context);
            InformationNodeId = activityTracking.Node.Id.ToString("D", (IFormatProvider)CultureInfo.InvariantCulture);
 
            // open it
            context.TrackBuildMessage("PSAKEIncubator Opening runspace", BuildMessageImportance.Low);
            runspace.Open();
            
            //We set these variables so they become available in our PSAKE powershell scripts.
            // You will need to ensure these variables exist in your scripts to make use of them.
            runspace.SessionStateProxy.SetVariable("InformationNodeId", InformationNodeId);
        
            //This enables us to make use of the CodeContext directly from within PSAKE
            runspace.SessionStateProxy.SetVariable("CodeContext", context);
            // create a pipeline and feed it the script text
 
            Pipeline pipeline = runspace.CreatePipeline();
 
            context.TrackBuildMessage("PSAKEIncubator adding encoded script to pipeline", BuildMessageImportance.Low);
            pipeline.Commands.AddScript(encodedCommand);
 
 
            Command parameterCommands = new Command(scriptFile);
 
            context.TrackBuildMessage("PSAKEIncubator adding parameters to pipeline", BuildMessageImportance.Low);
            foreach (CommandParameter item in parameters)
            {
                context.TrackBuildMessage("processing param name:" + item.Name, BuildMessageImportance.Low);
 
                context.TrackBuildMessage("processing param value:" + item.Value, BuildMessageImportance.Low);
 
                if ("System.Collections.Hashtable" == item.Value.GetType().ToString())
                {
                    Hashtable hashTable = (Hashtable)item.Value;                  
                    
                   
                    foreach (DictionaryEntry valHash in hashTable)
                    {
                        string valueType = "";
 
                        if (valHash.Value != null)
                        {
                            valueType = valHash.Value.GetType().ToString();
                        }
                        else
                        {
                            valueType = "NULL";
                            //valueType = valHash.Value.GetType().ToString();
                            //valHash.Value = string.Empty;
                        }
 
                        context.TrackBuildMessage("HashVal:" + valHash.Key + ":" + valHash.Value + "(" + valueType + ")", BuildMessageImportance.Low);
                    }
 
                    context.TrackBuildMessage("---End of HASH---", BuildMessageImportance.Low);
                    
                }
 
                parameterCommands.Parameters.Add(item);
            }
 
          
            pipeline.Commands.Add(parameterCommands);
            //pipeline.Commands.Add("exit $LastExitCode;");
            pipeline.Commands.Add("Out-String");
 
            // execute the script
 
            Collection<PSObject> results = pipeline.Invoke();
 
            // close the runspace
            context.TrackBuildMessage("PSAKEIncubator closing runspace", BuildMessageImportance.Low);
            runspace.Close();
 
            // convert the script result into a single string
 
            context.TrackBuildMessage("PSAKEIncubator iterating through results", BuildMessageImportance.Low);
            StringBuilder stringBuilder = new StringBuilder();
            foreach (PSObject obj in results)
            {
                //return Convert.ToInt32(obj);
                context.TrackBuildMessage(obj.ToString(), BuildMessageImportance.Low);
                //stringBuilder.AppendLine(obj.ToString());
            }
 
            return 0;
        }
    }
}


Tags: PSAKE | TFS

Tuesday, 23 April 2013 14:07:51 (GMT Daylight Time, UTC+01:00)  #    Comments [0]


# Tuesday, 26 March 2013

If like me you’ve written code that makes use of Microsoft.TeamFoundation.Framework.Server you may also have made use of Microsoft.TeamFoundation.Framework.Server.TeamFoundationApplication. This is the class that contains the TFS Logging method.

If you’ve recently ported your applications over to TFS 2012 or have updated your TFS to Update 1 you may have got the following error.

Could not load type 'Microsoft.TeamFoundation.Framework.Server.TeamFoundationApplication' from assembly 'Microsoft.TeamFoundation.Framework.Server

It appears Microsoft have changed the name of the class from Microsoft.TeamFoundation.Framework.Server.TeamFoundationApplication to Microsoft.TeamFoundation.Framework.Server.TeamFoundationApplicationCore

You’ll have to change your code to reference the renamed class in order for it to work again. I hope this helps someone.
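
In most cases it really is just the rename. For example (using a logging call for illustration; the exact Log* overloads vary between versions, so check against your assemblies):

//Before (TFS 2010 / TFS 2012 RTM):
TeamFoundationApplication.LogException("MyPlugin failed", exception);

//After (TFS 2012 Update 1):
TeamFoundationApplicationCore.LogException("MyPlugin failed", exception);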



Tags: TFS

Tuesday, 26 March 2013 17:46:16 (GMT Standard Time, UTC+00:00)  #    Comments [0]


# Wednesday, 27 February 2013

I’ve posted this more as a reminder to myself next time I create TFSJobAgent plugins.

If you ever encounter the error below when adding a new TFSJobAgent plugin:

TF53010: The following error has occurred in a Team Foundation component or extension:
Date (UTC): 2/25/2013 2:04:52 PM
Machine: VSTSR-VM2012RTM
Application Domain: TfsJobAgent.exe
Assembly: Microsoft.TeamFoundation.Framework.Server, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a; v4.0.30319
Service Host:
Process Details:
  Process Name: TFSJobAgent
  Process Id: 8996
  Thread Id: 6656
  Account name: NT AUTHORITY\LOCAL SERVICE

Detailed Message: There was an error during job agent execution. The operation will be retried. Similar errors in the next five minutes may not be logged.
Exception Message: An item with the same key has already been added. (type ArgumentException)
Exception Stack Trace:    at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)
   at Microsoft.TeamFoundation.Framework.Server.TeamFoundationExtensionUtility.LoadExtensionTypeMap[T](String pluginDirectory)
   at Microsoft.TeamFoundation.Framework.Server.JobApplication.SetupInternal()
   at Microsoft.TeamFoundation.Framework.Server.JobServiceUtil.RetryOperationsUntilSuccessful(RetryOperations operations, Int32& delayOnExceptionSeconds)

The above error was caused by me having more than one copy of my component in the plugins folder:

C:\Program Files\Microsoft Team Foundation Server 11.0\Application Tier\TFSJobAgent\plugins

You’re probably wondering how this is possible. I actually had another folder inside the plugins folder with a copy of my component. The TFSJobAgent uses reflection to find all components that implement the ITeamFoundationJobExtension interface at runtime, and this includes all subfolders under the plugins directory.



Tags: TFS

Wednesday, 27 February 2013 09:30:22 (GMT Standard Time, UTC+00:00)  #    Comments [0]


# Monday, 25 February 2013

There are some really useful new tools for administrators in TFS 2012. I’ve only just seen this now on Grant Holliday's blog, but you can now do things such as look at the TFS Activity Log, and one of the most important ones for me is TFS Job Monitoring.

You can get to all of these from your TFS installation by going to the following link.

http://your-server:8080/tfs/_oi/

For more information visit Grant's blog. Enjoy!



Tags: TFS

Monday, 25 February 2013 14:14:57 (GMT Standard Time, UTC+00:00)  #    Comments [0]


# Friday, 01 February 2013

If you haven’t seen it already, check it out here. TFS will host Git repositories (100% compatible); see Brian Harry’s blog for more details and links to tutorials.

http://blogs.msdn.com/b/bharry/archive/2013/01/30/git-init-vs.aspx 

There is also more about it on the Team Foundation Blog



Tags: TFS

Friday, 01 February 2013 16:28:05 (GMT Standard Time, UTC+00:00)  #    Comments [0]


# Tuesday, 29 January 2013

TFS has some pretty powerful functionality out of the box; one example is the merge functionality. However, I’ve always felt it missed what I thought was a rather nice feature to have: the ability to see a complete list of PBIs that would be affected by a merge from one branch to another. Now you could argue that you get this anyway by just looking at your PBIs. However, if you find yourself cherry-picking changesets from one branch to the other, e.g. you want to use a finished component another team has just completed in your own branch, or you only want to release a certain PBI to live, it’s nice to know if that PBI turns up.

I must point out before continuing that this approach is far from infallible and relies on good housekeeping on the part of your developers, i.e. that they associate their checked-in changesets with work items in TFS. You could also argue that TFS already gives you the PBIs associated with a changeset. It does, but you have to go through a bit of pain to actually get to them: you open each changeset, find the work items associated with that changeset, and then the PBI associated with each work item.

The Code
I have put this together as a console app. You will need to reference the following in your app in order for this to work. Be sure, if you have both the TFS 2010 and TFS 2012 clients installed, that you do not mix DLL versions!

  • Microsoft.TeamFoundation.Client
  • Microsoft.TeamFoundation.VersionControl.Client
  • Microsoft.TeamFoundation.WorkItemTracking.Client

Firstly we just do a bit of setup such as the TFS server URL and credentials, the from and to branches etc.

//The url of your tfs server
// Don't forget if you have more than one collection you will have to indicate that in the URL!
private const string TfsServer = "http://localhost:8080/tfs";

//Replace these with your tfs username and password
private const string TfsUserName = "tfsusername";
private const string TfsPassword = "tfspassword";

//This is the branch that you are merging from
private const string FromBranch = "$/TestSource/Dev";

//This is the branch you are merging to.
private const string ToBranch = "$/TestSource/Main";

//In my TFS the PBI is called a Product Backlog Item; it may be called
//something else depending on the template you use.
const string ProductBackLogItem = "Product Backlog Item";
const string SprintBacklogTask = "Sprint Backlog Task";

static readonly List<int> WorkItemCache = new System.Collections.Generic.List<int>();

static WorkItemStore workItemStore;

static TfsTeamProjectCollection tfsTeamProjectCollection;

static TfsTeamProjectCollection GetTeamProjectCollection()
{
    var tfsTeamProjectCollection = new TfsTeamProjectCollection(
        new Uri(TfsServer),
        new System.Net.NetworkCredential(TfsUserName, TfsPassword));
    tfsTeamProjectCollection.EnsureAuthenticated();

    return tfsTeamProjectCollection;
}

static void Main(string[] args)
{
    tfsTeamProjectCollection = GetTeamProjectCollection();

    //First we get the version control server.
    var versionControl = tfsTeamProjectCollection.GetService<VersionControlServer>();

    workItemStore = new WorkItemStore(tfsTeamProjectCollection);

    //Second we get a list of merge candidates between our two branches (very simple)
    var mergeCandidates =
        versionControl.GetMergeCandidates(FromBranch, ToBranch, RecursionType.Full);

    //Thirdly we get a list of work items from our changesets (using some recursion)
    var workItems = GetWorkItemsForChangesets(mergeCandidates);

    //And last we output these all to the screen
    foreach (var workItem in workItems)
    {
        Console.WriteLine(string.Format("{0} {1}", workItem.Id, workItem.Title));
    }

    Console.WriteLine("Complete");
    Console.ReadLine();
}

We get our TFS collection first and then pull out our changesets from TFS using the VersionControlServer; as you can see, TFS gives us this functionality straight out of the box.

Next we go and get our work items from TFS by iterating through our merge candidates and then the work items. If we find a PBI at this level we store it. If we find Tasks, we then look inside them for more PBIs.

//In the example below I have deliberately not used LINQ statements so you can see what is happening more clearly.
//These lines of code can quite easily be condensed.
static IEnumerable<WorkItem> GetWorkItemsForChangesets(IEnumerable<MergeCandidate> mergeCand)
{
    var workItems = new List<WorkItem>();

    foreach (var itemMerge in mergeCand)
    {
        var changeSet = itemMerge.Changeset;

        var workItemsCollection = changeSet.WorkItems;

        foreach (WorkItem item in workItemsCollection)
        {
            if (ProductBackLogItem == item.Type.Name)
            {
                if (!WorkItemCache.Contains(item.Id))
                {
                    WorkItemCache.Add(item.Id);
                    workItems.Add(item);
                }
            }

            if (item.WorkItemLinks.Count > 0 && SprintBacklogTask == item.Type.Name)
            {
                WorkItemCache.Add(item.Id);
                var collectedWorkItems = GetProductBacklogItems(item);

                if (collectedWorkItems != null && collectedWorkItems.Count > 0)
                {
                    workItems.AddRange(collectedWorkItems);
                }
            }
        }
    }

    return workItems;
}

The code below will take a Task item and check its links for a parent item such as a PBI.

static List<WorkItem> GetProductBacklogItems(WorkItem workItem)
{
    var workItems = new List<WorkItem>();

    foreach (WorkItemLink workItemLinks in workItem.WorkItemLinks)
    {
        //We only want parent items so we look for "Implements"
        if (workItemLinks.TargetId > 0 && (workItemLinks.LinkTypeEnd.Name == "Implements"))
        {
            var tempWorkItem = workItemStore.GetWorkItem(workItemLinks.TargetId);

            if (ProductBackLogItem == tempWorkItem.Type.Name)
            {
                if (!WorkItemCache.Contains(tempWorkItem.Id))
                {
                    WorkItemCache.Add(tempWorkItem.Id);
                    workItems.Add(tempWorkItem);
                }
            }
        }
    }

    return workItems;
}

That's all there is to it. I’m pretty sure the code can be refactored to work better, and I would urge care when using it: searching through work items can take a lot of time, so if you do end up using it, try it out on a test TFS or run your code within a debugger so you can see what it’s doing.



Tags: TFS

Tuesday, 29 January 2013 13:03:54 (GMT Standard Time, UTC+00:00)  #    Comments [0]


# Monday, 28 January 2013

I was recently brought into a client site where they had made use of PSAKE to handle their build process. The build would be kicked off from the traditional Workflow in TFS using an Invoke Process. Everything was working perfectly until they spotted that, when the build failed, there was no way of viewing which unit tests had failed from within TFS. In short, PowerShell was giving precious little to the TFS summary view.

The question was: how could we get the rich logging information you get in the build summary when doing a traditional build using Workflow? Setting up a traditional build and observing how MSBUILD is called from TFS starts to shed some light on the situation:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe /nologo /noconsolelogger "C:\Builds\1\Scratch\Test Build\Sources\user\Test\Build.proj" /m:1 /fl /p:SkipInvalidConfigurations=true /p:OutDir="C:\Builds\1\Scratch\Test Build\Binaries\\" /p:VCBuildOverride="C:\Builds\1\Scratch\Test Build\Sources\user\Test\Build.proj.vsprops" /dl:WorkflowCentralLogger,"C:\Program Files\Microsoft Team Foundation Server 2010\Tools\Microsoft.TeamFoundation.Build.Server.Logger.dll";"Verbosity=Normal;BuildUri=vstfs:///Build/Build/111;InformationNodeId=6570;TargetsNotLogged=GetNativeManifest,GetCopyToOutputDirectoryItems,GetTargetPath;TFSUrl=http://mytfshost:8080/tfs/Test%20Collection;"*WorkflowForwardingLogger,"C:\Program Files\Microsoft Team Foundation Server 2010\Tools\Microsoft.TeamFoundation.Build.Server.Logger.dll";"Verbosity=Normal;"

 

The /dl:WorkflowCentralLogger section of the command line above is, I discovered, responsible for the summary view you usually see when kicking off a build from TFS. I worked this out with a bit of guesswork and some reflector usage to see what was going on inside MSBUILD. Googling for the WorkflowCentralLogger gives precious little back about how it works, and more about the errors people have encountered with it.

Getting to the solution
You would be forgiven for thinking the answer to the problem is just adding the missing WorkflowCentralLogger switch (with arguments) to your MSBUILD command line in PowerShell/PSAKE. Sadly it’s not that simple. See the InformationNodeId in the above command line? This appears to tell the WorkflowCentralLogger where it needs to append its logging information. Passing it into the Invoke Process was my first thought; the problem is you’re not going to find anything that will give it to you. I wasn’t able to find it anywhere.

So how do you get it to work then?
The answer is, you need to build a Custom Workflow Activity, which will have access to the current context. To do this you need to inherit from the class “CodeActivity”. It’s up to you how you use this Custom Workflow Activity; you have one of two ways:

  • Place it above the Invoke Process in your workflow, get the InformationNodeId and pass this as an OutArgument to the Invoke Process below it (not tested fully)
  • Or invoke PowerShell from within the Custom Activity using a runspace and pass it the code context. (fully tested)
   1:   
   2:   
   3:  namespace MyWorkflowActivities
   4:  {
   5:      using System;
   6:      using System.Collections.Generic;
   7:      using System.Linq;
   8:      using System.Text;
   9:      using System.Collections.ObjectModel;
  10:      using System.Management.Automation;
  11:      using System.Management.Automation.Runspaces;
  12:      using System.IO;
  13:      using System.Activities;
  14:      using System.Collections;
  15:      using System.Globalization;
  16:   
  17:      using Microsoft.TeamFoundation.Build.Client;
  18:      using Microsoft.TeamFoundation.Build.Workflow.Activities;
  19:      using Microsoft.TeamFoundation.Build.Workflow.Services;
  20:   
  21:      [BuildActivity(HostEnvironmentOption.All)]
  22:      public sealed class GetInformationNodeId : CodeActivity
  23:      {
  24:          //The OutArgument that hands the id to the rest of the workflow
  25:          public OutArgument<string> InformationNodeIdOut { get; set; }
  26:   
  27:          protected override void Execute(CodeActivityContext context)
  28:          {
  29:              context.TrackBuildMessage("Getting the Information Node Id", BuildMessageImportance.Low);
  30:              IActivityTracking activityTracking = context.GetExtension<IBuildLoggingExtension>().GetActivityTracking((ActivityContext) context);
  31:              string informationNodeId = activityTracking.Node.Id.ToString("D", (IFormatProvider)CultureInfo.InvariantCulture);
  32:   
  33:              context.SetValue<string>(this.InformationNodeIdOut, informationNodeId);
  34:          }
  35:      }
  36:  }

The code above illustrates the first solution. It’s a lot simpler, and you’ll have to pass that node id to MSBUILD when you construct its command line in PowerShell. Lines 30 and 31 are where all the magic takes place; I managed to find these using reflector on MSBUILD. If you have never written a custom activity before, Ewald Hofman has a short summary of one here

The diagram below illustrates where GetInformationNodeId (code above) sits just above the InvokeProcess which calls PowerShell.

 

image

The second solution, which I actually went with, is slightly more complex and I’ll blog about how I did that in another article. You might be wondering what the immediate benefits of one over the other are. The beauty of going with the second solution is that you can make use of the code activity context within your PowerShell scripts. So, for example, instead of writing your PowerShell events out to the host you could wrap that call in context.TrackBuildMessage (as illustrated on line 29 above).
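
To give a flavour of that, a PSAKE task could log straight to the build summary roughly like this (a sketch that assumes a CodeContext variable has been injected into the runspace; note TrackBuildMessage is an extension method, so PowerShell has to call it through its static class, whose name here is from memory):

task LogToTfs {
    # $CodeContext is the CodeActivityContext handed in via SessionStateProxy.SetVariable
    [Microsoft.TeamFoundation.Build.Workflow.Activities.CodeActivityContextExtensions]::TrackBuildMessage(
        $CodeContext,
        "Hello from the PSAKE build",
        [Microsoft.TeamFoundation.Build.Client.BuildMessageImportance]::Normal)
}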

I’d be interested to hear about other peoples experiences.



Tags: PSAKE | TFS

Monday, 28 January 2013 15:51:49 (GMT Standard Time, UTC+00:00)  #    Comments [0]