# Wednesday, July 31, 2019

I’ve posted this mostly for my own reference so I don’t forget it again. Entity Framework refused to add a migration, and I tried a number of things to get around the error below:

System.Data.Entity.Core.ProviderIncompatibleException: An error occurred accessing the database. This usually means that the connection to the database failed. Check that the connection string is correct and that the appropriate DbContext constructor is being used to specify it or find it in the application's config file. See http://go.microsoft.com/fwlink/?LinkId=386386 for information on DbContext and connections. See the inner exception for details of the failure. ---> System.Data.Entity.Core.ProviderIncompatibleException: The provider did not return a ProviderManifestToken string. ---> System.ArgumentException: Could not determine storage version; a valid storage connection or a version hint is required.

I eventually discovered the issue was caused by the Start Up project in Visual Studio not being set to the project I was trying to run the Entity Framework migration against. To get around this, right-click on the project in Visual Studio, select “Set as startup project”, then run your migration again; like magic, all should work.


Wednesday, July 31, 2019 12:20:21 PM (GMT Daylight Time, UTC+01:00)

# Wednesday, April 18, 2018

I have used in-memory EF tests, but they do have some limitations. In the past I have also mocked out the Entity Framework part of my tests, but that can only take you so far, especially if you want some confidence that the EF statements you put together are working correctly with your code.

So here’s the problem I am solving

  • I want a full integration test of code down to the database
  • I need the ability to reset the database being used each time so my tests are built up from scratch.

How I solved it.

  • I made use of MS SQL Express LocalDB. This is a lightweight version of SQL Server targeted at developers; it contains only the minimal set of files needed to start a SQL Server database, instead of needing a full SQL Server instance.
  • A base class used by my Integration test MSTest class.

Test Base Class

The class below is used by my MSTest class. It takes care of dropping the database if it exists and creating it again. It is by no means perfect; if there are better ways, I am open to recommendations. The class is also hardcoded to my db context, in this case MyDBContext.

```csharp
using System;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.Migrations;
using System.Data.SqlClient;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace My.IntegrationTests
{
    public class TestDatabase
    {
        protected static MyDBContext _dbContext;
        protected static string databaseName = "MyTestDB";
        protected static string databasePath;
        protected static string databaseLogFilePath;
        protected static string dbConnectionString;

        public TestDatabase(string databaseNameSet)
        {
            databaseName = databaseNameSet;
        }

        public static void SetUpTest(TestContext context)
        {
            databasePath = Path.Combine(context.DeploymentDirectory, databaseName + ".mdf");
            databaseLogFilePath = Path.Combine(context.DeploymentDirectory, databaseName + ".ldf");
            dbConnectionString = @"server=(localdb)\v11.0;Database=" + databaseName;

            DropSqlDatabase();
            CreateSqlDatabase();

            _dbContext = new MyDBContext();

            // Basically we are creating a database on the fly and we want EF to init
            // the database for us and to update it with the latest migrations.
            // We do this by enabling automatic migrations first, then we give it the
            // connection string to the new database we have created for the purpose.
            DbMigrationsConfiguration configuration = new DbMigrationsConfiguration<MyDBContext>();
            configuration.AutomaticMigrationsEnabled = true;
            configuration.TargetDatabase = new DbConnectionInfo(dbConnectionString, "System.Data.SqlClient");
            var migrator = new DbMigrator(configuration);
            migrator.Update();
        }

        private static void DropSqlDatabase()
        {
            // Note: We do not care if we get a SQL Server exception here, as the DB file
            // it is looking for is probably long gone.
            try
            {
                SqlConnection connection = new SqlConnection(@"server=(localdb)\v11.0");
                using (connection)
                {
                    connection.Open();

                    string sql =
                        string.Format(
                            @"alter database [{0}] set single_user with rollback immediate; IF EXISTS(select * from sys.databases where name='{0}') DROP DATABASE {0}",
                            databaseName);

                    SqlCommand command = new SqlCommand(sql, connection);
                    command.ExecuteNonQuery();
                    connection.Close();
                }
            }
            catch (SqlException)
            {
                // Yeah yeah I know!
                //throw;
            }
        }

        private static void CreateSqlDatabase()
        {
            SqlConnection connection = new SqlConnection(@"server=(localdb)\v11.0");
            using (connection)
            {
                connection.Open();

                string sql = string.Format(@"
                    CREATE DATABASE
                        [{2}]
                    ON PRIMARY (
                        NAME = Test_data,
                        FILENAME = '{0}'
                    )
                    LOG ON (
                        NAME = Test_log,
                        FILENAME = '{1}'
                    )",
                    databasePath, databaseLogFilePath, databaseName);

                SqlCommand command = new SqlCommand(sql, connection);
                command.ExecuteNonQuery();
                connection.Close();
            }
        }
    }
}
```

The MSTest Class

This is where we do the actual testing. Below I have created a hypothetical test checking whether userA can get access to userB’s organisation. For the test to work we need to create these organisations in our database first, with their various users. When we do this we also make sure that the organisations don’t already exist in the database; if they do, we delete them so our test starts from scratch.

```csharp
using System;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace My.IntegrationTests
{
    [TestClass]
    public class SessionSummaryTests : TestDatabase
    {
        // We name our test database
        public SessionSummaryTests() : base("SessionUnitTestDB3")
        {
        }

        public SessionSummaryTests(string databaseNameSet) : base(databaseNameSet)
        {
        }

        [ClassInitialize]
        public static void SetUp(TestContext context)
        {
            SetUpTest(context);
        }

        /// <summary>
        /// Hypothetical test case. We test making sure user A cannot get access to user B's organisation.
        /// </summary>
        [TestMethod]
        public void CheckIfUserACanAccessUserBsOrganisation()
        {
            // **** Start test scaffold
            string userAccountA = "userA";
            string userAccountB = "userB";

            var orgs = _dbContext.Organisation.Where(x => x.OwnerEmail.Equals(userAccountA) || x.OwnerEmail.Equals(userAccountB));

            if (orgs.Any())
            {
                _dbContext.Organisation.RemoveRange(orgs);
            }

            _dbContext.SaveChanges();

            _dbContext.Organisation.Add(new OrganisationModel()
            {
                Name = "The Organisation A",
                OwnerEmail = userAccountA,
            });

            _dbContext.Organisation.Add(new OrganisationModel()
            {
                Name = "The Organisation B",
                OwnerEmail = userAccountB,
            });

            _dbContext.SaveChanges();

            var orgA = _dbContext.Organisation.FirstOrDefault(x => x.OwnerEmail.Equals(userAccountA));
            var orgIdA = orgA.Id;

            var orgB = _dbContext.Organisation.FirstOrDefault(x => x.OwnerEmail.Equals(userAccountB));
            var orgIdB = orgB.Id;

            _dbContext.UserDetails.Add(new UserDetailsModel()
            {
                Email = userAccountA,
                FirstName = "User1FirstName",
                LastName = "User1LastName",
                OrganisationId = orgIdA
            });

            _dbContext.UserDetails.Add(new UserDetailsModel()
            {
                Email = userAccountB,
                FirstName = "User2 FirstName",
                LastName = "User2 LastName",
                OrganisationId = orgIdB
            });

            _dbContext.SaveChanges();

            // *** End of test scaffold

            // Our actual test
            var result = OrganisationMethods.GrantAccess(orgIdB, userAccountA, _dbContext);

            Assert.AreEqual(false, result);
        }
    }
}
```

A Few Notes

You may have noticed that if I had encapsulated the database access and used dependency injection, I could have mocked out the DB implementation. But the purpose of the test was to ensure everything worked correctly, right down to the database.

My TestDatabase class ignores an exception (sinful, I know). I have had various issues here, especially when the DB does not exist; which is fine, as we don’t want it to exist. But once again, I am open to recommendations.
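One possible alternative to swallowing the exception, sketched below, is to let the IF EXISTS check guard the ALTER DATABASE statement as well, so dropping a database that is not there becomes a no-op. The helper only builds the SQL text; the class and method names are hypothetical.

```csharp
using System;

public static class DropSqlSketch
{
    // Hypothetical sketch: the IF EXISTS check wraps both statements, so running
    // this against a server with no such database does nothing instead of throwing.
    public static string BuildDropSql(string databaseName)
    {
        return string.Format(
            @"IF EXISTS (SELECT * FROM sys.databases WHERE name = '{0}')
BEGIN
    ALTER DATABASE [{0}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE [{0}];
END", databaseName);
    }
}
```

This keeps DropSqlDatabase free of the empty catch block, although a failed connection can of course still throw.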

Tags: TDD | Visual Studio | VSTS

Wednesday, April 18, 2018 11:58:18 AM (GMT Daylight Time, UTC+01:00)

# Friday, December 8, 2017

This is more for my own reference, but has proved rather useful. The problem it solves for me is that when automated Selenium tests run and fail, it’s usually quite a task to figure out what went wrong. The best way around this is to take a screenshot of the issue. However, taking screenshots can leave you with a folder on a test server full of images you have to look through to find your test result image. The better option is to attach the screenshot your test takes when it fails to the results of the currently running test.

Below is how to do this with MSTest running in VSTS. All of our Selenium tests run as part of a timed VSTS Release Hub release twice a day.

```csharp
[TestMethod]
public void BurnUp_86_CheckIterationPathsLoad()
{
    bool isLoaded = false;

    try
    {
        _selenium.ShowSelectedData();
        _selenium.ClickIterationPath();
        isLoaded = _selenium.CheckIterationPathsLoad();
    }
    catch (Exception)
    {
        var testFilePath = _selenium.ScreenGrab("BurnUp_86_CheckIterationPathsLoadERROR");
        AttachScreenShotFileToTestResult(testFilePath);
        throw;
    }

    Assert.IsTrue(isLoaded);
}

public TestContext TestContext { get; set; }

public void AttachScreenShotFileToTestResult(string screenShotPath)
{
    try
    {
        if (!string.IsNullOrEmpty(screenShotPath))
        {
            TestContext.AddResultFile(screenShotPath);
        }
    }
    catch (Exception)
    {
        // We don't want to stop the tests because we can't attach a file,
        // so we let it go... let it go... let it go...
    }
}
```

Let’s take a moment to step through the test method above, BurnUp_86_CheckIterationPathsLoad(). The test body is wrapped in a try/catch. I keep all of my Selenium functionality in a separate class so it is abstracted and encapsulated away from the actual unit tests; this helps greatly with maintaining my tests, as I only need to touch the Selenium classes if, for example, the page layout changes. This class has a base class where I keep functionality common to all tests, such as the ScreenGrab function found inside _selenium (more on this later).

If my test fails, the catch block catches the exception; this is where I take a screen grab of the issue and then allow the original exception to bubble up. After taking the screen grab, I attach it to the current running test’s results using the AttachScreenShotFileToTestResult function. You can see inside this function that I don’t care if it fails to attach the screenshot to the test results; I’d rather the rest of the tests continue to run. (I can almost sense the shock from my fellow developers.) The key piece of functionality to take away here is TestContext.AddResultFile, which is given the path where we saved our screen grab in the previous step.

So what about that screen grab functionality?

Selenium has had the ability to take screenshots for a while. Below is the function in my _selenium class that takes the screenshot using the current instance of IWebDriver.

```csharp
public class SeleniumBase
{
    protected IWebDriver driver;

    public string ScreenGrab(string test)
    {
        string baseDirectory = "C:\\UITests";
        string screenGrabs = Path.Combine(baseDirectory, $"{DateTime.Now:yyyy-MM-dd}");

        // Create these folders if not present
        if (!Directory.Exists(baseDirectory))
        {
            Directory.CreateDirectory(baseDirectory);
        }

        if (!Directory.Exists(screenGrabs))
        {
            Directory.CreateDirectory(screenGrabs);
        }

        string filename = Path.Combine(screenGrabs, $"{test}-{DateTime.Now:yyyy-MM-dd_hh-mm-ss-tt}.png");

        try
        {
            Screenshot ss = ((ITakesScreenshot)driver).GetScreenshot();
            ss.SaveAsFile(filename, System.Drawing.Imaging.ImageFormat.Png);
        }
        catch (Exception)
        {
            // We swallow the exception because we want the tests to continue anyway.
            // Taking a screenshot was just a nice-to-have.
            return string.Empty;
        }

        return filename;
    }
}
```

So what does the result look like?

Below are the results from one of our automatic test runs, run for us by Visual Studio Team Services Release Hub. If I click on the test that failed, you can see in the results section that an attachment has been added: the screen grab we took when the test failed.


Got a better way of doing the above? Or would you like to recommend some changes? Don’t be shy: leave a comment, I’d love to hear from you.

Tags: Selenium | TFS Tools | VSTS

Friday, December 8, 2017 1:41:01 PM (GMT Standard Time, UTC+00:00)

# Sunday, January 22, 2017

Yesterday I migrated one of our TFS collections to VSTS using Microsoft’s migration guide for moving from TFS to VSTS. I won’t lie: it was a pretty long process, and it took a lot of going back and forth to make sure I fully understood the guide, a PDF 58 pages long. The guide comes with several checklists of things you need to check and prep before your migration.

A very rough outline of what happens: you run a check against your TFS instance using the tool provided to ensure everything is exportable; if there are problems, you fix them following suggestions from the tool, then run the check again until you are ready to go. Next you run a prep that generates some files you will need to map your users across, followed by making a database backup as a DACPAC package and entering your import invite codes (provided by Microsoft). These are then uploaded to an Azure storage account, and you kick off the migration process, which uses these assets to import your data into a brand new VSTS instance.

I won’t go into details about how to do the migration, as this is covered in the guide. However, I will highlight some things you should take into account before you migrate from TFS to VSTS, which is done using a tool provided with the guide called TFSMigrator.

Azure Active Directory

You are going to have to make sure you have this in place, or have at least thought about it. If you use Active Directory in your organisation, a good thing to look at is replicating it to Azure; your migration is going to need this. If you are not using Active Directory, but just accounts on the box as I was for this migration, you can easily map these across to Azure Active Directory accounts. If you have Office 365, you already have access to an Azure Active Directory setup (depending on your subscription) and can make use of it. The reason Azure Active Directory is important is that this is how VSTS will authenticate your users once you have migrated across to VSTS.

Plan for some downtime to make backups

Even when doing a test migration, as I did, you need to plan for some downtime. One of the reasons for this is that you will need to generate a DACPAC of your TFS collection. In order to do this you have to take the TFS collection offline and then detach it from TFS. If you have not done this before, you may be put off by the ominous warnings from the TFS Admin Console asking you to tick a box stating you have made a backup of your TFS databases.

After you have detached your TFS Collection and made a DACPAC of it, you can then reattach your collection so your team can continue working as usual.

Learn what a DACPAC is

Yes, I had never used one before. The guide gives you some details, with a sample command line for creating one. DACPAC is short for Data-tier Application Package. These are generated from SQL Server itself, and are basically a way of exporting your whole TFS collection database with everything it needs to be re-created: “tables, views, and instance objects, including logins – associated with a user’s database”. The DACPAC package is uploaded to an Azure storage blob that the migration tool uses.

Learn about Azure Storage Accounts and SAS

While I have used Azure storage accounts before, I found this part quite complicated, and it took me a while to get it right. Basically, the DACPAC package you create from your TFS collection database gets uploaded to an Azure storage account, along with a mapping file for user accounts. The hardest part I found was working out how to create an SAS token URL to where I had stored these in the storage account. The guide provides a link to some PowerShell you can use to generate this URL for you. I am not sure why Azure couldn’t create this link for you (I did try), but the PowerShell provided worked first time.

Azure PowerShell tools

Make sure you have the Azure PowerShell tools installed; you will need them to run the PowerShell that generates an SAS token URL to your Azure storage account (see above).

Final Notes

I would recommend reading the guide fully before getting started. Also note that you currently have to request an import code in order to use the service. You will get two of these: one is for a dry run to ensure it works, and the other is for your production import, which you use once you are fully committed and confident that everything went to plan in the dry run.

Tags: TFS | VSTS

Sunday, January 22, 2017 11:18:16 AM (GMT Standard Time, UTC+00:00)

# Thursday, December 8, 2016

It has been a while since I last blogged about MS Visual Studio Team Services Release Hub. The last time I blogged about it, the product was very rough around the edges and quite a few of its parts were in early preview.

Release Hub has matured quite a bit since then; however, deploying to production or test environments in the real world can be considerably different from the examples of using Release Hub available online. The items I find many customers have difficulty with, where the documentation is sparse or spread across many older versions of the product, are:

  • Tokenisation – it’s surprising how difficult this can sometimes be, and documentation following the whole process from start to finish isn’t readily available. (I will cover some of this in this article.)
  • WINRM – setting up Windows Remoting, which is used for many of the deployment tasks, is easy on a test environment. On a production environment, where everything has to be carefully managed through change control processes, it can be more challenging. (I hope to cover some of this in a future article.)

The sample I have put together here is more for my own reference. But if you have any suggestions or improvements I would love to hear from you. I will try to expand on this article a bit more with more examples that don’t fit the norm in future blog articles.

The scenario I am going through here is an ASP.NET website that is created from a build and that same build needs to be deployed to more than one environment with its configuration changed for each environment.

The steps involved will be to

  1. Prepare an ASP.NET project for web deployment and tokenisation
  2. Prepare a build to produce the assets needed for deployment
  3. Create a Release that consumes the build mentioned above and replaces configuration variables based on the environment being deployed to

Preparing an existing ASP.NET web project for Release Hub

Before you can deploy a web project you need to prepare it for deployment. You may have already used this functionality to deploy directly to an Azure website from Visual Studio, or to an on-premises server. The same functionality can also be used to create the deployment packages we will use later with Release Hub.

Step 1
Right-click on your web project and select Publish. Don’t worry, this won’t publish your site, but it will let us set up a deployment profile for it that we will use later.


Step 2
From the dropdown that appears select “New Custom Profile” and type in a name for your new profile and select Ok. In this case our profile is called “Website1WebPackage”


Step 3
In the dropdown box that appears next, select “Web Deploy Package” and type in a name for your deployment package. We will be using MS Web Deploy to deploy our site later, but in order to do that we need to set our site up to create a deployment package. In addition, we are putting in a token called __SITENAME__; this will be replaced at deployment time when we actually deploy our application. I will talk more about this later.


Step 4
Here the publish wizard will display any database connection strings, which you can also replace with tokens of your own. Tokens start with “__”, end with “__”, and are in capitals.
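As an aside, the token convention is simple enough to sketch in a few lines. The class below is only an illustration of what a tokenisation task does to a SetParameters file at release time, not the actual implementation of any marketplace task:

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class TokenSketch
{
    // Illustration only: replaces __NAME__ placeholders with values from a
    // dictionary, leaving unknown placeholders untouched.
    public static string ReplaceTokens(string content, IDictionary<string, string> values)
    {
        return Regex.Replace(content, "__([A-Z0-9_]+?)__", match =>
            values.TryGetValue(match.Groups[1].Value, out var value) ? value : match.Value);
    }
}
```

At release time the tokenisation task does this substitution against the SetParameters file, with the values coming from the Variables tab of the release.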


Step 5
You can now hit the Publish button. All this does is create a Web Deploy zip package in the root of your project. You should now see the following files:


We are only really interested in website1webpackage.SetParameters.xml and website1webpackage.zip. These files will be generated on each build when the correct switches are used. If you open the SetParameters file, you will notice it contains the tokens we created earlier.


Step 6
In the root of your web project, create a parameters.xml file. You will see that in our parameters file we are using an XPath match to replace settings in our web.config file. The first entry looks for a database connection string called DefaultConnection and says that when you find its value, replace it with __DBCONNECTION__. We are doing the same with another key in our web.config called MailAddress.

```xml
<?xml version="1.0" encoding="utf-8" ?>
<parameters>
  <parameter name="DefaultConnection" description="DB Connection" defaultValue="__DBCONNECTION__" tags="">
    <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/connectionStrings/add[@name='DefaultConnection']/@connectionString" />
  </parameter>
  <parameter name="EmailAddress" description="MailAddress" defaultValue="__EMAILADDRESS__" tags="">
    <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/appSettings/add[@key='MailAddress']/@value" />
  </parameter>
</parameters>
```

You can see how the parameters above relates to the web.config below


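If you want to see what that XPath actually selects, the sketch below applies the same expression to a minimal stand-in web.config. The class and method names are made up for illustration; Web Deploy does this work for you at deployment time.

```csharp
using System;
using System.Xml;

public static class XPathSketch
{
    // Applies the same XPath the parameterEntry uses to a minimal web.config,
    // replacing the connection string value the way the parameter match would.
    public static string SetConnectionString(string webConfigXml, string newValue)
    {
        var doc = new XmlDocument();
        doc.LoadXml(webConfigXml);
        var attribute = doc.SelectSingleNode(
            "/configuration/connectionStrings/add[@name='DefaultConnection']/@connectionString");
        if (attribute != null)
        {
            attribute.Value = newValue;
        }
        return doc.OuterXml;
    }
}
```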

Step 7
Publish your project again by right-clicking on the project, selecting Publish, and then the profile you created earlier. If you now check the SetParameters file, you will notice the new tokens we added in the parameters.xml file are in there too. This file is automatically updated with these tokens when you run the publish profile, and is key to how we replace variables in our configuration files.


We are now ready to check in our code and create a build. Ensure you check in the parameters.xml file and your new publish profile (highlighted below).



Create a build

Step 1
You may already have a build for your solution; if so, you can alter it to produce the assets you need for deploying your solution. Below I have set up an out-of-the-box Visual Studio build pointing at my solution, but with some added arguments.


Those arguments are:

/p:DeployOnBuild=true;PublishProfile=Website1WebPackage /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true

Note that our publish profile is set to the one we created earlier in the tutorial, when we prepared our ASP.NET project with a publish profile called “Website1WebPackage”. We are also telling MSBuild that we want it to create a package for us, and that we want everything in a single file.

Step 2
Click on the Copy Files task; in the Contents textbox you will see we have two entries. We are telling this task that all we want from the finished build is the website1webpackage.zip and website1webpackage.SetParameters.xml files covered in the earlier steps. These files are generated automatically by the build because of the publish profile we set up earlier.


Step 3
Run your build. At the end, if you look at its artefacts, you should have the following files. We will use these in our release to help with tokenisation.


Create a Release

Step 1
Go into Release Hub and create an empty release.


In this example I am using the build I created in the previous steps.


Step 2
If you don’t already have one, you will need to go to the VSTS Marketplace and pick a tokenisation task. I like to use the following: https://marketplace.visualstudio.com/items?itemName=TotalALM.totalalm-tokenization, but there are several more you can choose from.

Step 3
Add your tokenisation task. In mine, I have set the working directory of my solution as the target path, using the VSTS variable $(System.DefaultWorkingDirectory). I have set the Target Filenames to the SetParameters file that the build we created in the previous steps generates.


Step 4
In the environment I was working in, we weren’t allowed to use Windows file copy, as it was considered insecure. However, we did have WinRM available to us. Provided you have PowerShell 5 installed, it is possible to copy files to your destination server from a PowerShell command line. You can skip this task and use the Windows File Copy task if this is open on your network.


In my example I have done just that using the PowerShell task. The PowerShell I use is below, tokenised by variables stored on the Variables tab in VSTS.

```powershell
$password = ConvertTo-SecureString "$(password)" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ("$(username)", $password)
$session = New-PSSession -ComputerName myserver01 -Credential $cred
Copy-Item '$(System.DefaultWorkingDirectory)\WebApp1 Build\drop\WebApplication1\website1webpackage.zip' -Destination 'c:\drops' -ToSession $session
Copy-Item '$(System.DefaultWorkingDirectory)\WebApp1 Build\drop\WebApplication1\website1webpackage.SetParameters.xml' -Destination 'c:\drops' -ToSession $session
```

I am basically using PowerShell’s Copy-Item command to copy the files to a folder called “drops” on the C drive of the server. I got the path to the files by temporarily adding a Windows File Copy task to show me the path variables, then deleting it afterwards.


Step 5
Now that I have set up my WinRM file copy, I can use the IIS WinRM task to deploy to my web server.


In the example I am using the package files that were copied to the web server in the previous step.

Step 6
Remember those tokens you set up in the previous steps? Now is the time to give them values. Click on the Variables tab and start putting in entries for those tokens. You will also notice that the username and password we use in our release tasks are stored here too; we refer to them as $(username) and $(password).


Step 7
You should now be able to run your release and deploy.

Step 8 (Optional)
If you have more than one environment, you can clone the existing environment and replace the server names with those of the next environment.


Tags: Release | VSTS

Thursday, December 8, 2016 11:25:19 PM (GMT Standard Time, UTC+00:00)

# Tuesday, October 4, 2016

I ran across this error when installing a new release agent, once it got to the Azure File Copy stage. Many of the solutions on the Internet point to it being caused by incorrect times on the agent machine or the target server. However, all my servers had the correct time and were in the same time zone.

My problem appeared to be caused by the token endpoint connecting VSTS to Azure. When I renewed this endpoint certificate, the Azure File Copy task magically worked. The only difference I could see from the previous agent was that the new agent was on a new virtual machine, compared to the older one, which was a Classic Azure virtual machine.

I hope this helps someone.

Tags: VSTS

Tuesday, October 4, 2016 5:46:37 PM (GMT Daylight Time, UTC+01:00)

# Friday, July 22, 2016

I started using Microsoft Fakes for some code I was not able to encapsulate and use interfaces for (my preferred approach). One of the issues I had was that the documentation didn’t appear to be all that good compared to the wealth of information available for Rhino Mocks and Moq, especially around instance methods.

Here is my scenario: I was calling an external class from my code and wanted to test some code that handled an error when the call was made a second time. In Rhino Mocks or Moq this type of expectation is very easy to code, but with Microsoft Fakes the majority of examples appeared to be around static methods.

I knew you could use AllInstances to shim all instances; however, it was not clear how I could have an instance do something different when called the next time. My approach was to store the number of times it was called in a variable and then act based on that count.

Anyway to cut a long story short here is my approach.

```csharp
using (ShimsContext.Create())
{
    int shimCalled = 0;

    ShimExternalServiceHttpClientBase.AllInstances.GetTransactions = (x, y, z) =>
    {
        shimCalled++;

        // First call: return data. Subsequent calls: raise an error.
        if (shimCalled == 1)
        {
            return Task.FromResult(new List<TransItem>() { new TransItem() { Id = 99 }, new TransItem() { Id = 33 } });
        }

        return Task.FromException<List<TransItem>>(new TransException());
    };

    var transcalculator = new TransCalculator();
    var results = transcalculator.CalculateResultsForBatch(1);
}
```
This approach works well for me. Basically, on the first call I want data to be returned, and on the second call I want an error raised to check that my code can handle this type of error correctly. I must also point out that the method I am shimming uses async calls, hence the use of Task.FromResult and Task.FromException.
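The call-counting pattern itself is independent of Fakes. As an illustration only (this is not the shim API), the same first-call-succeeds, second-call-faults behaviour can be built from a plain delegate and a counter; the class name here is made up:

```csharp
using System;
using System.Threading.Tasks;

public static class SequencedStub
{
    // Illustration of the call-counting pattern: returns each supplied response
    // in order, repeating the last one for any further calls.
    public static Func<Task<T>> FromSequence<T>(params Func<Task<T>>[] responses)
    {
        int calls = 0;
        return () =>
        {
            int index = Math.Min(calls, responses.Length - 1);
            calls++;
            return responses[index]();
        };
    }
}
```

Inside the shim delegate you are doing the same thing by hand: increment the counter at the top of the lambda and branch on its value.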

I am not entirely sure the above is the best approach, but I was unable to find another way to use an instance method in this way.

Tags: Shims | TDD | Visual Studio

Friday, July 22, 2016 5:09:08 PM (GMT Daylight Time, UTC+01:00)

# Wednesday, November 25, 2015

We’ve been on the preview of “Release” for a while now and have used it for several deployments. It’s a great product once you have figured out how it works, and as the documentation improves this should become easier.

Currently we are using Release to publish the same ASP.NET MVC web application to three websites, with different parameters in each config (replaced with tokenisation), and to drop a packaged version of our web app for download from an external website.

Sounds pretty cool, doesn’t it? Well, it took a lot of working out, and I am keen to hear from anyone who may have better ways of handling the configuration file part. Much of what we have done so far has been trial and error, mainly between my colleague Richard Erwin and myself.

In this article I will cover deployment to an Azure website. In a later article I will cover how we deployed to an IIS web server hosted on an Azure virtual machine, followed by wrapping up software for download from your website. A warning: we use a self-hosted release agent to do our deployments, and this example will probably only work with a self-hosted release agent.

Update note:
For the purposes of this article, I forgot to mention that we made use of the custom VSO tasks for Zip, Unzip and Tokenisation, which you will need to install beforehand.

Creating a parameterised deployment of an ASP.NET MVC application with Release to an Azure website


In the image above you can see the steps involved in Release.

Setting up a vNext Build with parameters

The first step we need to do before we get to Release is to create a Build in VSO that:

  • Uses a parameterised XML file with tokens to replace the parameters in your web.config (or other configuration files).
  • Creates an MSDeploy zip file as its output.

If you have done all of the above and just want to get to the Release bit, scroll down to Setting up Release.

Custom Parameterized XML file with Tokens
As you can see in the image below, I have a parameters.xml file in the root of my MVC application; if you have not used this type of file before you can find out more about it here. It’s a pretty standard part of MSDeploy, which is what you are using behind the scenes.


The only difference in our file is that we have replaced the default values with tokens. Tokens are represented with the syntax __MYTOKEN__ . These tokens will be replaced later by a step in our Release workflow. We are basically telling this file to replace the parameters (represented by the match statements) in our web.config and visualisation.config files with the defaultValue, which in this case contains the token that Release will replace later.
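For illustration, a tokenised entry in parameters.xml looks something like this. The parameter name and match path below are examples rather than the actual files from this project:

```xml
<!-- Example entry only — the name and XPath are illustrative -->
<parameters>
  <parameter name="Authentication"
             description="Authentication mode for the site"
             defaultValue="__AUTHENTICATION__">
    <parameterEntry kind="XmlFile"
                    scope="\\web.config$"
                    match="/configuration/system.web/authentication/@mode" />
  </parameter>
</parameters>
```

The `defaultValue` is where the token sits; the `parameterEntry` tells MSDeploy which file and which XML attribute to rewrite at deployment time.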

Create an MS Deploy zip file as the output of our build.
In VSO create a new vNext Build using the Visual Studio template


On the “Visual Studio Build” step, select your solution (you will be prompted to locate it in TFS version control or Git when clicking the button with the three dots next to the option).


In your MSBuild arguments you will need to add the following:

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true

The DeployOnBuild and WebPublishMethod arguments tell the build to create the MSDeploy package for us. This is simply a zip file that MSDeploy can use to deploy to an IIS box or an Azure website.

Further down, in Publish Build Artifacts, we tell the build what we want as an output from this build. In our case all we want is the MSDeploy zip file, so we ask the publish step to find a zip file in the build output using the pattern “**\*.zip” (see image below).


Setting up Release
For the purposes of this article I am doing an Azure Web deployment first but will follow up with another article that will go through IIS deployments.

You will need to install the Custom VSO-Tasks for Zip, Unzip and Tokenisation before continuing.

A note about Release Agents
This article assumes that you have at least some familiarity with Release. In this section we are using our own release agent installed on a virtual machine. A release agent is the same concept as a build agent in VSO: you can host it yourself instead of using the hosted agent. If you do not have your own Release Agent set up, there is a guide here on how to do so. You basically run a PowerShell script on a machine you wish to act as your release agent. If you are experimenting, you can even use your own desktop or laptop as a release agent. Release Agents need Internet access and must be located where they can see the target environment you are deploying to.

Go into your VSO project and select the Release tab. Create a new Release; in this case we are doing an Azure Website deployment.


The default release you will see only has two tasks in it. For the purposes of our setup we had no need for the Visual Studio Test task, so it can be deleted.


In our release we added the following tasks (see the image below), which I will go into in more detail below.

It’s a good idea now to set your environment to use your hosted Release Agent if you haven’t done so already. You can do so by clicking on the three dots next to the environment name, selecting Agent Options and setting the Default queue to your Release Agent.


To select the contents of the vNext build you created previously, select the Artifacts tab and click the “Link an artifact source” button. Basically, you are releasing the contents of a build.


Unzip Task
Select the Environments tab again and add a task; this task is under Utilities and is called UnZip. You can drag this task to the top of the list by holding down on it with your mouse.

In your UnZip task you can select the zip file provided by your build output (the one we created in the build above). The output of our build is a zip file used by MSDeploy; we are just telling the UnZip task to unzip it. Note you will have to run at least one successful build before you can browse the contents of your build using the button with the three dots next to the option.


The target folder is a folder on our build agent. The above path comes from clicking on the three dots next to the target folder. In this case I have only gone one folder deep and added my own folder name, “VSO”, which the UnZip task will create and unzip the contents of the package into.

Tokenisation: Transform file
Add a tokenisation task in the same way you added an Unzip task above.

Remember the step “Custom Parameterized XML file with Tokens” above? All we are doing is telling our Tokenisation task to find the parameters.xml file we created in that step, in the folder created by the UnZip task above. This task will replace the tokens in our parameters.xml file with our custom variables (you can read more about where to set these further down).
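Conceptually, the token replacement the task performs is equivalent to the small sketch below. This is an illustration of the idea only, not the VSO task’s actual implementation:

```python
import re

def apply_tokens(text, variables):
    # Replace each __NAME__ token with the matching Release variable,
    # leaving unknown tokens untouched.
    return re.sub(r"__([A-Z0-9_]+?)__",
                  lambda m: variables.get(m.group(1), m.group(0)),
                  text)

entry = '<parameter name="Authentication" defaultValue="__AUTHENTICATION__" />'
print(apply_tokens(entry, {"AUTHENTICATION": "Windows"}))
# -> <parameter name="Authentication" defaultValue="Windows" />
```

The variable names come from the Release Configuration tab (covered further down), which is why the token text minus the underscores has to match a variable name exactly.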


Batch Script
This is probably the least elegant part of my solution and I am open to any suggestions for improving it. In order to make the MSDeploy package work again we need to zip it up; unfortunately we can’t just use the Zip task that is available to us, as MSDeploy will for some reason ignore any zip file that was not created with MSDeploy! To get around this we had to install MSDeploy on the build agent box (this is why I am using our own build agent).


This batch script task basically tells Release to execute the batch file located on my build agent server with three parameters. The script is listed below and lives in a path on the build agent indicated in the image above.

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"  -verb:sync -source:archiveDir=%1 -dest:package=%2 -declareParam:name="IIS Web Application Name",defaultValue=%3,tags="IisApp" ^
-declareParam:name="IIS Web Application Name",type="ProviderPath",scope="IisApp",match="^.*PackageTmp$" ^
-declareParam:name="IIS Web Application Name",type="ProviderPath",scope="setAcl",match="^.*PackageTmp$"

All the batch file does is take the three arguments, which are:

  • The working folder we unzipped our website to and ran the tokenisation task on
  • Where we would like to place our new MSDeploy package and what to call it; in this case it’s VSO.zip, which we are placing in the working directory of our agent. In the example above ours is: $(Agent.ReleaseDirectory)\VisualisationBoard2015\VSO.zip
  • The name of our IIS website.

It uses these to recreate the MSDeploy package for us again.

Note: arguments are separated by a space and each argument is placed inside quotes.
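As an illustration, the Batch Script task’s arguments field ends up looking something like this (the script path and website name below are examples, not the actual values used here):

```bat
rem Illustrative invocation — each argument quoted, separated by spaces
C:\Scripts\RepackageMsDeploy.bat "$(Agent.ReleaseDirectory)\VisualisationBoard2015\VSO" "$(Agent.ReleaseDirectory)\VisualisationBoard2015\VSO.zip" "MyWebsite"
```

The `$(Agent.ReleaseDirectory)` variable is expanded by Release before the script runs on the agent.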

Azure Web App Deployment Task
Finally we get to the last task. If all went well in the batch script task above, we should be able to tell this task to use the MSDeploy package we just created in the previous step. In this case it is



Note: if you are unsure how to set up your Azure subscription and website on the Azure deployment task, you can find out how to do so here. If the Azure Web App does not exist, Release will create it for you.

Where do I put my parameters for tokenization?
While still in your Release, click on the Configuration tab. This is where you enter the tokens you wish to replace in your parameters.xml file; for example, the parameter __AUTHENTICATION__ is simply represented by the name AUTHENTICATION without the underscores. The tokenisation task will look for these here.


The tokenisation task will also check your environment specific variables which can be found on each configured environment.


The beauty of this is that you can have a different set of variables per environment.

Once you are done you can now kick off a Release and see if it works!

Troubleshooting
We have found that when troubleshooting Release it helps to have access to the Release Agent machine you are using, so you can see what is happening in its working directory. Usually an issue comes down to mistyping a name or getting a directory path wrong.

I am keen to hear back from anyone who has a better way of using Release for tokenised website deployments.

Tags: TFS

Wednesday, November 25, 2015 2:31:53 PM (GMT Standard Time, UTC+00:00)  #    Comments [4]

# Monday, July 20, 2015

Recently I have been doing a lot of coding; we’ve been working on some exciting things at RippleRock, from Lean Coffee Table and TFS Rippler to some advanced HTML5 charting tools. Our working atmosphere at RippleRock is pretty relaxed, with a good work-life balance. As our consultants are usually spread throughout the UK (and even some parts of Europe and occasionally India), we make a lot of use of technology to stay in touch with each other. We are all very driven and passionate about what we do, which also makes working remotely from each other (when we need to) much easier.

Morning Ritual
The morning ritual is very much the same as for a co-located team: we have an early morning stand-up. I personally feel that the day hasn’t started properly unless we have one of these. One of us will open our task board and display it to the rest of the team, and we will talk about what we have done and what we will be working on next.

Remote Pair Coding
We stay in touch using Skype for Business, Skype or Google Hangouts, depending on what works best for our situation. Skype for Business is our go-to app for remote pair coding. It enables either party to take control when working together: the developer who has control of the keyboard listens to the observer, who directs. We can switch control by giving the observer control using Skype for Business’s “Give Control” functionality. We don’t always pair code when working remotely, as it can be quite taxing being on a remote call for several hours; however, as a team we can spot pieces of work we believe will be better done while pair coding, and when we do this we end up with some very good results.

Feedback Loops
Because constant feedback is so important, we focus on small chunks of work that are regularly committed to source control, run in our CI build and deployed to a server. This ensures we have constant feedback for our morning stand-ups. It also encourages us to try things out quickly if we are unsure of the best approach; because our feedback loops are so short, we can afford to fail quickly and in this way choose the best solution that works.

Code Reviews and refactoring
This works in much the same way as for co-located teams. Sometimes we pair when going over code if we need to figure out the idea behind some decisions. Other times code is changed and shelved as a suggestion to the developer whose code is being reviewed, and they can look over the suggested changes, compare and incorporate them to ensure they understand them.

Tools are not a replacement for process; they only help facilitate it, especially when we are working remotely. The key tools we have found useful are as follows:

  • A shared, remotely viewable board of work. Any kind of web-based board, be it TFS, Jira or LeanKit, will help here to make work visible.
  • Source control. It basically goes without saying how important this is.
  • Remote conferencing tools like Skype, Skype for Business or Google Hangouts are important. Those that enable remote desktop control are even better.
  • CI builds kicked off after check-ins provide a fast feedback loop.
  • A wiki or central document area. We find a wiki essential for quickly jotting down helpful documentation, and its ease of use encourages people to actually use it.

Tags: Agile | Remote Working

Monday, July 20, 2015 1:35:47 PM (GMT Daylight Time, UTC+01:00)  #    Comments [0]

# Wednesday, June 24, 2015

I wrote this article more as a reminder to myself of the process I need to go through to make a web application written in ASP.NET (MVC) that uses the TFS API actually work. I have done this several times now but keep forgetting some of the key information. One of the errors you may get if you haven’t set this up correctly is:

Error HRESULT E_FAIL has been returned from a call to a COM component.


There are two things you need to configure correctly: your web.config and IIS.

The first port of call is to set up the following in your web.config. Basically we are saying we want to use Windows authentication in our app and to turn on impersonation.

<system.web>
  <authentication mode="Windows" />
  <identity impersonate="true" />

  <authorization>
    <deny users="?" />
  </authorization>
</system.web>
...
<system.webServer>
  <validation validateIntegratedModeConfiguration="false"/>
  ...
</system.webServer>

IIS Settings
The rest of the settings are dealt with in IIS.

In IIS, click on your website and then select Authentication from the Features menu. Set these as per the image: ASP.NET Impersonation and Windows Authentication should be set to Enabled, and Anonymous Authentication should be set to Disabled.
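If you prefer the command line, the same authentication settings can be applied with appcmd. This is a sketch; the site/application path “Default Web Site/MyTfsApp” is an example you would replace with your own:

```bat
rem Example only — substitute your own site/application path
%windir%\system32\inetsrv\appcmd set config "Default Web Site/MyTfsApp" /section:system.webServer/security/authentication/windowsAuthentication /enabled:true /commit:apphost
%windir%\system32\inetsrv\appcmd set config "Default Web Site/MyTfsApp" /section:system.webServer/security/authentication/anonymousAuthentication /enabled:false /commit:apphost
```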


App Pool Settings
Go to Advanced Settings on your App Pool. One thing you may need to set here is “Enable 32-Bit Applications” if you are working with the TFS Client API (this can be found under (General)).

Scroll down to Process Model and find the Identity section. For a newly created app this is usually set to the App Pool Identity account. It needs to be set either to a domain account that has access on the box, or to the Local System or Local Service accounts, which I have also seen work here; however, I believe that is only the case if you have set TFS to run under one of these as a service. In my case I have used an AD account that has access to the box. The next important step is to set “Load User Profile” to true. Setting this appears to be critical, especially when working with the WorkItem Tracking client; I believe it needs to create a cache on disk, and not setting Load User Profile may prevent it from doing so.
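These app pool settings can also be scripted with appcmd. Again this is a sketch; “MyAppPool” is an example name for your application pool:

```bat
rem Example only — substitute your application pool's name
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /enable32BitAppOnWin64:true
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.loadUserProfile:true
```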


Tags: TFS

Wednesday, June 24, 2015 1:04:47 PM (GMT Daylight Time, UTC+01:00)  #    Comments [0]