Jira API returning “Unauthorized; scope does not match”

Blogging about this for my own reference, as I spent quite a bit of time stuck on this issue, checking scopes over and over again and not understanding why creating an issue via the Jira API with 3LO authentication sometimes worked and sometimes didn’t.

Below is the URL I was making an HTTP POST to, with the scopes read:jira-work and write:jira-work, and getting a 401 error:

https://api.atlassian.com/ex/jira/{cloudid}/rest/api/3/issue/

The solution is actually quite simple: remove the trailing slash!

https://api.atlassian.com/ex/jira/{cloudid}/rest/api/3/issue

And all of a sudden it works! I am guessing the Jira API sees the slash and believes it is another API scope that you do not have access to.
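For reference, here is a minimal sketch of the request in C# using HttpClient. The token, cloud ID and project key are placeholders, and the body follows the standard Jira Cloud v3 create-issue shape; the important bit is simply that the URL has no trailing slash.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class JiraCreateIssueExample
{
    static async Task Main()
    {
        // Placeholders - replace with your own values.
        var cloudId = "<your-cloud-id>";
        var accessToken = "<your-3lo-access-token>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Note: no trailing slash after /issue
        var url = $"https://api.atlassian.com/ex/jira/{cloudId}/rest/api/3/issue";

        // Minimal create-issue body; "PROJ" is a made-up project key.
        var json = "{\"fields\":{\"project\":{\"key\":\"PROJ\"},\"summary\":\"Created via 3LO\",\"issuetype\":{\"name\":\"Task\"}}}";

        var response = await client.PostAsync(url,
            new StringContent(json, Encoding.UTF8, "application/json"));

        Console.WriteLine(response.StatusCode);
    }
}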


Schedule start-up and shutdown of Azure VMs and the InvalidAuthenticationTokenTenant error

Azure, like AWS, can get pretty expensive, especially if you are using a big virtual machine with lots of RAM and CPUs. If you can’t go for a lower-spec machine or a reserved instance, your next best bet is to turn the virtual machine off when you are not using it.

Note: If you are just looking to solve the “The access token is from the wrong issuer” error, scroll down to “Correcting the InvalidAuthenticationTokenTenant error”.

Turning virtual machines off on a schedule is quite easy; you’ve been able to do this for years with Auto-shutdown.

The auto shutdown blade on an Azure Virtual Machine

The problem is remembering to start the machine up the next morning when you need to use it.

This is where Automation Tasks come in. They can be used to turn off your machine and turn it back on again, and you can use them for a lot more besides.

They are available from your virtual machine in the Azure Portal. Select them and then click on Add task.

There are four templates to choose from here.

We’re going to choose the Power Off VM task (it’s identical in layout to the Start VM task).

Here you will need to create a connection, which is basically a login to your Azure environment. Clicking on Create will ask you to connect to your Azure account; this part may produce an error when you run your task (more about this later). The same will happen when you click on Create for Office 365. This is so you can connect to your 365 account for sending emails when the task is complete. After you have done this you can move on to the next step.

Configuring your task

Configuring the task. I have set:

  • Task Name – This is self-explanatory (the name does not allow spaces)
  • Stop Time – I have set the task to run at 7pm
  • Timezone – This is the time zone you are in; in my case this is GST
  • Interval – This is how often you want to run the task; in my case I have selected 1
  • Frequency – I have selected Daily
  • Notify Me – This will notify me at the email address below when the task is complete

So basically I have set my task to recur once a day at 7pm.

Click Next and you can review the task before clicking Create.

After your task has been created, it will appear in a list.

The next step is to repeat the same process again for a start-up task by clicking on “Add a task” and selecting “Start Virtual Machine” as your template. All you need to do now is enter the time you wish your virtual machine to start up in the morning. In this case I have selected the next day at 8am.

But… we’re not done yet!

Correcting the InvalidAuthenticationTokenTenant error

Depending on your setup, if you don’t carry out this step you might get the following error when your task tries to run.

{
  "error": {
    "code": "InvalidAuthenticationTokenTenant",
    "message": "The access token is from the wrong issuer 'https://sts.windows.net/X8cdef3XXXX/'. It must match the tenant 'https://sts.windows.net/Xbe5b03XXXX/' associated with this subscription. Please use the authority (URL) 'https://login.windows.net/Xbe5b03XXXX' to get the token. Note, if the subscription is transferred to another tenant there is no impact to the services, but information about new tenant could take time to propagate (up to an hour). If you just transferred your subscription and see this error message, please try back later."
  }
}

I believe this usually happens when the account you use with Azure exists in more than one tenant and the wizard in the previous steps just selects the default or first tenant it finds.

It took me a while to figure this one out. To correct it, go back to the Tasks blade of your virtual machine.

Next to one of your scheduled tasks, select the three-dot menu and from the drop-down menu select “Open in Logic Apps”.

This will open the Logic Apps Designer. Expand the “Start virtual machine” operation (depending on which task you are editing this may be called “Power off virtual machine”). Click on the “Change connection” link at the bottom (see image).

Click on “Add new” in the dialogue box that appears.

Now select the tenant you wish to use and select “Sign in”. In my case I selected the tenant that my user account and my VM were in. This will present a login box for you to sign in with an account; in this instance it was the account I use to sign into the Azure Portal. This ensures the correct tenant and user account are used together and should hopefully avoid the above error. After you are done, hit the Save button in the Logic Apps Designer window (top left-hand corner of your screen).

You can also test if your task will run correctly by running it directly from the Logic App Designer by clicking the “Run Trigger” button and then selecting “Run”.


Mural and Azure DevOps whiteboarding

I’ve always been a big fan of helping teams visualise work better. In the old days one method I loved to use was to print out product backlog items or tasks and place them next to the bits of the architecture diagrams or wireframes we were currently working on, on the team whiteboard. It helped us visualise our work, and sometimes it would prompt more questions and make us realise we may have missed tasks we needed to get the work done.

Much later, when working remotely with teams, I would employ the same mechanism again. We would use tools such as Lucidchart for virtual whiteboarding and I would copy and paste in the tasks/product backlog items and place them around the various parts we were working on. It was only when our company started using Mural a while back, and they added integrations with Azure DevOps and Jira, that I realised we could now import elements of our product backlog into the Mural diagrams we were working on. If you haven’t tried Mural yet I highly recommend it; it’s a great tool for whiteboarding with remote teams.

We initially started by importing the backlog items that were associated with the work we were doing. Mural makes this very easy: just right-click on the background of your Mural board and select the Import Azure DevOps feature that appears in the context menu (you can do the same with Jira). In the dialogue box that appears you can select the Bugs, Tasks or Product Backlog Items you wish to import. You can search for them or use existing queries you may have created in Azure DevOps.

After you have imported your backlog items, you can then drag them around the screen like other Mural objects and they contain a link back to your backlog item in Azure DevOps along with its current status.

Below is part of our roadmap for some new features we are looking to launch for Lean Coffee Table. You can see the product backlog items we imported onto the board and placed them around our launch notes.

It also works both ways: you might be drawing a diagram of your solution and want to create tasks around that diagram as you go (made-up example below).

You can create your tasks/backlog items and publish them back into Azure DevOps, Jira or GitHub by right-clicking on the sticky note we made in the example below and selecting where you want to export your item.

You can see below we can now choose where in ADO we want to create our item. In this case we have chosen a User Story type.

And now it is connected to ADO with its current status

NOTE: Azure DevOps and Jira appear to work both ways. However, you only appear to be able to export items to GitHub Issues, not import them.

So many teams are now hybrid or work fully remotely, and tools like Mural can help facilitate those teams. What tools do you use to help facilitate working with remote or hybrid teams? I’d love to hear from you.


Microsoft Entra, check your Sign-in logs for SMTP Auth

If you’ve had your Microsoft 365 account for a while, you may have had SMTP Auth enabled by default. Most email clients no longer need SMTP Auth enabled, and disabling it can reduce your attack surface significantly. I have seen audit logs in Microsoft Entra tenants where there are relentless attacks via SMTP Auth, regardless of whether you have multi-factor authentication methods set up.

You can check these by going to the Microsoft Entra admin center, selecting Users > Sign-in logs and filtering by Failure. In the Columns option add “Client app” so you can see which client the sign-in failed on. If you see SMTP, you know this is being used as an attack vector.

Image showing the Entra user admin page with the sign-in logs screen, demonstrating how to add more columns and filter by failed requests.
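If you’d rather pull the same data programmatically, below is a rough C# sketch against the Microsoft Graph sign-in logs endpoint (not something covered by the portal steps above). It assumes you already have an access token with the AuditLog.Read.All permission and it filters client-side on the clientAppUsed value, so treat it as a starting point rather than a polished tool.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

class SmtpSignInCheck
{
    static async Task Main()
    {
        var accessToken = "<graph-access-token>"; // needs AuditLog.Read.All

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Pull the most recent sign-ins and look for failed SMTP attempts client-side.
        var json = await client.GetStringAsync(
            "https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=100");

        using var doc = JsonDocument.Parse(json);
        foreach (var signIn in doc.RootElement.GetProperty("value").EnumerateArray())
        {
            var clientApp = signIn.GetProperty("clientAppUsed").GetString();
            var errorCode = signIn.GetProperty("status").GetProperty("errorCode").GetInt32();

            // errorCode of 0 means the sign-in succeeded; anything else is a failure.
            if (clientApp == "SMTP" && errorCode != 0)
            {
                Console.WriteLine($"{signIn.GetProperty("createdDateTime").GetString()} " +
                                  $"{signIn.GetProperty("userPrincipalName").GetString()} failed via SMTP");
            }
        }
    }
}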

You can block SMTP Auth on individual user accounts from the Microsoft 365 Admin Centre. Select Users > Active users, select the user, select the Mail tab and then Manage email apps.

Shows a list of email apps that can be disabled on a user’s account in Microsoft 365.

Or, if you are sure you no longer need SMTP in your organisation (i.e. think of printers that email scans to you), you can turn off SMTP Auth for your organisation altogether in the Exchange admin center under Settings > Mail flow settings. You will find “Turn off SMTP AUTH protocol for your organisation” under the Security heading.

You can read more about the deprecation of Basic Authentication in Exchange Online here: https://learn.microsoft.com/en-us/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online


SignalR not firing all client events

This is more for my own reference so I can come back to it later; hopefully it’s also helpful to others.

I saw some odd behaviour with SignalR where a client was not receiving messages from the server, or would fire some client events but not others. Refreshing the page solved it, but on a first visit to the page the problem started again.

After a lot of debugging it appears the problem is caused by placing your SignalR client event handler code after the connection start code in your client. So if you are using a JavaScript client, ensure you place the SignalR connection start code after all of your SignalR client event handlers.
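As a quick illustration, here is a minimal sketch using the .NET client rather than JavaScript, with a made-up hub URL and event name: register all of your handlers first and only then start the connection. The same ordering applies to the JavaScript client.

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

class Program
{
    static async Task Main()
    {
        // Hypothetical hub URL - replace with your own.
        var connection = new HubConnectionBuilder()
            .WithUrl("https://localhost:5001/notificationHub")
            .Build();

        // Register ALL client event handlers first...
        connection.On<string>("ReceiveMessage", message =>
        {
            Console.WriteLine($"Received: {message}");
        });

        // ...and only then start the connection. Handlers registered after
        // StartAsync may miss events the server sends straight away.
        await connection.StartAsync();

        Console.ReadLine();
    }
}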


DasBlog logo with arrow to WordPress logo

Migrating from DasBlog to WordPress

The Ripple Rock blogs have been on DasBlog for a long time. It was a great blog engine in its time, with a small footprint because it stored all of its data in XML and didn’t need a database. However things have changed and we finally decided to take the plunge and move over to WordPress.

TL;DR? Scroll down to How to do it

The Journey

I had initially thought the migration would be a simple process. There are several tech people out there with blog articles on how they transitioned from DasBlog to WordPress. There were even plugins that would import the XML files that DasBlog uses into WordPress and preserve all the legacy content. However, many of those articles were written several years ago and one of the plugins for importing that content from DasBlog no longer works.

I spent ages looking at various solutions, and with many I hit brick walls where certain plugins were not supported, or just didn’t work as technology had moved on, or the site that hosted them had long since disappeared!

Eventually I realised I couldn’t export directly from DasBlog to WordPress; I had to export from DasBlog to a format that was still supported by WordPress, and that format was BlogML!

I discovered a DasBlog to BlogML converter on Merill Fernando’s site. He had made a GUI wrapper for the converter, which was originally made by Paul Van Brenk. Unfortunately the link to this converter, which was hosted on a Microsoft site, no longer worked. However, Shital Shah kindly made the application available from his GitHub repo found here.

Finally I was able to export my blog from DasBlog to BlogML!

Next I needed a BlogML to WordPress WRX converter.

I discovered a BlogML importer created by Saravana using some of the source code from the legacy blog migrator project, which sadly no longer appears to exist. Saravana created this code back in 2012; I then discovered another chap, Michael Freidgeim, who took the source code and made some improvements to it, such as adding logging and fixing the importing of comments. You can see the repo he made for it over here.

Michael’s code worked like a charm; however, on importing a large DasBlog site into WordPress I ran into some issues where WordPress kept repeating the same article over and over again. I wasn’t sure what was to blame here and I spent ages looking on WordPress forums about the issue. Several people had encountered it, but there never really seemed to be a solution. So I decided to look into the PHP code myself to try and work out what was going on. To be clear, I am not a PHP coder; I mainly code in C#.

But what I discovered made perfect sense. MySQL, which is what WordPress uses as its database, can support some pretty high integer numbers, and in theory, when people share details about how many articles WordPress can support, they post some high numbers. The problem is that while MySQL can support those high numbers, WordPress was basically taking the post ID number from MySQL and converting it to an int in PHP. An int in PHP (at least on a 32-bit build) can only support a number no greater than 2147483647. If you try to cast anything higher than that to an int, PHP will just convert it back to 2147483647, which was the post ID of the article I kept seeing duplicates of.

What had happened was the BlogML importer had kept the GUIDs that DasBlog used for its post IDs, and when I imported this into WordPress it had attempted to convert these to integers, but very large ones. To get around the issue, I changed the BlogML to WRX code so that instead of using the existing post IDs, I got it to use a configurable identity seed which you can set yourself. This solved the issue for me. You can access the fork of the repository, which has my changes, here.

How to do it

Convert to BlogML

Convert your DasBlog content to BlogML using the DasBlogML converter. The converter is pretty straightforward: you just need to point it to the root of your DasBlog folder and it will do the rest.

Converting from BlogML to WordPress WRX

Convert the BlogML to WordPress WRX format using the converter found here. (Don’t forget to use an identity seed for your post IDs.)

Let’s unpack a bit of what’s happening on the command line here. I have put in my existing blog URL and the target URL where I am currently setting up my WordPress blog. I am also using a BlogPostIDSeed of 50. On a new WordPress blog this seems to be a safe number to me; if you are using an existing blog with content, I’d look in your WordPress database just to be on the safe side. For more details on why I use a BlogPostIDSeed, please see the journey text above.

The above will create:

  • [filename].wrx.Redirect.txt – This contains the redirect rules in .htaccess format from your old blog URLs to your new ones so you can keep your SEO traffic. More on this later.
  • [filename].wrx.SourceQA.txt – This is a list of source URLs that were processed.
  • [filename].wrx.TargetQA.txt – This is a list of their corresponding target URLs.
  • [filename].wrx.xml – This is the file that contains all of your blog articles.

Importing your WRX file into WordPress

Max WordPress File Upload

Before you get to this step, you will probably need to increase the maximum allowed upload file size in WordPress. To do this I made the change in my php.ini file; depending on your hosting provider you’ll probably want to check which method is best for you. There is an article here.
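For reference, the kind of values I mean look something like this (a hedged sketch; the exact directives you need to raise and the limits you choose will depend on your hosting setup and the size of your WRX file):

; php.ini - raise the upload and post size limits so the WRX file can be uploaded
upload_max_filesize = 64M
post_max_size = 64M
; a longer execution time can also help on large imports
max_execution_time = 300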

Importing

WordPress has an import menu under its Tools menu where you can select the import feature you want. In this case we are selecting the WordPress Importer.

On this screen select the wrx.xml file you created in the previous step.

The importer should now work. If you have problems with file sizes you will need to increase the maximum allowed upload file size (see above).

Redirects

Redirection

WordPress is going to change the URLs of your blog articles. If search engines have your old blog article URLs indexed, users are going to get 404 errors when visiting them. To prevent this we need to put some redirects in. I made use of a WordPress plugin called Redirection by John Godley, which you can install from the WordPress plugins menu option. Install the plugin.

Editing your Redirect Files

In one of our previous steps the BlogML converter created a file called [filename].wrx.Redirect.txt. This file contains redirects you would usually see in the .htaccess file. If you are happy pasting these into your .htaccess file, go ahead now. I wanted to use the Redirection plugin so I could keep track of errors or any other redirect issues; however, I ended up editing this file to simplify it, as I wasn’t able to import it as it was for my purposes.
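To give you an idea of what you are working with, a redirect rule in .htaccess format looks roughly like this (a hypothetical example rather than a line from my actual file):

# Hypothetical example: send an old DasBlog URL to its new WordPress permalink
RedirectMatch 301 ^/blog/MyOldPost\.aspx$ https://www.example.com/my-old-post/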

Step 1

I imported the file into Excel as a space-separated file and deleted the columns I didn’t want (see the image); I just wanted the source URL and the target URL.

Step 2

I made all the URLs relative with a simple search and replace. You can see in the image I have done the first column; I also did this for the second column. I also replaced .aspx$ with just .aspx. After this I exported my file as a CSV file.

Step 3

I then imported my CSV file into the Redirection plugin we installed earlier in WordPress.

You can now see I have all my redirects imported. All the legacy URLs will now permanently 301 redirect to their WordPress URLs.

Finishing Touches

If, like me, you made use of plugins to display your code, you may find your code looks a bit odd now.

The above code was formatted with a plugin for Windows Live Writer called Smart Content. The WordPress styles seem to throw this code out a bit; I found I needed to add a bit of CSS to correct that by selecting the Customize option (found at the top left of the page when logged in and on an article page) and then selecting Additional CSS.

If you are late to the migration party like I was, hopefully this article will be helpful to you.


Could not determine storage version; a valid storage connection or a version hint is required–Entity Framework

I’ve posted this more for my own reference so I don’t forget it again. Entity Framework refused to add a migration and I tried a lot of things to get around the error below:

System.Data.Entity.Core.ProviderIncompatibleException: An error occurred accessing the database. This usually means that the connection to the database failed. Check that the connection string is correct and that the appropriate DbContext constructor is being used to specify it or find it in the application’s config file. See http://go.microsoft.com/fwlink/?LinkId=386386 for information on DbContext and connections. See the inner exception for details of the failure. ---> System.Data.Entity.Core.ProviderIncompatibleException: The provider did not return a ProviderManifestToken string. ---> System.ArgumentException: Could not determine storage version; a valid storage connection or a version hint is required.

I eventually discovered the issue was caused by the startup project in Visual Studio not being set to the project I was trying to run the Entity Framework migration against. To get around this, right-click on the project in Visual Studio and select “Set as startup project”, then run your migration again and, like magic, all should work.
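As a side note, if you’d rather not change the startup project, I believe the EF6 Package Manager Console commands also let you specify the projects explicitly (the project names below are made up):

Add-Migration AddNewColumn -ProjectName My.Data -StartUpProjectName My.Data
Update-Database -ProjectName My.Data -StartUpProjectName My.Data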


Creating integration tests with Entity Framework and MSTest

I have used in-memory EF tests; however, they do have some limitations. In the past I have also mocked out the Entity Framework part of my tests, but this can only take you so far, especially if you want some comfort in knowing that the EF statements you put together are working correctly with your code.

So here’s the problem I am solving

  • I want a full integration test of code down to the database
  • I need the ability to reset the database being used each time so my tests are built up from scratch.

How I solved it.

  • I made use of SQL Server Express LocalDB. This is a lightweight version of SQL Server targeted at developers; it only contains the minimal set of files needed to start a SQL Server database instead of needing a full SQL Server instance.
  • A base class used by my Integration test MSTest class.

Test Base Class

The class below is used by my MSTest class. It takes care of dropping the database if it already exists and then creating it. It is by no means perfect; if there are better ways, I am open to recommendations. The class is also hard-coded to my DbContext, in this case MyDBContext.

using System;
using System.Collections.Generic;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.Migrations;
using System.Data.SqlClient;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace My.IntegrationTests
{
    public class TestDatabase
    {
        protected static MyDBContext _dbContext;
        protected static string databaseName = "MyTestDB";

        protected static string databasePath;

        protected static string databaseLogFilePath;

        protected static string dbConnectionString;

        public TestDatabase(string databaseNameSet)
        {
            databaseName = databaseNameSet;
        }

        public static void SetUpTest(TestContext context)
        {
            databasePath = Path.Combine(context.DeploymentDirectory, databaseName + ".mdf");

            databaseLogFilePath = Path.Combine(context.DeploymentDirectory, databaseName + ".ldf");

            dbConnectionString = @"server=(localdb)\v11.0;Database=" + databaseName;

            DropSqlDatabase();
            CreateSqlDatabase();

            _dbContext = new MyDBContext();

            // Basically we are creating a database on the fly and we want EF to init the database for us and to update it
            // with the latest migrations. We do this by enabling automatic migrations first.
            // Then we give it our connection string to the new database we have created for the purpose.
            DbMigrationsConfiguration configuration = new DbMigrationsConfiguration<MyDBContext>();
            configuration.AutomaticMigrationsEnabled = true;
            configuration.TargetDatabase = new DbConnectionInfo(dbConnectionString, "System.Data.SqlClient");
            var migrator = new DbMigrator(configuration);
            migrator.Update();
        }

        private static void DropSqlDatabase()
        {
            // Note: We do not care if we get a SQL Server exception here as the DB file it is looking for is probably long gone.
            try
            {
                SqlConnection connection = new SqlConnection(@"server=(localdb)\v11.0");
                using (connection)
                {
                    connection.Open();

                    string sql =
                        string.Format(
                            @"alter database [{0}] set single_user with rollback immediate; IF EXISTS(select * from sys.databases where name='{0}') DROP DATABASE {0}",
                            databaseName);

                    SqlCommand command = new SqlCommand(sql, connection);
                    command.ExecuteNonQuery();
                    connection.Close();
                }
            }
            catch (System.Data.SqlClient.SqlException)
            {
                // Yeah yeah I know!
                //throw;
            }
        }

        private static void CreateSqlDatabase()
        {
            SqlConnection connection = new SqlConnection(@"server=(localdb)\v11.0");
            using (connection)
            {
                connection.Open();

                string sql = string.Format(@"
                    CREATE DATABASE
                        [{2}]
                    ON PRIMARY (
                        NAME=Test_data,
                        FILENAME = '{0}'
                    )
                    LOG ON (
                        NAME=Test_log,
                        FILENAME = '{1}'
                    )",
                    databasePath, databaseLogFilePath, databaseName
                );

                SqlCommand command = new SqlCommand(sql, connection);
                command.ExecuteNonQuery();
                connection.Close();
            }
        }
    }
}

The MSTest Class

This is where we do the actual testing. Below I have created a hypothetical test checking to see if user A can get access to user B’s organisation. In order for the test to work we need to create these organisations in our database first with their various users. When we do this we also make sure that the organisations don’t already exist in the database, and if they do we delete them, so we can start our test from scratch.

using System;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace My.IntegrationTests
{
    [TestClass]
    public class SessionSummaryTests : TestDatabase
    {
        // We name our test database
        public SessionSummaryTests() : base("SessionUnitTestDB3")
        {
        }

        public SessionSummaryTests(string databaseNameSet) : base(databaseNameSet)
        {
        }

        [ClassInitialize]
        public static void SetUp(TestContext context)
        {
            SetUpTest(context);
        }

        /// <summary>
        /// Hypothetical test case. We test making sure user A cannot get access to user B's organisation
        /// </summary>
        [TestMethod]
        public void CheckIfUserACanAccessUserBsOrganisation()
        {
            // **** Start test scaffold
            string userAccountA = "userA";
            string userAccountB = "userB";

            // Remove any organisations left over from previous runs so the test starts from scratch
            var orgs = _dbContext.Organisation.Where(x => x.OwnerEmail.Equals(userAccountA) || x.OwnerEmail.Equals(userAccountB));

            if (orgs.Any())
            {
                _dbContext.Organisation.RemoveRange(orgs);
            }

            _dbContext.SaveChanges();

            _dbContext.Organisation.Add(new OrganisationModel()
            {
                Name = "The Organisation A",
                OwnerEmail = userAccountA,
            });

            _dbContext.Organisation.Add(new OrganisationModel()
            {
                Name = "The Organisation B",
                OwnerEmail = userAccountB,
            });

            _dbContext.SaveChanges();

            var orgA = _dbContext.Organisation.FirstOrDefault(x => x.OwnerEmail.Equals(userAccountA));
            var orgIdA = orgA.Id;

            var orgB = _dbContext.Organisation.FirstOrDefault(x => x.OwnerEmail.Equals(userAccountB));
            var orgIdB = orgB.Id;

            _dbContext.UserDetails.Add(new UserDetailsModel()
            {
                Email = userAccountA,
                FirstName = "User1FirstName",
                LastName = "User1LastName",
                OrganisationId = orgIdA
            });

            _dbContext.UserDetails.Add(new UserDetailsModel()
            {
                Email = userAccountB,
                FirstName = "User2 FirstName",
                LastName = "User2 LastName",
                OrganisationId = orgIdB
            });

            _dbContext.SaveChanges();

            // **** End of test scaffold

            // Our actual test: user A should not be granted access to user B's organisation
            var result = OrganisationMethods.GrantAccess(orgIdB, userAccountA, _dbContext);

            Assert.AreEqual(false, result);
        }
    }
}

A Few Notes

You may have noticed that if I had used dependency injection I could have mocked the DB implementation if I had encapsulated it. But the purpose of the test was to ensure everything worked correctly right down to the database.

My TestDatabase class ignores an exception (sinful). I have had various issues here, especially if the DB does not exist, which is fine as we don’t want it to exist. But once again, I am open to recommendations.


Attaching Selenium screen shots to MSTest results

This is more for my own reference but has proved rather useful. The problem it solves for me is that when automated Selenium tests run and fail, it’s usually quite a task to figure out what went wrong. The best way around this is to take a screenshot of the issue. However, taking screenshots can leave you with a folder on a test server full of images you have to look through to find your test result image. The best option is to attach the screenshot your test takes when it fails to the results of the currently running test.

Below is how to do this with MSTest running in VSTS. All of our Selenium tests run as part of a timed VSTS Release Hub release twice a day.

[TestMethod]
public void BurnUp_86_CheckIterationPathsLoad()
{
    bool isLoaded = false;

    try
    {
        _selenium.ShowSelectedData();

        _selenium.ClickIterationPath();
        isLoaded = _selenium.CheckIterationPathsLoad();
    }
    catch (Exception)
    {
        var testFilePath = _selenium.ScreenGrab("BurnUp_86_CheckIterationPathsLoadERROR");

        AttachScreenShotFileToTestResult(testFilePath);

        throw;
    }

    Assert.IsTrue(isLoaded);
}

public TestContext TestContext { get; set; }

public void AttachScreenShotFileToTestResult(string screenShotPath)
{
    try
    {
        if (!string.IsNullOrEmpty(screenShotPath))
        {
            TestContext.AddResultFile(screenShotPath);
        }
    }
    catch (Exception)
    {
        // We don't want to stop the tests because we can't attach a file so we let it go... let it go... let it go...
    }
}

Let’s take a moment to step through the test method above called BurnUp_86_CheckIterationPathsLoad(). The test is contained in a try/catch. I keep all of my Selenium functionality inside a separate class so it is abstracted and encapsulated from the actual unit tests; this helps greatly with maintaining my tests, as I only need to focus on the Selenium classes if, for example, the page layout changes. This class has a base class where I keep functionality common to all tests, such as the ScreenGrab function found inside _selenium (more on this later).

If my test fails, my try/catch will catch the exception; this is where I take a screen grab of the issue and then allow the original exception to bubble up. After I have taken the screen grab I attach it to the currently running test’s results using the AttachScreenShotFileToTestResult function. You can see inside this function that I don’t care if it fails to attach the screenshot to the test results; I’d rather the rest of the tests continue to run (I can almost sense the shock from my fellow developers). The key piece of functionality to take away here is TestContext.AddResultFile. This is given the path to where we saved our screen grab in the previous step.

So what about that screen grab functionality?

Selenium has had the ability to take screen shots for a while. Below is the function in my _selenium class that takes the screen shot using the current version of the IWebDriver.

public class SeleniumBase
{
    protected IWebDriver driver;

    public string ScreenGrab(string test)
    {
        string baseDirectory = "C:\\UITests";
        string screenGrabs = Path.Combine(baseDirectory, $"{DateTime.Now:yyyy-MM-dd}");

        // Create these folders if not present
        if (!Directory.Exists(baseDirectory))
        {
            Directory.CreateDirectory(baseDirectory);
        }

        if (!Directory.Exists(screenGrabs))
        {
            Directory.CreateDirectory(screenGrabs);
        }

        string filename = Path.Combine(screenGrabs, $"{test}-{DateTime.Now:yyyy-MM-dd_hh-mm-ss-tt}.png");

        try
        {
            Screenshot ss = ((ITakesScreenshot)driver).GetScreenshot();
            ss.SaveAsFile(filename, System.Drawing.Imaging.ImageFormat.Png);
        }
        catch (Exception)
        {
            // We swallow the exception because we want the tests to continue anyway. Taking a screenshot was just a nice to have.
            return string.Empty;
        }

        return filename;
    }
}

So what does the result look like?

Below are the results from one of our automated test runs that are run for us by Visual Studio Team Services Release Hub. If I click on the test that has failed, you can see in the results section that an attachment has been added, which is the screen grab we took when the test failed.


Got a better way of doing the above? Or would you like to recommend some changes? Don’t be shy, leave a comment; I’d love to hear from you.


Migrating from TFS to Visual Studio Team Services notes

Yesterday I migrated one of our TFS collections to VSTS using Microsoft’s migration guide for moving from TFS to VSTS. I won’t lie, it was a pretty long process and it took a lot of going back and forth to make sure I fully understood the guide, which is a 58-page PDF. The guide comes with several checklists and things you need to check and prep before your migration.

A very rough outline of what happens is that you run a check against your TFS instance using the tool provided to ensure everything is exportable; if there are problems you go about fixing them following suggestions from the tool, and then run the check again until you are ready to go. Next you will run a prep step that generates some files you will need to map your users across, followed by making a database backup as a DACPAC package and entering your import invite codes (provided by Microsoft). These are then uploaded to an Azure storage account and you kick off the migration process, which uses these assets to import your data into a brand new VSTS instance.

I won’t go into details about how to do the migration as this is covered in the guide, however I will highlight some things you should take into account before you migrate from TFS to VSTS which is done using a tool provided in the guide called the TFSMigrator.

Azure Active Directory

You are going to have to make sure you have this in place, or have at least thought about it. If you use Active Directory in your organisation, a good thing to look at is replicating this to Azure; your migration is going to need this. If you are not using Active Directory but just accounts on the box, as I did for this migration, you can easily map these across to Azure Active Directory accounts. If you have Office 365, then you already have access to an Azure Active Directory setup (depending on your subscription) and you can make use of this. The reason Azure Active Directory is important is that this is how VSTS will authenticate your users once you have migrated across to VSTS.

Plan for some downtime to make backups

Even when doing a test migration, as I did, you need to plan for some downtime. One of the reasons for this is that you will need to generate a DACPAC of your TFS collection. In order to do this you have to take the TFS collection offline and then detach it from TFS. If you have not done this before you may be put off by the ominous warnings from the TFS Admin Console asking you to tick a box stating you have made a backup of your TFS databases.

After you have detached your TFS Collection and made a DACPAC of it, you can then reattach your collection so your team can continue working as usual.

Learn what a DACPAC is

Yes, I had never used one before. The guide will give you some details along with a sample command line to use to create one. DACPAC is short for Data-tier Application Package. These are generated from SQL Server itself and are basically a way of exporting your whole TFS collection database with everything that it needs to be re-created: “tables, views, and instance objects, including logins – associated with a user’s database”. The DACPAC package will be uploaded to an Azure storage blob that the migration tool uses.
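For a rough idea of what that sample command looks like, a SqlPackage extract is along these lines (a sketch with made-up server, collection and path names; the guide lists the exact command and any extra properties you should pass):

SqlPackage.exe /Action:Extract /SourceServerName:localhost /SourceDatabaseName:Tfs_DefaultCollection /TargetFile:C:\DACPAC\Tfs_DefaultCollection.dacpac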

Learn about Azure Storage Accounts and SAS

While I have used Azure Storage accounts before, I found this part quite complicated and it took me a while to get it right. Basically the DACPAC package you create from your TFS collection database gets uploaded to an Azure Storage account along with a mapping file for user accounts. The hardest part I found was having to work out how to create a SAS token URL to where I had stored these in the Azure storage account. The guide provides a link to some PowerShell you can use that will generate this URL for you. I am not sure why Azure couldn’t create this link for you (I did try), but I eventually used the PowerShell provided, which worked first time.

Azure PowerShell tools

Make sure you have the Azure PowerShell tools installed; you will need these for running some PowerShell to generate a SAS token URL to your Azure Storage account (see above).

Final Notes

I would recommend reading the guide fully before getting started. Also note that currently you have to request an import code in order to use the service. You will get two of these: one is for a dry run to ensure it works, and the other is for your production import, which is when you are fully committed and feel confident it all went to plan in the dry run.