I don’t want to get into the whys and wherefores just yet; I’d like to start with the assumption that branches exist in your TFS ALM implementation.
You may not be comfortable with those branches though.
You may feel that using those branches costs too much: the cost of merging, the cost of creating new branches, the cost of updating build definitions, and the cost of increased complexity.
Any two branches have a relationship and this relationship has a cost.
The simple reason is that branches split up code. Code that represents a single product. Code that is the product.
Your live service and website are, at any point in time, derived from a single, unique set of code. The same goes for the DVD carrying your latest product, or the product download site.
However, there are no two ways about this: split code is a copy of code that can then be changed, and will then differ from where it was copied. That difference should only exist for a particular, clearly defined purpose. So a purposeful change would be to check in code related to a Task. Hopefully that Task relates to a Product Backlog Item which has been added to a Sprint. These work items are in the MS Scrum 2.0/2.1 process template, which is the default in TFS 2012.
Branches serve a purpose; but they need to be thought about very carefully.
A set of code will exist that represents a live product, and this live code is usually separated from any changes that are taking place, such as a piece of development work or an R&D spike. It seems sensible to keep a copy of that live code safe, so that it can be used for any patches needed to keep the live product going before the next release.
We could get into the detail of what a main branch is and if a live branch exists or not. But rather than dwell on that too much I’d rather have us say for the moment that there are branches involved and you are merging code between them.
I’m going to list a few topics as points that I think can improve branch quality and throughput efficiency; in future editions of this blog I’ll try to take these topics a step further.
Not sure yet which would be a good topic to start on, but I’m open to suggestions!
Some topics are related and so will get bundled together.
Branching topics (in no particular order):
1. Reduce the amount of merging required to the minimum possible; have fewer branches.
It’s all about speed to market; fewer steps to get you there will help…
However, this will of course depend on the quality of the code that goes to market.
Do fewer branches impact that quality?
2. Reduce or eliminate complexity of merging; merge often.
Be ready and prepared to merge, know the branch quality all the time; auto builds, auto deployment, auto testing.
No excuses here.
3. Understand what is being merged. I’ve seen some really good custom approaches to this topic.
4. Validate the code quality on all branches; aim to use Integration builds. Some sort of buddy or manual build and Nightly is the very minimum. Continuous is best if you can achieve it.
Use the same builds and the same tests on all the branches for the same product, so that the files, binaries, installer and testing are consistent. There can be an issue here around configuration and environment-specific database data.
Builds and tests are part of the product code and get merged around with the product.
5. Speed up the builds. It is good to know sooner rather than later that a change does not build.
So look at ways of doing incremental builds, that is, incremental Continuous Integration builds.
The ‘Clean Workspace’ option can be set to None via the Basic section of a build definition’s process parameters: no code change, no build.
Consider using a NuGet enterprise repository to store built versions of components, so that they can be pulled into an integration build and do not have to be rebuilt: http://inedo.com/proget/overview
I know of an organisation that has a very clever bespoke way of understanding what needs to be built via changeset and a component dependency tree. The thing is, if you build solution files you need a way of calculating dependency.
I suspect that NuGet would be a reasonable alternative.
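The idea of working out what to build from a changeset and a component dependency tree can be sketched roughly as follows. The component names, folder paths and dependency graph here are entirely hypothetical; a real implementation would derive them from project files or NuGet package references.

```python
# Sketch: given the file paths in a changeset and a component dependency
# graph, work out the minimal set of components to rebuild.
from collections import deque

# Hypothetical graph: component -> components that depend on it.
DEPENDENTS = {
    "Core":      ["Web", "Services"],
    "Web":       ["Installer"],
    "Services":  ["Installer"],
    "Installer": [],
}

# Hypothetical source-control folder per component.
COMPONENT_ROOTS = {
    "$/Product/Core":      "Core",
    "$/Product/Web":       "Web",
    "$/Product/Services":  "Services",
    "$/Product/Installer": "Installer",
}

def components_touched(changed_paths):
    """Map changed file paths onto the components that own them."""
    touched = set()
    for path in changed_paths:
        for root, component in COMPONENT_ROOTS.items():
            if path.startswith(root + "/"):
                touched.add(component)
    return touched

def components_to_rebuild(changed_paths):
    """Touched components plus everything downstream of them."""
    to_build = set()
    queue = deque(components_touched(changed_paths))
    while queue:
        component = queue.popleft()
        if component not in to_build:
            to_build.add(component)
            queue.extend(DEPENDENTS[component])
    return to_build

changes = ["$/Product/Core/Logging.cs"]
print(sorted(components_to_rebuild(changes)))
# -> ['Core', 'Installer', 'Services', 'Web']
```

A change to a leaf component such as the installer would, by the same logic, trigger only that one build.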
6. Understand what is waiting to be merged and how long that list is.
Report on this; it gives a heads-up ahead of the release.
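As a rough sketch, assuming the candidate list has been captured from `tf merge /candidate <source> <target> /recursive`, a small script could summarise how long the queue is and how stale it has become. The sample listing below is an approximation for illustration, not the exact `tf.exe` output layout.

```python
# Sketch: summarise merge candidates between two branches.
from datetime import date

# Hypothetical captured output; real tf.exe formatting will differ.
sample_output = """\
Changeset User          Date
--------- ------------- ----------
1201      j.bloggs      2013-01-07
1215      a.n.other     2013-01-21
1230      j.bloggs      2013-02-04
"""

def summarise_candidates(listing, today):
    """Return (number of waiting changesets, age in days of the oldest)."""
    rows = [line.split() for line in listing.splitlines()[2:] if line.strip()]
    candidates = [(int(cs), user, date.fromisoformat(when))
                  for cs, user, when in rows]
    oldest = min(when for _, _, when in candidates)
    return len(candidates), (today - oldest).days

count, age_days = summarise_candidates(sample_output, date(2013, 2, 11))
print(f"{count} changesets waiting; oldest is {age_days} days old")
# -> 3 changesets waiting; oldest is 35 days old
```

Publishing a figure like that per branch pair, daily, makes the merge debt visible before it becomes a release blocker.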
7. Communicate the ‘purpose’ of a branch; update this all the time.
Communicate and document any test environments that may be specific to the branch.
Understand, manage and ensure that the correct work is undertaken on branches with a view to reducing or eliminating conflict.
Take work from the Product Backlog.
Check in work against Tasks.
Make use of Sprints, Iterations, Releases and Teams.
Make sure that development teams are fully aware of the purpose of the existing branches.
8. Divide up the product into components to help scope the merge.
Components will simplify build definition workspace mappings and reduce the build process ‘get latest’ overhead, giving faster builds.
Have build definitions that are scoped to build the individual component parts of the product.
A product sliced up into components will reduce the likelihood of teams working on the same code.
Have a folder structure that is clear and unambiguous about which component lies under a folder.
Contain all code and artefacts that relate to building a particular component under that folder.
Consider putting build definitions into version control.
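One hedged way to keep components self-contained under their folders is to check that project references never resolve outside the owning component's folder. The sketch below shows the check as a plain path calculation; all the paths are hypothetical.

```python
# Sketch: flag project references that escape their component folder.
import posixpath

def reference_escapes_component(component_root, project_dir, reference):
    """True if a relative reference resolves outside the component folder."""
    resolved = posixpath.normpath(posixpath.join(project_dir, reference))
    return not resolved.startswith(component_root + "/")

root = "$/Product/Web"
proj = "$/Product/Web/Site"
# A reference that stays inside the Web component folder:
print(reference_escapes_component(root, proj, "../Shared/Helpers.csproj"))
# A reference that reaches sideways into another component:
print(reference_escapes_component(root, proj, "../../Core/Core.csproj"))
```

Run as part of a build or check-in policy, a check like this stops sideways references creeping in and silently coupling components back together.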
9. Having the product divided up into components does not mean that you stop feature-based development.
Features involve creating new components or updating and changing existing components.
10. Facilitate visibility of work on a particular component that is taking place on different branches.
Clarify the merge overhead by providing a component-based view of the merge prior to releasing and merging the updated product.
11. Refactoring branches.
Do this actively all the time.
Prune and cut back irrelevant branches.
Update the branching documentation.
12. Keep development branches absolutely up to date with live product code as it is delivered.
So use the familiar RI (Reverse Integration) and FI (Forward Integration) paradigm, in whatever guise that takes.
Prevent development branches merging into main if they are out of sync with the latest RIs into main.
13. If development takes place on a child of main, the merge into the main branch should overwrite.
The resolution of conflict should be dealt with on the child branch first by integrating changes from the main branch; through FI.
The result is no conflict when merging into the parent main branch; through RI.
14. Do not allow branches to exist for too long.
Once they have achieved their purpose get rid of them.
Re-branch for new work.
Main is an exception and will exist for the lifetime of the product. This would be the same for development taking place on main.
15. Consider using the main branch only for development work, rather than doing development work on children of main.
This may sound rather extreme but it is a very agile way of working.
Release branches would be taken from main; the hope would be to avoid having to make any significant changes on that branch.
16. Consider Team Projects as a way of encapsulating build definitions for a particular branch.
Do you have a very long rather messy list of build definitions?
That long list, incidentally, may not be version controlled, unless you have put together some custom code to handle it.
Yes, Team Projects can sit over the branch structure!
It is a useful way of controlling access to a branch, per team: Team Project contributors only.
17. Speeding up the availability of code for testing by way of automated deployment should be an objective.
To achieve this there needs to be a way of defining which server types components are installed onto.
There needs to be a definition of test environments and what server types exist in those environments.
With these definitions, in combination with an understanding of dependency, services and app pools can be automatically stopped and started.
Quite often organisations write their own deployment processes and systems.
There are products available to help out with automated deployment such as Nolio: http://www.noliosoft.com/ and InRelease : http://www.incyclesoftware.com/inrelease/
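A minimal sketch of such definitions, with services stopped in dependency order and started in reverse, might look like this. All the environment, server and service names are invented for illustration.

```python
# Sketch: an environment definition plus dependency-ordered stop/start.
# Hypothetical environment: server types and the services each hosts.
ENVIRONMENT = {
    "name": "Test1",
    "servers": {
        "WEB01": {"type": "web", "services": ["AppPool-Site"]},
        "APP01": {"type": "app", "services": ["OrderService"]},
        "DB01":  {"type": "db",  "services": ["SQLAgent"]},
    },
}

# A service must be stopped before the services it depends on are.
DEPENDS_ON = {
    "AppPool-Site": ["OrderService"],
    "OrderService": ["SQLAgent"],
    "SQLAgent":     [],
}

def stop_order(services):
    """Topological order: dependents first, their dependencies after."""
    ordered, seen = [], set()
    def visit(svc):
        if svc in seen:
            return
        seen.add(svc)
        for dep in DEPENDS_ON[svc]:
            visit(dep)
        ordered.append(svc)        # dependencies land first...
    for svc in services:
        visit(svc)
    return list(reversed(ordered))  # ...so reverse for dependents-first

all_services = [s for srv in ENVIRONMENT["servers"].values()
                for s in srv["services"]]
print("stop: ", stop_order(all_services))
print("start:", list(reversed(stop_order(all_services))))
```

The start order is simply the stop order reversed, which is what makes an explicit dependency graph worth maintaining.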
18. It should be an objective to build once and install that same packaged build all the way to the live deployment, be that a website or DVD. NuGet could help here too.
19. Automatically version-stamp AssemblyInfo files and installer packages where used, so that version and build info can be seen in the file properties and in Add/Remove Programs. It’s great to be able to look in Add/Remove Programs to see what is installed. This is a simple and well-documented modification to the TFS Build workflow.
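As an illustration of the stamping step itself, separate from the build workflow that would invoke it, the attribute rewrite can be sketched as a plain text transformation; the version number below is an example.

```python
# Sketch: stamp a build number into AssemblyInfo.cs content.
import re

def stamp_version(assembly_info_text, version):
    """Rewrite AssemblyVersion/AssemblyFileVersion attribute values."""
    pattern = r'(Assembly(?:File)?Version)\("[^"]*"\)'
    return re.sub(pattern, r'\1("%s")' % version, assembly_info_text)

original = '''[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
'''
print(stamp_version(original, "2.3.0.1234"))
```

In a TFS 2012 build the version string would typically be derived from the build number, and the rewrite run against each AssemblyInfo file in the workspace before compilation.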
20. Last, but with the clear honour of being number 20: database deployments.
Yes, the database schema needs to be versioned, and it needs to be automatically deployed.
Some write custom code and scripts to do this; others use tools in combination with scripts.
It is often a very complex area, but it will be a showstopper unless fully addressed.
This has to work seamlessly with code component deployments. There needs to be an understanding of dependency of code component on database ‘components’.
Changes to schema and objects such as stored procedures need to be incorporated into a live database baseline that can then be used to build a fully functioning version of live. Changes that are not yet live are then ‘appended’ onto the baseline for testing.
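A minimal sketch of that baseline-plus-deltas approach, tracking what has been applied in a version table, might look like the following; sqlite3 stands in for the real database engine, and the script names and schema are hypothetical.

```python
# Sketch: apply pending schema changes on top of a baseline, each once,
# tracked in a schema_version table.
import sqlite3

BASELINE = "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);"

MIGRATIONS = [  # applied in order, each exactly once
    ("001_add_customer", "ALTER TABLE orders ADD COLUMN customer TEXT;"),
    ("002_add_status",   "ALTER TABLE orders ADD COLUMN status TEXT;"),
]

def deploy(conn):
    """Bring the database up to date; safe to run repeatedly."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_version")}
    if "baseline" not in applied:
        conn.executescript(BASELINE)
        conn.execute("INSERT INTO schema_version VALUES ('baseline')")
    for name, sql in MIGRATIONS:
        if name not in applied:
            conn.executescript(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
deploy(conn)
deploy(conn)  # re-running is a no-op: applied changes are skipped
cols = [c[1] for c in conn.execute("PRAGMA table_info(orders)")]
print(cols)
# -> ['id', 'total', 'customer', 'status']
```

Once changes go live, they would be folded into the baseline, and the not-yet-live deltas remain as the ‘appended’ scripts used for testing.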