Note: this is a repost of my LinkedIn article of the same name.
The changing face of DevOps
Over the years the style of DevOps consultancy I do has changed, along with its name: from simple ‘source control’, to SDLC, then ALM, and now DevOps.
Back in the days of Team Foundation Server (TFS), I used to do a lot of on-premises installs and upgrades, getting our customers up and running and helping them improve their adoption of the tools.
With the move to cloud services, the profile of this work shifted towards helping customers migrate their tooling and processes to the cloud: specifically, TFS/Azure DevOps Server to Azure DevOps Services migrations, followed by helping them get the best out of the new platform.
Today the profile has changed again. The majority of our clients have already moved to the cloud, so most of my engagements are focused on best-practice usage of the tools and the day-to-day development lifecycle, as opposed to installation, upgrade and migration.
That does not mean I no longer see installation, upgrade and migration projects in the Azure DevOps and GitHub space, but certainly not in the volumes I used to.
Who is left on premises and why?
Now that we seem to be past the peak of cloud migrations, most clients who want to make the move have done so. Those remaining on-premises fall into two broad groups:
- Companies with a governance reason, real or perceived, to stay on-premises. It is worth noting that I see clients who assume a governance restriction applies, but have never checked with their regulator or auditor for current guidance on their options. They are often surprised to find that the cloud, when used securely, is a viable option for them.
- Companies who have neglected their on-premises server.
It is the latter I want to talk about in this article.
Falling into the gap between IT and Development
I have often seen a TFS/Azure DevOps Server fall into a gap between the IT and Development teams. Wrongly, the server is not treated as business-critical and managed by IT, nor is it run by the Development team with the rigour an IT team would apply.
Because of this, I still see a number of old TFS/Azure DevOps Servers that may well not have been patched since their installation, possibly over a decade ago. The only reason they are being looked at now is that the underlying Windows operating system and SQL Server versions are reaching end of life, and company-wide IT audits are forcing changes to be made.
I have long said that for an on-premises TFS/Azure DevOps Server you should, on top of the usual OS patching, be patching the DevOps tooling at least every three to six months. This matches the rough cadence at which features from the cloud-hosted Azure DevOps Services are packaged and made available to the on-premises version. These updates include not just new features but also critical security patches.
Customers who have not patched their server have commonly also not kept their development practices up to date. They often lack CI/CD processes, which I feel are the core of modern DevOps good practice, the heartbeat of the process, and are commonly still using older source control patterns, e.g. TFVC. This is not in itself wrong, but it is potentially limiting, as modern practices and the skillsets of newer/younger developers are going to be based around the de facto standard of Git.
But my upgrade is too complex
The issue with upgrading such old systems is that there is often no direct path to the current version of Azure DevOps Server. Upgrades may require multiple steps, using temporary intermediate environments to work around the limited range of SQL Server and Windows versions that each TFS/Azure DevOps Server release supports.
A common question from clients is ‘how long is this upgrade going to take?’. This is a very difficult one to answer, as everyone’s systems are different. You can think of the time required as all the smaller updates you should have done over the life of your server, bundled up into one massive job.
The majority of the time in a TFS upgrade is spent on SQL operations: backups, copies, restores and schema/data updates. The sheer volume of data to be moved is usually the limiting factor; anything involving potentially terabytes of data takes time.
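As a back-of-the-envelope illustration of why data volume dominates, the sketch below estimates the data-movement time alone. All the numbers (2 TB collection database, 150 MB/s sustained throughput, three passes for backup, copy and restore) are assumptions for illustration, not benchmarks from any real upgrade:

```python
# Rough, illustrative estimate of the data-movement portion of an upgrade:
# one backup, one copy to the new environment, one restore.
# Throughput and database size below are assumed figures, not benchmarks.

def estimate_hours(data_tb: float, throughput_mb_s: float, passes: int = 3) -> float:
    """Hours to move `data_tb` terabytes `passes` times at `throughput_mb_s` MB/s."""
    total_mb = data_tb * 1024 * 1024  # TB -> MB
    seconds = passes * total_mb / throughput_mb_s
    return seconds / 3600

# e.g. an assumed 2 TB collection database at a sustained 150 MB/s
print(f"{estimate_hours(2, 150):.1f} hours")  # data movement alone, before any
                                              # schema upgrade or verification time
```

Even this simplified model lands at roughly half a working day of pure data movement for a 2 TB database, before any of the schema and data upgrade steps run, which is why a dry run to measure real timings matters so much.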
For major upgrades such as this, I always recommend a dry run to make sure the process is understood, timings known and to give the client’s staff the opportunity for training and adoption of the new tooling available, such as Git and modern CI/CD.
But you cannot get away from the fact that you will probably need some downtime, while your old TFS server is unavailable, before the new Azure DevOps Server is ready.
But can’t I jump to the cloud now?
Once I start this form of major upgrade engagement it is common for clients to ask “can’t I just bypass all this and go to the cloud?”. Of course the answer is ‘yes, but…’.
If they wish to do a full-fidelity migration to Azure DevOps Services, they still have to get to the current version of Azure DevOps Server as the starting point for the migration, so they cannot avoid the on-premises upgrades.
If they do not need a full-fidelity migration, or are considering a move to another toolset such as GitHub Enterprise, there are more options that do not require an upgrade of the on-premises server, but these come with constraints and compromises, as you would expect whenever you swap toolsets.
So are you in this position?
The moral of the story is: don’t neglect your DevOps tools. A craftsman does not let their tools get rusty, and neither should you.
If you are not keeping your DevOps toolchain up to date, you are hampering your efficiency, and potentially the recruitment and retention of your developers, as they look to work on projects using modern tooling.
So, if this sounds like your situation, why not get in touch with me or Black Marble to discuss your DevOps options?